
Serious Sellers Podcast: Learn How To Sell On Amazon

#511 - Managing Q4 Amazon PPC Campaigns

Nov 21, 2023 · 44:16


Are you ready to skyrocket your knowledge of Amazon PPC? In this TACoS Tuesday episode, prepare to be amazed as we bring you the secrets of the trade from none other than Elizabeth Greene, the co-founder of Amazon ads agency Junglr. Dive into the world of data analytics and learn why understanding the numbers behind the numbers is crucial. Whether you're a beginner or a seasoned seller, we've got insights that are bound to give your Amazon PPC game a boost.

We talk about the core strategies for launching new products, from using supplementary keywords to strategic ad placements. We uncover the importance of context when branching into new markets and how to leverage different keyword match types to target specific search terms. Learn about optimizing strategies for Black Friday and Cyber Monday, and how to manage your budget effectively during these peak seasons. Lastly, ignite your understanding of advertising for branded products on Amazon. We debate the significance of tracking the share of search and using Search Query Performance reports, and reveal our strategies for advertising products with only a few relevant keywords. Tune in and take away valuable strategies and insights that will elevate your Amazon advertising game to new heights.

In episode 511 of the Serious Sellers Podcast, Shivali and Elizabeth talk about:

00:00 - It's Time For Another TACoS Tuesday Episode!
05:34 - Evaluating and Auditing PPC Strategy
08:10 - Analyzing Ad Spend Efficiency and Impact
12:34 - Advertising Strategy and Keyword Targeting
17:45 - Advertising Strategy for New Product Launch
25:32 - Keyword Research Using Helium 10
30:51 - Using Keywords and Sales Volume
36:31 - Optimizing Bids for Better Ad Performance
42:22 - Control Ad Spend, Gain Campaign Impressions

► Instagram: instagram.com/serioussellerspodcast
► Free Amazon Seller Chrome Extension: https://h10.me/extension
► Sign Up For Helium 10: https://h10.me/signup (Use SSP10 To Save 10% For Life)
► Learn How To Sell on Amazon: https://h10.me/ft
► Watch The Podcasts On Youtube: youtube.com/@Helium10/videos

Transcript

Shivali Patel: Today, on TACoS Tuesday, we answer all of your PPC questions live, as well as discuss what you could be doing in terms of launching and auditing your PPC campaigns during the Q4 season.

Bradley Sutton: How cool is that? Pretty cool, I think. Want to enter in an Amazon keyword and then, within seconds, get up to thousands of potentially related keywords that you could research? Then you need Magnet by Helium 10. For more information, go to h10.me/magnet. Magnet works in most Amazon marketplaces, including USA, Mexico, Australia, Germany, UK, India and much more.

Shivali Patel: All right, hello everyone, and welcome to another episode of the Serious Sellers Podcast by Helium 10. I'm your host, Shivali Patel, and this is our monthly TACoS Tuesday presentation, where we talk anything and everything Amazon ads. Today we have a special guest with us, and that is Elizabeth Greene, the co-founder of an Amazon ads agency called Junglr. So with that, let's go ahead and bring her up. Hi, Elizabeth, how are you?

Elizabeth: I'm doing well, how are you?

Shivali Patel: Very good. So, nice to have you on. Thank you for joining us.

Elizabeth: Yeah, thanks for having me. These are always, always fun.
Shivali Patel: And what an exciting time to be talking about Amazon ads. It's Q4 for you. Oh my goodness, you must be slammed.

Elizabeth: Life is a little bit crazy right now, but you know, it comes with the territory.

Shivali Patel: So it does. It is peak season. I see we have some questions coming in, so it's a very exciting time to be in business, and I'm looking forward to reading your questions and hopefully having Elizabeth answer them. Now, the first question here says: what can you suggest for a beginner like me, who is just starting out, and what and where can I learn to grow as much as possible?

Elizabeth: I would actually say there's two skills that, in the beginning, none of us have, and they are skills and they can be learned, even though they're considered more, quote, soft skills. Data analytics maybe not as much. My two things are going to be data analytics and communication skills. Communication skills, you're going to find, are quite important when it comes to management of accounts, especially accounts that are not your own. So even if you're a brand manager in a company or, you know, obviously, at an agency. If you're a seller and a sourcing person, okay, then I'm going to go with data analytics. Data analytics are going to be your friend. The things that I've kind of discovered, that have been, like, you know, sort of mind-blowing for me, are the numbers behind the numbers. Meaning: when you're trying to evaluate ACoS, right, a lot of people are like, oh, ACoS went up, ACoS went down. Great, I know this, I can look at the account. What the heck am I going to do about it? Really good data analytics not only tell you the what, but the why, and then the what next. So if you can get really, really good at the why and the what next, that's going to really set you apart. And the way that I kind of have come to it, this is my own personal journey.
Maybe there's other people who are way smarter than me and have way better journeys, but for me it has been, again, understanding the numbers behind the numbers. For example, right, you start peeling it back a little bit at a time; it's kind of like the Matrix.

Elizabeth: So when you're breaking down, say, ACoS, right, you go, okay, ACoS went up, ACoS went down. Why, right? What the heck happened? You're like, oh, wait, I can calculate ACoS by ad spend divided by ad sales. Okay, so it's either that ad spend went up and sales remained consistent or went down, or ad sales went down and spend remained consistent. You're like, oh, okay, there's those two variables. Okay, now I can say, okay, ad spend increased. Great, I know that why. And then you're like, okay, I can calculate my ad spend by my cost per click times my number of clicks.

Elizabeth: So either my cost per click went up or the number of clicks happening in my account went up. And then you can look at those two variables and go, oh, okay, it's the number of clicks. Why? Oh, I just launched a whole bunch of new stuff. Okay, that's why. Or my cost per click went up exponentially. Why? Maybe, you know, it's just a natural market change thing. Talking about prime time, peak season: now you're probably going to see cost per clicks going up. It's a market thing. Versus other times you might have aggressively increased a whole bunch of bids in your account, and so then you go check back. So data analytics, that's the way I view it. I am not classically trained in data analytics, I just have looked at it for over five years now and tried to figure out the "what the heck is going on" question and the "what to do about it" question. That's my way of, sort of, I've learned to sort of peer into the Matrix. So if you can get really good at understanding not just what the data is but what it's telling you, that's really going to get you to the next level.
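Elizabeth's "numbers behind the numbers" walk can be written out as two tiny identities (ACoS = spend / sales, spend = CPC x clicks) and checked month over month. This is just an illustrative sketch; the figures below are made up, not data from the episode.

```python
# Decompose an ACoS change into its driver metrics, following the
# decomposition described above. All numbers are hypothetical.

def acos(spend, ad_sales):
    """ACoS = ad spend / ad sales."""
    return spend / ad_sales

def ad_spend(cpc, clicks):
    """Ad spend = cost per click x number of clicks."""
    return cpc * clicks

# Two periods to compare (illustrative values)
last_month = {"cpc": 1.20, "clicks": 1000, "ad_sales": 4000.0}
this_month = {"cpc": 1.20, "clicks": 1500, "ad_sales": 4200.0}

for label, m in (("last month", last_month), ("this month", this_month)):
    spend = ad_spend(m["cpc"], m["clicks"])
    print(f"{label}: spend=${spend:.2f}, ACoS={acos(spend, m['ad_sales']):.1%}")

# CPC held steady while clicks rose 50%, so this ACoS increase is a
# click-volume story (e.g. new launches), not a market-CPC story.
```

Each "why" in the transcript corresponds to checking which factor in the product actually moved.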
Shivali Patel: Definitely, and I think a lot of people have very different strategies. I think Elizabeth's strategy, you know, is definitely one you should take into consideration. But also, the best way to learn is going to be trial and error, and until you're really sifting through your own data, I think it's going to be hard to, you know, gauge sort of what's happening. I think a lot of things in business are just as they come. Now I want to kind of take the other side of that. Let's say somebody's not a beginner, right? Somebody's been selling for a while, they're more established. What do you recommend to somebody who might be evaluating or trying to audit their own PPC strategy?

Elizabeth: Next level is going to be evaluating things on a per-product level. And let me clarify: when I say per product, I mean per listing. The reason why is the data gets kind of funky when you pull it down to a SKU level. You definitely can, but there are some nuances that you really want to be aware of that can kind of lead you in the wrong direction if you're looking at a per-SKU or per-child-ASIN level. But if you can start looking at your ad strategy, your sales growth, everything through the lens of listings, that's really going to take you to the next level.

Shivali Patel: So when you say listings, are you talking about maybe, like, the conversion metrics? Are you looking at the keywords that you're using? Sort of, what are the underlying factors?

Elizabeth: I guess all of the above, honestly. But to make sense of it all. Because, to your point, like, forest for the trees: if you look at everything, then you walk away being like, I have no idea what in the world I'm supposed to focus on. So the way that we've begun looking at it, and the reason why we started looking at it like this, is because we manage several clothing accounts. Talk about complexity, talk about forest for the trees. You're like, where in the world do I start?
And you want to make an impact on these accounts, right? You can't just be like, all right, I did my bid adjustments, and call it good. You really want to get your hands dirty and really start improving the accounts. But you're like, where in the world do I focus? So what we've started doing is percentages of total, and they've been a little bit of a game changer. It's not the newest thing on the block, a lot of people use percentage of total, but the two things that we look at are the percentage of total sales of each listing, again, we're talking about a listing level. Again, the reason is clothing: you have up to hundreds of different SKUs on a per-listing level, like, how the heck do you make sense of it? The way we make sense of it is rolling it up to the parent listing level, and then looking at the percentage of total ad spend, again on a per-listing basis.

Elizabeth: So this gives you a lot of clarity into what products are driving the most sales for the brand, and then, what products are we investing the most ad spend in? And when you look at it this way, it's very common to see these things happen in the account if you haven't been paying attention to them. You oftentimes will see, like, oh wow, this product's driving 2% of my total sales volume and I'm spending 10% of my total ad spend here. That's probably a discrepancy; maybe I should go and adjust those ads. So that gives you a lot of clarity. And then, to sort of gauge, because again, we're an ad agency, so ads are the thing we focus on the most to help drive improvements for the brands, we will look at the impact of the total spend on that product. So again, percentage of total ad spend, and then we'll look at what we call, quote, ad spend efficiencies, meaning ACoS, Total ACoS, ad sale percentage, also the delta between your ad conversion rate and your total conversion rate. Your unit session percentage is actually a really helpful gauge.
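The percentage-of-total roll-up Elizabeth describes can be sketched as a small audit pass: compute each listing's share of total sales and share of total ad spend, then flag listings where the two shares are badly out of line. Listing names and numbers below are made up for illustration.

```python
# Sketch of a percentage-of-total audit at the parent-listing level.
# Flags listings whose share of ad spend far exceeds their share of
# sales. All data is hypothetical.

listings = {
    "hoodie":  {"sales": 50_000, "ad_spend": 4_000},
    "joggers": {"sales": 30_000, "ad_spend": 3_500},
    "beanie":  {"sales": 2_000,  "ad_spend": 2_500},
}

total_sales = sum(l["sales"] for l in listings.values())
total_spend = sum(l["ad_spend"] for l in listings.values())

for name, l in listings.items():
    sales_pct = l["sales"] / total_sales
    spend_pct = l["ad_spend"] / total_spend
    # A spend share more than double the sales share is worth a look;
    # the 2x threshold is an arbitrary illustrative cutoff.
    flag = "  <-- investigate" if spend_pct > 2 * sales_pct else ""
    print(f"{name:8s} sales {sales_pct:5.1%} | ad spend {spend_pct:5.1%}{flag}")
```

In this made-up example the beanie takes 25% of the ad budget while driving under 3% of sales, exactly the kind of discrepancy the transcript describes, though launch-phase context might still justify it.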
And so we're like, okay, we're investing most of our dollars here. How is our efficiency on that large investment?

Elizabeth: And then you can pinpoint, like, oh wow, I'm investing most of my ad spend into this product, to the point of, like, 5% of total brand sales, 13% of total ad spend investment. And wow, the ad spend investments are really unprofitable. Now, if you're in a launch phase, I mean, there's context that you need to add to the numbers, to the point of telling the story with data. And if you're managing the brand, you probably know the context. But at least it gets you to: okay, here's two products we should dig into more, here's two products we need to probably invest more of our ad spend on. And it really starts to clarify things when you kind of understand how to see the picture in that way.

Shivali Patel: To kind of follow up on that: how do you really end up deciding which keywords to go after, as well as, maybe, how to structure them into campaigns in accordance with your budget? Because I know that's different for everyone.

Elizabeth: Yes, it definitely is. We will always focus on relevancy first in the beginning. Now, there are certain times, if you're doing, like, a brand awareness play, or you're like, wow, I've really targeted my market and I need to branch out, like, what's the next hill? Absolutely go after categories, you know, go after those brand awareness plays. But if you're in the beginning and you're in a launch, the nuance of Amazon advertising is you don't build the audience; Amazon has built the audience for you.

Elizabeth: All we're looking to do is use specific keywords or search terms to get in front of the audience that already exists, and that's where relevancy comes in. So you're asking: where is my specific shopper? What are they using to search for products like mine? And I need to make sure I'm showing up there. So we're always going to prioritize that.
That typically is going to get you better conversions, you know, better clicks, more interactions with your brand, which leads to more sales. And then also, on the flip side, if you're doing this on launch, it is a really good product evaluation, because if you're showing up exactly in front of your target shoppers and your click rate is terrible and your conversion rate is terrible and, like, nobody's buying, that's probably a signal that maybe there are things to adjust with the listing, or other factors you should look into.

Shivali Patel: Do you ever branch into, I guess, supplementary keywords, where maybe it's not exactly for the product, but it's maybe, like, a related product? And where do you really place those sorts of ads?

Elizabeth: Yeah, so when we'll do it is really dependent on the overall performance and the ad spend or profit goals, right? I mean, it seems so stupid simple, but if you are advertising more, you're going to be spending more, and if you're struggling to bring down Total ACoS or ACoS, again, ad spend divided by ad sales, the one thing you can control with ads is ad spend. So in those cases, when we're looking to bring down Total ACoS, we're typically looking at pulling back on ad spend. So if a product or brand is in that phase, I'm not going to be like, let's launch all these broad things that we're not quite sure how they're going to convert, right? So context is really key here. But when it comes to branching out, it really is dependent.

Elizabeth: You will find certain products on launch where, for some reason, it's really difficult to convert on the highly relevant terms, but adjacent markets or, to your point, somewhat related keywords or related products actually work really well. So we're always going to prioritize what's working.
So if we're finding all of these search terms that are popping up through, say, broad match or autos or something, and wow, we weren't aware that this is actually a really great market for us, but it's very obvious, looking at the data, that that's a direction we should go in, then obviously we'll push towards that direction. But whether we're going to decide to branch out on our own is probably highly dependent on the ad spend, and then also sort of the phase of the product, meaning, like, have we kind of conquered everything, and what's our next play?

Shivali Patel: And in terms of when you are launching, yes, we're going for the most relevant keywords, right, where you can find your target audience. But what about in terms of match types? Are you going directly into exact match and auto and broad all at the same time? Are you just kind of doing exact first and then branching into auto?

Elizabeth: Yeah, so we do like exact first. I'm still a huge fan of all of the above: exact, phrase and broad. The one thing that we have found is that, within your exact match, you can just be more specific about what search pages you're spending your ad dollars on. So, especially if you have limited budgets in the beginning and you're like, hey, I really want to make sure that I hyper-target these keywords, exact match makes a lot of sense. Now, if you're talking about branching out, we're still going to prioritize putting higher bids on our exact match keywords.

Elizabeth: Let me say this: if you're going to be aggressively spending on a specific search page, you're like, I've identified this keyword, this is my ranking keyword, I'm going to put a lot of budget behind it? Exact match all the way. Now, I don't want anyone to take that clip and be like, wow, she hates broad and phrase. No, I love all of the above.
Like, we run autos, run multiple autos, category targeting, all of the above. Do it. But if you're trying to get really aggressive with something, it's just the nature of how the match type works, more than exact being, quote, best, because I don't really think it is.

Shivali Patel: Now I do see that we have some new questions, so let me go ahead and pop them up. We have: can you give a refresher on how people can do modifiers, since nowadays exact sometimes performs as phrase match and phrase sometimes is like broad? So if someone wants to make sure that an exact is that exact two-word phrase, does adding a plus in the middle solve that?

Elizabeth: Yes, it does, but caveat: it only officially does in Sponsored Brands ads. If you look at the documentation, I mean, I've got to go check it, because they keep updating the documentation on the sly and not notifying us. But from my understanding, and from the reps I've talked to, and also the search term reports I've seen, modified broad match, I don't believe, a hundred percent works all the time in Sponsored Products ads, which is super annoying. So for those of you listening who are unaware of what modified broad match is: modified broad match is a thing in Sponsored Brands ads. The way that broad match keywords work in Sponsored Brands ads, and they have since carried that over to Sponsored Products ads, is that they can not only do classic broad match, right, where you can have keywords in the middle and swap stuff around; it used to be that if I had the keyword "running shoe", both the word "running" and the word "shoe" had to be present in the search term for your traditional Sponsored Products broad match. That's not the case anymore.

Elizabeth: They can also target what are called related keywords. So, for example, one would be "sneaker", right? It's kind of related to "running shoe". And if you want to see how weird it can get:
I stuck a screenshot up on LinkedIn not that long ago and I was like, how is this relevant? One of them was targeting, like, a bread knife, and the search term that it triggered was "ballerina farm". Go figure, I don't know. So you can get this really weird, funky stuff. So what we do to kind of combat that: one, just keep up on your negatives these days. Keep a sharp eye on your search term reports and add those negatives.

Elizabeth: But the other thing that you can do, just sort of, to Bradley's point, to make each of those individual words have to show up: in front of each of the words that you want to make sure are present in the search term, you can add a little plus symbol. So in the example of, say, "running shoes", I would write "+running +shoes", and that would signal to the algorithm: okay, you have to use these words inside of the searches. Which, again, is a factor in Sponsored Brands ads. If you look at the documentation, they do say that modified broad match is a thing, and it's been a thing for a while; it just hasn't been super popular. But I haven't read documentation saying that they've rolled that over into Sponsored Products ads. I don't think it's a bad idea to get in the practice of using modified broad match in Sponsored Products ads, though.

Shivali Patel: Okay, thank you for answering that question. We also have another one that says: I'm going to be launching a brand new store on FBA and Shopify for my own manufactured product. What would you suggest that I do for the first few months?

Elizabeth: Well, I'm going to assume that the question is about ads, because that's my area of expertise. With new product launches there's a lot, so definitely follow Helium 10, because they have way more than just ad advice to offer you. But as far as the advertising, I would prioritize keyword research for the product launches.
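To recap the modified broad match syntax from the exchange above: each word that must appear in the shopper's search term gets a `+` prefix. A trivial helper makes the transformation explicit (the function name is mine, and per the discussion this is only documented for Sponsored Brands, so treat it as a Sponsored Brands technique):

```python
# Turn a plain keyword into modified broad match by prefixing each
# word with "+", as described above. Hypothetical helper, not an
# Amazon Ads API call.

def modified_broad(keyword: str) -> str:
    return " ".join("+" + word for word in keyword.split())

print(modified_broad("running shoes"))  # +running +shoes
```

With the plus signs, "running shoes" can no longer match a related-keyword search like "sneaker"; both marked words must be present.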
That actually will be really helpful when you're trying to vet even the space for your particular products. And then, again, I would hyper-focus on relevancy in the beginning. I would run that in exact match, probably high bids.

Elizabeth: In the beginning you're looking for two things. You're looking to get eyeballs on your product. Ideally those eyeballs convert to sales; that remains to be seen, based on how appealing your product is to the market and how good your search pages are, et cetera. But you want to get eyeballs on the product, and then you want to use those eyeballs to vet, again, how much these shoppers like your particular product for purchase. So that's what I'd do. I would focus on those, again, for the first couple weeks, is typically what we do, and then you might branch out into phrase match, run auto campaigns, et cetera. Now, here's a trick: how many keywords you choose to launch on in the beginning is actually going to be determined by your budgets. I have seen so many sellers in the groups be like, oh my gosh, I just launched my ads and I'm spending, like, $1,000 a day and I can't afford it and I don't know what's going on. Again, it's simple, kind of seems like stupid logic, but the more keywords you're advertising on, the more clicks you're going to get, the more cost per clicks you're going to pay, the higher the ad spend. So you actually want to factor your budgets into what you're doing for your launch strategy.

Elizabeth: Like, I just got off a client call and we're like, all right, we have these new product launches. It's a really competitive space, it's, like, skincare. We're not going to have reviews in the beginning. You know what? In the beginning we're going to keep ad budgets really lean, and we have really good brand recognition, so we're just going to leverage brand recognition, because we know the conversion rates are going to be there. It's going to help us get the initial sales.
But we also understand that if that's the strategy we're running, again, a little bit more limited, just leveraging brand, lower budgets, we're not expecting the sales to be exponential in the beginning. So it's setting expectations, and then kind of understanding what makes sense for you at this stage.

Shivali Patel: Okay. And, keeping that in mind, the review portion that you're mentioning, right? Let's say, for example, I'm not sure if I'll pronounce it right, but in Sweat's example, right, his question: when he's launching, do you end up waiting for the reviews to pile in before you are running those ads, or do you end up just kind of going in? And of course, there's many moving components.

Elizabeth: Yeah, there's a lot of moving parts. It depends on what the brand wants to do. Typically we will start running stuff out of the gate. Again, we just kind of set expectations. The reason why ACoS is so high in the beginning is twofold. One, your conversion rate tends to be a little bit lower, and two, your cost per clicks tend to be a little bit higher, because you really are trying to get aggressive to be able to get that visibility on the product. Then, over time, ideally, conversion rates improve because you get more reviews, and cost per clicks hopefully go down as you optimize. So between those two things, it gets better. So we just set expectations: hey, because conversion rates are low, it takes more clicks to convert, which means ACoS is going to be a little bit higher, and we expect, potentially, sales not to be stellar out of the gate. Sometimes you'll be surprised. Sometimes you launch a product and you're like, wow, this is amazing, this thing just absolutely took off, and I hope, for all of you listening, that is the case for you and your new products. But it's not always the case. So it's really more setting expectations and then just deciding what makes sense for you.
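The launch-ACoS logic Elizabeth lays out reduces to a small identity: since spend = clicks x CPC and ad sales = clicks x conversion rate x price, the clicks cancel and ACoS = CPC / (conversion rate x price). A lower launch conversion rate and a higher launch CPC therefore both push ACoS up. The prices, CPCs and conversion rates below are hypothetical:

```python
# Why launch ACoS runs high:
#   ACoS = spend / ad sales
#        = (clicks * CPC) / (clicks * CVR * price)
#        = CPC / (CVR * price)
# All inputs are illustrative, not figures from the episode.

def expected_acos(cpc, cvr, price):
    return cpc / (cvr * price)

price = 25.00
print(f"launch: {expected_acos(cpc=1.50, cvr=0.05, price=price):.0%}")
print(f"mature: {expected_acos(cpc=1.00, cvr=0.12, price=price):.0%}")
```

With these made-up inputs, the same product goes from an ACoS over 100% at launch (few reviews, aggressive bids) to roughly a third of revenue once conversion improves and bids are optimized, which is the "it gets better over time" arc described above.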
Shivali Patel: Why would someone create, like, a branded campaign if they already have their standard stuff? Do you maybe want to talk a little bit about branded campaigns?

Elizabeth: Yeah, there's two kinds of branded campaigns. One is considered branded, or maybe brand defense is what you might call it. One of them is, if you have a whole bunch of products, what you might do is advertise your own products on your other listings. The goal of that is, you'd be like, hey, if somebody is going to click off, they might as well click onto my own product. Again, it's called a defensive strategy because you're blocking people off. I refer to it as plugging the ad spots: my competition can't get this ad spot on my listing. The other thing that you might do is, if you have any branded searches happening, so people searching your brand on Amazon, then, again, you can advertise your own products there.

Elizabeth: There's a lot of debate out there. People are like, oh, if I already have people searching for my brand, why in the world would I be spending on it? They're going to convert for my brand anyway. Yeah, there are arguments to be made. What you can do is actually track your share of search using Search Query Performance reports, look at your own branded traffic, and ask: am I losing out on sales through my branded traffic? That's something you can do if you want to decide whether it's worth it for you to run. But the second kind, and the one I was referring to when I was talking about that more specific launch that we're doing, is when you have great brand recognition, meaning there's a lot of people searching for your brand, you've already built up a lot of traffic to your current listings, and you have a new product that fits very well into that brand.

Elizabeth: So, the example I just gave: we have a brand that has a skincare line, right? They're launching complementary products.
They have really good repeat purchase rates. What we can do is, for people searching their brand, we can make sure that the new products are advertised and show up high on their branded search, where they might show up lower if we weren't leveraging ads for that. And then what happens is, someone's typing in the brand and goes, oh wow, there's a new product from this brand, awesome. And most likely, not always, but of course, you read the data, most likely you're going to get people purchasing very similarly. You know, you can use ads to get visibility, again, on your own traffic, but with your new offering. So that's kind of a way, if you have good brand share, to say: hey, I've got a new product, I want to try it out using ads.

Shivali Patel: Got it. And I see Sasha has a question here, and it is: what's the best way to research Amazon keywords for low competition products? And I'll go ahead and add as well: what do you do in the case, if, let's say, there is not necessarily a market? Maybe it's a brand new product that doesn't end up having any sort of crossover; you're creating a sub-niche.

Elizabeth: Yes, those are the most difficult. The two most difficult products to advertise for are, one, to your point, where there really is no relevant traffic for it, or, two, when you only have one keyword that has any search volume and there's nothing else besides one or two keywords, because every single one of your competitors knows those one or two keywords and there's really not anything else to choose from. So there's not really a way to play a sophisticated game. You've just got to grin and bear it in those categories, which is kind of painful sometimes. Otherwise, your keyword research is really going to be exactly the same as for any other product. You're going to be looking at your competitors, seeing what they rank for.
I mean, we use Helium 10, love Helium 10. I just did a walkthrough of how we do keyword research using Helium 10. It's a really great tool.

Elizabeth: The one different way that we have of generating your first keyword list is that we actually generate two keyword lists in the beginning. So what we'll do is we'll use, say, a commonly searched keyword. A lot of times people will start with: all right, type in a commonly searched keyword, then look at the ranked competitors, choose the relevant ones, and go through that. What we will do is take that first, you know, pretty general keyword that we're pretty sure is relevant to the products, and we'll type that into, I'm going to get them mixed up, I'm going to say it's Magnet, the keyword research tool.

Elizabeth: So you type it in, and then you sort by search volume, and we'll actually go down that first list and find what we call our highest-search-volume, most relevant keyword. What you're looking for is the intersection where you actually have good shopper search volume and it is also relevant to your product, because the more hyper-relevant you get to the product, typically speaking, not always, the lower your search volume is going to be on those keywords. You're like, all right, what's my top of the mountain? Because oftentimes people will be like, oh, "metal cup", that's a great keyword. Yes, but it's not a highly relevant keyword. So you're looking for something like "women's metal cup for running": is there good search volume there? How can I niche down a bit? And then we'll take the search page for that highly relevant keyword and use it as our springboard to find our top competitors.
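The "highest search volume, most relevant" selection step can be sketched as a filter-then-max over a keyword list. The records below are hypothetical, not real Magnet output, and the relevance flags stand in for the human judgment call Elizabeth describes:

```python
# Sketch of picking the springboard keyword: among keywords judged
# relevant to the product, take the one with the most search volume.
# All data is made up for illustration.

results = [
    {"kw": "metal cup",                 "volume": 90_000, "relevant": False},
    {"kw": "metal cup for running",     "volume": 8_000,  "relevant": True},
    {"kw": "women's metal running cup", "volume": 5_500,  "relevant": True},
]

springboard = max(
    (r for r in results if r["relevant"]),  # relevance first
    key=lambda r: r["volume"],              # then volume
)
print(springboard["kw"])  # metal cup for running
```

"metal cup" has ten times the volume but gets excluded up front; relevance gates the candidates, and volume only breaks ties among them, which mirrors the two-list workflow above.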
Shivali Patel: So we do also have a question from David, where he asks: how would you use, not sure what that's supposed to say, for top competitive keywords, when your product has multiple attributes, such as gold diamond ring, gold solitaire ring and engagement rings? Should I run broad on each? I'm assuming that's just supposed to be: how would you search for top competitive keywords?

Elizabeth: Yeah, so I would just look for whatever is the highest relevancy, highest search volume one, and you're going to have a lot of applicable keywords. So in the walkthrough that I did, I think it was just yesterday, we were looking at "baby blanket". And what we start doing with our final keyword list, when we're looking, again, we're prioritizing relevancy, is you will find what we call buckets of keywords, right? When I was doing "baby blanket", it was like "girls receiving blanket", "receiving blanket for boys"; you're like, okay, there's a bunch of girl keywords and there's a bunch of boy keywords, and these are actually a little bit related to specific variations. You can start getting really sophisticated with it. But as you do that keyword research, and as you're looking for that relevancy, you're probably going to find a lot of these buckets. So what we'll do on launch is take our group out and be like, okay, so to your point, we have a bunch of diamond keywords.

Elizabeth: Oh wait, I have a bunch of solitaire keywords, right? So you can actually group those. I can take all my solitaire ones and be like, hmm, I wonder how the search term "solitaire" does. I wonder if people like my product in relation to that search. Okay, so let me take those out, let me put them in their own campaign, I'll label the campaign "solitaire keywords" or something, and then I would advertise the products there. Or engagement rings, right? Okay, maybe that's applicable to my products.
Let me again pull those out and put them in a subgroup and a campaign. The reason why I like doing this is because then I can just scan Campaign Manager instead of having to go into one campaign and look at the solitaire keywords, engagement ring keywords, and gold diamond keywords all mixed together. If they're subgrouped into campaigns, then when I'm in Campaign Manager I can simply look at how each of those three campaigns is performing and be like, oh wow, it seems like gold diamond ring keywords actually perform best. You still want to analyze at a keyword level, but that makes it a little bit more scalable to understand shopper search behavior in relation to your product. Shivali Patel: Now I see that David also would like to know about the keyword sales filter, which is essentially just telling you on average how many sales occur for that particular keyword every single month. So that's really what you're looking at there. But, Elizabeth, maybe you want to expand on whether that's something you end up looking at when you're doing keyword research for the different brands that you work with. Elizabeth: I don't really, to be honest. The two things that I look at, actually probably three things: we look at the count of competitors that are ranking, again, because we're doing that whole thing of using the first list to find the second keyword, to find the really, really super-specific products. So if you can find good super-specific products, then you can kind of use their ranking on the keywords. So actually I love that Helium 10 added in that column, because it was one that a lot of us were calculating manually. Elizabeth: I'm like, thank God, I don't have to do the formula, I can just filter the list, so it's really awesome. 
So we'll download that list, see what's highly relevant, and then kind of cross-check that with search volume. I don't think it's a bad idea to use the sales volume, because sometimes what you'll find is that even though there's a high search volume, if the keyword is a little bit broader, you might actually not have as much sales volume through that keyword as you would think. So it's not a bad idea to analyze it at all. We just find that if we're super honed in on that relevancy factor, then we tend to come up with the ones that have better sales volume anyway. Shivali Patel: We also have a question from Sergio: do you like to use the same keywords for each campaign in broad, phrase, and exact campaigns? Elizabeth: I do. I would say the one sort of qualifier I would put on it, the one thing you should be aware of: I would recommend keeping the bids lower in the broad and the phrase match. I don't always agree with Amazon's recommendations, but if you listen to their recommendations on this, they actually recommend that you keep it lower. Shivali Patel: And Sasha has a question. If I was to start selling a product that has a monthly volume of 60,000 units a month, how should I position myself? Should I run ads? Elizabeth: I would first want to know how the product performs. That's your first goal. You want to figure out what your average cost per click is and what your actual conversion rate is. Once you have those factors, you can actually start building sales projection models. It's not hard to build search traffic projection models based on: I want to hit $50,000 a month in sales, this is my conversion rate. What you need is your conversion rate. 
You really need your conversion rate; that's the main one. And then you're going to need your cost per clicks in the ads to be like, all right, this is what it's going to cost me. Right now you're going off of nothing. I know I've said it about 20 different times on this live, but I'm going to say it again: relevancy. Focus on your exact target market, see what your numbers tell you, and then you can build up from there. Shivali Patel: I think that's a good plan, so hopefully that is helpful for you, Sasha. I see Sweat is leaving, but he found the response informative. Now I wanted to touch on something we talked about at the beginning of this call, which is Q4, right? We've been talking a little bit about auditing your strategy and some general PPC knowledge, but also, for a lot of you guys that are watching: if you're already selling, then you're probably in full swing. Maybe you've already gone ahead and optimized your listings for Q4. But what happens if somebody is just now realizing, oh no, I completely dropped the ball? Hopefully none of you guys are in that position, but let's say something like that happens. Maybe you have a take on what somebody can do to make sure they're still able to tap into Q4's potential. Elizabeth: Yeah, so we're assuming it's a brand-new launch product and we have nothing? Shivali Patel: We can assume that they've been selling for a while, but they haven't changed anything for Q4. Elizabeth: Got it, got it, got it. Okay, no, that's fine. So I would say if you're already selling, most likely you probably have some ad structure. You're not in a bad spot. Okay, Q4, right before Black Friday and Cyber Monday, we're not launching a whole bunch of test campaigns. Don't do it, because on Black Friday and Cyber Monday you don't get same-day data. 
Elizabeth: I know there's not really real-time data available, but honestly, nobody's really looking at that inside Campaign Manager. You're not going to be able to say, oh, okay, my ACoS was so much better this last hour, so let me increase these budgets, right? What you have to do is look back at historical data. So if you want to test anything, do it before this week is out. Get those campaigns up, get that data, because if you launched a bunch of stuff the day before, you're completely flying blind on performance metrics, and it's so easy, because of how many clicks are happening on the platform, to really lose your shirt. So I would say, if you're like, oh my gosh, I don't have any specific campaigns set up for Black Friday: that's fine, you're actually in a really good spot. In these weeks leading up to it, you actually still have time. You want to go into your account and evaluate what is working now, what is crushing it right now, and then make sure, as that traffic comes in, that those have good budgets and healthy bids. Elizabeth: To be honest, day-of, for the most part, unless we have a really specific keyword on a very specific brand where we have to be aggressive and must win top of search for that particular keyword, we're just adjusting budgets. Adjusting budgets day-of is our typical optimization. So what we're doing prior to that is saying, all right, if we're going to be increasing budgets, we want to make sure all of this is super solid. So you're doing two things. One, you're identifying all the stuff that really works, and you're like, all right, I need to make sure, again, budgets are healthy, bids are healthy, all my optimizations are done. And then the second thing we're doing, and this is also very important, is: what is all the stuff that's not working, meaning clicks with no sales? 
Where are all my high-cost, low-sale keywords? And here's a good one: what's all my untested stuff where I've just been increasing bids? It's so easy. Elizabeth: In normal optimizations, right, we go in, see what has no impressions, and increase the bids. We do this as well; it is not a bad practice. What often happens, especially if you don't have any caps (we have caps, like, all right, we're never going to increase past X amount of dollars), is that you can end up with $10 bids. Elizabeth: So what I would recommend doing: go into your targeting tab, filter for everything with zero orders, or you could just leave it totally blank, and sort by the bid: what has the highest bid in your account? You might look at it and be like, holy crap, I had no idea that was in there. And what you want to do is what we call a bid reset. You're just looking at all this stuff and saying, hey, it's not getting any impressions anyway, so it's not going to hurt me if I lower my bids. But then at least I know that when that traffic hits, that random keyword that didn't have any search volume, that I had a $10 bid on, is not going to pop off and waste all of my ad budget. Elizabeth: There's another filter that is really helpful to identify the irrelevant stuff. I'm not saying pause all these things. I'm saying use this filter to bring to the top everything where you go, how the heck did that get in there? Because it's super easy: when we're looking in our search term reports, we're like, oh, this converted once, let me go test it. Again, great practice. But what happens is sometimes you get these random things in the account; it's so easy for it to happen. So again, the targeting tab is going to be your friend here. You're going to want to filter for anything that has, what is it, zero clicks, maybe one or two clicks. 
Elizabeth: We're looking for impressions: it has probably at least 1,000 impressions on it, and you want to filter the click-through rate for anything lower than maybe 0.2 or 0.15. So this says it's got a lot of impressions, it's not really doing anything in terms of sales volume, and it's got really bad click-through rates. Then sort that by click-through rate, lowest to highest, or maybe start with impressions, highest to lowest. What you're trying to surface is: what has a bunch of eyeballs that nobody cares about? Elizabeth: A lot of people saw it, not a lot of people clicked on it, which oftentimes means irrelevant stuff, and because it's only got a couple of clicks, there's not a lot of data, so it hasn't moved into our optimization sequences. So again, it's just a once-over of the account. The first time you do this, you'll probably be like, what the heck, why is that there? If you find that, great: pause it or put low bids on it. Again, we're doing cleanup. If you don't find anything that doesn't make sense for you, kudos to you, you're doing really, really good targeting. But either way, it's a really good thing to give it a once-over before traffic hits and things go crazy. Shivali Patel: Now we do also have a question: one keyword's sales filter says 89 with low search volume, and another keyword has 20 keyword sales but a higher search volume. Is there one that you would opt for? I know you said you don't typically look at the keyword sales filter. Elizabeth: Yeah. So the two things I would look for: one, I'm going to say it again, relevancy. I believe in it so strongly, I'm going to say it again. And then the other thing that you would look at is the Helium 10 recommended bids. Another thing that I appreciate that you guys have added to the downloaded keyword reports is the recommended bids. 
Now, again, you guys are pulling them direct from the API; Amazon does provide the recommended bids. However, as we all know, if you go in to launch a campaign and add different products, the recommended bids change. So they're benchmarks; don't take them as gospel. But they are really helpful to help you identify how competitive one particular keyword is over another. So where budgets are concerned, you're like, well, this one has like 20 sales, the sales volume is pretty good, but wow, that one's really competitive, I've got to pay a two-dollar cost per click, versus the other one where I only have to pay like a 50-cent cost per click. That would probably play into my decision. Shivali Patel: Okay, all right. I know I said two, but let's just do this last one and then we'll call it. How do you structure your top keyword campaigns versus your complementary keywords? I know we briefly touched on this earlier. Elizabeth: Yeah, so I will caveat with: I'm not a huge fan of doing everything as a single-keyword campaign. I think it's way too overkill. You end up getting way more confused than you gain insight from doing it like that. That being said, if we do have a top keyword, we are definitely going to put that in a single-keyword, exact-match, specific campaign. The "it depends" answer that I always give is: the more control I need over where I'm going to be directing my ad spend, the fewer keywords I want to have; and the more important it is for me to gain impressions on a keyword for my campaign strategy, the fewer keywords I'm going to have. So if it is a top keyword, if it's my main ranking keyword, if it's super, super important to me: single-keyword campaign, right? Because I need to control ad spend, I need a lot of impressions on this, and it's super, super important. Versus another keyword set, right? 
Maybe I don't really care as much about it. So the very other end of the spectrum is a campaign type that actually works really well. Elizabeth: For us that's single words, meaning like cup, bowl, dish, in broad match with low bids. Do not put high bids on these. Even if you have a great ACoS, don't put high bids; not a good idea. But we'll run these all the time. What happens is, because we cap our bids at, say, 25 cents, maybe 30 cents, maybe 15 cents, we never intend to grow our bids past that, right? Elizabeth: So how important is it for me to control ad spend at the campaign level? Not really, because I'm controlling it at my bid level, right? How important is it for me to gain impressions? Not really, because I'm expecting half of these keywords to not get impressions anyway. So I would be fine with putting, say, 50 or 100 keywords in that campaign, right? Because for me it makes no sense to create 10 different campaigns that I have to keep an eye on, versus just one where I know, oh yeah, that's that strategy, and that's kind of my background thing, right? So I would look at it through that lens. Again: how important is it for me to control spend at the campaign level? And how important is it for me to gain impressions on these particular keywords? The more emphatically you answer yes to those two questions, the fewer keywords you should have in that campaign. If you don't really care about those two things, or they don't really matter as much, then I would be okay with a lot more keywords. Shivali Patel: All right, wonderful. Thank you so much, Elizabeth, for your time, your information, and your knowledge. We appreciate it. I know a lot of people learned quite a bit. Sasha says thank you. And Sweat, who was also waiting on those other questions you were answering, found it very informative, so we do appreciate it so much. And yeah, that is it for today. 
We'll catch you guys on the next TACoS Tuesday. Thank you! Elizabeth: Awesome! Thanks, I appreciate it.
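Elizabeth's two targeting-tab passes, the "bid reset" and the low-click-through-rate once-over, can be sketched as a quick script over an exported targeting report. This is a minimal illustration of the idea, not Amazon's actual export schema or API: the field names (`keyword`, `bid`, `impressions`, `clicks`, `orders`) and the thresholds are assumptions drawn from the conversation.

```python
# Sketch of the Q4 cleanup passes described above.
# Field names and thresholds are illustrative, not Amazon's real report columns.

def bid_reset(targets, max_bid=1.00):
    """Pass 1: zero-order targets whose bids have crept up over time,
    sorted highest bid first, so they can be reset before traffic spikes."""
    risky = [t for t in targets if t["orders"] == 0 and t["bid"] > max_bid]
    return sorted(risky, key=lambda t: t["bid"], reverse=True)

def low_ctr_once_over(targets, min_impressions=1000, max_ctr=0.002, max_clicks=2):
    """Pass 2: lots of eyeballs, almost no clicks -- likely irrelevant targets
    that never gathered enough click data to hit normal optimization rules."""
    flagged = []
    for t in targets:
        ctr = t["clicks"] / t["impressions"] if t["impressions"] else 0.0
        if (t["impressions"] >= min_impressions
                and ctr < max_ctr
                and t["clicks"] <= max_clicks):
            flagged.append(t)
    return sorted(flagged, key=lambda t: t["impressions"], reverse=True)

targets = [
    {"keyword": "metal cup", "bid": 10.0, "impressions": 40, "clicks": 0, "orders": 0},
    {"keyword": "cup for running", "bid": 0.45, "impressions": 5000, "clicks": 2, "orders": 0},
    {"keyword": "women's cup", "bid": 0.80, "impressions": 9000, "clicks": 90, "orders": 7},
]

print([t["keyword"] for t in bid_reset(targets)])         # the runaway $10 bid
print([t["keyword"] for t in low_ctr_once_over(targets)])  # seen a lot, rarely clicked
```

The point of separating the two passes is the same as in the interview: one protects budget from forgotten high bids, the other surfaces probable irrelevance that ordinary click-based optimization never touches.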

Mason Vera Paine
Google Trends: Halloween Edition

Mason Vera Paine

Play Episode Listen Later Oct 31, 2023 8:25


Halloween has arrived. Find out what people are Googling, from costumes to parties and food, from Google Trends expert Elizabeth Howard. For the latest trends from Google visit: Trends.Google.com/Trends. Follow Google on Twitter at: Twitter.com/Google. Like Google on Facebook at: Facebook.com/Google. Like and follow Google on Instagram at: Instagram.com/Google https://75dc83.p3cdn1.secureserver.net/wp-content/uploads/2023/10/15.-Elizabeth-Howard-Halloween-Trends.mp3 Google Halloween Trends Transcription Announcer: Mason Vera Paine. 00:01 – Mason:  Halloween is upon us and people are running to Google for help. Joining me to speak about some of the top things people are searching for is Google Trends expert Elizabeth Howard. Thanks for joining me, Elizabeth. 00:12 – Elizabeth: Thanks for having me. 00:13 – Mason:  So Halloween, it's one of my favorite holidays. And I'm just so curious, what was trending so far in Google? 00:19 – Elizabeth: Well, Halloween in October, we have so many fun trends this month. Our top trends were costumes, makeup and DIY effects, food, and decor. Of course, we had some local searches, which were really fun to see this month as well. Should we get into the costumes first? 00:37 – Mason:  Absolutely. That's the best part. Are people looking to make their own costumes, or are they buying costumes? 00:44 – Elizabeth: We definitely had some people looking for top DIY costumes. Rosie the Riveter is the most-searched last-minute costume of all time in the United States, and pirate is the most-searched DIY costume. But the breakout search that we saw this year is Barbie: it's the top trending costume of the year across every single state, which really never happens in our search trends. There were probably lots of Barbies out this weekend, and there's going to be lots of Barbies out on Halloween night. We also saw that a Ken dog costume is the breakout pet costume search of the year, and I want to see pictures of these Barbie and Ken dog costumes. Yeah. 
1:30 – Mason:  That sounds so cute. I'm wondering, what is the dog going to wear? Is it going to wear a wig, a little hat, a cowboy hat, the whole theme? How's it going to go? 1:40 – Elizabeth: I know. I also wonder about the fur jacket that Ken wears when he's in his Mojo Dojo Casa House; maybe that is what the dog is going to wear. 1:50 – Mason:  That would be so funny. You're right. This will be absolutely fabulous. Were there any makeup ideas? I'm sure the low-hanging fruit is to just get a costume, but I have to know, were there any makeup ideas for people? 2:05 – Elizabeth: Yes. Every October we do see face paint and theatrical blood spike. But this year we saw Carrie as the top trending costume search in conjunction with theatrical blood. We also saw yellow as the top trending contact lens color. A couple of months ago, we saw Barbie nails as a top trending nail design. Then this month we saw that change to ghosts. People were headed to the nail salon to get little ghosts on their fingernails. 2:35 – Mason:  That's cute. I like that. I think you could do that year-round, though. It's not too big, so it's not going to be that noticeable, right? 2:41 – Elizabeth: Yeah, exactly. 2:43 – Mason:  Now, for kids, were there any craft things for children? 2:47 – Elizabeth: We did see some decor and how to make things, decoration-wise. We had dry ice for smoke effects. A pumpkin candy bucket was a breakout search. We also saw people trying to make spiderwebs, graveyard fences, ghosts for the yard, skeletons, and scarecrows. There was definitely some DIY going on in households this month. 3:14 – Mason:  Yeah, we have a thing in our family where all the little kids will make Halloween decorations for the windows. It'll just be a paper with a Frankenstein face on it, and they'll either color it in or, if they're super good, they'll try to cut it out of little pieces of paper and they'll make something fancy. Of course,

Law Firm Marketing Catalyst
Episode 99: Become a Stronger Writer with Tips from an Expert Writing Coach, Elizabeth Danziger

Law Firm Marketing Catalyst

Play Episode Listen Later Apr 13, 2022 24:00


What you'll learn in this episode: Why getting your message across is the most important goal of writing How strong writing skills help people move up in their careers  How to remove filler words from your writing Why proofreading is necessary, even if it's not important to you personally Elizabeth's top three tips for clearer writing About Elizabeth Danziger Elizabeth Danziger, the founder of Worktalk Communications Consulting, is a seasoned written communications expert with over 30 years of experience. She has a longstanding reputation for training people to become compelling, confident writers. Danziger is the author of four books published by major publishers, including Get to the Point!, a text on business writing initially published by Random House. Her work has also appeared in many magazines, including Personnel Journal, Journal of Accountancy, and other national publications. She enables people to wield the power of words to enhance their credibility and catapult ahead in their careers. Additional resources: Facebook is www.facebook.com/upworktalk LinkedIn: www.linkedin.com/elizabethdanziger Twitter: www.twitter.com/writaminlady Love it or hate it, writing is a daily part of our lives. And according to author, writing consultant and communications expert Elizabeth Danziger, people who write well are more likely to advance in their careers. That's why she founded Worktalk Communications Consulting, a firm that trains professionals to write clearly and confidently. She joined the Law Firm Marketing Catalyst Podcast to talk about the importance of rereading; the power of language; and her tips for stronger writing. Read the episode transcript here.    Sharon: Welcome to The Law Firm Marketing Catalyst Podcast. Today, my guest is Elizabeth Danziger, head of Worktalk Communications. Worktalk prepares teams to write clearly and confidently so they can strengthen their credibility, increase their influence and generate new possibilities. 
Liz is also the author of the book “Get to the Point! Painless Advice for Writing Memos, Letters and Emails Your Colleagues and Clients Will Understand.” Worktalk also has a very interesting newsletter called “Writamins,” and it's chock full of interesting information you'll want to know. Make sure to sign up for it. We'll have a link at the end of the program. Today, Liz will be talking about how we can make the best use of language. Liz, welcome to the program.   Elizabeth: Thank you. It's a pleasure to be here.   Sharon: So glad to have you. Every time I read what you've written, I go, “Oh my god, it's so useful.” I have to say, I took a course from Liz years ago and the one thing I always do—Liz, I don't know if you still have my emails, but you did get me to reread my emails before I sent them.   Elizabeth: Great!   Sharon: I still do that. I always remember that, because you're right. You catch things you didn't realize were there.    Elizabeth: Oh, that's wonderful.    Sharon: Tell us about your career path. Were you always into words and grammar? Was that always of interest to you?   Elizabeth: When I was a child, I wanted to be a doctor, actually. I wanted to be a physician, but I also always loved to read. I remember my mother yelling at me, like, “Why don't you go out to play?” and I'd be like, “No, I want to read.” I've always been a great reader. Then, when I got to college and hit organic chemistry and calculus, I thought, “Well, maybe my skills are better suited elsewhere,” and I became a writer.    My first book was published when I was 25, and it did well domestically and internationally. Then I wrote two more books, including “Winning by Letting Go,” published by Harcourt Brace Jovanovich. I wrote for all the women's magazines, and then I decided I wanted to work with people who were doing real things in the real world and making life happen, and not necessarily the editors of Cosmo. I also realized there's a huge need. 
People suffer over their writing. They suffer personally and internally, and they suffer bad consequences from lost business, lost relationships, lost possibilities. So, I founded Worktalk to support people in making themselves understood.   Sharon: How do you do this? We took a class with you, but do you work with people individually? Is it sessions? How do you do that?   Elizabeth: I work with people however they want to be worked with. Notice that I ended a sentence with a preposition, which is totally O.K. Most of our work takes the form of webinars and training sessions. We customize every one of our webinars to our clients. We get writing samples. It's like sending a blood test to the doctor. You send me your writing sample and I see what's going on. So, it's mostly trainings and webinars.    We also do writing labs, which are much smaller. Each person brings one writing sample and we workshop each other's work in the lab. Of course, I do one-on-one coaching, but mostly it's trainings and webinars. Ultimately, we work with people in whatever way they need.   Sharon: I think there are a lot of people who have a love of reading, but how did your love of reading translate into understanding grammar? It seems like that's a different thing in a sense.   Elizabeth: Truthfully, people think of me as a person associated with grammar. I didn't really study grammar until I started teaching writing. I was a writer, and I was edited by book publishers, by Harcourt Brace Jovanovich and Random House and by the editors of Cosmo and the editors of Glamour and the editors of all these magazines. They edited me. When I decided to start doing writing training, I think a lot of it came to me intuitively. Then, when I started teaching it, I realized I had to get the rules down. That's why I tell people grammar is extremely important, of course, but getting your message across is the most important thing.   
Sharon: I'm not trying to put words in your mouth, but when we work with lawyers, they go to school to learn how to write in a certain way. Is there resistance, or is it more difficult to untrain them to write for the normal person?   Elizabeth: It is a little more difficult. With respect, lawyers really think they know a lot about a lot, and they're trained to argue; they're trained to think you're wrong. So, there is a little more resistance, but at the same time, I've worked with law firms. I've worked with associates who are getting dinged for their writing. Their writing's not clear; their writing's not to the point; their writing doesn't catch the issue. When I work with associates, they end up getting that taken off their performance review and they turn into good writers. I've also worked with legal firms on other things, but I love working with lawyers because they're smart. Not that people who aren't smart shouldn't call me—not that anyone would identify themselves as not smart. It's fun to work with people who learn quickly. It's fun.    Sharon: That's interesting, because it seems like if you're working with associates, there are people higher up, perhaps partners, who aren't—and once again, as you say, with respect—aren't as good a writer or as to-the-point, and they're evaluating somebody else.   Elizabeth: I'm not sure about that. My experience in all fields, in accounting, business, finance and law, is that the people at the top are almost always good writers, I would say. Good writing and good thinking go hand in hand, and you cannot rise to the top if you're not a really good thinker, hopefully. People who write well tend to get promoted in professional service firms. Very often, the managing partner is an exceptional writer, but the managing partner, believe it or not, has other things to do than to edit the crappy writing of the people who work for them. They need to be managing the firm. 
That's why they outsource to me if their associates are not up to snuff, but the top people are often good writers.   Sharon: That makes a lot of sense. They have to be persuasive, and they have to get their clients' attention, which means being to the point.   Elizabeth: Right.   Sharon: How is what you do changing today? When people are texting and abbreviating every other word—Liz is rolling her eyes here. I find myself doing that, or I'll make a mistake and think, “Well, nobody's going to notice that or know that's a mistake,” and then I say, “Sharon, you can't do that. It's not right.” How do you deal with that?   Elizabeth: That's an excellent question, and I can look at it in a couple of ways. One is that I am fighting the good fight. Like Winston Churchill said, “Fight the good fight.” Although there is a lot of texting, Slack, Whatsapp, whatever, the thing is that—and this is getting a little philosophical—if we think about it, what is the function of language? I'm sure we all love dolphins and pot-bellied pigs and whales, but they're not building legal systems; they're not building cultures; they're not doing what humans are doing. We are doing it because we have language, really sophisticated, nuanced language that can create a future and a past. It's powerful.    Language conveys meaning, but why bother to get something from my head into your head? How do we get this from my head into your head? Because we have a set of agreements. We agree. The sounds I'm making mean something. The scribbles on the page mean something, and you can make a certain number of errors in those agreements. Grammar is just a set of things we agree on. When I say, “I was,” it means it happened already. We agree on that. But if you break too many of those agreements of grammar, it creates friction in the system, and your meaning starts to fall apart. You literally lose meaning, and that's why I know the work I do is evergreen. 
In every class, I ask people, “Have you ever gotten an email from someone that had so many grammar and punctuation errors that you literally didn't know what the person was talking about?” and everybody says, “Yes.” It's true that people are more casual about it, and the winners, the people who end up on top, are going to be the people who communicate with a nuance and a correctness and a sophistication.   Sharon: Do you find yourself texting and abbreviating things?   Elizabeth: No, I never do. I dictate my texts, and I usually proofread them. I just don't do that. Maybe it's because I'm a boomer. I also tell people not to do it, so I don't do it.   Sharon: It's interesting to me how the world has changed. I do have to throw this out: I'm flabbergasted that they're not teaching cursive writing in some areas.   Elizabeth: I know. What's sad is that there's a lot of research on the whole process of writing by hand, the neurology and neuroscience, and there is an additional layer of writing in cursive. When you take notes by hand or when you write in cursive, different things are happening inside your brain that are enabling you to process that information at a deeper level. On a simple level, I wonder how those people are going to sign their names when they grow up. If you've never learned cursive, what is your signature going to look like? I don't know. But you're right. Of course, I have to deal with people texting and Slacking and this and that, but in the end, the bottom line of language is the same: get your message across. That's what we aim for.   Sharon: When you're teaching a class of law firm associates or younger people, let's say, do you hear more, “Oh, Liz, that's not important”?   Elizabeth: I do. What's interesting is in my section on proofreading, I always ask people, “When you receive a document that's not carefully proofread, how does it affect your opinion of the person who sent it? 
Positively, negatively or no impact?” I talk to people all over the country, and in most cases, the majority of people say it has a negative impact on their opinion of the person who sent it. Yet there are certain cultures and certain groups and subgroups where a lot of people will say it makes no impact on them. They don't care if somebody doesn't proofread. What I tell those people is, “O.K., so the person on your team, that person may not care at all if you proofread. Knock yourself out. But I promise you, if you write to a CEO or the government or the executive vice president or the division manager, that person will care.” Many people still do care, and we have to take care of that. We have to write for the top, not to the least common denominator.   Sharon: That's a good way to put it. I think certain professions care more. We were the recipient of this, because a firm that became our client, they switched firms because they said their other firm wasn't proofreading.   Elizabeth: Oh my gosh! I saw this in a client, a regional accounting firm that had been approached by the client of another regional accounting firm. The other firm was a very reputable firm, a good firm, and I asked my client, “Did you ask them why they are talking to you? This is like somebody who already has a girlfriend going on a date. Why are they talking to you if they already have an accounting firm?” He asked them, and what they told him was that their firm consistently misspelled their name.   Sharon: That would be a zinger, let's say.   Elizabeth: Yeah.   Sharon: Tell us some of your top secrets or your words of advice for us to keep in mind.   Elizabeth: There are three things I would suggest. The first is that you think about your reader before you write. It sounds very simple, but it astounds me sometimes how rarely people do that. They sit down and think, “Tap, tap, tap,” and they're not visualizing the living, breathing human being who's on the receiving end of that. 
What do they care about? What are their hot buttons? What are they wondering? What are their questions? Write for the reader. That's the first thing.    Second, write shorter sentences. Your average sentence range should be around 20 words. That doesn't mean every one should be boom, boom, boom, 20, 20, 20. Maybe some 15, maybe some 25, maybe some 30, but if you have a 30 or 35-word sentence, I want you to put two 10-word sentences around it. Microsoft Word's check readability statistics function will calculate your average sentence. That's the second thing, to write shorter sentences, and a whole cascade of good things will happen.   The third, as you remember from when we talked years ago, Sharon, is to always, always reread. You've got to reread what you wrote and make sure you didn't write something incredibly dumb, especially for attorneys. Attorneys are held to a higher standard. The scary thing about not proofreading is that people generalize. They think if you're careless at this, you're careless at that. If there's a typo in the cover letter you send to your client, “Here's the contract you asked me to draw up,” and you write “contact” instead of “contract,” and it goes straight through spell check because contact is also a word, I promise you they are going to have less confidence in the validity of the contract because there was a typo in the cover letter. That's just how we roll. It's crucial to reread and proofread everything no matter how hurried you are. The time it takes to backtrack and grovel and apologize and try and make it right is so time-consuming that it makes the time that we spend proofreading seem very, very short.   Sharon: That's a good point, what you say about lawyers being held to a higher standard. If I got a cover letter or a document from a lawyer where there was a typo, I would think, “Oh, my god, what kind of work am I going to get from this person, exactly?”    Elizabeth: It's terrifying   Sharon: Yes, it is.   
Elizabeth: It's truly terrifying.   Sharon: That's true. If I got a typo in a cover letter, it would reflect poorly on the person, but if it came from the guy who's going to paint my house, I don't think I'd be thinking in the same way.   Elizabeth: Exactly, that's a great point. We have different expectations from different people. People have the highest expectations of lawyers because they associate them with precision and language, and because they rely on them to use language to plead their case.    Sharon: That's true. Rely—that word really hit me. They're advocates.   Elizabeth: Exactly, good point.   Sharon: I want to ask you two things. Are you going to be writing another follow-up to the second edition of your book “Get to the Point”?   Elizabeth: I've already done a second edition. I thought about doing a third edition, but I'm very busy with work right now, and it's a huge time commitment. I think I keep people posted by keeping up the Writamins. If you subscribe to Writamins, you'll get all the latest.   Sharon: Yes, and we'll have the link in the podcast description when we post it. One of the latest versions was talking about filler words. As I was writing something the other day, I thought, “Wait, that's a filler word,” and I took it out.   Elizabeth: Great! It really affects you. I'm so gratified.    Sharon: I never thought about it, but it's something I use all the time. Give us examples on how we get rid of them.   Elizabeth: A lot of it is just thought and self-discipline. I wish I could say, “Give me $29.95 and I'll slice and dice and microwave and cut and reduce filler words.” That would be really nice. I would be a millionaire, a multimillionaire, if I could do that. A lot of it goes back to rereading. We also need to be aware of words like “just.” I'm sorry to say this happens more often with women than with men. Men and women both do it, but women are particularly prey to “just” or “sorry.” I would like to bury these words. 
In other words, if I say to you, “I'm very, very sorry,” do I sound sorrier than if I said, “I'm sorry?”   Sharon: That's a good point, yeah.   Elizabeth: To my ears, the person who says, “I'm very, very sorry,” I would not necessarily say that person is twice as sorry as the person who says, “I'm sorry.” I wrote a Writamin about this. You probably remember. It was about “I would like to,” and “I wanted to.” Oh my gosh! Please read the Writamin, everyone. It's on the website. It's at Worktalk.com.   Sharon: Great information. It's so much to remember. Liz, thank you very much. Whether it's a certain rule or whether it's knowing that we need to get to the point faster, that's the most important thing you're talking about. Thank you so much for talking with us today. It's really been great. I don't know how many filler words I'm using there.   Elizabeth: No, you're doing great.   Sharon: Really, really great.   Elizabeth: Really, really, really, really, so, so, so great.   Sharon: Thank you so much for being with us.   Elizabeth: You're very, very welcome. Thank you for letting me be on this show. I appreciate it.   Sharon: It's great to talk with you.  
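Elizabeth's 20-word guideline is easy to check outside of Microsoft Word, too. As a rough sketch (a simple punctuation-based splitter, not Word's exact readability algorithm), a few lines of Python can report the average sentence length of a draft:

```python
import re

def average_sentence_length(text: str) -> float:
    """Average words per sentence, a rough stand-in for Word's readability statistics."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    if not sentences:
        return 0.0
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(word_counts)

sample = ("Write shorter sentences. Your average should hover near twenty words. "
          "A long sentence now and then is fine if short ones surround it.")
print(round(average_sentence_length(sample), 1))  # → 7.7
```

Anything consistently well above 20 is a signal to break sentences up, per the advice above.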

Break Things On Purpose
Elizabeth Lawler: Creating Maps for Code

Break Things On Purpose

Play Episode Listen Later Apr 5, 2022 15:56


In this episode, we cover: Introduction (00:00) Elizabeth, AppLand, and AppMap (1:00) Why build AppMap (03:34) Being open-source (06:40) Building community  (08:50) Some tips on using AppMap (11:15) Links Referenced: VS Code Marketplace: https://marketplace.visualstudio.com/items?itemName=appland.appmap JetBrains Marketplace: https://plugins.jetbrains.com/plugin/16701-appmap AppLand: https://appland.com TranscriptElizabeth: “Whoa.” [laugh]. That's like getting a map of all of the Planet Earth with street directions for every single city, across all of the continents. You don't need that; you just want to know how to get to the nearest 7/11, right? Like, so just start small. [laugh]. Don't try and map your entire universe, galaxy, you know, out of the gate. [laugh].Jason: Welcome to another episode of Build Things on Purpose, part of the Break Things on Purpose podcast. In our build episodes, we chat with the engineers and developers who create tools that help us build and operate modern applications. In this episode, Elizabeth Lawler joins us to chat about the challenges of building modern, complex software, and the tool that she's built to help developers better understand where they are and where they're going.Jason: Today on the show, we have Elizabeth Lawler who's the founder of a company called AppLand, they make a product called AppMap. Welcome to the show, Elizabeth.Elizabeth: Thank you so much for having me, Jason.Jason: Awesome. So, tell us a little bit more about AppLand and this product that you've built. What did you build?Elizabeth: Sure. So, AppMap is a product that we're building in the open. It's a developer tool, so it's free and open-source. And we call it Google Maps for code. 
You know, I think that there has been a movement in more assistive technologies being developed—or augmenting technologies being developed for developers, and with some of the new tools, we were looking to create a more visual and interactive experience for developers to understand the runtime of their code better when they code.So, it's interesting how a lot of the runtime of an application when you're writing it or you're actually crafting it is sort of in your imagination because it hasn't yet been. [laugh]. And so, you know, we wanted to make that information apparent and push that kind of observability left so that people could see how things were going to work while they're writing them.Jason: I love that idea of seeing how things are working while you're writing it because you're so right. You know, when I write code, I have a vision in mind, and so, like, you mentally kind of scaffold out here are the pieces that I need and how they'll fit together. And then as you write it, you naturally encounter issues, or things don't work quite as you expect, and you tweak those. And sometimes that idea or the concept in your head gets a little fuzzy. So, having a tool that actually shows you in real-time seems like an extremely valuable tool.Elizabeth: Thank you. Yes. And I think you've nailed how it's not always the issue of dependency, it's really the issue of dependent behavior. And that dependent behavior of other services or code you're interacting with is the hardest thing to imagine while you're writing because you're also focusing on feature and functionality. So, it's really a fun space to work in, and crafting out that data, thinking about what you would need to present, and then trying to create an engaging experience around that has been a really fun journey that the team has been on since 2020. We announced the project in 2021 in March—I think almost about this time last year—and we have over 13,000 users of AppMap now.Jason: That's incredible. 
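The runtime recording Elizabeth describes can be illustrated with a toy sketch. This is not how AppMap's agents are implemented, and all function names here are invented; it just uses Python's built-in `sys.settrace` hook to show what a captured call sequence looks like while code actually runs:

```python
import sys

events = []  # function names in the order they are called at runtime

def tracer(frame, event, arg):
    # Record only 'call' events for ordinary functions (skip the module frame).
    if event == "call" and frame.f_code.co_name != "<module>":
        events.append(frame.f_code.co_name)
    return tracer

# Hypothetical application code whose runtime behavior we want to observe.
def fetch(order_id):
    return {"id": order_id}

def total(order):
    return 42

def checkout(order_id):
    return total(fetch(order_id))

sys.settrace(tracer)   # start recording
checkout(7)
sys.settrace(None)     # stop recording
print(events)          # the call sequence observed at runtime
```

A real tool records far more (parameters, SQL, HTTP calls) and renders it visually, but the principle is the same: the trace shows what the code did, not what you imagined it would do.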
So, you mentioned two things that I want to dive into. One is that it's open-source, and then the second—and maybe we'll start there—is why did you build this? Is this something that just was organic; you needed a tool for yourself, or… what was the birth of AppMap?Elizabeth: Oh, I think that's such a great question because I think it was—this is the third startup that I've been in, third project of this kind, building developer tooling. My previous company was a cybersecurity company; before that, I helped build applications in the healthcare sector. And before that, I worked in government and healthcare. And—also, again, building platforms and IT systems and applications as part of my work—and creating a common understanding of how software operates—works—understanding and communicating that effectively, and lowering that kind of cognitive load to get everybody on the same page is such a hard problem. I mean, when we didn't all work from home, we had whiteboards [laugh] and we would get in the room and go through sprint review and describe how something was working and seeing if there was anything we could do to improve quality, performance, reliability, scalability, functionality before something shipped, and we did it as a group, in-person. And it's very difficult to do that.And even that method is not particularly effective because you're dealing with whiteboards and people's mental models and so we wanted to, first of all, create something objective that would show you really how things worked, and secondly, we wanted to lower the burden to have those conversations with yourself. Or, you know, kind of rubber ducky debugging when something's not working, and also with the group. So, we created AppMaps as both interactive visualizations you could use to look at runtime, debug something, understand something better, but also something that could travel and help to make communication a lot easier. 
And that was the impetus, you know, just wanting to improve our own group understanding.Jason: I love that notion of not just having the developer understand more, but that idea of yeah, we work in teams and we often have misalignment simply because people on different sides of the application look at things differently. And so this idea of, can we build a tool that not only helps an individual understand things, but gets everybody on the same page is fantastic.Elizabeth: And also work in different layers of the application. For example, many observability tools are very highly focused on network, right? And sometimes the people who have the view of the problem aren't able to articulate it clearly or effectively or expeditiously enough to capture the attention of someone who needs to fix the problem. And so, you know, I think also having—we've blended a combination of pieces of information into AppMap, not only code, but also web services, data, I/O, and other elements so that we can start to talk more effectively as groups.Jason: That's awesome. So, I think that collaboration leads into that second thing that I brought up that I think is really interesting is that this is an open-source project as well. And so—Elizabeth: It is.Jason: Tell me more about that. What's the process? Because that's always, I think, a challenge is this notion of we love open-source, but we're also—we work for companies, we like to get paid. I like to get paid. [laugh]. So, how does that work out and what's that look like as you've gone on this journey?Elizabeth: Yeah. You know, I think we're quietly working and are certainly looking for other fellow travelers who are interested in this space. We started by creating an open data framework—which AppMap is actually both the name of a code editor extension you can install and use to see the runtime of your code to understand issues and find and fix them faster, but it also is a data standard. 
And with that data standard, we're really looking to work with other people. Because, you know, I think this type of information should be widely accessible for people and I think it should be available to understand.I think, you know, awareness about your software environment is just kind of like a basic developer right. And so, [laugh] you know, the reason why we made the tools free, and the reason why we've made the data structure open-source is to be able to encourage people to get the kind of information that they need to do their job better. And by making our agents open-source, by making our clients open-source, it simply allows people to be able to find and adopt this kind of tooling to improve their own job performance. And so, you know, that was really kind of how we started and I think, ultimately, you know, there are opportunities to provide commercial products, and there will be some coming down the road, but at the moment, right now we're really interested in working with the community and, you know, understanding their needs better.Jason: That's awesome. Number one, I love the embrace of, you know, when you're in the startup land, there's the advice to never try to monetize too early, right? Build something that's useful that people enjoy and really value, and then it'll naturally come. The other question that I had is, I'm assuming you eat your own dog food, slash drink your own champagne. So, I'm really curious, like, one of the problems that I've had in open-source is the onboarding of new community members, right? Software is complex, and so people often have troubles, and they're like, how do I fix this? They file an issue on GitHub or whatever system you're using, and there's sometimes a notion with open-source of like, that's a good thing that you called out. You can fix that because it's open-source, but people are like, “I don't know how.”Elizabeth: Yeah.Jason: Does AppMap actually help in enabling AppMap open-source contributors? 
Like, have you seen that?Elizabeth: So, we've had issues filed. I would say that most of the fixes still come from us. If people wanted to run AppMap on AppMap to identify the bug, [laugh] that would be great, but it doesn't really work that way. So, you know, for us at this time, most of it is community-filed issues that we are working to resolve. But I do think—and I will say—that we have actually used AppMap on open-source projects that we use, and we've found [laugh] flaws and bugs using AppMap with those projects, and have filed issues with them. [laugh].Jason: That's awesome. I love that. I mean, that's what it means to be in open-source, right, and to use open-source is that notion of, like—Elizabeth: Right.Jason: Contribute wherever you can.Elizabeth: Yeah. And if that's the way, you know, we can contribute, you know—and I think similarly, I mean, our relationship to open-source is very strong. So, for example, you know, we came from the Ruby community and there's lots of different kinds of open-source projects that are commonly used for things like security and authentication and we've done a lot of work in our own project to tag and label those commonly-used libraries so that they can be—when you pop open an AppMap everything is all beautiful and tagged and, you know, very nicely and neatly organized for you so you can find everything you're looking for. Similarly, we're working with open-source communities in Python and Java and now JavaScript to do the same thing, which is, you know, to make sure that important information, important commonly used libraries and tools are called out clearly.Jason: So, as you're adding more languages, you're going to get more users. So, that brings me to our final question. And that's, as you get all these new users, they probably need some guidance. So, if you were to give some users tips, right? Someone goes out there, like, “I want to use AppMap,” what's some advice that you'd give them related to reliability? 
How can they get the best experience and build the best code using AppMap?Elizabeth: Yes. So, this has actually been a key piece of feedback, I think, from the community for us, which is, we released this tool out to the world, and we said, “We're here; we come with gifts of observability in your code editor.” And people have used it for all kinds of different projects: They've used it for refactoring projects, for debugging, for onboarding to code, for all of these different use cases, but one of the things that can be overwhelming is the amount of information that you get. And I think this is true of most kinds of observability tools; you kind of start with this wall of data, and you're like, “Where am I going to start?”And so my recommendation is that AppMap is best used when you have a targeted question in mind, not just kind of like, you know, “I'd like to understand how this new piece of the codebase works. I've shifted from Team A to Team B, and I need to onboard to it.” “I'd like to figure out why I've got a slow—you know, I've been told that we've got a slowdown. Is it my query? Is it my web service? What is it? I'd like to pinpoint, find, and fix the issue fast.”One of the things that we're doing now is starting to leverage the data in a more analytic way to begin to help people focus their attention. And that's a new product that we're going to be bringing out later this spring, and I'm very, very excited about it. But I think that's the key, which is to start small, run a few test cases that are related to the area of code that you're interested in if that's an onboarding case, or look for areas of the code you can record or run test cases around that are related to the bug you have to fix. Because if you just run your whole test suite, you will generate a giant amount of data. Sometimes people generate, like, 10,000 AppMaps on the first pass through. And they're like, “Whoa.” [laugh]. 
That's like getting a map of all of the Planet Earth with street directions for every single city, across all of the continents. You don't need that; you just want to know how to get to the nearest 7/11, right? Like, so just start small. [laugh]. Don't try and map your entire universe, galaxy, you know, out of the gate. [laugh].Jason: That's fantastic advice, and it sounds very similar to what we advise at Gremlin for Chaos Engineering of starting small, starting very specific, really honing in on sort of a hypothesis, “What do I think will happen?” Or, “How do I think I understand things?” And really going from there?Elizabeth: Yeah. It does, it focuses the mind to have a specific question as opposed to asking the universe what does it all mean?Jason: Yeah. Well, thanks for being a guest on the show today. Before we go, where can people find AppMap if they're interested in the tool, and they want to give it a try?Elizabeth: So, we are located in the VS Code Marketplace if you use the VS Code editor, and we're also located in JetBrains Marketplace if you use any of the JetBrains tools.Jason: Awesome. So yeah, for our VS Code and JetBrains users, go check that out. And if you're interested in more about AppMap or AppLand, where can folks find more info about the company and maybe future announcements on the analysis tooling?Elizabeth: That would be appland.com A-P-P-L-A-N-D dot C-O-M. And our dev docs are there, new tooling is announced there, and our community resources are there, so if anyone would like to participate in either helping us build out our data model, feedback on our language-specific plans or any of the tooling, we welcome contributors.Jason: Awesome. Thanks again for sharing all of that info about AppMap and AppLand and how folks can continue to build more reliable software.Elizabeth: Thank you for having me, Jason.Jason: For links to all the information mentioned, visit our website at gremlin.com/podcast. 
If you liked this episode, subscribe to the Break Things on Purpose podcast on Spotify, Apple Podcasts, or your favorite podcast platform. Our theme song is called “Battle of Pogs” by Komiku, and it's available on loyaltyfreakmusic.com.
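Elizabeth's "start small" advice from this episode — run a few test cases tied to your question, not the entire suite — might look like this with Python's `unittest` (a generic sketch with invented test names, not AppMap-specific tooling):

```python
import io
import unittest

class TestCheckout(unittest.TestCase):
    """Hypothetical tests for the area of code under investigation."""

    def test_total(self):
        self.assertEqual(19.98, round(2 * 9.99, 2))

    def test_refund_path(self):
        self.assertTrue(True)

# Build a suite containing only the test that matches the question being
# asked, instead of running the entire class (or the whole project suite).
suite = unittest.TestSuite()
suite.addTest(TestCheckout("test_total"))
runner = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0)
result = runner.run(suite)
print(result.testsRun)  # → 1
```

Recording only this one run keeps the resulting trace focused on the question at hand rather than producing thousands of maps on the first pass.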

ClickAI Radio
CAIR 62: Overcome The 4 Pitfalls To AI Ethics !!

ClickAI Radio

Play Episode Listen Later Jan 29, 2022 36:12


Grant Welcome, everybody. In this episode, we take a look at the four pitfalls of AI ethics and whether they're solvable. Okay, hey, everybody. Welcome to another episode of ClickAI Radio. So glad to have Plainsight AI in the house today. What a privilege to have Elizabeth Spears with us here today. Hi, Elizabeth. Elizabeth Hey, Grant. Thanks for having me back. Grant Thanks for coming back. You know, when we were talking last time, you threw out this wonderful topic around pitfalls in AI ethics. And it's such a common sort of drop phrase, everyone's like, oh, there's ethics issues around AI, let's shy away from it, therefore it's got a problem, right? And I loved how you came back after our episode, it's like you pulled me aside in the hallway, metaphorically, like "Grant, let's have a topic on the pitfalls around some of these ethical topics here". So you hooked me, I was like, oh, perfect. That's a wonderful idea. Elizabeth So typically, I think there's so many sort of high-level conversations about ethics and AI, but I feel like we don't dig into the details very often of kind of when that happens and how to deal with it. And like you said, it's kind of the common pitfalls. Grant It is. And, you know, what's interesting is, in the AI world in particular, it seems like so many of the ethical arguments come up around the image side of AI, right, you know, ways in which people have misused or abused AI, either for bad use cases or other sort of secret or bad approaches. So you are the perfect person to talk about this and cast the dagger into the heart of some of these mythical ethical things, or maybe not, right? All right. Oh, yeah. Alright, so let's talk through some of these common pitfalls. So there were four areas that you and I sort of bantered about; you came back and said, okay, let's talk about bias. 
Let's talk about inaccuracy in models, a bit about fraud, and then perhaps something around legal or ethical consent violations. Those were the four that we started with; we don't have to stay on those. But let's tee up bias. Let's talk about ethical problems around bias. All right. Elizabeth So I mean, there are really several types of bias. And often the bias and inaccuracies can kind of conflate, because they can sort of cause each other. I have some examples of both, and then again, some where it's really bias and inaccuracies that are happening. But one example, or one type, is not modeling your problem correctly, and in particular modeling it too simply. So I'll start with the example. You want to detect safety in a crosswalk, right, a relatively simple kind of thing. And you want to make sure that no one is sitting in this crosswalk, because that would generally be a problem. So you do body pose detection, right? And if you aren't thinking about this problem holistically, you say, all right, I'm going to do sitting versus standing. Now the problem with that is, what about a person in a wheelchair? Then you would be detecting kind of a perceived problem, because you think someone is sitting in the middle of a crosswalk. But it's really just about accurately defining that problem, and then making sure that's reflected in your labeling process. And that kind of flows into another whole set of problems, which is when your test data and your kind of labeling process are a mismatch with your production environment. So one of the things that we really encourage for our customers is collecting data as close to production as possible, or ideally just the production data that you'll be running your models on, instead of having these very different test data sets that you'll then kind of deploy into production. And there can be these mismatches. 
And sometimes that's a really difficult thing to accomplish. Grant Yeah, so I was gonna ask you on that. You know, in the world of generative AI, where that's becoming more and more of a thing, and where there's an appetite for sort of generating or producing that test data, I've heard some argue, wait, generative AI actually helps me to overcome and avoid some of the bias issues, but it sounds like you might be proposing just the opposite. Elizabeth It actually works both ways. So creating synthetic data can really help when you want to avoid data bias and you don't have enough production data to do that well. And you can do that in a number of different ways. Data augmentation is one way: taking your original data and, say, flipping it, or changing the colors in it, etc. So taking an original dataset and trying to make it more diverse and cover more cases than you maybe would originally, to make your model more robust. Another way of doing that is synthetic data creation. An example there would be: you have a 3D environment in one of these, you know, game-engine-type things like Unreal or Blender, there's a few, and you have, say, something you want to detect, and it's usually in a residential setting, right? So you can have a whole environment of different, you know, housing types, and it would be really hard to get that data without having generated it, because you don't have cameras in everybody's houses, right? So in those cases, what we encourage is pilots. Before really deploying this thing and letting it free in the world, you use that synthetic data, but then you make sure that you're piloting it in your real-world setting as long as possible to, you know, suss out any issues that you might come across. 
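The flipping and color changes Elizabeth mentions can be sketched in plain Python. This toy example treats an image as a grid of grayscale values (a real pipeline would use an imaging library), but the augmentation ideas are the same:

```python
# Toy grayscale "image": rows of pixel intensities in the range 0-255.
image = [
    [10, 20, 30],
    [40, 50, 60],
]

def horizontal_flip(img):
    """Mirror each row left-to-right."""
    return [list(reversed(row)) for row in img]

def adjust_brightness(img, delta):
    """Shift every pixel by delta, clamped to the valid 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

# One original sample becomes several training samples.
augmented = [image, horizontal_flip(image), adjust_brightness(image, 40)]
print(augmented[1])  # → [[30, 20, 10], [60, 50, 40]]
```

Each transform preserves the label (a flipped person is still a person) while diversifying the data the model sees.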
Grant So let's go back to that first example you shared, where you've got the crosswalk, you have the pedestrians, and now you need to make sure you've got different poses, like you said, someone sitting down on the road or lying on the road, certainly using generative AI to create different postures of those. But what about, hey, if the introduction is something brand new, such as, like you said, the wheelchair or some other sort of foreign object? Is the generative AI going to help you solve for that? Or do you need to lead it a bit? Elizabeth It absolutely can, right? It's basically anything that you can model in a 3D environment, and you can definitely model someone in a wheelchair in a 3D environment. And Tesla uses this method really often because it's hard to simulate, I mean, sorry, it's hard to have real data from every kind of crash scenario. So they're trying to model their problem as robustly as possible. And in some of those cases they're like, you know, all of these types of things could happen, let's get more data around that; the most efficient and kind of most feasible way of doing that is with synthetic data. Grant Awesome. Awesome. Okay. So that's a key approach for addressing this bias problem. Are there any other techniques besides this generative, you know, training data approach? What else could you use to overcome the bias? Elizabeth Yeah, so another kind is when you have, like I was saying, a mismatch in test and production data. A lot of people, even computer vision people, sometimes don't know how much this matters when it's things like, for example, working with live video. In those cases, bitrate matters, FPS matters, your resizing algorithm and your image encoding matter. 
So in many cases you're collecting your test data in the first place differently than it's going to run in production, and people can forget about that. This is a place where, you know, having a platform like Plainsight can really help, because that process is standardized, right? The data you're pulling in is the same data that you're labeling, and it's the same data that you're then inferencing on, because you're pulling live data from those cameras, and it's all managed in one place, end to end. So that's another strategy. Another thing that happens is when there are researchers who will be working on a model for, like, two years, right, and they have this corpus of test data, but something happens in the meantime. So it's like, phone imaging has advanced in that time, so your input is a little different, or the floor layout of the factory that they were trying to model changed, right, and they didn't totally realize that the model had somewhat memorized that floor layout. So you'll get these problems where you have what you think is a really robust model, you drop it into production, and you don't know you have a problem until you drop it into production. That's another reason that we really emphasize having pilots, and then also having a lot of different perspectives on vetting those pilots. Ideally, you can find a subject matter expert in the area outside of your company to, you know, take a look at your data and what's coming out of it. And you have kind of a group of people really thinking deeply about the consistency of your data, how you're modeling your problem, and making sure that all of those things are covered. 
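One way to avoid the train/production mismatch Elizabeth describes is to route both paths through a single preprocessing function. This is a generic sketch with made-up sizes, not Plainsight's implementation:

```python
# One preprocessing function shared by the labeling/training path and the
# production inference path, so resize and normalization can never diverge.
TARGET_SIZE = (4, 4)  # hypothetical model input size (rows, cols)

def preprocess(frame, size=TARGET_SIZE):
    """Nearest-neighbor resize of a row-major pixel grid, then scale to [0, 1]."""
    src_h, src_w = len(frame), len(frame[0])
    out_h, out_w = size
    resized = [
        [frame[r * src_h // out_h][c * src_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
    return [[p / 255 for p in row] for row in resized]

frame = [[0, 255], [255, 0]]
train_input = preprocess(frame)   # the call used when building the dataset
prod_input = preprocess(frame)    # the identical call at inference time
print(train_input == prod_input)  # → True by construction
```

If training instead used one resize implementation and production another, the model would quietly see slightly different pixels in each environment.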
Grant Well, reducing cycle time from this initial set of training to sort of validation of that pilot is crucial to this, because as you're pointing out, even if you keep that cycle time short and you do lots of iterations on it, some assumptions may change. What are the techniques for, you know, keeping someone looking at those assumptions? Like you said, maybe it's a change in camera phone technology, or it's a change of the layout. Like I said, as technology people, as Einsteins, we get so focused on oh, we're just pushing towards the solution, we sort of forget that part. How do you get someone to do that? Is that just a cultural thing? Is it an AI engineering thing, that someone's got, you know, a role in the process to do that? Elizabeth I think it's both. So I think the first thing is organizations really need to think deeply about their process for computer vision and AI, right? And some of the things that I've already mentioned need to be part of that process. So you want to research your users, or your use cases, in advance and try to think through that full problem set holistically. You want to be really, really clear about your labeling, right? You can introduce bias just through your labeling process, if humans themselves are introducing it, right? Exactly. If you have some people labeling something a little bit differently than other people. So like on the edge of an image, if you have a person on the edge, do you count that as a person, or is it not counted? How far in the view do they have to be? There's a lot of gray area where you really just need to be very familiar with your data, and be really clear as a company on how you're going to process that. 
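A labeling convention like the edge-of-image rule Elizabeth raises can be encoded once and applied mechanically, so every annotator answers the "how far in view?" question the same way. A hypothetical sketch (the 50% threshold and frame size are invented for illustration):

```python
# Hypothetical labeling rule, encoded once so every annotator applies it
# identically: keep a person box only if enough of it lies inside the frame.
FRAME = (0, 0, 100, 100)    # x1, y1, x2, y2 of the image
MIN_VISIBLE_FRACTION = 0.5  # assumed team convention, not a universal value

def visible_fraction(box, frame=FRAME):
    """Fraction of the box's area that falls inside the frame."""
    bx1, by1, bx2, by2 = box
    fx1, fy1, fx2, fy2 = frame
    ix1, iy1 = max(bx1, fx1), max(by1, fy1)
    ix2, iy2 = min(bx2, fx2), min(by2, fy2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = (bx2 - bx1) * (by2 - by1)
    return inter / area if area else 0.0

def keep_label(box):
    return visible_fraction(box) >= MIN_VISIBLE_FRACTION

print(keep_label((90, 10, 110, 30)))  # half visible → True
print(keep_label((95, 10, 115, 30)))  # mostly outside the frame → False
```

Running a check like this over an annotation batch turns a gray-area judgment call into a consistent, reviewable rule.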
Grant So that's labeling boundaries. But backing up, there's the label ontology or taxonomy itself, which could also be introducing bias, right?

Elizabeth Yeah. And back to what we were saying about how to think through some of these problems: as a company you can also have a process with multiple passes on the annotated data, and then multiple passes on the actual inference data, so you're really checking. Another thing we've talked about internally recently: we have a pipeline for deploying your computer vision, and one of the things that can be really important in a lot of these cases is making sure there is a human in the loop, some human supervision, to make sure that you aren't surfacing bias you didn't anticipate, or that your model hasn't drifted over time, things like that. So something we've considered is building into that process the ability to kick off a task for a person, so it's just built in.

Grant So no matter what you do, that check happens; it's a governance function. Is that what you're getting at?

Elizabeth Kind of. It's like a processing pipeline for your data: at this step I'm going to augment my data, at this step I'm going to run an inference on it, or flip it, or whatever it is. And within that, you can make sure you kick off a task for a human to check, or whatever the case may be.

Grant Yep. So good process maturity is another technique for overcoming bias as well as inaccurate models, and I'm assuming you're bundling both of those into that, right?
Elizabeth Yeah, both, right. And like you said, another way is reducing that cycle time, and making sure that you're working on production data whenever possible. This is where the platform can help as well. When you aren't off in research land without production data for two years, but instead have a system that makes it really easy to connect cameras and work on real production data, two things happen: you reduce the time it takes to go full circle on labeling, training, and testing, and you have it all in one place. That's one of the problems we solve, because in many cases computer vision engineers or data scientists don't have the full pipeline to work on the problem. They have a test dataset, and they're working on it somewhat separately from where it will be deployed in production. We try to join those two things.

Grant Yeah, I think that's one of the real strengths of your platform, the Plainsight platform: this reduction of the cycle, so that I can actually be testing and validating against production scenarios, take that feedback, and then augment it with the governance processes you talked about. Both of those are critical. Let's talk a little about fraud. Certainly in computer vision, holy smokes, fraud has probably been one of the key areas the bad guys have gone after, right? What can you do to overcome this and deal with it?

Elizabeth It can really become a cat and mouse game. And I think the conversation about fraud boils down to: is it better than the alternative?
Just because there could be some fraud in a computer vision solution doesn't mean there would be less fraud in another solution. So the example is: technically, you used to be able to, and I think with some phones you still can, 3D print a face to defraud the facial detection that unlocks your phone. They've since made advancements so this is harder to do, like a liveness detector; I think they use your eyes for that. But you could still use a mask. So again, it's this cat and mouse game. Another place is models that can understand text to speech, and then models you can put on top of those to make that speech sound like other voices. The big category here is deepfakes: you can make your voice sound like someone else's, and there are banks and other institutions that use voice as a method of authentication.

Grant I'm sure we've all seen the Google Duplex demos from a few years back, right? That technology obviously continues to mature.

Elizabeth Exactly. So then the question is: if I can 3D print a face or a mask and unlock someone's phone, is that harder than someone just finding my four to six digit numerical code to unlock my phone? I think it really becomes a balance of which thing is harder to defraud. And with fraud in general, if you think about cybersecurity and everywhere else you're trying to combat this, it's a cat and mouse game: people figure out the vulnerabilities in what exists, and then people have to get better at defending against them.
Grant So the argument is, if I can say it back: yes, it exists, but how is this different from so many other technologies or techniques where you've got fraudsters trying to break in? This is just part of the business today, right?

Elizabeth Yeah, I think it becomes an evaluation of whether it causes more or less of a fraud problem, and then it's really just about evaluating the use of technology on an even plane. It's not "should you use AI, because it causes fraud"; it's "given there's a fraud issue, which particular method or technology causes the least fraud" for a specific use case.

Grant Yep. Okay, so fraud. You and I had talked about some potential techniques out there, like that algorithm from Facebook, I think it's called SEER; it came out not too long ago. It's an ultra-large vision model with more than a billion parameters. Can you believe that? That's massive. I've built some AI models, but not with a billion. That's incredible. Are you familiar with that? Have you looked into SEER at all?
Elizabeth Yeah. So basically, this is a method where you try to address bias by distorting images. I can give you a good example of something we've actually worked on; I'm going to change the case a little to anonymize it. In a lab setting, we were working on some special imaging to detect whether or not a bacteria was present in samples, and we were collecting samples from many labs across the country. One thing that could differ between them was the color of the substrate the sample was sitting in, essentially a preservative. There were a few different colors, and they were used widely, so it wasn't generally thought this would be a problem. The model was built, all the data was processed, and accuracy was really high. But what they found out was that there was a correlation between the color and whether the bacteria was present, just a chance correlation. If you had had something like that image distortion, so the color was taken out automatically or perturbed, that would have taken the bias out of the model.

Then a second thing happened: when the people in the lab were taking the samples out of the freezer, they would take all of them at once, and they were ordered, so they would do all of the positives first and all of the negatives second. Machine learning is just an amazing pattern detector; that is what it is about. So the model was finding a correlation with how thawed a sample was, and that was correlating with whether it was positive. Some of this really comes back to what you learn at the science fair: putting together a robust scientific method, handling all of your variables carefully and clearly, knowing what's going into your model, and controlling for that as much as possible. So yeah, that Facebook method can be really valuable in a lot of cases, to suss out correlations you may just not know are there.

Grant Yeah, and what's cool is they open sourced it, right? I think it's called SwAV. They figured that out and made it open source, so the larger community can use something like this to help deal with the bias challenge. Interesting.
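A minimal sketch of that distortion idea in plain Python (illustrative only, not the actual SwAV/SEER method): randomly permuting the color channels during training scrambles a chance color-label correlation like the substrate one, so the model can't lean on it.

```python
import random

def shuffle_channels(pixels, rng):
    """Randomly permute the RGB channels of one sample so a chance
    correlation between substrate color and label can't be learned.
    `pixels` is a list of (r, g, b) tuples."""
    order = [0, 1, 2]
    rng.shuffle(order)
    return [(px[order[0]], px[order[1]], px[order[2]]) for px in pixels]

# Two pixels from a purple-substrate sample; after distortion the color
# cue varies independently of the bacteria label:
purple_sample = [(120, 40, 160), (118, 42, 158)]
distorted = shuffle_channels(purple_sample, random.Random(42))
```

In a real pipeline this would be one of several augmentations applied per training batch, alongside crops and flips.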
Okay, that's cool. Alright, I really wanted to ask your thoughts on that approach, so I'm glad to hear you validate it.

Elizabeth Yeah, it's great. There really has to be a process, especially with a model like that, where you try to break it in any possible way you can. There has to be a whole separate process where you think through every variable there could be, and if there's a method that handles so many of them out of the box, that's a great place to start.

Grant Awesome. Okay. And then the last category, around ethical violations: any thoughts on addressing and overcoming that?

Elizabeth You know, I think that really comes down to: when you need permission to be doing something, make sure you're getting it. Obviously that applies in cases involving facial recognition, making sure people know that it's going on, similar to being videotaped anywhere else. That one's fairly straightforward, but when you're putting together your ethics position, you need to make sure you remember that it's there, and check every single time that you don't have an issue.

Grant Yeah, permissions. And there's this notion, I'll coin a term, of permission creep, or scope creep: you may have gotten permission to do this part of it, but you find yourself also using the data over here to solve other problems, and that's a problem in some people's minds for sure. Various articles and people out there talk about that part of it creeping along. How do you help ensure the data I gave you is used only for its permitted, intended purpose?
That's a challenge, for sure. Okay, you've been more than fair with your time here today, Elizabeth. Any conclusions? What's the top-secret answer to overcoming the four pitfalls of AI ethics?

Elizabeth One thing I have to add: we would be remiss if we talked about data bias without talking about data diversity and data balance. The simple example there is fruit: if you have a dataset with seven apples, one banana, and seven oranges, the model is going to be worse at detecting the banana. A more real-world example happens in hospitals. In the healthcare system in general, we have a problem with being able to share data, even anonymized data. So when a hospital is building a model, there have been problems with bias in the dataset. In a certain location, if you're coming in with a cough, it may be most likely that you have a cold, but in another area it may be more accurate to start evaluating for asthma. So if you take a model built at one hospital and try to apply it elsewhere, that's a place where bias can come up.

Grant Is that kind of like a form of confirmation bias? You have the same symptom, but you come into two different parts of the hospital: this person's coughing and you're in the respiratory area, so they immediately think it's one thing, but in another part of the hospital a cough is a symptom of something else, so that's what they think you have.

Elizabeth That's a great point. It really is sort of the machine learning version of that.

Grant Yeah, that's right. It's a confirmation-bias sort of view.
It's like, how many variables does it take for you to actually have true confirmation? With this example from Facebook, a billion. But how many do you really need?

Elizabeth I think it's less about the number of variables and more about your data balance, and making sure you're training on the same kind of data that's going to be used in production. It's less of a problem if you're only deploying that model at one hospital, but if you want to deploy it elsewhere, you need data from everywhere, or at least from wherever you're planning to deploy it. So again, it comes back to data balance, and making sure your test data and your production data are in line.

Grant Are there any of these ethical biases we've talked about that are not solvable?

Elizabeth That's a good question. I think there are definitely some that can be really hard. Something you touched on: are supervised models inherently more biased than unsupervised ones? The answer there is probably yes, because a human is explicitly teaching the model what's important in that image. That can be exactly what you're looking for, making sure there's not a safety issue or whatever it is, but it's a human process, so there can be things you don't catch.

Grant Yeah, that's been a question on my mind for a while, the implicit impact of bias on supervised versus unsupervised. I work with another group called Aible; have you run into Aible? They're one of the AutoML providers out there, more on the predictive analytics side of AI.
They're not doing anything with computer vision, but they have this capability, always with supervised data, where the problem they're trying to solve is: okay, you've got a lot of data, just give me tone, give me signal. In other words, before I spend too much time training and guiding the model, do a quick look into that dataset and tell me whether there's any tone or signal where these particular supervised elements can draw an early correlation to an outcome or a predictive capability. The idea is that as the world of data keeps getting larger and larger, our time as humans doesn't, so we need to reduce the total set of stuff we're looking at, dismiss the pieces that are irrelevant to being predictive, and focus on the things that are important. Is there anything like that in the computer vision world?

Elizabeth So unsupervised learning is less common in computer vision. But one of the things that can happen is that the data that exists in the world is itself biased. An example: say you want to predict what a human might do at any one time, you want an unsupervised method for that, and you decide to scrape the internet for videos. The videos people upload to YouTube are inherently biased. If you look at security camera videos, a lot of them are fights, because that's what humans find interesting enough to upload from a security video. So there are places where your dataset is inherently biased just because we're human. It's another place where you have to be pretty careful.

Grant Yeah.
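Both the skewed-video problem Elizabeth describes and her earlier fruit example come back to class balance, which is at least measurable. A quick sketch of inverse-frequency class weights, a common remedy for imbalance (this is standard practice, not something prescribed in the episode):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rare classes get larger weights so the
    loss doesn't learn to ignore the single banana among the apples and
    oranges."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

labels = ["apple"] * 7 + ["banana"] * 1 + ["orange"] * 7
weights = class_weights(labels)
# {'apple': 0.714..., 'banana': 5.0, 'orange': 0.714...}
```

Most training frameworks accept weights like these directly (for example, a per-class weight tensor in a cross-entropy loss), so the banana contributes as much to the gradient as the seven apples.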
Okay, so it sounds like these problems are, I'm doing air quotes here, "solvable," but it takes some discipline and rigor.

Elizabeth Yeah. And it's just so important for organizations to sit down and really think through their ethical use of AI and how they're going to approach it, get a policy together, and make sure they're really living those policies.

Grant Excellent. Okay, Elizabeth, thank you for your time today. Any final comments? Any parting shots?

Elizabeth No, I appreciate you having me on. That was a really fun conversation, and I always enjoy chatting with you.

Grant Likewise, Elizabeth, thank you for your time. Thank you everyone for joining this episode. Until next time, get some ethics for your AI. Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook; visit ClickAIRadio.com now.

Financial Investing Radio
FIR 143: Overcoming The 4 Pitfalls Of AI Ethics !!

Financial Investing Radio

Play Episode Listen Later Jan 29, 2022 36:12


Grant Welcome everybody. In this episode, we take a look at the four pitfalls of AI ethics, and ask: are they solvable? Okay, hey, everybody. Welcome to another episode of ClickAI Radio. So glad to have Plainsight AI in the house today. What a privilege: Elizabeth Spears is with us here today. Hi, Elizabeth.

Elizabeth Hey, Grant. Thanks for having me back.

Grant Thanks for coming back. You know, when we were talking last time, you threw out this wonderful topic around the pitfalls of AI ethics. It's such a common drop phrase; everyone's like, oh, there are ethics issues around AI, let's shy away from it, therefore it's got a problem, right? And I loved how you came back after our episode and, metaphorically, pulled me aside in the hallway: "Grant, let's do a topic on the pitfalls around some of these ethical issues." You hooked me; I was like, oh, perfect, that's a wonderful idea.

Elizabeth I think there are so many high-level conversations about ethics and AI, but I feel like we don't dig into the details very often of when these problems happen and how to deal with them: like you said, the common pitfalls.

Grant It is. And, you know, it's interesting: in the AI world in particular, it seems like so many of the ethical arguments come up around the imaging side of AI, ways in which people have misused or abused AI, either for bad use cases or other secretive or bad approaches. So you are the perfect person to talk about this and cast a dagger into the heart of some of these mythical ethical things, or maybe not, right? Alright, so let's talk through some of these common pitfalls. There were four areas you and I bantered about. You came back and said, okay, let's talk about bias.
Let's talk about inaccuracy in models, a bit about fraud, and then something around legal or ethical consent violations. Those were the four we started with; we don't have to stay on those. But let's tee up bias. Let's talk about ethical problems around bias.

Elizabeth So there are really several types of bias, and often bias and inaccuracy can conflate, because they can cause each other. I have examples of both, and some where it's really bias and inaccuracy happening together. One type is not modeling your problem correctly, or modeling it too simply. I'll start with the example. Say you want to detect safety in a crosswalk, a relatively simple kind of thing, and you want to make sure that no one is sitting in the crosswalk, because that would generally be a problem. So you do body pose detection, and if you aren't thinking about this problem holistically, you say, alright, I'm going to detect sitting versus standing. Now the problem with that is: what about a person in a wheelchair? You would be detecting a perceived problem, because you think someone is sitting in the middle of the crosswalk. It's really about accurately defining the problem, and then making sure that's reflected in your labeling process. And that flows into another whole set of problems, which is when your test data and your labeling process are a mismatch with your production environment. So one of the things we really encourage for our customers is collecting production data, or data as close to production as possible, the data you'll actually be running your models on, instead of having these very different test datasets that you then deploy into production. There can be these mismatches.
And sometimes that's a really difficult thing to accomplish.

Grant Yeah, so I was going to ask you about that. In the world of generative AI, where that's becoming more and more of a thing, there's an appetite for generating or producing that test data. I've heard some argue, wait, generative AI actually helps me overcome and avoid some of the bias issues, but it sounds like you might be proposing just the opposite.

Elizabeth It actually works both ways. Creating synthetic data can really help when you want to avoid data bias and you don't have enough production data to do that well, and you can do it in a number of different ways. Data augmentation is one: taking your original data and, say, flipping it, or changing the colors in it, and so on, so you take an original dataset and make it more diverse, covering more cases than it would originally, to make your model more robust. Another way is synthetic data creation. An example there: you have a 3D environment in one of these game-engine type tools, like Unreal or Blender, there are a few, and say you want to detect something that's usually in a residential setting. You can build a whole environment of different housing types, and it would be really hard to get that data without generating it, because you don't have cameras in everybody's houses. In those cases, what we encourage is pilots: before really deploying this thing and letting it loose in the world, you use that synthetic data, but then you make sure you're piloting it in your real-world setting as long as possible, to suss out any issues you might come across.
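The augmentations Elizabeth mentions, flipping and color or brightness changes, are simple transforms. A dependency-free sketch on a grayscale pixel grid (illustrative; most teams would use a library such as torchvision for this):

```python
def hflip(img):
    """Horizontal flip: reverse each row of pixels."""
    return [row[::-1] for row in img]

def brighten(img, delta):
    """Shift brightness by `delta`, clamped to the 0-255 range."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

# One original frame becomes three training samples:
img = [[10, 20, 30],
       [40, 50, 60]]
augmented = [img, hflip(img), brighten(img, 40)]
```

Each transform preserves the label while varying the input, which is exactly how augmentation stretches a small dataset toward more robust coverage.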
Grant So let's go back to that first example you shared, where you've got the crosswalk and the pedestrians, and now you need to make sure you've got different poses, like you said, someone sitting down on the road or lying in the road. Certainly you can use generative AI to create different postures. But what about the introduction of something brand new, such as, like you said, the wheelchair or some other foreign object? Is generative AI going to help you solve for that, or do you need to lead it a bit?

Elizabeth It absolutely can. It's basically anything you can model in a 3D environment, and you can definitely model someone in a wheelchair in a 3D environment. Tesla uses this method really often, because it's hard to have real data from every kind of crash scenario. They're trying to model their problem as robustly as possible, so in some of those cases they say, all of these types of things could happen, let's get more data around that, and the most efficient, and sometimes the only possible, way of doing that is with synthetic data.

Grant Awesome. Okay. So that's a key approach for addressing the bias problem. Are there any other techniques besides this generative training-data approach? What else could you use to overcome bias?

Elizabeth Yeah. So another type is when you have, like I was saying, a mismatch between test and production data. A lot of people, even computer vision people, sometimes don't know how much this matters with things like live video. In those cases bitrate matters, FPS matters, your resizing algorithm and your image encoding matter.
And so you'll have, in many cases, you're collecting data in the first place for your test data differently than it's going to run in production. And people can forget about that. And so this is a place where, you know, having a platform like plain sight, can really help because that process is standardized, right? So the way you're pulling in that data, that is the same data that you're labeling, and it's the same data that you're, then you know, inferencing on, because you're pulling live data from those cameras, and it's all it's all managed in one place and to end. So that's, that's another strategy. And another thing that happens is when there are researchers that will be working on a model for like, two years, right, and they have this corpus of test data, but something happens in the meantime, right? So it's like, phone imaging has advanced in those in that time, so then your your input is a little different, or like the factory that they were trying to model, the the floor layout changed, right. And they didn't totally realize that the model had somewhat memorized that floor layout. And so you'll get these problems where you have this, you know, what you think is a really robust model, you drop it into production, and you don't know you have a problem until you drop it into production. So that's another reason that we really emphasize having pilots, and then also having a lot of different perspectives on vetting those pilots, right. So you, ideally, you can find a subject matter expert in the area outside of your company to, you know, take take a look at your data and what's coming out of it. And you have kind of a group of people really thinking deeply about, you know, the consistency of your data, how you're modeling your problem, and making sure that kind of all of those, all of those things are covered? 
Grant Well, in reducing cycle time from this initial set of training, to, to sort of validation of that pilot is crucial to this because as you're pointing out, even even if you even if you keep that cycle time short, and you do lots of iterations on it, some assumptions may change. How do you help? How to me what's the techniques for, you know, keeping someone looking at those assumptions? Like you said, maybe it's a change in camera phone technology, or it's a change of the layout? Like I said, as technology people, Einsteins we get so focused on oh, we're just pushing towards the solution, we sort of forget that part. How do you how do you get someone? Is that just a cultural thing? Is it a AI engineering thing, that someone's got a, you know, a role in the process? To do that? Elizabeth I think it's both. So I think the first thing is organizations really need to think deeply about their process for computer vision and AI. Right. And, and some of the things that I've already mentioned, need to be part of that process, right? So you want to research your users in advance, or your use cases in advance and try to think through that full Problem Set holistically. You want to you want to be really, really clear about your labeling, right? So you can introduce bias, just through your labeling process if humans themselves are introducing it, right? Exactly. If you have some people labeling something a little bit differently than other people. So like on the edge of an image, if you have a person on the edge, do you count that as a person? Or is it or you know, or as another person? Or is it not counted? How far in the view do they have to be? So there's, there's all a lot of gray area where you really just need to be very familiar with your data. And, and be really clear, as a company on how you're going to process that. 
Grant So this labeling boundaries, but then backing up, there's the label ontology or taxonomy itself, right, which is, yeah, that itself could just be introducing bias also, right. Elizabeth Yeah. And then back to kind of what we're saying about how to ensure how to really think through some of these problems, is you can also make sure that that as a as a company, you have a process where you, you have multi passes, multiple passes on, on that annotated data, and then multiple passes on the actual inference data, right. So you have a process where you're really checking. Another thing that we've talked about internally, recently is you know, we have a pipeline for deploying your computer vision. And one of the things that can be really, really important in a lot of these cases is making sure that there is a human in the loop that there is some human supervision. To make sure that you're, you're, again, you weren't servicing bias that you didn't under your you didn't anticipate, or your your model hasn't drifted over time, things like that. And so something we've considered is being able to kick off just in that process, have it built in that you can kick off a human, like a task for a person, right? So it's, it's just built in. Grant And so it no matter what you do that thing is this, it's just as a governance function, is that what you're getting? Elizabeth Kind of so it's like, it's like a processing pipeline for your data. And, and so you can have things like, Alright, at this step, I'm gonna augment my data, and at this step, I'm gonna, you know, run an inference on it, or flip it or whatever it is, right? And so, in that you could make sure that you kick off a task for a human to check, right, or, or whatever the case may be. Yep. Yep. So there's several good, so good process maturity, is another technique for how do we help overcome bias as well as inaccurate models? And I'm assuming you're, you're almost bundling both of those into that right? 
Elizabeth: Yeah, both. And like you said, another way is reducing that cycle time, and also making sure you're working on production data whenever possible. This is where the platform can help as well. Instead of being off in research land without production data for two years, you have a system that makes it really easy to connect cameras and work on real production data. Then two things happen: you reduce the time it takes to go full circle on labeling, training, and testing, and you have it all in one place. That's one of the problems we solve, because in many cases computer vision engineers or data scientists don't have the full pipeline to work on the problem. They have a test dataset, and they're working on it somewhat separately from where it will be deployed in production. We try to join those two things.

Grant: Yeah, I think that's one of the real strengths of the Plainsight platform: this reduction of the cycle, so I can actually test and validate against production scenarios, take that feedback, and then augment it with the great governance processes you talked about. Both of those are critical. Now let's talk about fraud. In computer vision, holy smokes, fraud has probably been one of the key areas the bad guys have gone after. What can you do to overcome this and deal with it?

Elizabeth: It can really become a cat and mouse game. And I think the conversation about fraud boils down to: is it better than the alternative?
Just because there could be some fraud in the computer vision solution, that doesn't tell you whether there would be more or less fraud in another solution. So the example is: you used to be able to, and I think with some phones you still can, 3D print a face to defraud the facial detection that unlocks your phone. They've since made a lot of advancements so this is harder to do, like liveness detection, which I think uses your eyes, but you could still use a mask. So again, it's this cat and mouse game. Another place: there are models that can understand text to speech, and there are models you can put on top of that to make the speech sound like other voices. The big category here is deepfakes. You can make your voice sound like someone else's, and there are banks and other institutions that use voice as a method of authentication.

Grant: Right. I'm sure we've all seen the Google Duplex demos from a few years ago, and that technology obviously continues to mature.

Elizabeth: Exactly. So then the question is: if I can 3D print a face or a mask and unlock someone's phone, is that actually harder than someone just finding my four-to-six-digit numerical code to unlock it? It really becomes a balance of which thing is harder to defraud. And with fraud in general, if you think about cybersecurity and everywhere else you're trying to combat this, it's a cat and mouse game. People figure out the vulnerabilities in what exists, and then people have to get better at defending it.
Grant: So the argument, if I can say it back, is: yes, fraud exists here, but how is this different from so many other technologies or techniques where fraudsters are trying to break in? This is just part of the business today. That's where it is, right?

Elizabeth: Yeah. I think it becomes an evaluation of whether it causes more or less of a fraud problem, and then it's really just about evaluating the use of technology on an even plane. It's not "should you use AI, because it causes fraud?" It's "should you use any particular method or technology, given there's a fraud issue, and which one is going to cause the least fraud for a specific use case?"

Grant: Yep. Okay, so that's fraud. You and I had talked about some potential techniques out there, like the Facebook Instagram vision model, I think it's called SEER, which came out not too long ago. It's an ultra-large vision model that takes in more than a billion parameters, if you can believe that. That's massive. I've built some AI models, but not with a billion. That's incredible. Are you familiar with that? Have you looked into SEER at all?

Elizabeth: Yeah. So this is basically a method where you try to address bias through distortion of images. I can give you a good example of something we've actually worked on; I'm going to change the case a little bit to anonymize it. In a lab setting, we were working on some special imaging to detect whether or not a bacterium was present in samples. We were collecting samples from many labs across the country, and one thing that could differ between them was the color of the substrate the sample was sitting in, essentially a preservative. There were a few different colors in use.
And those colors were used fairly widely, so it wasn't generally thought this would be a problem. The model was built, all the data was processed, and accuracy was really high. But what they found was a correlation between the substrate color and whether the bacteria was present, just a chance correlation. If you had something like that image distortion, so the color is automatically removed or scrambled, that would have taken the bias out of the model. Then a second thing happened: when the people in the lab took the samples out of the freezer, they took all of them at once, in order, and processed all of the positives first and all of the negatives second. Machine learning is just a really amazing pattern detector; that is what it is. So the model was finding a correlation between how thawed a sample was and whether it was positive. A lot of this really comes back to what you learn at the science fair: putting together a robust scientific method, handling all of your variables carefully and clearly, knowing what's going into your model, and controlling for that as much as possible. So yes, that Facebook method can be really valuable in a lot of cases for sussing out correlations you may not know are there.

Grant: Yeah, and I think what's cool is they open sourced it, I think it's called SwAV. They figured that out and made it open source, so the larger community has something like this to help deal with the bias challenge. Interesting.
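The image-distortion idea discussed above, removing color so a chance correlation like the substrate hue cannot leak into the model, can be sketched as a simple grayscale augmentation. This is an illustrative stand-in, not the actual SwAV/SEER method; images are represented as nested `[row][col][r, g, b]` lists to keep the example dependency-free, where a real pipeline would use an augmentation library.

```python
# Sketch: convert RGB pixels to grayscale before training, so hue (like the
# substrate color in the lab example) can no longer leak label information.

def to_grayscale(image):
    """Replace each RGB pixel with its luminance, duplicated across channels."""
    gray = []
    for row in image:
        gray_row = []
        for r, g, b in row:
            # Standard luminance weights (ITU-R BT.601).
            y = round(0.299 * r + 0.587 * g + 0.114 * b)
            gray_row.append([y, y, y])
        gray.append(gray_row)
    return gray

# After this transform, only brightness remains; hue is gone.
gray = to_grayscale([[[200, 0, 0], [0, 0, 200]]])
```

A stronger variant would randomly jitter or drop color during training rather than always discarding it, which is closer to the augmentation style used in self-supervised methods.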
Okay, that's cool. I really wanted to ask your thoughts on that approach, so I'm glad to hear you validate it.

Elizabeth: Yeah, it's great. There really has to be a process, especially with a model like that, where you try to break it in any possible way you can: a whole separate process where you think through every variable there could be. So if a model handles so much of that out of the box, that's a really great place to start.

Grant: Awesome. Okay, and then the last category here, around ethical violations: any thoughts on addressing and overcoming that?

Elizabeth: I think that really comes down to this: when you need permission to be doing something, make sure you're getting it. That obviously comes up in cases involving facial recognition, making sure people know it's going on, similar to being videotaped at all. That one's fairly straightforward. But when you're putting together your ethics position, you need to make sure you remember that it's there, and check every single time that you don't have an issue.

Grant: Yeah, permissions. And there's this notion, I'll come up with a term, that feels like permission creep. It's scope creep, right? You may have gotten permission to do this part of it, but you find yourself also using the data over here to solve other problems, and that's a problem in some people's minds for sure. Various articles and people out there talk about that part of it creeping along. How do you help ensure that what I gave you the data for, and what we're using it for,
is just its permitted, intended purpose? That's a challenge for sure. Okay, you've been more than fair with your time here today, Elizabeth. Any conclusions? What's the top-secret answer to overcoming these four pitfalls of AI ethics?

Elizabeth: One thing I have to add: we would be remiss if we talked about data bias without talking about data diversity and data balance. The simple example there is fruit. If you have a dataset with seven apples, one banana, and seven oranges, the model is going to be worse at detecting the banana. But a more real-world example happens in hospitals. In the healthcare system in general, we have a problem with being able to share data, even anonymized data. So when a hospital is building a model, there have been problems with bias in the dataset. In a certain location, if you come in with a cough, it may be most likely that you have a cold, but in another area it may be more accurate to start evaluating for asthma.

Grant: So that kind of thing can come up: if you take a model built at one hospital and try to apply it elsewhere, that's another place bias can appear. Is that kind of like a form of confirmation bias? You have the same symptom, but you come into two different parts of the hospital. This person's coughing and you're in the respiratory area, so they immediately think it's one thing; in another part of the hospital, a cough is a symptom of something else, so that's what they think you have.

Elizabeth: That's a great point. It really is sort of the machine learning version of that.

Grant: Yeah, that's right. It's a confirmation bias sort of view.
It's like, "oh, this confirms it." But how many variables does it take for you to actually have true confirmation? With the Facebook example, a billion. But how many do you need?

Elizabeth: I think it's really less about the number of variables and more about your data balance, and making sure you're training on the same kind of data that's going to be used in production. It's less of a problem if you're only deploying that model at one hospital. But if you want to deploy it elsewhere, you need data from everywhere, or wherever you're planning to deploy it. So again, it comes back to data balance and making sure your test data and your production data are in line.

Grant: Are there any of these ethical biases we've talked about that are not solvable?

Elizabeth: That's a good question. I think there are definitely some that can be really hard. Something we touched on: you asked whether supervised models are inherently more biased than unsupervised ones, and the answer there is probably yes, because a human is explicitly teaching the model what's important in that image. That can be exactly what you're looking for, like making sure there's not a safety issue, or whatever it is. But it's a human process, so there can be things you don't catch.

Grant: Yeah, that's been a question on my mind for a while: the implicit impact of bias on supervised versus unsupervised. I work with another group called Aible; have you run into Aible? They're one of the AutoML providers out there, more on the predictive analytics side of AI.
They're not doing anything with computer vision, but they have this capability where the problem they're trying to solve is: okay, you've got a lot of data, just give me tone, give me signal. In other words, before I spend too much time training and guiding the model, do a quick look into that dataset and tell me whether there's any tone or signal where these particular supervised elements can draw an early correlation to an outcome or predictive capability. The idea is that as the world of data keeps getting larger, our time as humans doesn't, so we need to reduce the total set of stuff we're looking at, dismiss the pieces that are irrelevant to being predictive, and focus on the things that are important. Is there anything like that in the computer vision world?

Elizabeth: So, unsupervised learning is less common in computer vision. But one of the things that can happen is that the data that exists in the world is itself biased. An example: say you want to predict what a human might do at any one time, you want to use an unsupervised method, and you decide to scrape the internet for videos. The videos people upload to YouTube are inherently biased. If you look at security camera videos, they're almost all fights. Well, not almost all, but a lot, because that's what humans think is interesting enough to upload from a security video. So there are places where your dataset is inherently biased just because we're human. It's another place you have to be pretty careful.

Grant: Yeah.
Okay, so it sounds like these problems are, I'm doing air quotes, "solvable," but it takes some discipline and rigor.

Elizabeth: Yeah. And it's just so important for organizations to sit down and really think through their ethical use of AI, how they're going to approach it, get a policy together, and make sure they're really living those policies.

Grant: Excellent. Elizabeth, thank you for your time today. Any final comments, any parting shots?

Elizabeth: No, I just appreciate you having me on. That was a really fun conversation, and I always enjoy chatting with you.

Grant: Likewise, Elizabeth, thank you for your time. Thank you everyone for joining this episode. Until next time, get some ethics for your AI.

Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook; visit ClickAIRadio.com now.

Financial Investing Radio
FIR 137: Interview - How AI Ethics Affects Your Business !!
Dec 14, 2021 · 36:48
Welcome to ClickAI Radio. In this episode I have a conversation with some AI experts on how AI ethics affects your business.

Grant: Okay, welcome, everybody, to another episode of ClickAI Radio. In the house today I have a return visitor, very excited to have him, and a brand new person. I'm excited to introduce you to Carlos Anchia. I've been practicing that; did I get it right, Carlos?

Carlos: Sounds great. Good to see you again.

Grant: Even old dogs can learn new tricks. There we go. And Elizabeth Spears. Now I got that one easily, right, Elizabeth?

Elizabeth: You did. Yeah, I'm really happy to be here.

Grant: This is exciting for me to have them both here today. They're co-founders of Plainsight AI. A few episodes ago I had an opportunity to speak with Carlos, and he laid the foundation around the origin story of AI for vision, some of the techniques, and the problems they're solving, and I started to nerd out on all of the benefits. In fact, Carlos, I need to tell you: since our last conversation I actually circled back to your team and had a demo of what you're doing, and it was very impressive. For a guy like me who has coded this stuff, it was like, wow, you just took a lot of pain out of the process. One pain point it removed was the reduction in time, how long it would take me to cycle another model through. That was incredible. I can't remember the exact quantification, but it was at least a 50 percent, maybe even 80 percent, reduction in cycle time. There are model versioning techniques, and another really cool technique I saw: the ability to augment, or approximate, test data. Without creating more test data, it could approximate and create it for you.
So your whole testing side just got a lot easier, without building up massive test cases and test bases. A very impressive product set. And Elizabeth, can you explain your role?

Elizabeth: That's right, Chief Product Officer. So the strategy around what we build, how we build it, and the order in which we build it is under my purview.

Grant: Okay, very good. Awesome. It's great to have both of you here today. After I spoke with Carlos last time, once we finished the recording, I said: you know what, I want to talk to you about AI ethics. As you heard in my previous podcast, I laid the foundation for this conversation. These aren't the only areas of ethics around AI, but they're a place to start, and we want to build on them. We're going to talk about four or five different areas just to begin the conversation, and I think this could translate into other conversations as well. To do that, could one or both of you spend a little time laying the foundation: what is AI ethics as it relates to computer vision? What are some of the challenges, problems, or misunderstandings you see in this specific area of AI?

Carlos: Sure, I can take that one. When we're talking about ethics, with any sort of technology, we're talking about how that technology is implemented, how it's used, and what's acceptable. In this case the technology is computer vision and artificial intelligence and how those things go into society, and it's really through its intended use that we evaluate the technology. I think computer vision continues to provide value in getting us through this digital transformation, as a technology.
And once we start with "yes, this is a valuable technology," the conversation shifts to how we use that technology, for good and in some cases for bad. This is where the conversation arises around having the space to share what we believe are the right or wrong uses. It's a very gray area: when we try to judge technology, and advancement in technology, against a black-and-white good-or-bad standard, we get into a lot of issues, and there's a lot of controversy around some of these things. As we started discussing after the last podcast, I said I really need to have a good podcast around this, because there's a lot to it. You clearly said there was a previous one, and now there's this one; I hope there's a series of these so we can continue to have a free conversation around ethics in artificial intelligence. But what I'm trying to do is set the context: technology can work great purely as applied science. Think of something super controversial like facial recognition. Absolutely, I don't want people analyzing my face when I'm standing on a corner. But in a child abduction case? Yes, please use all the facial recognition you can; I want that to succeed really well. We've learned that the technology works. So it's not the solution itself, it's how we're applying that solution, and there's a lot of nuance to that. Elizabeth can shed a little light here, because this is something we evaluate on a constant basis and have really open discussions about.
Grant: Yeah, I would imagine that as you take your platform into your customer base, you have to understand what their use cases are, and at times you might have to give a little guidance on the best way to apply it. What have you seen with this, Elizabeth?

Elizabeth: It's interesting. As Carlos is saying, a lot of the themes for evaluating ethics in technology in general are the same ones that come up when AI is applied. Things like fraud; bias, which can be more uniquely AI but absolutely exists in other technologies; inaccuracy and how that comes up in AI; and then things like consent and privacy. A lot of the themes are really similar, and we can talk about how AI applies to each of them. One of the things we try to do for our customers, especially your listener base of small and medium businesses, is take a lot of that complexity out. They're thinking, "I just want to solve this one problem with AI; what are all of these concerns I may or may not know about?" So we try to build things into the platform that address them. Bias, for example, usually comes down to data balance. If we provide tools that really clearly show your data balance, it helps people make unbiased models and be confident they're using AI ethically.

Grant: I'm sure you're aware of the Harrisburg University case in Pennsylvania, where they ended up using AI to predict criminality from image processing. Of course that failed: looking at an image of someone and saying "that person is a criminal" or "that person is not a criminal."
That's using some powerful technology, but in ways that, of course, have some strong problems around them. How do you help prevent something like this, or guide people to use this kind of technology in ways that are beneficial?

Elizabeth: What's interesting about this one is that the same technology that causes the problem can also help solve it. When you're looking at your corpus of data, you can use AI to find places where you have data imbalance. And just to re-explain what happened in that case: they had a data imbalance, so the model was misidentifying the races it had less data for. A less controversial example is fruit. If we have a dataset with 20 oranges, two bananas, and 20 apples, it's just going to be worse at identifying bananas. So one thing that can be done is to apply AI to automatically look at your data balance and surface those issues: "hey, you have less of this thing, you probably want to label more of it."

Grant: So, manage the dataset better in terms of proper representation. Finding bias is a real challenge for organizations, and I think one of the things your platform enables is taking away the pain of all that machinery, freeing up an organization's time to be more rigorous about evaluating for bias, to finally take the time to do those kinds of things. Customers might be able to improve in that area. Would that be a fair takeaway?

Elizabeth: Yeah, it's something we're really passionate about providing tools for, and we're prioritizing those tools.
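The fruit example above can be made concrete with a quick balance check over the label counts. This is a minimal sketch, not the platform's actual tooling; the flagging threshold (half the mean class count) is an assumption.

```python
# Sketch: flag under-represented classes in a labeled dataset,
# like the two bananas among 20 oranges and 20 apples.
from collections import Counter

def find_underrepresented(labels, ratio=0.5):
    """Flag classes with fewer than `ratio` times the mean class count."""
    counts = Counter(labels)
    mean = sum(counts.values()) / len(counts)
    return sorted(cls for cls, n in counts.items() if n < ratio * mean)

dataset = ["orange"] * 20 + ["banana"] * 2 + ["apple"] * 20
flagged = find_underrepresented(dataset)  # ["banana"]: 2 labels vs a mean of 14
```

A tool like this would then prompt the user to collect or label more of the flagged classes before training.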
The other thing that has to do with your data is finding inaccuracies in your models. One example is X-ray machines. There was an inaccuracy in a disease detection model: it was finding a correlation simply with whether the X-ray was taken on a mobile machine versus in a hospital. These models are, in many cases, really just very strong pattern detectors. So one thing that can really help prevent something like that is making it easy to slice and dice your data in as many ways as possible, run models that way, and make sure you aren't finding the same correlation, or the same sort of accuracy, with a different data set or a different run of the model. In other words, you'd run all of the portable X-ray machines versus all of the hospital ones, and see whether you get the same correlation as you do with cancer versus not cancer, or whatever they were looking for.

Grant: A quick question for you on this. In my experience with AI, I've found two things to consider. One: the questions I'm trying to get answered guide how I prepare the model. I'll lean toward certain things if I want to know, obviously, that this is a banana or an apple. The kind of question I want answered leads me to how I prepare the model, which leads me to the data I select. And so the question is: should I spend the time really putting together a strong set of questions? Or, rather than do that, should I just gather my data and build a model,
and then try to answer some questions out of that? You see what I'm saying? That way, maybe I'm not going to introduce any bias into it.

Elizabeth: We encourage a very clear understanding of the questions you want to answer, because it helps you do a few things. It helps you craft a model that's really going to answer that question, as opposed to accidentally answering some other question. But it also helps you right-size the technology. For example, if you're trying to answer the question of how many people are entering this building, because you want to understand occupancy limits or COVID restrictions or whatever it is, that solution doesn't need facial recognition. To answer that question, you don't need lots of other technologies included. So defining those questions ahead of time can really help toward a more ethical use of the technology.

Grant: So one of the first jobs we'd have a small or medium business do is get clarity around those questions, which actually helps take some of the bias out. Is that a fair takeaway from what you shared?

Elizabeth: Exactly: the questions you're trying to answer. And the questions you are not trying to answer can also be helpful.

Grant: Oh, very good. The opposite of that as well. Alright. While we could keep talking about bias, let's switch to something that comes right out of the movie I, Robot: robot rights. Is this a fluke, or is it for real? Is there really an ethical thing to worry about here? What are your thoughts?
Elizabeth: In most of the cases I've seen, it really comes down to property, treating property correctly. Like, don't kick the robots, because they're private property. So it's not really about robot rights so much as some already-established rules, and for the most part I see this as a Hollywood problem more than a practical problem.

Grant: Maybe it makes for good Will Smith movies, but other than that, fighting for robot rights seems way out there in terms of connection to reality. Okay, so we can tell our listeners: don't worry about that for right now. Did you want to add something, Carlos?

Carlos: Just an interesting point on robot rights. While I think robot rights are far in the future, we are seeing a little bit of this today. At Tesla AI Day, when they came out, they decided the robot shouldn't run too fast and shouldn't be too strong. It's interesting that we're also protecting the human race from ourselves building AI and robots for bad, in this case. So it's on both sides of that coin, and those are product decisions that were made around "let's make sure we can outrun that thing later." As we continue to explore robots and AI, and the use of them together, this topic will be important, but I think it's far, far away.

Grant: I'm wondering if that blends into the next ethical subtopic we'll talk about, which is the threat to human dignity. It might even cross into that a little bit: are we developing AI in a way that's going to help protect the dignity of humans? Certainly in healthcare situations that becomes important, right?
You probably heard that on a previous podcast I played a little snippet from Google's Duplex technology. That was three-year-old technology, and those people had no idea they were talking and interacting with AI. So there's that aspect. Where's the line on this? When does someone need to know that what they're interacting with is actually not human? And does this mean there's a deeper problem we're trying to solve in the industry, which is identity? We've got to create a way to know what it is we're interacting with, and have strong identity. Can you speak to that?

Elizabeth: I think there are two things that come into play here. The first is transparency, and the second is consent. In this case it really comes down to transparency. It would be very simple, in that example, for the bot to say, "Hey, I'm a bot calling on behalf of Grant Larsen, and I'm trying to schedule a hair appointment," and then go from there. That makes it a much more transparent and easy interaction. So in a lot of cases, really paying attention to transparency and consent can go a long way.

Grant: Yeah, absolutely. That makes a lot of sense. It seems like we can get around some of these pieces fairly simply. Carlos, any other thoughts on that one?

Carlos: The only thing, and it touches on what you two were discussing on the bias piece, is that we're really talking about visibility and introspection into the process. With bias, you have that in place: we can detect when there's a misrepresentation of classes within the model. In some cases there's human bias in there too. But it's having that visibility, and the same goes for the threat to human dignity: with that visibility comes the introspection where you can make those decisions.
You can see more about the problem.

Grant: Yeah. So if we can determine we have a bad actor, or that there's no transparency, that's a way we can help protect the dignity of humans through this. All right, that's reasonable. So let's move on to something that, again, sounds Hollywood-ish, but I'm not sure it is: the weaponization of AI. What are the ethics around this? I'll just throw that one on the table. Carlos, do you want to start with it?

Carlos: Sure. When we talk about AI and its advancements, people quickly go to weaponization. But weaponization really has two different pieces to it. Obviously it depends on which side of the fence you're on as to whether you view the technology as beneficial or detrimental. In some cases, the same AI that helps a pilot navigate also helps a guided missile system, or something like that. So we really have to balance it, and it goes back to use cases and how we, as a people, apply the technology. Weaponization, the rise of the machines, these kinds of questions, while they sound far out, are affecting society today. And we have to be able to have a productive conversation about what we believe is good and bad around this while still allowing the technology to succeed. There are a lot of advancements in weaponization and AI in that space, but I think we have to take it on a case-by-case basis, rather than making a blanket statement that we can't use the technology in these ways.

Grant: Interesting. What are your thoughts, Elizabeth?

Elizabeth: It makes me think of turning the question on its head: when is it unethical not to use AI?
Some of those questions come up when we're talking about weaponization; you can also be talking about saving human lives and making it safer for people to carry out some of these operations. And that same question can come up in some of the medical use cases. Here in the US we have a lot of challenges around being able to use AI in medical use cases. There are some where you can have really good human oversight, you can have reproducibility of the models, and they can be as explainable as possible, but it's still really, really difficult to get FDA approval. So again, I think there are two sides to that coin.

Grant: Yeah, it's an interesting conversation to have, because in that medical case you talked about, you could take the same kind of technology that would be used to identify a human target and attack it, and instead use it in a search-and-rescue scenario, where you're flying something overhead and trying to find images of people who might be lost out there. Same kind of thing. Go ahead, you were going to say something.

Elizabeth: There are even simpler cases in medicine. There's a shortage of radiologists in the US right now, and you can use AI to triage some of that imaging, because at the moment people are, in some cases, waiting a really long time to get their imaging reviewed. So can, and should, AI help there? There's another one along those same lines: with things like CT scans, you can use what's called super-resolution, or denoising the image. Basically, you can use much less radiation in the first place to take the image, and then use AI on top of it to essentially enhance the image.
So again, you're ultimately exposing the patient to less radiation. So yeah, it's pretty interesting when we can and can't use it.

Carlos: And just to add a little to the "can and can't": advancements in drug discovery have largely been driven by AI, and in the same fashion, the weaponization of various drugs has also benefited from AI. So from society's perspective, you really have to evaluate not only the greater good but that ultimate use case: where do you want to make a stance around that piece of technology? Understanding both sides really provides the discussion space that's needed. You have to be able to ask really honest questions about problems you can see coming in the future.

Grant: So is the safeguard, through this whole topic of ethics, basically the moral compass found in humans themselves? Or do we need legislative or policy bodies that pull us together? Or is it a blending? What's your take?

Elizabeth: It's interesting: the UK just came out with a national AI strategy, and they're basically trying to build an entire AI assurance industry. Their approach is to keep it so that you can still be innovative in the space; they don't want it so regulatory that you can't innovate. But they also want to make sure there's consumer trust in AI. So from a national perspective, they're putting together guidelines, tests, and ways to give consumers confidence in whether a model is reproducible, accurate, and so on, while at the same time not stifling innovation, because they know how important AI is to a country's ability to compete and to the GDP opportunities it provides.
Grant: Absolutely. Go ahead, Carlos.

Carlos: To your question of whether, left alone, we would govern ourselves: I think we've proven that we can't do that as a people. So we need to have some sort of regulatory body, and committees, around the review of these things. But it has to be in the light of wanting to provide a better experience, higher quality, and delivered value. When you start with how to get the technology adopted, deployed, and in place in a fashion where society can benefit, you start making your decisions around what the good pieces are, and you really start to see the outliers: hey, wait a second, that doesn't conform to the guidelines we wanted this implemented with.

Elizabeth: And to your question, I think it's also happening at a lot of levels. There's state regulation around privacy and the use of AI and facial recognition; the FDA is putting together some regulation; and then individual companies, like Microsoft and others, have big groups around ethics and how AI should be used by them as a company. So I think it's happening at all levels.

Grant: Yeah, like we said, as a people we need some level of governing bodies around this. Of course, that's never the end-all protection, but it is a step in the right direction to help with monitoring and governance. Okay, so last question, and this is going to sound a little bit tangential, if I can use that word. Given the state of AI today, is it artificial intelligence, or is it augmented intelligence?

Carlos: I can go with that. I think it's a little bit of both. The result is to augment our intelligence: we're really trying to make better decisions.
Some of those decisions are automated and some are not; we're really trying to inform a higher-quality decision. And yes, it's being applied in an artificial intelligence manner, because that's the technology we're applying, but it's really there to augment our lives. We're using it in a variety of use cases, and we've talked about a lot of them here, but there are thousands of AI use cases that we don't even see today. Something as simple as searching on the internet: it helps a lot, from misspellings to when you haven't identified exactly what you want and a recommendation engine comes back and says, I think you're looking for this instead. Absolutely, thanks for saving me the frustration. We're really augmenting life at that point.

Grant: The reason I ask that as part of this ethics piece is something I've noticed as I work with organizations: there's a misunderstanding of how far AI can go and what it can do at times, and therefore a misunderstanding of what one's responsibility is in this. My argument is that it's augmented intelligence in terms of its outcome, and therefore we can't absolve ourselves of the outcomes and pass them off to the AI, saying, oh well, it told me to do this. And in the same breath, we can't absolve ourselves and say we're not responsible for the use cases either, or the way in which we use it. As a human race, we own the responsibility to pick and apply the right use cases, to be able to challenge the AI's insights and outcomes, and to take ownership of what the impacts are. Agree, disagree?

Carlos: Yeah, I'd really agree with that. If you think about how it's implemented right now, in many cases the best use of AI is with human oversight. The AI is maybe making an initial decision, and then a human is reviewing that, or making a judgment call based on that input.
So it's helping human decisioning instead of replacing human decisioning, and I think that's a pretty important guiding principle wherever it may be necessary. There's the Zillow case that happened recently, where they were using machine learning to automatically buy houses, and there was not enough human oversight in that. I think they ended up losing something like $500 million. So it's not really an ethics thing, but it's an example of how, in a lot of these cases, the best scenario is AI paired with human oversight.

Grant: Yeah, go right ahead.

Elizabeth: You mentioned being able to challenge the AI, and that piece is really important in most of these cases, especially the Zillow one you just mentioned. Without the challenging piece, you don't have a path to improvement; you just assume the role and get into deep trouble, like you saw there. But that challenging piece is really where innovation starts. You need to be able to go back and question: is this exactly what I want? And if it's not, how do I change it? That's how we drive innovation in the space.

Grant: And I would say that comes full circle to the platform I saw your organization developing, which is to reduce the time and effort it takes to cycle on that: build the model, get the outcome, evaluate, challenge, make adjustments, without the effort to recast and rebuild the model becoming unaffordable or taking too much time. I need to be able to iterate on that quickly.
And I think as platforms like the one you've developed, and others I've seen, continue to reduce that cycle time, it makes it easier for us to do that from a financially responsible and beneficial perspective.

Elizabeth: A hundred percent. One of the features you mentioned was the versioning, and that really ties into a guiding principle of ethical use as well, which is reproducibility. If you want to use a model, you need to be able to reproduce it reliably, so that you're getting the same kinds of outputs. That versioning feature is one of the things we've put in there to help people comply with that type of regulation.

Grant: I've built enough AI models to know it's tough to go back to a particular version of an AI model and have reproducibility and accountability. There's a whole lot riding on that. That's exceedingly valuable. Okay, any final comments from each of you?

Carlos: From my side, I'm really interested to see where we go as a people with ethics in AI. We've touched on the transparency and visibility required to have these conversations around the ethical use of AI. We're going to start seeing more and more use cases and solutions in our lives where we butt up against these ethical questions, and being able to have an open forum where we can discuss this is really up to us. We have to provide the space to have these conversations, and in some cases arguments, around the use of the technology. I'm really looking forward to what comes out of that, and to how long it takes us to get to the place where we're advancing the technology and addressing issues as we advance it.

Grant: Excellent. Thanks, Carlos. Elizabeth?
Elizabeth: For me, as a product person in particular, I'm really interested in the societal conversation we're having, the regulations that are starting to be put together, and the guidelines coming from larger companies, and from companies like ours that are contributing to this thought leadership. What's really interesting for me is being able to take that larger conversation and knowledge base and distill it down into simple tools for small and medium businesses, so they can feel confident using AI, with these protections just built in, protecting them from making some of these mistakes. So I'm really interested to see how that evolves and how we can productize it to make it simple for people.

Grant: Yeah, bingo, exactly. Okay, everyone, I'd like to thank Carlos and Elizabeth for joining me here today. A wonderful conversation; I enjoyed that a lot. Thanks, everyone, for listening. And until next time, get some AI with your ethics.

Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook: visit ClickAIRadio.com now.
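The reproducibility principle Elizabeth raises, getting the same outputs back from the same model version, is in practice largely about pinning seeds and versions. A tiny illustrative sketch in plain Python, not the platform's actual versioning feature:

```python
import random

def train_like_step(seed):
    """Stand-in for a training run: deterministic given a fixed seed."""
    rng = random.Random(seed)
    return [round(rng.random(), 4) for _ in range(3)]

# Pinning the seed (and, in a real system, the data and code versions)
# makes the run reproducible: the same inputs yield the same outputs.
assert train_like_step(42) == train_like_step(42)
```

The same idea scales up: a versioned model record stores the seed, the dataset snapshot, and the code revision, so any past result can be regenerated and audited.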

ClickAI Radio
CAIR 56: Interview - How AI Ethics Affects Your Business !!

ClickAI Radio

Play Episode Listen Later Dec 14, 2021 36:48


Welcome to ClickAI Radio. In this episode I have a conversation with some AI experts on how AI ethics affects your business.

Grant: Okay, welcome, everybody, to another episode of ClickAI Radio. In the house today I've got a return visitor, very excited to have him, and a brand-new guest I'm excited to introduce you to: Carlos Anchia. I've been practicing that. Did I get it right, Carlos?

Carlos: Sounds great. Good to see you again.

Grant: Even old dogs can learn new tricks. There we go. All right, and Elizabeth Spears. Now, I got that one easily, right, Elizabeth?

Elizabeth: You did. Yeah, I'm really happy to be here.

Grant: This is exciting for me, to have them both here with me today. They are co-founders of Plainsight AI. A few episodes ago I had the opportunity to speak with Carlos, and he laid the foundation around the origin story of AI for vision and some of the techniques and problems they're solving, and I started to nerd out on all of the benefits. In fact, Carlos, I need to tell you: since our last conversation I circled back to your team and had a demo of what you're doing, and it was very impressive. For a guy like me, who has coded this stuff, it was, wow, you just took a lot of pain out of the process. One of the things I saw was the reduction in time, how long it would take me to cycle another model through. That was incredible. I can't remember the exact numbers, but it was at least a 50%, maybe even 80%, reduction in cycle time. There are model versioning techniques in there, and another really cool technique I saw had to do with the ability to augment, or approximate, test data: without creating more test data, it could approximate and generate it for you.
So now your whole testing side just got a lot easier, without building up massive test cases and test bases. All right, a very impressive product set. And Elizabeth, can you explain your role?

Elizabeth: That's right, Chief Product Officer. Basically, the strategy around what we build, how we build it, and the order in which we build it is under my purview.

Grant: Okay, very good. Awesome. It's great to have both of you here today. After I spoke with Carlos last time, once we finished recording, I said, you know what, I want to talk with you about AI ethics. As you heard in my previous podcast, I laid a foundation for this conversation. These aren't the only areas of ethics around AI, but they're a place to start, and we want to build on them. We're going to talk about four or five different areas just to begin the conversation, and I think this could translate into other conversations as well. To do that, could one or both of you spend a little time laying the foundation: what is AI ethics as it relates to computer vision itself? What are some of the challenges, problems, or misunderstandings you see in this specific area of AI?

Carlos: Sure, I can take that one. When we're talking about ethics in any sort of technology, we're talking about how that technology is implemented, how it's used, and what's acceptable. In this case we're talking about computer vision and artificial intelligence, and how those things go into society; it's really through its intended use that we evaluate the technology. And I think computer vision continues to provide value and to carry us through this digital transformation, as a technology.
And once we start with, yes, this is a valuable technology, the conversation shifts to how we use that technology for good, and in some cases bad. This is where the conversation arises around having the space to share what we believe are the right or wrong uses. It's a very gray area: when we try to judge technology, and advances in technology, against a black-and-white good-or-bad standard, we run into a lot of issues, and there's a lot of controversy around some of these things. As we started discussing after the last podcast, I really wanted a good episode on this, because there's a lot to it. You said there was a previous one, and now there's this one, and I hope there's a series of these, so we can keep having a free conversation around ethics in artificial intelligence. But really, what I'm trying to do is set the context. The technology works great, purely as the science and application of that technology. Take something super controversial like facial recognition. Absolutely, I don't want people scanning my face when I'm standing on a corner; but if there's a child abduction case, yes, please use all the facial recognition you can, I want it to succeed really well. And we've learned that the technology works. So it's not the solution itself; it's how we're applying the solution. There's a lot of nuance to that, and Elizabeth can shed a little light here, because this is something we evaluate on a constant basis and have really free discussions around.
Grant: Yeah, I would imagine that as you take your platform into your customer base, you have to understand what their use cases are, and at times you might have to give a little guidance on the best ways to apply it. What have you seen with this, Elizabeth?

Elizabeth: It's interesting: as Carlos is saying, a lot of the same themes for evaluating ethics in technology in general are the ones that come up when AI is applied. Things like fraud, or bias, which can be more uniquely an AI issue but absolutely exists in other technologies; inaccuracy, and how that shows up in AI; and then things like consent and privacy. So a lot of the themes are really similar. One of the things we try to do for our customers, especially your listener base of small and medium businesses, is take a lot of that complexity out. They just want to solve one problem with AI, and there are all these concerns they may or may not know about. So we try to build things into the platform so that something like bias, which usually comes down to data balance, is handled: if we provide tools that really clearly show your data balance, it helps people build unbiased models and be confident they're using AI ethically.

Grant: So, I'm sure you're aware of the Harrisburg University case in Pennsylvania, where they used AI to predict criminality through image processing. And of course that failed, because it's looking at an image of someone and saying that person is a criminal, or that person is not a criminal.
That's using powerful technology in ways that, of course, have some serious problems. How do you help prevent something like that, or guide people to use this kind of technology in ways that are beneficial?

Elizabeth: What's interesting about this one is that the same technology that causes the problem can also help solve it. When you're looking at your corpus of data, you can use AI to find places where you have data imbalance. And just to re-explain what happened in that case: they had a data imbalance, where the model was misidentifying the races they had less data for. A less controversial example is fruit: if we have a dataset with 20 oranges, two bananas, and 20 apples, the model is simply going to be worse at identifying bananas. So one of the things you can do is apply AI to automatically look at your data balance and surface those issues: hey, you have less of this class, you probably want to label more of it.

Grant: So, manage the dataset better in terms of proper representation. Finding bias is a real challenge for organizations, and I think one thing your platform enables is this: if you take away the pain of all the machinery, you free up an organization's time to be more rigorous about evaluating for bias, and customers might actually improve in that area. Would that be a fair takeaway?

Elizabeth: Yeah, it's something we're really passionate about providing tools around, and we're prioritizing those tools.
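A minimal sketch of the kind of class-balance check Elizabeth describes, in plain Python with the episode's fruit example (this is an illustration, not the platform's actual tooling):

```python
from collections import Counter

def check_balance(labels, warn_ratio=0.5):
    """Flag classes whose example count falls below a fraction of the average class size."""
    counts = Counter(labels)
    mean = sum(counts.values()) / len(counts)
    return {cls: n for cls, n in counts.items() if n < warn_ratio * mean}

# The fruit dataset from the episode: 20 oranges, 2 bananas, 20 apples.
labels = ["orange"] * 20 + ["banana"] * 2 + ["apple"] * 20
print(check_balance(labels))  # {'banana': 2} — far below the average class size
```

Surfacing the flagged classes tells the labeler exactly where to add data before training, which is the "surface those issues automatically" step in the conversation.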
The other one that has to do with your data is finding inaccuracies in your models. One example is X-ray machines. There was a model, I think for disease detection, that had an inaccuracy where it found a correlation with whether the X-ray was taken on a mobile machine versus in a hospital. These models are, in many cases, really just very strong pattern detectors. So one of the things that can really help prevent something like that is making it easy to slice and dice your data in as many ways as possible, and run models that way, to make sure you aren't finding the same correlation, or the same sort of accuracy, on a different subset of the data. In other words, you'd run all the portable X-ray images versus all the hospital ones and see whether you get the same correlation as you do with cancer versus not cancer, or whatever you're looking for.

Grant: A quick question for you on this. In my experience with AI, there are two things to consider. One is that the questions I'm trying to answer guide how I prepare the model, which means they guide the data I select; if I want to know whether this is a banana or an apple, I'll lean toward certain data. So the question I want answered leads me to how I prepare the model, which leads me to the data. And so the question is: should I spend the time putting together a strong set of questions? Or, rather than do that, should I just gather my data and build the model?
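The slice-and-dice check Elizabeth describes above can be sketched roughly like this: group predictions by a nuisance variable, such as machine type, and compare accuracy across the slices; a large gap suggests the model latched onto the wrong pattern. The field names here are hypothetical, not any platform's API:

```python
def accuracy_by_slice(records, slice_key):
    """Compare model accuracy across subsets defined by a metadata field."""
    totals, hits = {}, {}
    for r in records:
        k = r[slice_key]
        totals[k] = totals.get(k, 0) + 1
        hits[k] = hits.get(k, 0) + (r["pred"] == r["label"])
    return {k: hits[k] / totals[k] for k in totals}

# Hypothetical X-ray predictions tagged with the machine type.
records = [
    {"machine": "portable", "label": 1, "pred": 1},
    {"machine": "portable", "label": 0, "pred": 1},  # portable scans skew positive
    {"machine": "hospital", "label": 1, "pred": 1},
    {"machine": "hospital", "label": 0, "pred": 0},
]
print(accuracy_by_slice(records, "machine"))  # {'portable': 0.5, 'hospital': 1.0}
```

The accuracy gap between the portable and hospital slices is the red flag: the model may be keying on the machine type rather than the disease.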
And then try to answer some questions out of that? You see what I'm saying: that way, maybe I'm not going to introduce any bias into it.

Elizabeth: We encourage a very clear understanding of the questions you want to answer, because that helps you do a few things. It helps you craft a model that's really going to answer that question, as opposed to accidentally answering some other questions. But it also helps you right-size the technology. For example, if you're trying to answer the question of how many people are entering a building, because you want to understand occupancy limits, or COVID restrictions, or whatever it is, that solution doesn't need facial recognition. To answer that question, you don't need lots of other technologies included. So defining those questions ahead of time can really help with a more ethical use of the technology.

Grant: So one of the first jobs we would have a small or medium business do is get clarity around those questions, and that can actually help take some of the bias out. Is that a fair takeaway from what you shared?

Elizabeth: Exactly. The questions you are trying to answer, and the questions you are not trying to answer, can both be helpful.

Grant: Oh, very good. Okay, so the opposite of that as well. All right, while we could keep talking about bias, let's switch to something that I think comes right out of the movie I, Robot: robot rights. Is this a fluke, or is it for real? Is there really an ethical thing to worry about here, or what? What are your thoughts?
Elizabeth You know, in most of the cases that I've seen, it's really more like, it comes down to just property, like treating property correctly, you know, like don't kick the robots because it's private property. So not really around sort of the robot rights but you know, some already established rules be in for the most part, I see this as kind of a Hollywood problem, more than a practical problem. Grant Maybe it makes good Will Smith movies. But other than that, yeah, fighting for rights, right. Now that seems like it's way out there in terms of terms of connection reality. Okay, so we can tell our listeners, don't worry about that for right now. Did you add something back there? Carlos Just an interesting point on the robot rights, right. While while it's far in the future, I think for robot rights, we are seeing a little bit of that now today. Right? When like Tesla AI day, when they came out, they decided that the robot shouldn't run too fast that the robot shouldn't be too strong. I think it's a bit. It's a bit interesting that, you know, we're also protecting the human race from for us building, you know, AI for bad and robots for bad in this case. So I think it's, it's, it's on both sides of that coin. And those are, those are product decisions that were made around. Let's make sure we can run that thing later. So I think I think as we continue to explore robots AI, the the use of that together, this topic will be very important, but I think it's far far away. Grant I'm wondering is that also blends into the next sort of ethical subtopic we talked about, which is threat to human dignity. And it might even crossed into that a little bit, right, which is, are we developing AI in a way that's going to help? protect the dignity of human certainly in health care situations? That certainly becomes important, right? 
You probably heard on the previous podcasts that I did, I played a little snippet from Google's duplex technology that was three year old technology, and those people had no idea. They're talking and interacting with AI. And so there's that aspect of this. So where's the line on this? When? When is it that someone needs to know that what you're interacting with is actually not human? And then does this actually mean there's a deeper problem that we're trying to solve in the industry, which is one of identity, we've got to actually create a way to know what it is that we're interacting with. And we have strong identity? Can you speak to that? Yeah, Elizabeth I think I think the there's two things that kind of come into play here. And the first is transparency, and the second is consent. So in this case, it really comes down to transparency, like it would be very simple in that example, for that bot to say, Hey, I'm a bot on behalf of, you know, Grant Larsen, and I'm trying to schedule a hair appointment, right, and then going from there. And that makes it a much more transparent and easy interaction. So I think in a lot of cases, really paying attention to transparency and consent can go a long way. Grant Yeah, absolutely. All right, that that that makes a lot of sense. It seems like we can get around some of these pieces fairly, fairly simply. All right, Carlos, any other thoughts on that one? Carlos The only thing there and then touches on the stuff you guys were talking about on the bias piece, right? We're really talking about visibility and introspection into the process. Right. And with bias, you have that in place, right? We can detect when you know there's a misrepresentation of classes within the the model. In some cases, there's human bias that you can get that right but it's it's having that visibility in the same case with the threat to human data. With that visibility comes the introspection where you can make those decisions. 
You see more about the problem. Grant Mm hmm. Yeah, yeah. So if we were to to be able to determine we have a bad actor, if there's not transparency, that would be a way that we could help protect the dignity of humans through this. Alright. That's reasonable. All right. So let's move on to again, sounds Hollywood ish, but I'm not sure it is weaponization of AI. Right? What are the ethics around this? I'll just throw that one on the table. Where do you what do you want to take that? Carlos, you wanna start with that one? Carlos Sure. I mean, so weaponization and and I think when we talk about AI and, and the advancements of it, you quickly go to weaponization. But really, weaponization has two different pieces to it, right? It's obviously it depends on which side of that fence you're on, on whether you view that technology is beneficial or detrimental. But in some cases, that AI that same technology that is helping a pilot navigate, it also helps for a guided missile system or something like that. So we really have to balance and it goes back to use cases, and how we apply that technology as a people. But you know, weaponization, the rise against the machines, these kind of questions. While they're kind of out there. They're affecting society today. And we have to be able to have productive conversation around what we believe is good and bad around this while still allowing technology to succeed. So there's a lot of advancements in the weaponization and AI in that space, but it's really, I think we have to take it on a case by case basis, and not like a blanket statement, we can't use technology in these ways. Grant Interesting thoughts? What are your thoughts there? Elizabeth? Elizabeth Yeah, you know, I it makes me think of sort of turning it on its head is, is when is it? You know, when is it unethical not to use AI, right. 
And so, some of those questions come up when we're talking about weaponization; you can also be talking about saving human lives and making it safer for people to do some of these operations. And that same question can come up in some of the medical use cases, right? So here in the US, we have a lot of challenges around being able to use AI in medical use cases. There are some where you can have really good human oversight of the cases, you can have reproducibility of those models, they can be as explainable as possible, but it's still really, really difficult to get FDA approval there. So again, I think there's two sides to that coin. Grant Yeah, it's an interesting conversation to have, because in that medical case you talked about, you could see the value of using the same kind of technology that would be used to identify a human target and then attack it; you could take that same capability and instead use it in a search and rescue sort of scenario, right? Where you're flying something overhead and you're trying to find, you know, pictures or images of people that might be lost out there. Same kind of thing. Go ahead, you were going to say something. Elizabeth And there are even simpler cases in medical, where, you know, there's a shortage of radiologists right now in the US, and you can use AI to triage some of that imaging, because right now people are having to, in some cases, wait a really long time to get their imaging reviewed. So can, and should, AI help there? There's also another one along those same lines where, with things like CT scans, you can use what's called super resolution, or de-noising the image. Basically, you can use much less radiation in the first place to take the imaging, and then use AI on top of it to essentially enhance the image. 
So again, you know, ultimately exposing the patient to less radiation. So yeah, it's pretty interesting when we can and can't use it. Mm hmm. Carlos Yeah. And just to add a little bit to the "can and can't," right? Advancements in drug discovery have largely been driven through AI. In the same fashion, weaponization of various drugs or other types of drugs has also benefited from AI. So from a society's perspective, you really have to evaluate not only the greater good, but that ultimate use case, like where do you want to make a stance around that technology piece. And understanding both sides really provides that discussion space that's needed. You have to be able to ask really honest questions about problems that you can see coming in the future. Grant So is the safeguard through all of this topic around ethics basically the moral compass that's found in the humans themselves? Or do we need to have, you know, legislative or policy bodies, right, that pull us together? Or is it a blending? What's your take? Elizabeth It's interesting, the UK just came out with a national AI strategy, and they are basically trying to build an entire AI assurance industry. And their approach is, they want to make sure that they're keeping it so that you can be innovative in the space, right? They don't want to make it so regulatory that you can't innovate. But they also want to make sure that there's consumer trust in AI. So they're putting together, from a national perspective, guidelines and tests and ways to give consumers confidence in whether a model is, you know, reproducible, accurate, etc., while at the same time not stifling innovation, because they know how important AI is for essentially a country's ability to compete, and the opportunities for GDP that it provides as well. 
Grant Hmm, absolutely. Go ahead, Carlos. Carlos It's your question: left alone, should we govern ourselves? I think we've proven that we can't do that as a people, right? So we need to have some sort of regulatory committee around the review of these things. But it has to be in the light of, you know, wanting to provide a better experience, higher quality, deliver value, right? And I think when you start with how we get the technology adopted and in place and deployed in a fashion where society can benefit, you start making your decisions around, you know, what the good pieces are, and you start really seeing the outliers: hey, wait a second, that doesn't conform to the guidelines that we wanted to get this implemented with. Elizabeth And I think, also to your question, it's happening at a lot of levels, right? So there's, you know, state regulation around privacy and the use of AI and facial recognition. The FDA is putting together some regulation. And then also individual companies, right? So people like Microsoft, etc., have big groups around, you know, ethics and how AI should be used for them as a company. So I think it's happening at all levels. Grant Yeah, like we said, as a people we need to have some level of governing bodies around this, and of course, that's never the end-all protection, for sure. But it is a step in the right direction to help with monitoring and governance. Okay, so last question, right? This is gonna sound a little bit tangential, if I could use that word. Given the state of AI where it is today, is it artificial intelligence, or is it augmented intelligence? Carlos I can go with that. So I think it's a little bit of both. I think the result is to augment our intelligence, right? We're really trying to make better decisions. 
Some of those are automated, some of those are not; we're really trying to inform a higher quality decision. And yes, it's being applied in an artificial intelligence manner, because that's the technology that we're applying, but it's really to augment our lives, right? And we're using it in a variety of use cases. We've talked about a lot of them here, but there are thousands of use cases in AI that we don't even see today, that are very easy. Something as simple as searching on the internet. That's helping a lot, from, you know, misspelling things, to not identifying exactly what you want and recommendation engines coming in and saying, "I think you're looking for this instead." It's like, absolutely, thanks for saving me the frustration. We're really augmenting life at that point. Grant The reason why I asked that as part of this ethics piece is one of the things I noticed as I work with organizations: there's a misunderstanding of how far AI can go and what it can do at times. And there's this misunderstanding of, therefore, what's my responsibility in this. And my argument is, it's augmented intelligence in terms of its outcome, and therefore we can't absolve ourselves of the outcomes and pass that off to AI and say, "Oh, well, it told me to do this." In the same breath, we can't absolve ourselves and say we're not responsible for the use cases either, and the way in which we use it. So as a human race, we own the responsibility to pick and apply the right use cases, to even be able to challenge the AI insights and outcomes from that, and then to take ownership of what the impacts are. Agree, disagree? Carlos Yeah, I would really agree with that. And if you think about how it's implemented in many cases right now, the best use of AI is with human oversight, right? So, you know, AI is maybe making the initial decision, and then the human is reviewing that, or, you know, making a judgment call based on that input. 
So it's sort of helping human decisioning instead of replacing human decisioning. And I think that's a pretty important guiding principle: wherever that may be necessary, we should do it. There's one, you know, the Zillow case that happened recently, where they were using machine learning to automatically buy houses, and there was not enough human oversight in that, and I think they ended up losing something like $500 million in the case, right? So it's not really an ethics thing, but it's just an example where, in a lot of these cases, the best scenario is to have AI paired with human oversight. Grant Yeah, great. No, go right ahead. Elizabeth You mentioned being able to challenge the AI, right, and that piece is really important in most of the cases, especially in the one that was just mentioned, that Zillow case. Without the challenging piece, you don't have a path to improvement; you just kind of assume the result, and you get into deep trouble, like you saw there. But that challenging piece is really where innovation starts. You need to be able to go back and question, you know, is this exactly what I want? And if it's not, how do I change it? Right. And that's how we drive innovation in the space. Grant Well, and I would say that that comes full circle to the platform I saw that your organization is developing, which is to reduce the time and effort it takes to cycle on that, right? To build the model, get the outcome, evaluate, challenge, make adjustments, but not make the effort to recast and rebuild that model such that it becomes unaffordable or takes too much time. I need to be able to iterate on that quickly. 
And I think as the platform you've developed, and others that I've seen, continue to reduce that, it makes it easier for us to do that from a financially responsible and beneficial perspective. Elizabeth 100%. Yeah, one of the features that you mentioned was the versioning, and that really ties into a guiding principle of ethical use as well, which is reproducibility. So if you want to use a model, you need to be able to reproduce it reliably, so you're getting the same kind of outputs. And so that's one of the features that we've put in there, that versioning feature, to help people, you know, comply with that type of regulation. Grant I've built enough AI models to know it's tough to go back to a particular version of an AI model and have reproducibility accountability. I mean, there's a whole lot that rides on that. That's exceedingly valuable. That's right. Yeah. Okay, any final comments? Carlos I think for my side, I'm really interested to see where we go as a people with ethics in AI. I think we've touched on the transparency and visibility required to have these conversations around ethics and our ethical use of AI. But really, we're going to start seeing more and more use cases and solutions in our lives where we're going to butt up against these ethical questions, and being able to have an open forum where we can discuss this, that's really up to us to provide. We have to provide the space to have these conversations, and in some cases arguments, around the use of the technology. And I'm really looking forward to, you know, what comes out of that, how long it takes for us to get to that space where we're advancing in technology and addressing issues while we advance the technology. Grant Excellent. Thanks, Carlos. Elizabeth? 
Elizabeth Yeah, so for me, as a product person in particular, I'm really interested in the societal conversation that we're having, and the regulations that are starting to be put together, and the guidelines from larger companies and companies like ours that are, you know, contributing to this thought leadership. And so what's really interesting for me is being able to take that larger conversation and that larger knowledge base and distill it down into simple tools for people like small and medium businesses, so they can feel confident using AI, with these things just built in, sort of protecting them from making some mistakes. So I'm really interested to see how that evolves and how we can productize it to make it simple for people. Grant Yeah, bingo. Exactly. Okay, everyone, I'd like to thank Carlos and Elizabeth for joining me here today. Wonderful conversation; I enjoyed that a lot. Thanks, everyone, for listening. And until next time, get some AI with your ethics. Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook. Visit ClickAIRadio.com now.

Business Built Freedom
180|Defining Your Message With Lisa McLeod

Business Built Freedom

Play Episode Listen Later Mar 4, 2021 31:12


Defining Your Message With Lisa McLeod Focus on what you really need Are you caught in the trap of spending heaps on marketing and getting no traction? We've got Lisa McLeod here from Selling with Noble Purpose, and she's going to talk about how to make sure that you have a clear-cut and defined message. So Lisa, what are some of the main blunders when people start marketing and trying to sell in a new business? What are the things you think they need to focus on?  Focus on these three areas Lisa: There are three main things that get in people's way. The first is what they think it means to sell: over-describing what they do. The second is their expertise. Most people start a business because there are some customers out there who are not getting their needs met. What gets in their way when they are trying to sell is that they are too deep in their expertise. Lastly, they don't have clarity of purpose. They think their purpose is to sell, but the purpose is to make a difference to your customers. This should be the center of your marketing message, not your product. How do you make a difference to customers? What is your impact?  I've definitely fallen into the trap of doing that previously. I'm an engineer who thinks very much in detail. I was in a spot where I knew I had a great product, but I thought, what if I have the cure for cancer but I don't have the voice and clear message to tell everyone about it? Do you have an example of businesses that highlight what they do instead of why they do it? You need a clear voice Lisa: That's right. What they do versus why they do it. Let's say with the cure for cancer, whether it's injectable or it's a pill, all we care about is the cure for cancer. We need to think this way as sellers. We had an IT company we were working with, an American-based company. They do IT services and you can outsource all your IT to them. 
So when I started working with them, we asked, what impact do you have on customers? One guy in the room stood up and said, we help small businesses be more successful. That's what happens when you have that clarity of purpose.  Every time you interact with a customer, that's what you're trying to do: help them be more successful. And if you're a business owner, one of your challenges is getting your people to have the right behaviors with customers. Absolutely. We changed our marketing messages around after addressing the question of what we are actually doing. We've redefined the message of what we do in business and what we do for people's lives: to challenge their operations through creating kind-hearted personal relationships driven by cutting-edge advancement. We changed our marketing message and said we guarantee your uptime, and if you go down, we pay you. The message is very clear: we guarantee that your business will run perfectly with technology, and we're happy to put our money where our mouth is. Lisa: The exercise you have done is really important. Finding your why can be easy if you have a small business with a handful of employees, but if you grow to a mid-size business, you need to be explicit. Why? Because you want competitive differentiation.  Even if what you are selling is not unique, you've got to show that you do your business differently. Second, you need emotional engagement with your people. You have to drive emotional engagement with your team to motivate them to try new innovation. As a leader, you have to articulate the impact you have on customers and make that the north star of the business. Differentiation is Key As your business becomes bigger, the message shouldn't get watered down. What is the differentiator between copycat-like businesses such as McDonalds and Hungry Jacks?  Lisa: You've got two key ways to differentiate yourself. 
Number one is your product, and number two is the experience of doing business with you. I was running a session for a group of leaders and we were talking about this. McDonalds and Hungry Jacks can't be differentiated from each other. What interested me during the session is, I asked who stands out among these organisations. There is an east coast of America company called Chick-fil-A and a west coast company called In-N-Out Burger, and I thought people were going to come to blows arguing about which one is better. The reality is, is the food better at either one of these fast food chains? The place of true differentiation is the experience that you are creating for your customers.  Don't change your business only in response to your competitors If you've got a business and you're trying to work out how to make sure that you are targeting your audience appropriately and standing apart from your competitors, what's the best way to do that? Lisa: If responding to your competitors is the basis of what you're doing for your business, you are not going to create a differentiated experience. If you want to create true differentiation and be clear with your message, you need to find your purpose in the way you do it. You have to answer these three questions:    How do we make a difference to our customers?  How do we do it differently from the competition?  What do we love about what we're doing?    When you answer these questions, this creates the story of your business, and that's the story you go out to the market with. Talk to your customers and get honest feedback I always say the best way to find out how you make a difference to your customers is to ask them. Would that be fair?  Lisa: Totally. If people are buying your products or renewing their contracts with you, this means that you are doing something as simple as creating a great experience, or maybe helping them to be more successful because, as an IT company, you are doing all their backup IT. 
Company owners can sleep at night knowing that you take care of the technical side of their business. It's not just about the product but the impact you are making on your customers.   Find out the "why" We've had a customer who had to leave, and so I asked them, what could we have done differently? So if you're selling something that's not very much a commodity, not like a burger, like an IT service, how do you make sure that you understand your competitors? How do you make sure that you know them well enough to know that you're doing the right thing and you're definitely doing it differently from the competition?   Lisa: Instead of asking your customers why they bought from you, ask, how did working with us impact you? They can say you were cheaper, you were the first one here, or you had more widgets, but what matters is to know the impact you have on them. This is how you will differentiate your business from the competition.  One of the products we sell is by far the cheapest in the industry; we sell it as the hook to get you in the door. It is not a high-profit product. It gets people thinking about whether their current provider is doing what they're meant to be doing, and starting to dive in to see how we are different from their current provider. In B2B, target businesses you are excited about  Lisa: I once asked a customer, "Why did you pick us?" They said, "We picked you because we could tell that you were really excited about our business." And so I started saying that we only go for businesses we're really excited about. You have to be specific about what you sell and make sure you have a lens on it.  It's a frame of mind. When you are excited, that rubs off on other people. Once you have purpose in business, what's then the connection to profit? Lisa: Companies with a purpose bigger than money outperform their competition by over three hundred and fifty percent. 
People who sell with purpose, whose purpose is to improve life for customers, outsell people focused on targets and quotas. And this is important if you're in any kind of a sales function or if you're a leader in the business. The reason why is, flip it: who would you rather have calling on you? Someone whose purpose is to help you, or someone who's just trying to close you? It shows up in every aspect of the business, because you've got to have really clear systems and processes just to be a successful business.  But you will not be a differentiated business, which is the most profitable business, unless you have clarity about how you and your team make a difference to customers; if you're just running your business in that transactional way, you won't be. Profit is the test of your validity. The purpose of a business is to improve life for customers. Profit tells you whether you're doing it or not. Create a tribe of true believers So you're focusing on the right information. A good example is Apple. What they're selling is the experience; they're selling the support. They're selling a beautifully crafted product. Their message is clear.   Lisa: That's right. Steve Jobs famously had a conversation with John Sculley, who was then the CEO of Pepsi, trying to persuade him to come to work for him as the CEO of Apple. With great excitement and honor, I got to interview John Sculley a couple of weeks ago for a piece I did for Forbes. And he said, I remember Steve Jobs saying, do you want to sell sugar water for the rest of your life, or do you want to change the world?  Apple is a good example. And you might be listening to this and you might sell ice cream or concrete, but there is innovation in every space. And the reason Apple has innovated, the reason the customer experience is amazing. 
The reason the products are always on the cutting edge, the reason they look beautiful, the reason that out-of-the-box experience is great, is because they're not selling technology. They're selling making a difference to you. They're selling you on having a beautiful experience. And that's where everyone's eyes are pointed. And where that comes from is the language of the leader. If you point your team towards revenue targets, you'll only be mediocre. If you point your team toward something bigger and then use those commonplace metrics as a way to measure your progress, you'll create what I call the tribe of true believers, which is definitely what the people at Apple are. So I understand that you've got a special code for any of the listeners now? Lisa: Yeah. So if you want to buy the book, you can enter your receipt number, which I love. But if you don't, just for your listeners, just enter the code BBF and you'll get the assessment. There's no reason why people can't jump across and get that happening. I'm going to ask you a question. It's probably going to be an easy one to answer, but we ask most of our guests: what's your favorite book? Check out Selling With Noble Purpose Lisa: So my latest book is Selling With Noble Purpose, but I will tell you a book that has influenced me greatly, which is Viktor Frankl's Man's Search for Meaning. And there's a connection between that and the work that I do. Man's Search for Meaning is about finding something to tether yourself to during challenging times, and he was a victim of the Nazi concentration camps. The thing that I realised in reading that book years ago was that people need to tether themselves to something bigger than themselves, and that that was the key to surviving a challenging time. 
While my circumstances are not as dramatic, there's a story in Selling With Noble Purpose about when my husband and I lost a business and I had to dig deep and find a way to come back from bankruptcy. Selling With Noble Purpose is a lot of how I did it. I'm not comparing myself to Viktor Frankl; I'm saying I was inspired by him and I thought about him a lot. I thought about what I didn't know ten years ago when I was having to come back from the recession, and I realised in hindsight that tethering yourself to helping your customers, versus tethering yourself to your revenue number, will give you the tenacity to prevail.  That's good advice. I found a long time ago that I love helping people with technology, but what I really love doing is making a difference in their lives. And the book that changed my life is The Go-Giver by Bob Burg. He changed my mindset about business and how people hold your information.   Lisa: There is now peer-reviewed data that says that everything you've just described about helping others and putting them first results in you winning your market, having a more profitable business, and enjoying your life a lot more. Living for something bigger than yourself hopefully means you're leaving a legacy behind or have set a good example. If anyone is looking to better their business and make sure that you are selling with noble purpose, you can jump across to sellingwithnoblepurpose.com and jump onto the assessment. Otherwise, stay good and stay healthy out there. 

This Rural Mission
20 Years of Rural Medical Education

This Rural Mission

Play Episode Listen Later Feb 17, 2021 22:49


Julia Terhune   This Rural Mission is a podcast brought to you by Michigan State University College of Human Medicine, the Herbert H. and Grace A. Dow Foundation, and the Michigan State University College of Human Medicine Family Medicine Department. We are so excited to bring you Season Three. I'm your host, Julia Terhune, and I hope you enjoy this episode.   Julia Terhune   When I first started this job, I was overcome with the needs of rural communities and the wonderful things that doctors get to do in their professions. I was, I guess you could say, fangirling a little about rural doctors. And I told my spouse that this was really what I wanted to do, that I think I wanted to be a doctor. So I had it all figured out. I was going to go to Michigan State University College of Human Medicine. I was going to do the TIP program at the Midland Family Medicine Residency, and then when it was all said and done, I was going to set up a practice with Scheurer Hospital in the Thumb.   Julia Terhune   Now, I have to tell you two very important things that came out of this conversation with my spouse. One, he instantly reminded me that I can barely handle a paper cut, let alone a surgery rotation, and he also reminded me that I would hysterically cry before every anatomy, biology, physiology and chemistry test that I took in college. He also reminded me that my GRE examination for grad school almost killed me with stress, so medical school is not in my future and I will stick to making rural doctors out of the likes of all of you.   
Julia Terhune   But one subtle thing that also came out of this conversation was how much I love the Thumb community. Prior to starting with the College of Human Medicine I had never even been to the Thumb, but after six years of working with Scheurer Hospital and the health departments and other agencies in these communities, I am smitten. I love the people from the Thumb. I love the history, I love the coastline, I love these communities. A story not unlike that of many of our medical students, including Shelby Walker.   Shelby Walker   Yeah, so when I found out I would be going to Pigeon, I had never been there before. I don't think I had ever been to the actual Thumb before, maybe close to it, but I don't think it was within what they count as the Thumb. And so I had my boyfriend at the time drive me out there just so I could see where I'd be going. I thought it would make me feel a little bit more comfortable, and we got there and everything was so small. It was such a small town that I almost didn't believe that there was a hospital and a health system there that could accommodate students, so it was kind of an odd, "What am I going to do for two years with a lot of time out in Pigeon?" It was a very odd feeling.   Shelby Walker   And so when we started, my first rotation of third year actually started in the Scheurer Health system with Dr. Scaddan in Sebewaing, and everyone was so welcoming and nice, and they let me do things, which as a third year medical student I was like, "Wait, am I qualified to do actual things?" And I think I had so many unique experiences out there because of where I was at. With Dr. Scaddan I got to be introduced to the ER maybe a little bit earlier, and their definition of an ER was not what I had seen in the past, but they still had some pretty intense situations and things that really were true emergencies that maybe you wouldn't expect in the middle of nowhere in, I think, a five-bed ER situation.   
Shelby Walker   We went to the prison to do some healthcare with the inmates. That was an interesting experience that I wasn't really expecting when I had first pulled up into Pigeon. And from there I got to meet so many other amazing physicians and EPPs and just everyone there has been so nice [inaudible 00:05:05] Oh gosh, the administrative staff knows who you are when you show up to their meetings in the morning, because the physicians invite you to go with them to all of these meetings that you feel like you have no business really knowing what's going on, but they bring you to these meetings and the administration staff, they know who you are. They ask how you're doing, they asked how you're liking it. It was such an odd thing to, I guess, stumble into kind of on accident. I'm really grateful that I got that chance.   Julia Terhune   And if that's not enough anecdotal evidence to prove that Pigeon will win you over, well listen to this.   Shelby Walker   So I was talking to Chad about this, and then with Dr. Wendling actually, how odd this all turned out that I didn't want to go to Pigeon and I wanted to go to [Alma 00:05:57] and then I was like, "Okay, I'll do the nice thing." And Chad and I got engaged in Caseville. We went to Caseville on the beach.   Julia Terhune   Our rural medical affiliation with the Scheurer Hospital network didn't start just six years ago. We have a much longer history with the hospital and have been training students in Pigeon for more than 20 years. I sat down with the former CEO, Dwight Gascho, and the current CEO, Terry Lerash, who served and serve the Scheurer Health Network and learned just how it all got started.   Terry Lerash   Well, interesting story. I was working in Saginaw. 
I had a good position, felt satisfied, but on a Saturday afternoon or morning my wife and I were standing in a field on an Amish farm in or near Gaylord, attending the wedding of a daughter of my CFO at the time, a guy that worked with me over many, many years. We were good friends, so we got invited to the wedding, and we're standing in this field and across the field walk Dwight and Theresa. And we had known each other for some time, Dwight and I had, probably over the last 20 years, through involvement in the hospital council, and among health care executives it's a pretty small circle in the State of Michigan. Most of us know each other.   Terry Lerash   Anyways, I said hello to Dwight. He says hello to me, and I said to Dwight, "Well, I hear you are interested in retiring," and Dwight said, "Yes, I am. Would you like my job?" And I was a little bit stunned. I said, "Well, geez, I don't know." My wife was looking at me weird and I said, "Well, are you serious?" And he says, "Absolutely am serious." And he said, "Why don't you do me a favor? Why don't you come to Pigeon and just visit with me for a day? That's all I'm asking. No commitment, no strings attached, just come up and visit with me for a day."   Terry Lerash   And out of our friendship, I said, "Okay, I can do that. I can spare a day and run up to Pigeon. This is my old stomping ground anyways. I was born and raised in Bad Axe." So I had been away for probably 40-plus years from my hometown of Bad Axe, and it was a chance for me to just get reacquainted with Huron County. So I drove up, and I think within the first hour I was so enchanted with Scheurer Hospital because of its culture, friendliness, cleanliness, organization, and clearly Dwight's leadership was a big plus.   Terry Lerash   And as I talked with Dwight through the course of that day and learned more about Scheurer, I understood that the core values of the organization really matched me, kind of fit my dress code, if you will. 
And so I was intrigued and left and then made a subsequent visit and met with the board and long story short, here I am and I couldn't be happier. This was really a great opportunity for me [inaudible 00:09:29]   Dwight Gascho   And as I reflect on that side of the story, my story would match it almost exactly. I was born and raised in the Pigeon area. I was on a farm, left for a few years for school and the service, et cetera. Came back in 1972 and in 1987, I was invited to serve on the Scheurer Hospital Board of Trustees. And we were having some issues at the time, and in 1990, the board asked if I would take the leadership position in the hospital as the CEO. And I agreed to do that on an interim basis saying, "I'll give it a shot, but if it doesn't work maybe I could help find the next leader." Well, after just a matter of a few months, the board took the interim assignment away and gave me the full-time assignment and so I worked here from 1990 until July of 2016, 26 years plus.   Dwight Gascho   Obviously the hospital was struggling early on. The hospital became more profitable as years went by. We became more successful at recruiting young physicians. And there had been a gentlemen that had served on the board by the name of Loren Gettel. Loren Gettel was a farmer in this area and had a very strong interest in seeing students find opportunities to train in some rural community, and he put that bug in my ear. As a matter of fact, Julie, when I was being asked to serve, Loren asked the board chair if he could spend a day with me. And I'm fully aware of what it was. It was part of a program to see once if I passed the exam, so I think I was being vetted by Loren Gettel.   Dwight Gascho   So we jumped in the car. We drove to MSU and we walked the campus of MSU. He's a very, very strong MSU campaign leader. I mean, he loves that organization. He was grinning away. 
And he showed me places that were memorable to him and he showed me plaques on walls where he had made contributions to the organization and he said, "Dwight, somehow, some way we have got to find ways to introduce medical students to rural communities because I've lived in a rural community all my life." This is Loren speaking, "And candidly, it's a great place to live. It's a great place to raise kids and we've got great schools, great churches. There's all sorts of things that you can do around here and we've got to find ways to do this."   Dwight Gascho   And that was something he just kept putting into my head. Unfortunately, he passed away from cancer just a few years after I became CEO here in 1990 but his daughter, Peggy McCormick, continues to serve on the board of directors and she has a very similar burning desire to see some sort of a relationship with rural communities.   Julia Terhune   The Loren Gettel scholarship is a scholarship that our rural medical education students are still receiving today. And in fact, since 2010, 11 students have received this scholarship including Dan Drake who you heard just a few seasons ago and is going to be returning to the Thumb for practice in just a few short months.   Dwight Gascho   Terry was not a hospital CEO, but he was running an organization at the time that was an important part of the whole council and that was Synergy Medical Educational Alliance.   Terry Lerash   So I was, quite frankly, offering to the hospital council opportunities for them to perhaps have students in their communities and in their hospitals, if they were able to provide the right types of resources. Well, after that hospital council meeting, I had two calls. One of them was from Dwight, in fact he was the first one that called me. And I think he probably was reflecting on Loren's message to him, and saw this was a great opportunity and so he called me and he asked if we could talk more about becoming an MSU student site. 
And so we worked through all the details. I can't remember all the details involved, but I remember driving two students up here and one of them was by the name of Kimiko Sugimoto. And she is now a general surgeon who actually completed the MSU rotation, her general surgery rotation in Saginaw, and is practicing in Saginaw as a general surgeon as we speak.   Terry Lerash   But she was one of the first students to come to Pigeon, and Dwight was so gracious in entertaining them and took them to board meetings and got them involved and connected with all sorts of things here at the hospital and they had a wonderful experience. And I can't even remember what the length of the rotation was but I know your physicians got involved in-   Dwight Gascho   It was actually a little longer than what it was supposed to be. It just stretched out. That was a first for them and a first for us, and so we were thrilled and enthralled to have these young students. Of course, they're brilliant kids and they're so much fun, they're very respectful. I included them in my leadership meetings and learn from what we were doing. I wanted them to get as much of an experience in a rural setting as they possibly could get. So medical staff meetings, board meetings, leadership meetings, interact with the patients, interact with the staff, it was all part of it.   Terry Lerash   And I think that we got raving reviews after that about their experience in Pigeon 20 years ago. And so I look at Scheurer Hospital as really a teaching hospital, and so we've built that culture. We, meaning Dwight, for many, many years, and me most recently, built a culture of a teaching organization and I think that started 20 years ago with Dr. Sugimoto, actually, as that first student.   Julia Terhune   That involvement with the leadership at our rural hospitals is one of the pillars of our rural medical education certificate, one that really lands with students and makes an impact. 
Pigeon makes a place for aspiring rural medical doctors, a place where people can come back and grow. People like Elizabeth and our recent graduate, Evan. Elizabeth is a native of Cass City who, when I interviewed her, was planning to go back to the Thumb for her medical education and now is halfway through her third year of medical schooling. Evan recently graduated medical school and is completing an internal medicine residency in Detroit.   Elizabeth   Yeah, I am super excited to go back. I recently had the opportunity to shadow at Scheurer and had some downtime and was able to go back down to the floor and see a lot of the nurses and the nurses aides that I worked with, and it just made me even more excited to go back there and be back with that group of people and in that environment and continue my education there. And I think it's really important if you eventually want to serve in a rural area to see how rural medicine is different. I can tell you, I had my adult wards rotation for second year at Sparrow this morning and it's way different. It's a different environment, there's different types of cases, so I'm excited about that.   Elizabeth   I'm excited to develop those relationships that you get to develop in rural areas that you don't get so much in bigger hospitals, relationships with patients and relationships with colleagues, other physicians, other employees in the hospital. I'm really excited to be a part of that and just be a part of that group and that kind of close knit community again.   Evan   I think the thing that's going to stick with me is that the sort of idealized version of a physician or what a doctor could be, sort of that dream, is still alive in a lot of places. I think a lot of times we get down on what medicine is becoming or has become or how it's changing and how the role of the physician is changing, and maybe it's not what we had thought. 
You know, a country doctor making house visits, knowing all their patients, delivering babies and doing minor surgeries and really being that do-it-all type of doctor who's also involved in their community, who's also a community leader. We don't see that as much anymore, I think, especially in bigger cities.   Evan   But having that experience in a rural community shows me that it's still possible. I've met plenty of physicians who were that do-it-all type of person. They were in covering shifts in the ED in the night and then in the morning they were in their clinic and after that they were on the board of the hospital and they still made it to their kids' sports games where they were the sports medicine physician there, and they were on the Rotary Club board as well. I mean, they were just in every facet of their community, being that leader and being that physician and everybody knew them.   Evan   And so I think it gives me inspiration that I can be the type of physician some day that I think I always wanted to be, or I was always really intrigued by. And I think that's a really great image and vision to sort of hold onto as you go through your training and ultimately look at how you want to set up your practice in life and where you want to end up.   Julia Terhune   I am proud and MSU is proud of our partnership with the Scheurer Hospital system and all of the hospitals, clinics and health departments that we get to work with in the Thumb region. All of these places have significantly contributed to our students' rural medical education, places like Hills & Dales Hospital in Cass City, McKenzie Health System in Sandusky, the Harbor Beach Community Hospital, and the Huron County, Tuscola County and Sanilac County Health Departments have all been taking our students for many years. The leaders of all of these facilities have become our friends and have taken on so much for our students. I can't even begin to thank them. 
They have provided not only a place for medical education during regular times and pandemic times, but they've been mentors and leaders that have provided students with perspectives they wouldn't have gotten anywhere else.   Julia Terhune   On top of that, they have constantly supported our program, our Pipeline program, and even things like this podcast. They have gone above and beyond to be so much more than just medical education partners and I think that that's one of the most important things about rural medical education is that you can't walk into a rural educational environment and not leave with family, friends and a brand new community. So we love Scheurer, we love the Thumb, but what do those who receive care from Scheurer think? I spoke to Lynn and Abby who not only receive medical care from Scheurer Health professionals, but are also employees.   Speaker 7   The more we grow, it gives the community another option and they're like, "Oh, well, oh, they can do that there. Okay, well, I'm going to go there then or request services there."   Speaker 8   It goes back to not being a number. Really, everywhere that you go here, they know you. They know your family, they know something about you, and they built a Meijer in Bad Axe that opened in July [crosstalk 00:21:40] We've got a clinic in there and things that can't get handled there, they can do at the Bad Axe site, and if they can't do it at the Bad Axe site they can send them to Pigeon. So it's all within... What is it? We have something within 12 miles of each other always.   Speaker 7   [crosstalk 00:21:57]   Julia Terhune   Thank you for listening to this podcast. I want to thank Dwight and Terry for taking time to speak with me, along with Shelby, Liz, Evan, Lynn, and Abby for their contributions to this episode. As always, thank you to Dr. Wendling for making this podcast a priority. I love getting the opportunity to hear and tell these stories. Also, Dr. 
Wendling, herself, is from the Thumb just adding more proof to the theory that some of the best doctors come from the pollex, the scientific term for Thumb. See, I learned something in anatomy. The Thumb is a wonderful place, a place where you can really make rural your mission.

XR for Business
Driving Innovation with Automotive VR Pioneer Elizabeth Baron

XR for Business

Play Episode Listen Later Jun 14, 2019 52:08


Elizabeth Baron has driven innovation forward at the Ford Motor Company since the ’90s, advancing XR technologies in the automotive industry. Now, Elizabeth joins our host Alan as she discusses her new venture, Immersionary Enterprises, as well as her pioneering work at Ford. Alan: Today’s guest is Elizabeth Baron. Elizabeth has been a true pioneer of virtual and augmented reality as the global lead for immersive realities, bringing together multiple disciplines throughout Ford Motor Company, developing multiple immersive realities using VR, AR, and MR to provide information in context to the design studio, multiple engineering teams, UX developers, computer-aided engineering analysis, and many more. Elizabeth has seen dramatic changes, from huge room-sized, multi-million dollar CAVE systems, to haptic seats, to car cockpits made out of wood. From the promise of virtual reality to it becoming real, Elizabeth has been an industry leader always pushing the limits of technology. She has just started a new venture called Immersionary Enterprises, to provide probability spaces where an enterprise can study any potential reality, or the art of the impossible or possible, with a host of relevant data. These realities can be shared across a global connected work team for more collaborative decision making. Immersionary Enterprises aims to establish holistic immersive reviews as a near-perfect communication and collaboration paradigm throughout industrial design and engineering. It is with great honor that I welcome VR pioneer Elizabeth Baron to the show. Welcome, Elizabeth. Elizabeth: Oh thank you for having me, Alan. That’s quite an introduction. I really appreciate it. Alan: Well, you certainly deserve it. You have been in this industry since the very beginning; you have seen some incredible changes, and maybe you can speak to what you’ve seen in the last 30 years of being involved in virtual and augmented reality, from where you started to where you are today. 
Elizabeth: Yeah sure. It’s actually quite a transformation I’ve witnessed. It really blows my mind in some regards. So, way back in the day when I started my career at Ford Motor Company, virtual reality was out there, but it wasn’t really a thing in enterprise, per se. And I really became interested in it and started working with it, I would say, before its time. So around the late 90s, I started working in that space and putting together, like, a life-sized human model that could scale to different proportions. We tracked the human through magnetic motion tracking. And since cars are made out of metal, that poses a little bit of a problem, so we created a wood — like, oak and mahogany — adjustable vehicle that could be a small car or a big truck, and then put you in it, and then changed you to be either like a super tall man or a very small woman, and let you do ergonomic assessments. So at that time, we were limited to 60 thousand polys in our entire scene. Alan: Wow! Elizabeth: I know! So we were culling data and massaging things and we’d say, you know, two weeks and we’ll have something for you. And we were really working hard because you have to try to represent a vehicle and a person and an environment in 60 thousand polys. So you can imagine what it looked like; it wasn’t very pretty. But we actually were able to get some value out of it. So we progressed from those days to working more with better tools and optical motion tracking became a thing. So that was a big advancement. So we can now work within prototypes of vehicles and it really opened up another whole set of possibilities for us. And so we worked in that regard, and at that time, I really realized the benefit of doing passive haptics. So


Business Mentor Show
How Lizzy Grew Her Marketing Agency While Traveling

Business Mentor Show

Play Episode Listen Later Mar 27, 2019 18:02


Join our growing entrepreneur community on Facebook: www.facebook.com/groups/139597470073188/ Got any questions? Ask me on Facebook: www.facebook.com/AleksanderVitkinPage/ Check out my free business training: www.businessmentor.com Aleks: Hi, this is Aleksander Vitkin and I’m here with Elizabeth, and Elizabeth is a member of the Business Mentor Insiders which is our Mastermind at businessmentor.com, and she’s been doing quite well in business recently so I decided to invite her over. She’s from London, but she’s right now visiting her family in Ghana, so welcome Elizabeth. Aleks: So what has been happening in your business over the past six months, what is some good news that you can share? So I guess you started a while back, and then six months ago what is the best news from the last six months? Elizabeth: The last six months is when my business really kicked off. I was getting in a decent amount of revenue, in fact triple what I was getting in the work I was doing before. And over the last couple of months my profit margins have become somewhat stable and quite a decent amount of what I make is purely from business at this point. Aleks: So when you started, you were already doing business part-time, so you were doing like 20 hours, is that correct? Or how many hours were you doing business when you started? Elizabeth: Yeah, about 20 hours per week was dedicated to learning and starting a business. Aleks: Right, and how did it go back then? So in the early stages of your marketing agency, how did it go, how were things running your business? Elizabeth: It was, I think the best word to describe it is stressful, right. It was a little bit unpredictable. It was quite a new world to me, so although I’ve always been somebody who works by myself so I had a kind of freelance mentality, let’s say. 
I know the amount of work I put in is how much I get out, but at the same time it was always a struggle because you know you’re in a big pool with a lot of people competing for the same prize, essentially. So, that was a really difficult hurdle to get over, both in practicality and as a mental hurdle that was something difficult to get past. But with the kind of, with the support that I’ve been getting within the group, so speaking to yourself and various other mentors within the group, the coaching has been a great guide to overcoming both the mental roadblocks and then the literal roadblocks I have been facing whilst trying to build my business. Aleks: Right, I guess last August was one of your best months, so what led you from doing kinda average in business, running kind of an average part-time almost hobby to building a business where you were, what was your top month, it was like 6.8K, 6.9K? Elizabeth: Yeah, 6.9 so in August is when I kind of really made up my mind to take on board. So I’ve been, I have been taking on board what even the other coaches and mentors have been saying but I think still I wasn’t dedicating enough of my attention to starting my business. So in August, I decided I was going to take out two weeks where I was going to treat it like a full-time business, something that was going to help, you know, sustain my life. Something I could properly grow and have a full vision for. So I was able to make 6.9K in sales that month, and one thing that really really helped me was also bringing on additional support in my team to make certain aspects of my business, such as lead generation, for example, more consistent because essentially without that you don’t have a business in the first place. I am a social skills mentor, is the name and job that I did, that I still do actually. 
So I work with young children from really really young, as young as two, who have autism spectrum disorder or related disorders and my objective is to help them develop the social skills that they need to, you know, engage better in society, engage better with their parents, work, and school, and so on and so forth.

Cookery by the Book
The Italian Table | Elizabeth Minchilli

Cookery by the Book

Play Episode Listen Later Mar 18, 2019 16:53


The Italian Table, by Elizabeth Minchilli. Intro: Welcome to the Cookery by the Book podcast. With Suzy Chase. She's just a home cook in New York City, sitting at her dining room table, talking to cookbook authors. Elizabeth: Hi, I'm Elizabeth Minchilli and my latest cookbook is The Italian Table. Suzy Chase: The Italian Table is glorious, from the recipes to the photos. The first thing you see when you open the cookbook is the stunning kitchen with rustic blue and white tile, and blue and white plates hung on the wall. Is this your kitchen? Elizabeth: Oh, I wish! That's a kitchen in a beautiful castle outside of Rome. Although I've spent a lot of time in it. Suzy Chase: Oh, that tile is to die for. Elizabeth: Beautiful. And you know, a lot of the kitchen, I didn't get into all the kitchens in the book, but the particularly beautiful ones I tried to include since they're so inspirational. Suzy Chase: I can't figure out what's more beautiful in this cookbook, your writing or your photographs. What do you love more? Elizabeth: Well, you know, for me, since the kinds of books I've always done have been so image-driven, I can't imagine one without the other. And I see the photographs as giving a different dimension to the words. And that's always been my response to cookbooks, you know. I love, obviously, recipes that work, but I love the story behind them. But I also like the visual inspiration, whether it's actually the food or the place settings or the tiles on the kitchen wall. Suzy Chase: Me too. So I found it interesting that each chapter captures a specific meal that you experienced in Italy. Describe how this cookbook is laid out. Elizabeth: Well the way, I was trying to decide how to combine my competing passions for, you know, interior design and setting and history with food. And I realized that it all came together at the table. And once I decided that, I wanted to share as many different kinds of meals as possible to show my readers how Italians really eat. 
I mean you know, most people imagine certain dishes with Italy, whether it's pasta or pizza or gelato. But people aren't eating those things all day long, and they're not eating them perhaps in the way that people think. So while the settings are beautiful, these are really the way people eat, whether it's at the beach, whether it's on a coffee break, you know, grabbing a slice of pizza in Rome. Whether it's in a summer vacation villa outside of, in Umbria. So I wanted to have a great range and that way to be able to explore both the setting and the food on the table. Suzy Chase: Yeah, I notice that you really drill down beyond the ingredients, beyond the cooking technique. Like you'll get the pasta and the bowl, but what about the bowl, or the tool used to get the pasta from the bowl to the plate or even the linens that cover the table. I love that part. Elizabeth: Yeah, that's my ... I love that part too. And not just because it involves shopping opportunities. What I really love about it is that it really, you know, 'cause when you go to a place you might have a great meal and you might support the local restaurants, in a way, but there's other ways that you can learn more deeply about a region and that's by visiting its artisans. And you know a lot of people will see pretty, you know, ceramics from Italy and stop there, knowing that they're from Italy. But I really like to, you know, drive home why this certain kind of plate shows up if you're on the beach in Positano, why a different kind of bowl shows up if you're in a small town in Puglia, and what those mean. And explore a bit about the people who are actually making those bowls, who are often the people that are eating those dishes anyway. Suzy Chase: Here's the question I'm dying to know the answer to. How did a girl from St. Louis end up in Rome as an expert on Italian cuisine? Elizabeth: Well, that goes back to the fact that when I was 12 years old I was living in St. 
Louis and my parents took a vacation, and they went to Italy and they did Florence, Venice, Rome. And they came back and instead of getting back to our life they packed up our house, sold the business, and we moved to Rome for two years. And although we only stayed there for two years and then moved back to the States, we always came back in the summer. And so I always felt at home whether it was in Italy or Spain or France, trying to get a way to get back, and that way came back in graduate school. And in the late '80s I decided if I picked a, you know, my dissertation topic correctly, I could get somebody else to sort of fund my permanent vacation, and I did. And I ended up in Florence working on sixteenth century gardens. And then along the way I met my Italian husband and started having Italian babies and Italian dogs and that's when my new career really shifted gears from academia to publishing. And at the beginning I was writing predominantly about art and architecture and design, but almost really really shortly thereafter I also started writing about food. But always in a cultural context. You know, when I was writing for Bon Appetit or Food & Wine or Town & Country I would write about restaurants but more, not just as a place to find good food but as a way to dive deeper into the culture. Suzy Chase: Tell me about where you live. Elizabeth: I currently divide my time between Rome and Umbria. Umbria is a region located just north, in between, let's say, Rome and Florence. And my main house is a little apartment in the old section of Rome called Monti. It's a little, I'm now talking to you from my office on the roof of our building. We've been living here, my husband had the apartment when I met him, my kids have been born here, and it's right, I mean, if I walked out, I just now walked down the street and my cash machine, my ATM, is in front of the Colosseum. 
Which is kind of nice. Suzy Chase: Oh, wow. Elizabeth: And then our house up in Umbria, which is on the cover of the book, actually. We spend the summers there and have a big vegetable garden and we have olive trees so we make our own olive oil and that's where we live. Suzy Chase: How old is your house in Umbria? It looks like it's stone. Elizabeth: It's made out of stone. And the house itself is, I would say parts date back to the sixteenth century. Suzy Chase: Wow. That's gorgeous. Elizabeth: And you know, like all of these houses, they're built onto over the years, and we restored it. My husband's an architect, and his specialty is restoring these houses into inhabitable places. And in fact two of my books talk about restoring houses in Italy. Suzy Chase: Talk a bit about how the Italian food words are the hardest to tackle. Like, cicchetti, in Venice, if I'm pronouncing that correctly. What is it, and where would we eat it? Elizabeth: Well, cicchetti is a word that yeah, exists only in Venice. Took me a really hard time to figure out what it means, because people translate it into tapas, you know? 'Cause we think we know what that means. Or little bites. And they kind of are both those things. But when you say it to a Venetian, they know exactly what it means and it has a sort of social context. It means, little things to eat along with a glass of wine so you don't get too drunk 'cause that's not the point. The point is actually meeting your friends and having a drink. And the food is sort of secondary. And you know all this stuff I just said, it's hard to put down in a one word translation. But it's funny you ask that because I mean, food in Italy is so difficult to translate and this past week I just did food tours as well, and Melissa Clark was just here and we were doing- Suzy Chase: Yes. You had your Awful Tour. Elizabeth: We had our Awful Tour. And it wasn't awful at all, it was wonderful. But it did deal with innards. 
And one of the things that we both learned, you know, we were both in Umbria, in Rome, and in Florence, is you know, the same little part of an animal can have, you know, ten different words depending where you are in Italy. And for me, that's sort of the fascinating thing. There's always something more to learn. You know, you said I'm an expert in Italian food, but I find it hard to believe that anybody's an expert. I think that there's always something to learn. Suzy Chase: Well since you brought up Melissa Clark, tell me about your food tours and your daughter Sophie. Elizabeth: So, when I first started my blog I didn't really know, you know, back in the early days of blogs, I didn't really know what it would lead to and how it would make money. 'Cause blogs don't make money. And so one of the things that it led to was doing food tours. And people started asking me for food tours and I didn't quite know what they were at the time. Nobody was really doing them in Rome. And so I started doing them, and I did market tours around several different neighborhoods in Rome on my own, and was immediately very busy doing these tours. And I was doing it on my own for a few years and then luckily my daughter, Sophie, graduated university. She was going to school in London, came back here, and I convinced her to work with me. And so now we both got sort of more work than we can handle. She's doing, handling the day by day tours here in Rome. I do some of them as well. But my time is mostly focused on our week in Italy tours. And those are deep dives into different regions. We're currently doing tours in Rome, in Florence, and in Puglia. And we do them on our own, they're usually six nights. We do them on our own, sometimes we partner with people. I've partnered with Melissa Clark twice and Evan Kleiman, who's located in LA. She's a cookbook author and host of Good Food. Suzy Chase: The best. Elizabeth: Yeah. 
And then in July we're doing one with Elizabeth Gilbert, the author of Eat Pray Love.
Suzy Chase: Oh, cool.
Elizabeth: Yeah. We're doing one in Puglia. So it's a fun excuse to collaborate with friends, and also see Rome and Italy in general from a different point of view.
Suzy Chase: What influence did Anna Tasca Lanza and her cooking school have on you?
Elizabeth: Well, I just remember seeing the book really early on, you know, when I first moved to Italy, working on my dissertation. I can remember picking up the Marcella Hazan books, cooking through them, and then there were these books also by Anna Tasca Lanza. And these beautifully illustrated books. And Sicilian food at the time, even in Italy, people weren't really talking about it. And I just found it fascinating. And when I started writing about food and getting sent on press trips, I found myself at the Tasca d'Almerita estate. And seeing these pictures of the food processes that were going on in both of the houses on the estate. And there was one house that sort of focused on the wine, and then there was Anna Tasca Lanza at the other villa. And I would see these pictures of, like, women pouring tomato sauce on wooden planks in a sun-drenched courtyard making tomato paste, and her recipes talked about these really romantic memories of the house cook sort of teaching her how to make things, and with the ingredients from the land. And it always was something that stuck in my head, and over the years I've made it back there as many times as possible, and I'm really happy to recreate a menu inspired by my time there.
Suzy Chase: You have a gorgeous porchetta in this cookbook. What is the key to a good porchetta?
Elizabeth: Well, obviously the key to any of these dishes is getting great ingredients. And the other thing is that you have to sort of, a lot of these recipes that people love are often eaten in certain places.
For instance, porchetta is most likely eaten at the side of the road. You know, as you're driving through Italy, there's a porchetta stand and he's got, you know, this 200-pound pig on the side of the road that he's cutting thick slices off of. I don't think anybody that's buying my book has an oven big enough to fit a pig in it. And so the challenge of my recipe was creating a porchetta that you could cook at home. And in that case it was something that would fit in your oven, have all that crispy skin, have all the nice juicy fat, but not get dried out in the middle. And so, working with my local butcher in Umbria, I came up with that recipe. So it has all those things. And it's just super easy. Once you get the really right kind of meat, you barely season it. I mean, you season it correctly, tie it up correctly, you put it in the oven and you walk away. So, and I have to say, most of the recipes in the book are sort of, you know, not a lot of work.
Suzy Chase: I can't talk about porchetta without bringing up fraschetta. Describe a fraschetta.
Elizabeth: A fraschetta.
Suzy Chase: Fras-, yes.
Elizabeth: Sorry! They're all really hard. Everybody mispronounces my name, too, because the C and the H and all those things are really hard to get in Italian. So, a fraschetta.
Suzy Chase: Yes.
Elizabeth: A fraschetta is a restaurant located in the town of Ariccia. It's south of Rome, and it's known for its porchetta. And these fraschette were originally just little shops, like holes in the wall, that would sell wine. And people would sit outside, and to provide shade the owners would put up a few branches with the leaves still attached. And those are frasche. And so these places became known as fraschette, where you could go get sort of table wine and bring your own food. Eventually these places started serving their own food, turned into restaurants, but they're still called fraschette today.
And one of the places that, actually, Sophie and I visit a lot is La Selvotta in Ariccia. And the pictures in the book come from our experience there, which is one of my favorite ones because it's actually located in a leafy sort of forest.
Suzy Chase: It looks heavenly.
Elizabeth: It is. And the food is just, you know, it's what you want to sit down at a picnic bench and eat. It's like mozzarella and salami and olives. And then you always have a few cooked things included. Porchetta, maybe, some sausages. It's fantastic.
Suzy Chase: So last night I made some of your recipes out of the menu for a late summer dinner under the pergola. Even though it's the dead of winter here.
Elizabeth: I saw that, I saw that! You put them on Instagram. They looked perfect. Well, I have to say, when people ask me what's my go-to recipe in the book, it's the bean soup recipe. It's just so good.
Suzy Chase: It's two minutes.
Elizabeth: I know. It's two minutes. And people really think you put a lot more effort into it than you did.
Suzy Chase: Yeah.
Elizabeth: I mean, if you start out with dried beans and soak them, it does become, you know. And I do suggest you do that. But I'm not gonna tell anybody if you use canned beans, that's okay.
Suzy Chase: Okay, thanks.
Elizabeth: But I have to say, it's a great winter recipe, but then I find that in the summer, if you serve people soup, they really appreciate it. It's like something they don't expect, and they're sick of eating cold food.
Suzy Chase: Describe the story that went with this menu, how you became a good Italian momma immediately after your daughters were born.
Elizabeth: Well, one of the things, one of the many things that I realized, is that being an Italian momma has lots of sort of unspoken rules. And one of them is that while you stay in the city with your kids during the school year, the minute the school year ends or the weekend comes, you head out to a country house.
And I don't know how it is, but everybody seems to have a country house. Whether it's your Nonna's, whether it's, you know, your friends', you go out to the countryside. And so I would pack up the kids and go up to the country. And so that's where, you know, even though we live in Rome, I learned to cook a lot and entertain at our house in Todi. And you know, I learned to cook, you know, meals according to the seasons as well, which is something that's, I think, really important.
Suzy Chase: So moving on to my segment called My Last Meal, what would you have for your last supper?
Elizabeth: You know, it has to do with place as well. So I think I would have to say, maybe a plate of carbonara at one of my favorite trattorias, Perilli in Rome. Just because for me that sums up sort of everything. It sums up the place I would go for Sunday lunches with my family, it has my favorite waiter Valerio, it's a place that's always been there before I got there, it will exist long after I leave. And the plate, you know, the carbonara goes without saying.
Suzy Chase: Where can we find you on the web and social media?
Elizabeth: On social media, I'm eminchilli on Instagram. And I am Elizabeth Minchilli on Facebook, and eminchilli on Twitter. And my website is elizabethminchilli.com. And I also have an app, Eat Italy, which is guides for eating your way through Rome, Venice, Florence, Puglia, Umbria, and more and more cities every day.
Suzy Chase: Thanks, Elizabeth, for coming on the Cookery by the Book podcast.
Elizabeth: It was great to be here. Thanks for having me.
Outro: Follow Suzy Chase on Instagram, @cookerybythebook, and subscribe at cookerybythebook.com or in Apple Podcasts. Thanks for listening to the Cookery by the Book podcast, the only podcast devoted to cookbooks, since 2015.

A Life & Death Conversation with Dr. Bob Uslander
Aid in Dying: What It Means to Those Who Are Terminally Ill, Ep. 19
May 25, 2018 · 40:47
Please Note: This was recorded as a Facebook Live earlier this year, prior to the recent ruling by a Riverside County Superior Court judge to overturn the California End of Life Option Act of 2015. In response, California Attorney General Xavier Becerra filed an emergency appeal seeking a stay of Superior Court Judge Daniel Ottolia's ruling that invalidated the less-than-two-year-old medical aid-in-dying law. "It is important to note the ruling did not invalidate the law or the court would have said so explicitly in its order, so the law remains in effect until further notice," said John C. Kappos, a partner in the O'Melveny law firm representing Compassion & Choices. If this law and the right to die with dignity is important to you, we urge you to learn more from Compassion & Choices, the organization that helped get the law passed.

Note: A Life and Death Conversation is produced for the ear. The optimal experience will come from listening to it. We provide the transcript as a way to easily navigate to a particular section and for those who would like to follow along using the text. We strongly encourage you to listen to the audio, which allows you to hear the full emotional impact of the show. A combination of speech recognition software and human transcribers generates transcripts, which may contain errors. The corresponding audio should be checked before quoting in print. Need more information? Contact Dr. Bob for a free consultation.

Transcript

Dr. Bob: On this episode, Elizabeth Semenova and I speak very frankly about what it's like to support people through Medical Aid in Dying. We explain the process; we discuss who asks for this kind of support and why there are still so many barriers. This was originally captured as a Facebook Live and repurposed as a podcast because this information is so vitally important. Please share the podcast with everyone and anyone you feel would benefit from listening. Thank you.
Dr. Bob: I'm going to do a little bit of introduction for myself. If you're watching this and you have been on the Integrated MD Care site, you probably know a bit about me. I've been a physician for 25 to 30 years, somewhere in that range. Over the past several years I've been focusing on providing care for people who are dealing with complex illnesses, the challenges of aging, the challenges of dying. During these few years, I've discovered a lot of gaps in the health care system that cause a lot of challenges for people.
Dr. Bob: We developed a medical practice to try to address those big challenges and those big gaps that we've encountered. It's been really remarkable to be able to do medical care in a way that is truly sensitive to what people are really looking for and what their families are looking for, that is not constrained and limited to what the medical system will allow. It's not constrained by what Medicare will pay, what insurance will pay. We allow people to access us completely and fully, and we are there to support them in a very holistic way, with medical physician care, nursing care, social work care, and then a whole team of therapists: massage therapists, music therapists, acupuncturists, nutritionists.
Dr. Bob: So that has been really fascinating and phenomenal. Elizabeth came along in the last several months. Really, she was drawn primarily to the true end-of-life care that we deliver, and has been truly surprised how beautifully we are able to care for people who aren't necessarily dying as well.
Elizabeth: Absolutely, yeah.
Dr. Bob: So we can talk about all the different aspects of that, but we are here today to really talk about Medical Aid in Dying. Because shortly after we started this practice, back in January 2016, California became one of the few states in the United States that does allow physician-assisted death.
Dr. Bob: It allows what is also known as Death with Dignity, Medical Aid in Dying.
The California End of Life Option Act took effect in June 2016. At that point, a person with a terminal illness, an adult who is competent, had the ability to request a prescription of medicine from their physician, from a physician, that if taken would allow them to have a very peaceful, dignified death at a place and time of their choosing. Since June 2016 we have become essentially experts and kind of the go-to team in San Diego for sure, and actually throughout a good portion of Southern California, because other physicians are reluctant to participate or because the systems that the patients are in make it very difficult or impossible for them to take advantage of this law. There is a lot of confusion about it. It's a very complex, emotionally charged issue. We as a team, Elizabeth and I, along with other members of our team, have taken it upon ourselves to become true experts and guides so that people can get taken care of in a way that is most meaningful and sensitive. In a way that allows them to be in control and determine the course of their life leading up to their death, and how they are going to die. That's why we are here. We want to educate; we want to inform; we want people to not be afraid of the unknowns. We want to dispel the myths. I'm passionate about that. We work together, and I think we do a very good job as a team of supporting patients and families. I'd like to have Elizabeth share a little about why this is so important to her, and then we are going to get into some more of the specifics about what's actually taking place, the requirements, how the process works, and if there are questions people have, we are going to answer those as well. We are going to go for about 20 to 25 minutes, and if it turns out that we don't get through enough of our material, then we will have another session, but we don't want to make this too long. We want to make it concise, meaningful, and impactful.
Elizabeth: Okay.
Dr. Bob: All right.
Elizabeth: Okay.
I started as a hospice social worker, and I became an advocate for Aid in Dying because I learned about the law. I learned that there were not a lot of options, policies, procedures in place in Southern California, when I started working in hospice, for people to take advantage of and participate in the End of Life Option Act.
Elizabeth: There were very, very, very few resources. There were no phone numbers to call of people who would answer questions. There were no experts who, well, not no experts, who thoroughly understood the law, but it was very hard to access that information.
Elizabeth: I did my best to find it and became connected with some groups and some individuals who were experienced with and understood the law, and became really passionate about pursuing advocacy and allowing as many people to have access to that information as possible. I started working on sharing that information and being a resource and learning everything that I could so that other people could have that. How I became connected with Integrated MD Care and with you: I found you as a resource for another client, and we started having conversations, and I learned that it was possible to be supportive of people through this process through the work you were doing, and I took the opportunity to become a part of it. We have done a lot to support a lot of people, and it's become a really special part of our work and my life.
Dr. Bob: Why is it so important to you? Why is it so important to you for people to have access and the information?
Elizabeth: I really believe that every life can only be best lived if you know all of the options that you have available to you. So how can you make choices without information? Right? So when it comes to something like this, which is a life and death situation, quite literally, there are limited resources for people to make informed choices.
What could possibly be more important than having access to information about what your legal rights are to how you live and die? With California only having begun this process of Aid in Dying, exploring different perspectives and legal options and philosophical positions on the subject, I think it's really important to open that conversation and to allow people who support it, as well as people who are against it, to have those conversations and to explore how they feel about it and why. Then of course, for the people who want to participate, who want information, resources, support in the process: they have every legal right to it; in my opinion, they have every moral right to it; and if there are no other people who are willing to support them, I feel it is my duty to do that.
Dr. Bob: Awesome. And you do it well.
Elizabeth: Thank you.
Dr. Bob: Yeah, it's kind of crazy to think we have this legal process in place. People have spoken up and said, we want to have access to this, and we believe it's the right thing. Despite the fact that we have a law in place that allows it, it was so difficult, and it still is to some degree, but especially in the beginning, it was like a vast wasteland. If somebody wanted to find out how to access this process, no one could really give them adequate information. There were organizations that would tell them what the process is and how it happens, but there was no one stepping up to say "I'll support you." There were no physicians, and there was no one who was willing to give the name of a physician who was willing. It was very frustrating in the beginning of this process, in the first, I would say, the first year and a half. Still, to some degree, getting the right information, getting put in touch with those who will support it, is difficult or impossible.
Even some of the hospital systems that do support Medical Aid in Dying have a process that is very laborious, and there are so many steps that people have to go through that in many cases they can't get through it all. Our practice, we are filling a need. Our whole purpose in being is to fill the gaps in health care that cause people to struggle. One of my mantras is "Death is inevitable, suffering is not." Right. We are all going to die, but death does not have to be terribly painful or a struggle. It can be a beautiful, peaceful, transformative process. We've been involved in enough end-of-life scenarios that I can say with great confidence that given the right approach, the right information, the right guidance, the right support, it can always be a comfortable and essentially beautiful process.
Elizabeth: Something that is important, too, is to have people who have experience with these processes, these struggles that people have. Not just anyone can make it an easy process. Not just anyone can make it a smooth process. You have to have hit those obstacles, you have to have experienced what the difficulties are and where the glitches are, and in order to be able to fill those gaps you have to know where they are.
Dr. Bob: Right.
Elizabeth: Sometimes that comes from just falling into the hole and climbing out, which is something we have experienced a few times.
Dr. Bob: Having been through it enough times to... and of course we will come across-
Elizabeth: More...
Dr. Bob: Additional obstacles, but we'll help... and that doesn't just apply to Medical Aid in Dying; it applies to every aspect of health care, which of course becomes more complex and treacherous as people's health becomes more complicated and their conditions become more dire and their needs increase. Hospice, yes, it's a wonderful concept, and it's a wonderful benefit, but in many cases it's not enough.
Palliative care, in theory, is a great concept; we need more palliative care physicians and teams and that kind of an approach, but in many cases, it's not enough. What we are trying to do is figure out how to be enough. How people can get enough in every scenario. We are specifically here talking about Medical Aid in Dying. In California, the actual law is called the End of Life Option Act. It was actually signed into law by Governor Brown in October 2015, and it became effective June 9th, 2016. I'll note that just yesterday the Governor of Hawaii signed the bill to make Medical Aid in Dying legal in Hawaii. The actual process will begin January 1st, 2019. There is a period of time, like there was in California, a waiting period, while they're getting all the processes in place and the legal issues dealt with.
Elizabeth: Which, you would think, would be the time frame that health care institutions would establish policies, would determine what they were going to do and how they were going to help.
Dr. Bob: One would think.
Elizabeth: You would think.
Dr. Bob: Didn't happen here.
Elizabeth: That didn't happen here.
Dr. Bob: So maybe Hawaii will learn from what happened in California recently, when all of a sudden June 9th comes, and still nobody knows what to do. What we are becoming, actually, is a resource for people throughout California. Because we have been through this so many times now and we have such experience, we know where the obstacles are; we know where this landscape can be a bit treacherous. But if you understand how to navigate it, it doesn't have to be.
Elizabeth: We have become a resource not just for individuals who are interested in participating, or who want to find out if they qualify, but for other healthcare institutions who are trying to figure out how best to support their patients and their loved ones. To give them the guidance that they need when they don't have the experience of knowing what this looks like.
Dr. Bob: Yup.
Training hospice agencies. Training medical groups. At the heart of it, we just want to make sure that people get what they need, what they deserve, and what is their legal right. If we know that there is somebody who can have an easier, more supported, more peaceful death, we understand how incredibly valuable that is, not just for the patient but for the family. For the loved ones that are going to go on. So let's get into some of the meat of this. I'm going to ask you; we can kind of trade off.
Elizabeth: Okay.
Dr. Bob: I'll ask you a question.
Elizabeth: Okay.
Dr. Bob: You ask me a question.
Elizabeth: Okay.
Dr. Bob: All right. If you don't know the answer, I'd be very surprised. In general, who requests Medical Aid in Dying?
Elizabeth: A lot of the calls we get are from people who qualify. So I don't know if you wanna go over the qualifications...
Dr. Bob: We will.
Elizabeth: Okay.
Dr. Bob: That's the next question. Who is eligible.
Elizabeth: Sorry. A lot of the people who call are individuals who are looking to see if they qualify and want to know what the process is. There are people who are family members of ill and struggling individuals, who wanna support them in getting the resources they might need. There are some people who just want the information. There are some people who desperately need immediate support and attention.
Dr. Bob: Do you find, 'cause you get a lot of these calls initially, do you find that it's more often the patient looking for the information, or is it usually a family member?
Elizabeth: It's 50/50.
Dr. Bob: Oh, 50/50.
Elizabeth: I think it depends a lot on where the patient is in the process and how supportive the family members are. Some people have extremely supportive family members who are willing to make all the phone calls and find all the resources and put in all the legwork. Some people don't, and they end up on their own trying to figure out what to do and how to do it.
There are some people who are too sick to put in the energy to make 15 phone calls and talk to 15 different doctor's offices to find out what the process is. A lot of people start looking for information and hit wall after wall after wall. They don't even get to have a conversation about what this could look like, much less find someone who is willing to support them in it.
Dr. Bob: Great, thank you. So who is eligible? Who does this law apply to? That's pretty straightforward, at least in appearance. An adult, 18 or older. A resident of California. Who is competent to make decisions. Has a terminal illness. Is able to request, from an attending physician, the medication that, if taken, will end their life, pretty much 100% of the time. The individual has to make two requests, face to face with the attending physician, and those requests need to be at least 15 days apart. If somebody makes an initial request to meet and I determine that they are a resident of California, they are an adult, they are competent, and they have a medical condition that is deemed terminal (I'll talk more about what that means), if I see them on the 1st, the second request can happen on the 16th. It can't happen any sooner. The law requires a 15-day waiting period. That can be a challenge for some people, and we will talk a bit about that as well. In addition to the two requests of the attending physician, the person needs to have a consulting physician who concurs that they have a terminal illness and that they are competent to make decisions, and the consulting physician meets with them, makes a determination, and signs a form. The patient also signs a written request form that is essentially a written version of the verbal request, and they sign that and have two people witness it. That's the process. Once that's completed, the attending physician can submit a prescription, if the patient requests it at that time, to the pharmacy.
Certain pharmacies are willing to provide these medications, and many aren't. But the physician submits the prescription to the pharmacy, and when the patient wants to have the prescription filled, they request that the pharmacy fill it, and the pharmacy will make arrangements to have it delivered to the patient. The prescription can stay at the pharmacy for a period of time without getting filled, or it can be filled and be brought to the patient, and at that point the patient can choose to take it or not. The patient needs to be able to ingest it on their own. They have to be able to drink the medication; it's mixed into a liquid form. They need to be able to drink five to six ounces of liquid, and it can be through a glass or through a straw. If the patient can't swallow but they have a tube, like either a gastric tube or a feeding tube, as long as they can push the medication through the tube, then they are eligible. The law states that no one can forcibly make the patient take it. They have to be doing it of their own volition, willingly. Okay. So, that's pretty much the process. Is there anything that I left out? What is a terminal illness? That is a question that is often asked. For this purpose, a terminal illness is a condition that will likely end that person's life within six months if the condition runs its natural course. Most of the patients that we see requesting Medical Aid in Dying have cancer. They have cancer that is considered terminal, meaning there is no cure any longer. It's either metastasized, or it involves structures that are so critical that it cannot be cured. In most cases, there is no treatment that will allow them to live with a meaningful quality of life past six months. Of course, it's difficult to say, to the day, when somebody is going to die, but there has to be a reasonable expectation that the condition can end their life within six months.
We also see a number of people with ALS, Lou Gehrig's disease, amyotrophic lateral sclerosis. That's a particularly sensitive scenario, because those people lose their ability to function; they lose their motor function, and as it gets progressively further along, they lose their ability to swallow. They can lose their ability to speak and breathe. The time frame of that condition can be highly variable. We see people with advanced heart disease, congestive heart failure, advanced lung disease, other neurologic diseases.
Elizabeth: The gamut.
Dr. Bob: We see the gamut, but those are the majority. We've talked about who's requesting this, for the most part. Who's eligible? A patient who is competent, has a terminal diagnosis, and is an adult resident of California. We talked about the requirements, what's the process. Let's talk a little bit about the challenges that we've identified, or that other people have identified. At the very beginning of this process, I became aware that the law was going to begin taking effect just a few months after I started my medical practice at Integrated MD Care, and I figured, great, this is progressive. We are kind of like Oregon; we are going to have this option available, and I felt like it was the right thing. I've always felt like people should have more control and be able to be more self-determining, especially at end of life. Whose life is it? Right? Who are we to tell somebody that they have to stay alive longer than they want to? That never made sense to me. I think if you're not in this world of caring for people at end of life, or you haven't had an experience with your family, most people figure when people are dying they get taken care of adequately. Hospice comes in, and they take care of things. In some cases that's true. In many cases, it is the furthest thing from the truth. People struggle and suffer.
Patients struggle and suffer, families suffer, and if we have another option, if we have other options available, wouldn't we be giving them credence? My answer is yes, we should. So when the law was coming into effect, I figured physicians would be willing to support patients, because it's the right thing. I just assumed people would go to their doctors and say, "We now have this law, can you help me?" and the doctors would say, "Of course." It didn't quite work out that way. Now I understand why; I see it more clearly. People started calling me to ask for my support, and I started meeting with them and learning about what they were going through, and learning about all of the struggles they'd had through their illness and trying to get support with what is now their legal right, and they were getting turned away by doctor after doctor after doctor. I learned what I needed to learn about the process, and I started supporting a few patients here and there. As time went on, I saw what an incredibly beautiful process it is, what an extraordinarily peaceful end of life we could help people achieve, and the impact that it has on the families was so incredibly profound, that I knew that this was something that I needed to continue supporting. With the hope that other physicians would come on board, and there wouldn't be such a wasteland and so much struggle, because I can only take care of so many people. Well, it's a year and a half later, and I do think things have-
Elizabeth: Improved. Improved some.
Dr. Bob: Some of the hospital systems in San Diego, certainly, have developed policies and processes to support patients through Aid in Dying. Sometimes it can still be laborious and cumbersome, and hiccups occur that create great challenges and struggles. But what we've developed is a process that is so streamlined. Like Elizabeth mentions, we've come across so many of these obstacles and these issues that couldn't have really been anticipated.
Obstacles that have to do with hospice agencies not wanting to be supportive. With not being able to find a consulting physician, for various reasons. Coroners and medical examiners not understanding anything about this process, so we've had to be educating them, to make sure that the police don't show up at somebody's house in the middle of the night. It's become a real passion for both of us and our whole team. To be able to do this, and to be able to do this really well, as well as it could possibly be done. More doctors are coming on board and being open to this. I'll tell you, I'm not so sure that's the right thing, and we have thoughts about that. I've been talking a lot, so I wanna sit back and let you talk, take a sip of my coffee, and I wanna hear your thoughts on-
Elizabeth: Other doctors.
Dr. Bob: Other doctors, and how they perceive this. Why we may not just want every doctor-
Elizabeth: Doing it.
Dr. Bob: Doing it.
Elizabeth: I think it's really important that other doctors be open to it. Especially open to the conversations. I think one of the things that has been the most important for me is to help people start those conversations with their doctors, with their families, with other healthcare providers. A lot of the doctors are restricted by policies where they work, or by moral objections, or just by not really being familiar and being concerned that they might misstep. I think that having doctors come on board, first in terms of conversations, is fantastic. Then also learning the process is important. As simple as it is in the way that you described it, it's more complex than that. There are a lot of small details, paperwork, and requirements. Things have to be done a certain way in order to be compliant with the law. There are aspects of supporting the family. This is a very unique experience.
If you as a physician don't have time to have longer conversations with patients and families, if you don't have time to provide anticipatory support and relief for the grieving process or for the dying process, it can be a struggle for the patients and families to go through this even if they have the legal support that they need. I think that's one of the things you were referring to in terms of why it's not necessarily good for everybody to come on board. Dr. Bob: Yeah. Because if they say that they will support a patient and be their attending physician through this process, they could start the process and then come across some of these hurdles that they don't know what to do with, and it could completely derail the process. It's so critical, when patients finally feel that they have this option available to them and see the light at the end of the tunnel, that every little misstep and every little delay is- Elizabeth: Excruciating. Dr. Bob: Excruciating. We see that happening over and over again. So when people find us and we assure them we will help them get through this without any more hiccups, without anything getting derailed, they are very cynical. We tell them- Elizabeth: They've been to so many doctors, they've been to doctors who've said... Dr. Bob: They've been screwed, they've... Elizabeth: 'We will help you,' and they haven't gotten the help that they need. Dr. Bob: There is nothing that's more painful for somebody, an individual or a family member who's finally come around to wanting to support a mom or dad or husband or wife or child, than to have it be taken away from them or threatened. We make ourselves available. There are times when we say, we are available for you anytime, day or night; you can contact us. And they start calling us; I've gotten calls at 2 in the morning from somebody just to say, I wanted to make sure you were really there, that you really would respond. They can't wait to get to the endpoint.
Not even because they are ready to take the medication, but because they are ready to have the peace of mind and the security of knowing that they have an easy out, rather than having to struggle to the bitter end. Elizabeth: This is really about empowering the patient and the family. This is all about providing them with the opportunity to do what they want to do with their life: to live it the way they want to live it and to end it the way they want to end it. Not in a way that is incongruent with their moral, ethical, and spiritual life choices, but in a way that supports the way they've lived, the principles they've lived by, and the things that matter to them. I would also say that the difficulties doctors have had, and the struggles we've had in working with other physicians, are not because they don't care about their patients. It's not because they don't want the best thing for them. Maybe they disagree with what the best thing is, or maybe they feel that they are not able to provide sufficient support. There are a lot of really good doctors who aren't able, for whatever reason, to do this. Dr. Bob: That's a great point. I think part of it is that sometimes they work for organizations that won't allow them to, and that happens often. Then they don't understand the process; they are intimidated by it. They don't want to mess it up. And they are so busy that they feel it's going to require too much time out of their day. Elizabeth: Which it does. Dr. Bob: Which it can, and they don't have any way to bill for that. They feel like they are going to be doing everybody a disservice. But unfortunately, that often leads to the patients being in this state of limbo and not knowing where to turn. Elizabeth: Thinking that maybe they have started the process and- Dr. Bob: Not. We have certainly seen that. Elizabeth: Discovering later that they haven't. Dr. Bob: So we are going to close it down here shortly.
One of the things, and you spoke about empowerment and how important that is, both for the patients and for the families. One thing that I've recognized: I've now assessed and supported well over a hundred patients through this process. I've been with many of these people when they've taken the medication and died. So I've seen how beautiful and peaceful it is. In most cases, a lot of times there's laughter and just a feeling of incredible love and connection that occurs with the patient and the family in the moments leading up to it. Even after they have ingested the medication, we have people who are expressing such deep gratitude and love, and even laughing, because they are being freed. They are not afraid; they are almost rushing towards this because it's going to free them. Most of the time they fall asleep within a matter of minutes and die peacefully within 20 to 30 minutes. Sometimes sooner, occasionally a bit longer. But if anyone is wondering whether there is struggle or pain or flopping around in the death throes: none of that. This is truly... this is how I want to go when it's my time. The one thing that seems very consistent with the patients that I've cared for through this process is that they have a physical condition that is ravaging their bodies. Their bodies are decaying, they are declining, they are not functioning. Their bodies are no longer serving them. But their spirit is still strong. They have to be competent to be able to make this decision. Most of the time they are so determined to be in control of what happens to them; their spirit has always been strong. They have lost control because their bodies no longer function, and that is irreconcilable for them. They cannot reconcile this strong spirit in a body that is no longer serving them and that is only going to continue getting worse. That's the other important part of this.
These are people who are dying. They are not taking this medication because they are tired of living. They are taking this medication because they are dying and they don't see any reason to allow their death to be more prolonged and more painful than it needs to be. They are empowered, and we are empowering people to live fully until their last moments and to die peacefully. My last little note here is, why do we do this? Well, that's why we do this. Elizabeth: Yeah. Dr. Bob: Because people deserve the absolute best, most peaceful, most loving death. This is, in many cases, the only way to achieve that. I think we are going to kinda wrap it up. We obviously are passionate about this topic. We are passionate about wanting to share the realities of it. We don't want there to be confusion, misconceptions, misunderstandings. Aid in dying is here; it's not going away. It's going to continue to expand throughout our country. We are going to get to a place where everybody has the right to determine when their life should end peacefully when they're dying. I'm very happy and proud to be at the forefront of this. I know it's controversial, and I imagine there are people who think that I'm evil, and I'm okay with that, because I know. I see the gratitude that we get from so many patients and families. When we go out and speak to groups about this, the vast majority of people are so supportive and- Elizabeth: Sort of relieved; even the professionals are so relieved. We have a patient, and we have been helping another doctor support that patient, and he's so relieved and so friendly and so grateful just to be able to provide the support that he wouldn't otherwise be able to provide. It's not just the patients; it's everybody we engage with on this. It's really amazing. Dr. Bob: Thank you. It really is an honor to watch you engage with the patients and families and to be as supportive of what we're doing. It's remarkable. Elizabeth: Thank you. Dr.
Bob: We will talk about some of the options that people have when they don't qualify for aid in dying, because there are other options. We wanted to address some of those as well, but not on this live; we'll do that maybe next time. Thanks for tuning in, have an awesome day, and we will see you soon. Take care.

The Passionistas Project Podcast
Elizabeth Tulasi Supports Democratic Women Running for State Office in California


Mar 20, 2018 29:04


Elizabeth Tulasi brings 15 years of political and non-profit management experience to her board role for California Women's List, a political action committee that supports Democratic women running for state office in California. She started her career as a Capitol Hill staffer in Washington, DC. Upon returning to California, Elizabeth worked at a food bank, advocating to make healthy food more accessible, and on other programs that serve families living in poverty. Most recently she managed issue campaigns as COO of California's largest business advocacy alliance. More about California Women's List. Learn more about The Passionistas Project.   FULL TRANSCRIPT: Passionistas: Hi, and welcome to The Passionistas Project Podcast. We're Amy and Nancy Harrington. Today we're talking with Elizabeth Tulasi, who brings 15 years of political and nonprofit management experience to her board role for California Women's List, a political action committee that supports Democratic women running for state office in California. She started her career as a Capitol Hill staffer in Washington, DC. Upon returning to California, Elizabeth worked at a food bank advocating to make healthy food more accessible and on other programs that serve families living in poverty. Most recently, she managed issue campaigns as COO of California's largest business advocacy alliance. So please welcome to the show, Elizabeth Tulasi. Elizabeth Tulasi: Hi, thank you so much for having me. Passionistas: Thanks for being here. What are you most passionate about? Elizabeth: I'm most passionate about, I think, finding the truth, and everybody recognizing what is the truth and what is real. I think that if people have information and people recognize what's going on, then we can all make better decisions. A lot of things in our economy and our society and our political processes are hidden and obfuscated, often on purpose.
So if those things come to light and people have that information, then we can all make better choices, choices that I think are better for everyone, better for our community. Passionistas: How does that relate to the activism that you do? Elizabeth: Well, I think a lot of people don't know what decisions are made at the various levels of government. A lot of people don't even know what the various levels of government are. The presidential campaigns take up a lot of space in people's minds, and they are of course very important, but the decisions that affect your and my everyday life are usually made much closer to home. And we also have more control over those things. So, you know, thinking about schools: if we want good schools in our communities, those decisions are made by local school boards. The funding that schools have is determined by state and local taxes, which are also set by state and local representatives. If you have good parks in your neighborhood or in your state, those again are determined by local and state elected officials. So a lot of power resides much closer to wherever you live, and I really want people to know about that and to insert their voices into those conversations. You know, Nancy and I were talking just a minute ago about the passing of Ruth Bader Ginsburg, and she has so many quotable nuggets of wisdom. But one that I think a lot about is where she said women belong in all places where decisions are being made. And decisions are being made all around us. We need to know what decisions those are, who's making them, and how we can be part of that. Passionistas: Let's go back. When did you get interested in politics and activism? Elizabeth: I remember as a kid, I was in Girl Scouts, which wasn't necessarily political, but it was public service. And with a lot of the things that we were working on, it became a question of, why is this a problem?
You know, like when we would go make sandwiches and give them out to homeless folks living in our area, as a kid you're always asking, why are there so many homeless people? Why do we need to clean up the parks? With all of the little service projects that we did, the question was always, well, why is it this way? So I think that really leads to understanding what factors govern our lives. And then in high school, I was in the 10th or 11th grade when 9/11 happened. There were a lot of political choices that led to that and came out of that, and so I became more active at that point. And then, you know, even things like LGBT rights: in high school, I was involved in drama and theater, so I had a lot of gay friends. At the time, I don't know that I knew that much about the politics of it, but, you know, you become kind of an activist in defending people's rights to just exist.   Passionistas: You actually worked in DC early in your career. So what did you do there? Elizabeth: I went to DC to do AmeriCorps. AmeriCorps is like our domestic Peace Corps program. I gave a year to work for a foundation that promoted public service and volunteerism. So I did that for a year, and then I worked for a member of Congress who is actually from Los Angeles, Grace Napolitano. I worked there as her scheduler. Passionistas: What did that entail? Elizabeth: I definitely had a lot more power than I knew I had at that time. I did not capitalize on that as well as I should have. A member of Congress is just constantly in demand, by their constituents, by special interest groups, by lobbyists. Time is just of the essence. And so my job was to manage her time, to assess all of those requests that were coming in all the time and assign them to other staff members or make the time on the congresswoman's calendar. There's just a lot of balancing of priorities.
Passionistas: Did you like being in the DC system? Elizabeth: No, I did not like it there. I left after that second year. A few reasons why I don't like DC: one, the weather. It's terrible. There are like three nice weeks in the spring and three nice weeks in the fall, and the rest of the time you're either sweating like nothing you've ever experienced or you're trudging through sleet. And it's not like pretty, glistening white snow. I'm from LA, so, you know, you can't hang with that for long.  Then two, professionally, the first question anybody asks you in any setting is, who do you work for? And it's very much about assessing how valuable you are to them in that moment. I just felt like people talked about work all the time. When I came back to California, I remember my lunch break on my first day at work, and, you know, there are people in the kitchen microwaving their lunches or whatever, and people were talking about what they did on the weekends. People were talking about, you know, how they went kayaking. People talked about a meeting they were going to after work. I just realized, oh my God, you people talk about other things besides just what happens in this building. And I thought that was very impressive.   Passionistas: During your time in DC, was there something you learned there that you've sort of taken through your career? Elizabeth: I mean, it was also the very beginning of my career, so I think there's a lot that you just learn from being new in a professional workplace. One thing, which may or may not be specific to politics, is, you know, know your audience and understand: what does this person, or what does this group, want, and how can I address that with whatever I have? Sometimes that doesn't necessarily mean giving them what they want, but it means making them feel heard. And I think that is applicable in a lot of different industries.
I guess making people feel heard without actually giving them anything or committing to anything is a skill that is useful. Passionistas: Did you come straight back to LA or did you go to San Francisco first? Elizabeth: I went to San Francisco after DC. I wasn't quite ready to move back home, or move back to my home area, and I lived there for five years. Passionistas: What did you do there? Elizabeth: I worked for a food bank there. Actually, I lived in the East Bay, in Oakland and Berkeley, for some time, but I worked in San Francisco for the San Francisco-Marin Food Bank. I started out as an executive assistant, which was a good kind of transition from a scheduler type of role, and also great for being able to see all the different parts of the organization and the business, how things run. And at that organization, the deep policy and advocacy work really happened with the CEO, directly out of his office, so I was useful in that space. Then I transitioned to become a major gifts officer, which is basically: you talk to high-net-worth individuals and try to get them to give money for things that you're trying to do for the community. Passionistas: Was there a part of being of service in that job that you connected to? Elizabeth: I think what was really cool about that job is that I was basically Robin Hooding, you know: I was taking money from rich people and using it to buy food for poor people. And, you know, just in a very simplistic way, that feels like a good use of time and energy. We were really making a huge impact. Even in a place as wealthy as San Francisco, one in four people are at risk of hunger and don't know where their next meal is going to come from. Most of those are children and the elderly, and that's true for a lot of places across the country.
So we did, I think, really good work. Also, on the policy front, there's a ton of policy that affects whether or not people have enough money for food and can afford to pay rent and pay for medical bills and pay for food. So I did some cool stuff there. But ultimately, as a service organization, the amount of time they could spend on advocacy was smaller than what I was interested in, and so eventually I left because I wanted to get more into politics. Passionistas: So then you moved back to LA. At that point, you worked for the Los Angeles County Business Federation. Talk about that job and what you did there. Elizabeth: The LA County Business Federation is an alliance of a bunch of different business groups. Every industry has an association, every ethnic or minority group basically has a chamber of commerce, every city has a chamber of commerce. So think about the National Association of Women Business Owners, or the bicycle coalition, or the Long Beach Chamber of Commerce, or any of these kinds of groups operating in California. We organized all of them together so that we could advocate for economic policies when we all agreed on them. We represented 400,000 business owners across California, and we're the largest association of associations, basically, in the country. Passionistas: While you were doing that, were you also volunteering at nonprofit organizations? Elizabeth: Yeah. I had all of these volunteer roles while I was working. So over the course of my time at that job, I also served on the board of the United left, the next fund. I also served on the local Democratic club, Stonewall Young Democrats, here in LA, and I started on the board of California Women's List. Passionistas: Tell us about California Women's List, what they do, and what you do for them. Elizabeth: California Women's List is a political action committee.
So we raise money for and support Democratic women running for office here in California, and we're very focused on state-level offices: the state legislature, and also the constitutional offices, so things like the treasurer, the controller, the secretary of state, the governor, those kinds of directly elected positions.  We are a fully volunteer-run board and organization. I'm the external relations chair, and I help create partnerships with other organizations and work on a lot of our more public-facing campaigns. Right now, for example, we have started selling merchandise that supports electing women. So if you go to californiawomenslist.org, you can shop our store and buy cool merch that is professionally designed by an awesome graphic designer we have on our board. It's unique and very different from a lot of the other kind of political t-shirts and hats and whatnot that I've seen out there. So, you know, we had to get that store up and running, and that was a project I worked on. Passionistas: What is California Women's List most focused on as we get into crunch time leading up to the election? Elizabeth: We have endorsed 24 candidates for state office this cycle, so we are very focused on raising money for them and giving it to them right away so that they can spend it on mail and on the technology they need, now that everything has transitioned from a lot of door knocking and in-person events to digital. Those digital tools cost money. In some cases, depending on their market, they might be doing radio or TV ads, so they need money for all of that kind of thing. Also, and this is especially important for women running for office in California, you can now finally use campaign funds to pay for childcare. There are only 17 states that allow candidates to pay for childcare with campaign funds, and California just became one of those states last year.
You know, a lot of our candidates are moms, and childcare is really important to make sure that more women who have kids are able to run for office and be successful. Passionistas: Tell us about The Grace Society and what that is. Elizabeth: The Grace Society is the donor circle for California Women's List. So if you want to help elect more women in California, you can be a member of our Grace Society. It's only $50 for a year, and you can pay that all at once or you can do $5 a month or whatever you need to do, and it helps you be a part of the fabric of our organization in a more consistent way. We have a little lapel pin that we send, that's nice. You get early access to our merch when we launch new products, and also to the events that we have: early access and discounted tickets and all that sort of thing. It's just a way for folks who want to support our work to help us sustain this effort, because there is a lot of fundraising around campaign cycles, but the work is ongoing. And particularly for a lot of local and state races, those are not always happening at the same time as the more well-known races like the presidential, so that organizing work is happening all year. Passionistas: Why is it so important to have more women in politics? Elizabeth: When we see more women in elected bodies, those elected bodies have more transparency and they tend to be more effective. So it's really important that everybody is represented at the level that they are present in society. You know, not just women, but also people with disabilities, people who are immigrants, people with different kinds of work experience, people of different ethnic and language backgrounds. All of these folks are part of our society, but they are not all represented commensurate to their numbers in society. That is a symptom of a problem. You know, if all things were equal, then everybody would just be part of the process.
But because they're not, in California only 33% of our legislature is women, and that's basically an all-time high. In the early 2000s, California ranked sixth in the nation for the percentage of women in the legislature, but by 2013 we had fallen to 32nd place. And that's not because other states made a ton of progress; it's because the number of women in the California state legislature went down. So it's really important that we have equal representation, and it's important that we are all fighting for it all the time, because the number went down because we took our eye off the ball. Progress is not linear. I think that has become especially clear to people over the last four years. We can't just count on it happening.  Passionistas: Why don't more women run for office? Elizabeth: Women do win their races basically as often as men do; it's just that they don't self-select and run that much. Women have to be asked to run for office multiple times before they do. So I really want women to know that you have just as good of an opportunity to run. And I also want women and men, and everybody, to know that a big challenge women candidates face is raising money. And that is because women can raise as much as men do, we just tend to do it in smaller chunks. Men generally have access to wealthier donors and business circles and things like that, and so they are often able to raise more money faster, whereas women have to spend longer cultivating more donors who are giving at smaller or lower amounts. And I say that because everybody who's listening to this can be a donor. Not everyone's going to run for office, and that's fine, but everybody can be a donor. Everybody can be a volunteer. So really think about how you can give as much as possible, and how you can encourage other people to give to political candidates.
Women give a ton to charity, but we do not give as much to political campaigns, and investing in a political campaign is investing in the future that you want to see. Passionistas: You're listening to The Passionistas Project Podcast and our interview with Elizabeth Tulasi. Visit californiawomenslist.org to find out more about the organization, and join The Grace Society to receive an exclusive annual pin, a members-only quarterly newsletter, discounted tickets to CWL programs, and access to special members-only events. Now here's more of our interview with Elizabeth. If there was a woman who wanted to get into the political arena, what would you want her to know? Elizabeth: I want women who are interested in politics to know that there are organizations out there to help support you and get you involved, so you don't have to feel intimidated. I think so many women feel like they don't know enough. And frankly, I wish more men recognized that they don't know enough, because they don't know more than we do; they just don't care that they don't know more than we do. And I wish women would recognize that just because you don't know everything doesn't mean that you cannot be an effective leader in your community. It doesn't mean that you don't know good solutions. There are organizations out there of other women who can help support you as you learn more and figure out how to make change in your community. Passionistas: Why is it so important for women to get involved in all levels of government? Elizabeth: It's important to have women in all elected offices, but a thing that I want people to know about state and local offices is that those are the pipelines for higher office. So you look at somebody like Kamala Harris, who ran for president. She's now the vice presidential nominee, but right now she's a US Senator. Prior to that, she was serving at the state level as California's Attorney General.
And before that, she was serving in her city. A lot of the women that we heard of as vice presidential contenders worked at various levels of government before they got up to that level. It was great, because of this Democratic primary, that there were a bunch of women who were running and had very viable campaigns, but obviously in the past, we were always putting all of our hopes and dreams on one woman. And that's because the pipeline to get to that level was so thin. So if we have more women serving at various levels, then we have more opportunity for them to go higher. There are great women serving in state legislatures all across the country. A couple that I just wanted to shout out: Sara Innamorato was elected in Pennsylvania. She's been serving since 2018, and she's from the Pittsburgh area. She's 34, and she beat an incumbent in a landslide by fighting for progressive values in a state that is very ideologically diverse. She started her own marketing firm previously, and then she decided to run for office. In Texas, there's a woman named Gina Calanni. She was a paralegal and a mom of three boys, and she ran. She's the first woman to represent her area in the Texas state legislature. She beat an incumbent Republican, and she's already passed 11 bills, and she's only been in the state legislature for a year. She's focused on the minutiae of processes that slow things down, like forensic testing, or allowing school funding to go towards these big severance packages for fired administrators. So these are kind of unsexy details that really matter to how well your government works. In Virginia, there is a woman named Lashrecse Aird, and she was the youngest woman ever elected to the Virginia House of Delegates. When she won in 2015, she was 28 years old at the time. She's so active in her community.
When you read her bio, she's volunteering and serving on so many different boards and commissions, and she's focused her service on economic development and education, so that her community has good jobs in it and the folks in the community have the skills to be able to get into those jobs. And just this week, she passed a bill banning no-knock warrants in Virginia, which is the kind of warrant that police officers used when they murdered Breonna Taylor. So all of these women in different parts of the country are breaking barriers in their own ways and making really important change, and you can see how important that is to their state. I share all of these examples because if you start looking at some of the women who are serving in leadership roles in your community, you will see that they're women just like you, and your experience is important to bring to bear. In California, somebody who's now become a national figure is Katie Porter, in her first term in Congress. She's the only single mom serving in Congress right now, and so she brings a lived experience that is really important, because obviously there are so many single moms across America, and the people who are making the rules that govern their lives have no idea what they're doing. So whatever experience you have, or if you know anybody else who has that same kind of experience, that voice deserves to be heard. Passionistas: Why are state and local governments so important? Elizabeth: Think about whatever is keeping you up at night these days, or whatever you're really stressed out about; state and local government have a huge impact on that. So COVID, obviously, is really at the top of everyone's mind, and the hospital capacity in your area is a function of probably your county government. The kinds of facilities and specialties they have in your area are also determined by state policy.
Every community has a public health official, and how much the politicians listen to that public health official is all determined, I mean, that's all happening, at the local level. I think another thing people are really stressed out about right now is money. How much you earn and how much it costs to live where you live is all determined by local factors. A lot of money stuff is happening in your area, and it's very specific to where you live. National policies affect these things, of course, but the bulk of the economic policies that affect your day-to-day life are happening in your city or in your county or your state. I think a lot of folks right now are also paying more attention to family policies and to unemployment, and that is handled at the state level. So if you have not yet received your unemployment check, or the system was down when you tried to apply, that's because of stuff that's happening at the state level. Then there's education and childcare: education is handled by your local school board, and how much money they have is determined by state and local taxes. Policing and prisons are really top of mind for folks right now; your city council and your mayor determine how much money the police are going to get in your city.   If you are in a place where you don't have municipal police, you might have a county sheriff. The sheriff is usually an elected position all across the country, so that's a directly elected person who's handling those policies and the jails. I just learned that in California there's a bill going through the state legislature right now that is focused on how we treat people in jail and prison who are pregnant: whether or not they can be handcuffed to their hospital bed during childbirth, whether or not they get preference for the bottom bunk in their jail or have to climb up to the top bunk, and whether they can be put in solitary confinement while they're pregnant.
So there are a lot of policies that have to do with how we treat prisoners in our states that really matter. And voting is another big one. There's a lot of concern with the integrity of our various voting systems, and every one of those voting systems is controlled by your state government and your local elections. So if you're concerned about who has access to voting or who doesn't, or how easy or hard it is to vote in your area, that is completely determined by your state government. Passionistas: Why is voting important to you? Elizabeth: Really broadly, voting is important to me because so many people have died for this, right, or really put their lives at risk for this democracy. And this democracy only works if people participate in it. So that is very motivating to me. And then I think specifically right now, why it's important that everybody vote is because I think we think of ourselves as very polarized right now as a country. And that is certainly true, but there are so many more people that are not participating in that at all, who think their voices don't matter, but they do. We often hear people saying that it doesn't matter who's elected, that all the politicians are the same. And I think we can see now that that is not true, that people who are elected have power over our lives, and we need to make sure that those people have values and lived experiences that are similar to ours. And I think that government is created to be hard for people to get engaged in. A lot of our systems right now are designed that way, and similar forces want us to believe that our votes don't matter, that our voices don't matter. And that, again, is to achieve certain goals that I don't agree with and I don't want. I think we've also seen how much, particularly for women, the power and the status that we have as women now is something that certainly my mother's generation didn't have. My grandmother's generation did not have. 
That was hard fought, recently won, and backsliding as we speak. Women are still mostly responsible for what happens at home, so when we are all home all the time, that means we're responsible for everything all the time. And a lot of women are also trying to work, but then they're not able to spend as much time at work, or working, because of all of this unpaid domestic labor that we're involved in. And it's going to have long-term effects on women's economic mobility. And then you think about maternal mortality. Maternal mortality is going up in America. We're one of the only countries where maternal mortality is increasing, and it's particularly a problem for Black and Indigenous women and women of color. If our government is worth anything, it should be that it doesn't let women die while they're giving birth. We see the number of elected women going up right now, partly because of the rage that women feel, so we're taking to running and supporting each other. But again, that kind of progress is not guaranteed. And we need women in all rooms where decisions are being made: in state legislatures, at your city hall, in boardrooms and CEOs' offices, in the White House. We need women's voices in all of these places, and that again is not guaranteed. And when people say things like "Make America Great Again," this kind of backsliding is exactly what that means to them. And that is very motivating to me, to not let that happen. Passionistas: How can the average person have an impact on the upcoming election? Elizabeth: All of us have spheres of influence, and all of us have people that listen to us and care about what we're saying. A lot of people feel helpless right now, or they don't know where to start. And like I said, it is confusing on purpose. But you can vote, and you can get three other people to vote. You can check your voter registration today. 
You can encourage three other people to check their voter registration. You can call your friends. Everybody who is getting a Christmas card from me is also getting a phone call from me, asking them: what is your voting plan? Because asking somebody what their plan is, and having them verbalize it to you, is actually a proven, effective way to get people to actually vote. And so in that scenario, you're not even telling them, "Hey, you should vote for this person that I care about," because sometimes those are awkward conversations, even though having those conversations with people in your life is what's necessary right now. But at the minimum, what you can do is just ask people to vote, encourage them, and make sure that they have the information. Passionistas: Thanks for listening to our interview with Elizabeth Tulasi. Visit californiawomenslist.org to find out more about the organization, and join The Grace Society to receive an exclusive annual pin, a members-only quarterly newsletter, discounted tickets to CWL programs, and access to special members-only events. Please visit thepassionistasproject.com to learn more about our podcast and subscription box filled with products made by women-owned businesses and female artisans to inspire you to follow your passions. Sign up for our mailing list to get 10% off your first purchase. And be sure to subscribe to The Passionistas Project Podcast so you don't miss any of our upcoming inspiring guests. Until next time, stay well and stay passionate.  

The Teaching Space
Why it's Time to Get to Know Your School Librarian, An Interview With Elizabeth Hutchinson


Play Episode Listen Later Mar 16, 2018 43:54


Episode 13 of The Teaching Space Podcast is an interview with Elizabeth Hutchinson from the Schools' Library Service in Guernsey. Podcast Episode 13 Transcript Welcome to The Teaching Space podcast, coming to you from Guernsey in the Channel Islands. Hello, and welcome to Episode 13 of The Teaching Space Podcast. It's Martine here; thank you so much for joining me. In today's episode, I'm interviewing Elizabeth Hutchinson from the Schools' Library Service in Guernsey. Martine: Welcome, Elizabeth. Elizabeth: Hello there, nice to be here. Martine: Lovely to have you here. Rather than me do the introductions, I'm going to kick off with a question to you. Who are you and what do you do? Elizabeth: Okay. I'm Head of the Schools' Library Service in Guernsey. I'm a librarian, and I support the school libraries across the Bailiwick of Guernsey. We look after and support all primary schools, all secondary schools, and we even fly across to Alderney to support them too. Martine: Fantastic. It's a busy job then, by the sounds of things. Elizabeth: It is. I've got a nice little team, which is good. We sort of share the schools between us: I allocate schools to individual librarians so that schools can expect to see the same person most of the time. Of course we're sharing our resources across the schools too, so it's a bit of an unusual role for us to play, because it's a support service that we offer, but we work very closely with schools and teachers, which is our aim really. Martine: What do people think the role of the school librarian is, and what is it really? Two questions in one there. Elizabeth: Okay, well our service is slightly different: we provide the professional school librarian role. Throughout the years that I've worked at the Schools' Library Service, there has been a very clear misconception about what a school librarian does. 
There are two people that you would see within a school library. One is a library assistant, whose job is to issue the books and look after the day-to-day running of the school library. The other is the professional school librarian, and their role is very different from what most people think a librarian does. Our role as school librarians is to work alongside the teachers and the curriculum. Our role is to support information literacy, which is the ability for anyone to find, access, evaluate, give credit for, and use good-quality information. We provide resources and support in accessing those resources. There are the book loans from the Schools' Library Service that you can get from your own school library, but there are also the online resources, and our role is to support the students in using those effectively. What we find is that students are very good at typing that question into Google and hoping that the answer's going to pop out. As you progress through your academic schooling, you need to be using better-quality academic resources, or be very highly skilled in evaluating the resources that you're finding. We work with them to make sure that they understand a keyword search, that they understand that with any academic source you cannot just type a question: you have to think about what you're looking for, and how you tweak those keywords to actually find what you need. The more students look online for information, the less skilled they get at actually finding what they really need. Our main aim at the moment is to support that. The new Guernsey curriculum has changed considerably recently to focus on skill sets. This is what school librarians have always done: the skill of research. We are now in a brilliant position to be able to say, well, the skills that we have are the skills that we can teach your students, and they are exactly what you've highlighted that you need at the moment. 
It's interesting times for a school librarian, I think. Martine: It strikes me that the role of the school librarian has changed dramatically over the past 20 years or so, but ultimately, as you said, it comes down to research and helping students learn how to research properly. I guess it's not the fundamentals of the role that have changed; it's where you're looking for the information that has changed a little bit, perhaps. Elizabeth: Oh absolutely. Think about when we were back at school: our research was probably the school library, but it was books. You could always copy, but you'd actually have to hand-write it, and the chances of you being caught doing that were quite low, unless the teacher went down to the library to check the books that you were copying from. We live in a world now where information is really freely available and really easy to access. It's even easier to plagiarise, but even easier to get caught. It's that skillset that has suddenly become very usable and shareable, and people want it. It's a much wider world out there, with far more opportunities. Our skillset has had to adapt and change, but it has, in a very exciting way, opened doors that I couldn't have imagined at the beginning of my career. Martine: It's a good time to be a school librarian, is what you're saying? Elizabeth: Absolutely, really exciting. Do you know, just the opportunity to share ideas on social media and talk to experts in our profession, in a way that was just not possible before, has up-skilled all of us. Having a personal learning network on social media has not only helped me to understand my role a bit more, but also helped me learn about things that I can then share with the students that I teach and the teachers that I work with. The world of research has really opened up in the last few years, and it is exciting times, yeah. I love it. 
Martine: It's really interesting to hear you talking about social media in that way as well, because I'm in huge agreement with you there, in that I get a massive amount of my CPD directly from Twitter, because of all the links people share, and the Twitter chats that go on, and things like that. Technology is really exciting right now, and it's great to hear about how the role of the school librarian has adapted to accommodate it. Elizabeth: I think as well, as part of learning and teaching research, it's important that we do include these technologies and tools, because, like you said, I too get a lot of my professional development from Twitter, and it's that digital literacy, which is also around in schools today, that we're teaching. Actually, if we can help students navigate resources like Twitter within the classroom, it then becomes less of a problem outside. Martine: Definitely. Elizabeth: So instead of shying away from it, we need to be confident in using it ourselves as teachers, to be able to then help the students navigate it. I was talking to somebody recently about the negativity, and the bullying, and the trolling that goes on, but actually, if we had more people on social media who were brave enough to say, "Hey, that's not a nice thing to say," and we drowned out the negatives with the positives, then it would be a much better place to be. You can only learn those skills through usage. Actually, if we can learn to use it in a safer environment within the classroom, then it will stand the students in better stead for the future, I think. Martine: I'm in complete agreement with what you just said, and it almost leads onto a discussion about a topic I want to cover in a future podcast episode, which is this misconception that young people today are digital natives. 
Everyone seems to think, particularly certain teachers I come across, that kids today know how to do anything online and are very comfortable with technology. Yes, in terms of navigating an iPhone or some sort of smartphone, they can do that very easily, but they aren't particularly savvy when it comes to using technology and social media professionally; for them it's all about the social side. Is that something you've come across in your role at all? Elizabeth: Oh yeah, without a doubt. You know, even to the extent of just good research: there's a lack of understanding that it's important to check where your sources are coming from. The only way that can change is if we encourage teachers to insist on referencing. I know it sounds boring, do you know? I've had one teacher tell me that it stops the flow of the essay or the research - Martine: Really? Elizabeth: It spoils it, you know? But when the child sees that where the information comes from is important to the teacher, they understand the importance for themselves. Once you learn how to reference, it doesn't take that long. If you collect your references as you go through, it is part and parcel of academic writing. Whether you like it or not, that's what we're doing at school: we are writing academically. Even the youngest of students, none of them are generally writing for pleasure. You can create the opportunity to write for pleasure alongside doing the research correctly, and it should all just flow together. You find that International Baccalaureate students generally tend to be really good at their referencing, because it's an essential part of the course. For teachers who teach GCSE and A Level, it's not. A lot of these students are spoon-fed, and I get it, I do understand. Teachers are in a very difficult position, in that they are judged by their outcomes, and there's teaching to the test and all of this. I get it. 
I do. But we're not doing our children any favours if we are not helping them to take responsibility for where their information's coming from. Recently we've been talking about fake news and living in an internet bubble. I find that really interesting; it's something I'm particularly interested in myself. Going back to the social media question: we tend to follow the people who have the same ideas as us, share the same views, and reinforce what we believe to be true. That's a really dangerous position to put yourself into. It feels safe because you're not going to read anything that you disagree with, but actually, if we don't teach and encourage our students to look beyond that immediate understanding to get a more rounded view, then we're in a very scary position where we can be manipulated into believing that this is the only way for the world to work, or this religion is right, or that political party is correct. Actually, you can only get a full view of the world if you understand how to access other sources of information that are going to give you a slightly different view. If we prevent students, or don't encourage them, to go beyond that question typed into Google, we're opening a huge chasm that we might not ever be able to shut. Actually, now is the time to take responsibility and start saying, "This is a serious situation and we as teachers and educators need to do something about it," and we can do something about it, you know? Teach them to reference, understand plagiarism, understand the fact that you need to give credit for somebody else's work. All of this is about looking at how we behave online and how we gather our information for our own learning. It has to start in a school setting. Martine: The idea of living in an internet bubble, as you described it, is just absolutely terrifying. 
I mean if you don't ever have to challenge what you see, what you read, what you hear, how are you ever going to learn? It's very, very worrying. Elizabeth: Yeah, it is interesting because I think people forget. I think if you don't live in an information world where you're teaching people to find information, I think it's very easy to forget that ... I think we've had recently, sort of Facebook have tried to change it, but where they were feeding you the things that you want to find rather than what you chose to find. I think you need to be a little bit savvy about ... Or understanding that that is actually what goes on. Martine: Definitely. What you said about referencing and how if you do it as you go along, it's not difficult, that is so true. I'm a Google Certified Trainer, and so I use Google Docs for most academic writing activities with my learners, and it is so easy to reference in Google Docs. It really is straightforward. I shared a video on social a couple of days ago that showed how to do it in about 90 seconds. It was a demonstration that took that long, you know? It is easy, simple and straightforward. As long as you know how to do it, then I don't really understand why people wouldn't be doing it, particularly in Google Docs. Elizabeth: Well exactly. If you are a person who uses Word, there's a referencing tab in Word, which is equally as quick, do you know? When you think back to the dissertations we used to write and you'd spend three or four days putting in your references, literally if you're collecting them as you go along, it's a less than 10-second job to create your bibliography. Why would you not use that, you know? Martine: Exactly. Elizabeth: It is so simple these days. Martine: Talking about things being speedy, how can your school librarian save you time? This is a question on behalf of the teachers, how can your school librarian save you time? 
Elizabeth: That's an interesting question, because I had a discussion with somebody the other day, and the things that I thought teachers would recognise as time-saving turned out to be not so. Martine: Right. Elizabeth: Let me explain. The Schools' Library Service provides what we call project loans. Teachers can email us and say, "I'm doing Victorians next term with my year six students. I have three or four higher learners and I have about two that will need lower-level books." We put together a nice little box and deliver it to the school, where it lands in their library; they go and collect it and start using it. That is time-saving. Martine: Yes, I would think so, yeah. Elizabeth: But, as was pointed out to me, if you are a teacher that sends an email once a term and this box magically appears, you forget that actually it takes time to curate those resources, put what you need into a box, issue it, and get it out to you. There is a little bit of a lack of understanding of what you are getting, on a basic level, from a school library, you know? Martine: Okay, yeah. Elizabeth: Obviously we're talking about the fact that we're a Schools' Library Service and we have a centralized collection. If your school library itself has the resources that you need, you could just ask your school librarian to do the same thing. I understand that there are probably people listening to your podcast that don't have a Schools' Library Service, or do have a librarian in their library but have never thought to have that conversation. So please do. If you want resources for your classroom, then start with your school librarian or contact your Schools' Library Service, and books will magically appear and save you time, because then you don't have to go and look for them. Martine: Which is fabulous. Elizabeth: It is. 
Other time-saving initiatives that we've looked at and started doing recently include helping teachers and classes connect with other students and classes across the world. The Guernsey curriculum is all about learning outside the classroom, and learning from experts beyond its walls. A lot of teachers don't have time to find those connections and collaborations, and it is one of the things the Schools' Library Service has worked hard at: building up our contacts and opening the doors of the classroom. For instance, in the last few years, we have connected our students with students in India who were doing an Indian topic. They were able to talk to and ask questions of Indian students who are the same age as them, and they were able to share information about what Guernsey is like with those same students. It puts a different perspective on what creating a good question looks like. For me as a librarian, my role is not only to connect these students, but also to make sure that the skillset is right. So, going back to that information literacy role, for this particular Indian collaboration we made sure that the children understood what made a good question. Being able to ask those questions directly of somebody else changes your understanding of what makes a good question. What we found interesting was that some of the questions weren't so good and got a very poor response or a poor answer. As the session went on, you could see the children changing their questions as they carried on. Their questioning got better, so it's about learning real ... what is it called? Real-world learning, and it does make a difference. Martine: What a fantastic learning experience for them. I bet they'll remember that for the rest of their lives, that session where they talked to kids in India. I mean, that's great. Elizabeth: Yeah, and it's learning on all sorts of different levels. 
We had a class locally talking to experts on African penguins, and they were taken around a nature reserve via Skype. It was, again, so different from that experience of reading the information from a book or online, you know? We save teachers' time by creating and generating these connections and collaborations, and enabling them to have innovative lessons in a way that they wouldn't have done before, you know? I think for me our role has changed, you wouldn't automatically think that a school librarian is about collaboration, but anybody that you collaborate with is a learning opportunity, and librarians are about learning and finding information. If finding information is found via a person, then that's just as good as finding it in a book or online, do you know? It's all-encompassing. Martine: That's fantastic. I'm really starting to get a feel for how that role has developed. I'm certainly sensing from you the passion you have for sharing your experience of it. I'm also getting a real technology vibe from you too. I work very closely with our librarian at the College of Further Education and she's very, very tech savvy, and that's what we work closely on, technology for learning. I've always been amazed at how if you go for the kind of old, as we've identified, misconception of the school librarian ... I mean our librarian, Rachel, is the exact opposite of that. She's really techy, and she's always looking for the latest innovation to enhance learning. I've always been really impressed with that. I mean clearly with what you've been describing, you're a massive advocate for technology for learning as well, but how else do you work with teachers to enhance their understanding of technology for learning and sort of bring new tools to them and things like that? How do you work with teachers in that way? 
Elizabeth: Our big aim over the last couple of years is to make sure that we understand the tools because unless you understand the tools you can't then help and support teachers to use them. Through our connections online, so usually via Twitter, we have been listening and hearing about what other librarians have been using with their teachers. The latest tools that we have really used widely across the schools is Padlet and Flipgrid. Martine: I love both of those. Elizabeth: Just really useful tools. It's not about how the tool can engage the learner, it's about how it can enhance the teaching. The two together work well in partnership. It's not about providing a piece of innovation or tool that ticks the box that you've actually used technology, it's about how it's going to enhance your learning. Martine: Absolutely, I couldn't agree more. Elizabeth: For one example, we run book groups in our schools, so the librarian goes along, sometimes it's part of a lesson, other times it's a book group that is run at a lunchtime. Usually what we try and do is get them to read the same book so that then there's a book discussion. I've got two examples of Padlet enhancing what we were doing, one in primary and one in secondary. In the primary setting we had an author visiting, so Caroline Lawrence, she writes The Roman Mysteries. She had come as one of our Book Week authors last year. Our book group then decided that they were going to read one of her books and I then approached her, because she had been here, to say, "Do you know our students are reading your book, would you mind talking to them about it?" After a bit of a discussion and agreement that she would, we decided that we were going to use Padlet as our platform. Now Padlet is, for those of you that don't know, is like a post-it board online. Basically, you click a plus button and you can add a comment. It also allows other people to comment on your post-it. 
What we'd agreed with this book group was that we were going to write questions for Caroline and look back the next week to see what she'd responded. I happened to get in touch with Caroline just on the day that we were going to be doing the Padlet, told her what time we were going to be on, and sent her the link. She appeared during that Padlet session. Martine: That's so cool. Elizabeth: The students were typing the questions and she was responding in real time. Martine: I love it. Elizabeth: Well, you cannot imagine the excitement of these students. You know, we sometimes worry, don't we, that if you allow something to happen live, we're at risk of students being silly or something going badly wrong, but I do believe genuinely that if you give students the opportunity, and you've talked to them about the fact that you're going online and everybody can see, they genuinely behave in a way that is suitable. It was brilliant learning; there were some amazing questions from that Padlet that we couldn't have got had she not answered in real time, because one question led to another, to another. She was brilliant; she responded to as many of those questions as she could. Initially, we had lots of, "You're here. Ooh, exciting." You know? That is part and parcel of expressing how you're feeling about it, not something that's bad because you've been set a task to ask a question. It's about monitoring it and allowing it to happen naturally. Martine: It's just so memorable, like the example earlier with the Indian students; those students will remember that forever. Elizabeth: Of course they will. Of course, they will. Martine: So good. Elizabeth: They have come back and they've wanted to read more Caroline Lawrence books. The impact of that session was not just the fact that they ended up creating brilliant questions, but they were also engaged enough to want to continue and read more, and that's what it's all about: reading for pleasure. 
Okay, so the second example is a book called Wonder, which has had international acclaim over the last few months and has actually been made into a film. For those of you that don't know, it's a story about a little boy who has severe facial disfigurements, and it's written from several perspectives throughout the book: his own, his sister's, his friends'. It's about bullying, friendship, understanding, empathy. It's gone down really well across the schools. We had planned to read the book with our book group in one of our secondary schools. I have a librarian friend who lives in Arkansas; he is a librarian in a secondary school, so I said that we were going to read this book and asked if he fancied running a book club on Padlet. We agreed that this would be good, we set up the Padlet, and the students themselves discussed the book across Padlet. When I look at the understanding that these children had, and their shared ideas, and the variation of voices, it just gives me a tingle, you know? We've got children from Nebraska, we've got children from Arkansas, we've got children from Guernsey, all talking about understanding and the importance of empathy. It doesn't matter whether you're from America or from England; those messages are all the same, and it shows the students that people aren't really any different. There may be different cultures and different ways of living, but actually, our friendships and our understanding of each other are all very similar. If that's what sharing an online book group is all about, then let's do more of it. Martine: Absolutely. I mean, that's just such a great example of how technology for learning is so much more than simply getting learners engaged. I think a lot of people think, like you said, "Oh, we've got to tick a box, we've got to use technology. We've been told we have to." That's kind of one level that I think some people go to. 
Then the next level is, "Oh well, you know they're always on their phone, so let's use them in sessions and that will engage them." But it is so much more than that. Elizabeth: It is, yeah. Martine: That's exactly what you've just described. I love Flipgrid by the way. Elizabeth: Yeah, me too. Martine: I used it with my adult learners quite recently, because I teach our initial teacher training program at the College of Further Education, and we have one little bit of research that we have to do that isn't terribly exciting, they have to research a couple of different pieces of legislation that affect the role of the teacher. It's really not that exciting. Normally I get them to do it, a written approach to it and so on. This time I allocated the laws and codes of practice and regulations out to various members of the group and I sent them away to do their research. Of course, they noted their sources, so very important. Elizabeth: Good, good. Martine: Essential, as one of them was doing the copyright law so ... So yeah, they went away and they researched and they recorded a 90-second summary video on our Flipgrid sharing what they'd found out. It was so good, it went so well. Normally when they come to do that part of the assignment when they do it on their own, it's very challenging for them because it's just not the exciting subject that they want to be writing about, they want to be writing about the fun stuff of teaching. They did such a great job of it and it was because of the Flipgrid approach to research that we did. They were all quite nervous about using it, interestingly. Elizabeth: Yeah, people don't like having themselves videoed do they? Martine: No. Elizabeth: Actually, that is a skill in itself. Martine: Oh yes. Elizabeth: Condensing what you want to say in 90 seconds. Martine: Exactly. Elizabeth: It's a bit like learning on Twitter, that you have to say it in 140 characters, although I think it's a bit more now isn't it? 
Martine: It's 280 now I think. Elizabeth: 280, yeah. Actually, those are interesting skills in themselves. If you are anything like me, I'm a bit of a waffler when I write, and actually being made to restrict myself means that you learn to make sure you take the important bits rather than the bits that aren't important. That's where it does help. We also used Flipgrid to, again, talk about ... Again, it was, Wonder was a great book for us. The students in America asked the students in Guernsey what five words could they use to describe the book. We got lots of videos where the students are literally sitting in front of the camera giving five words. The work that's gone into that is far more than those 25 seconds that it takes them to say the words because they've actually had to think about which five words they wanted to choose, and why they were important, and how that was going to sound when they recorded it. They worked really hard at finding those five words. If we had set them a topic where we had just asked them and they were just going to write them down, I don't think you would've got the same engagement, but because they were going to share those with the world, they were then very careful about which five they chose, you know? It does add that extra element, it does add the audience that the children don't have in a school setting very often. Martine: I think for Guernsey students this becomes particularly important because we are living on a very small island and our community isn't as multicultural as perhaps we would like it to be, so students aren't exposed to perhaps as much diversity as students in other parts of the world would be exposed to. By opening the world up to them via technology or social media or whatever, I think it can do nothing but add value. Elizabeth: I've got another example that I'd love to share is that we do a lot of Google Hangouts. 
There's a thing called Mystery Hangouts where the librarians work together to find a school that would like to connect. You then organise it with the teachers. The teachers know where the other school is, but the students aren't told. The game is that they have to ... They can only ask questions that have a yes or a no answer, and they have to find the other school before they are found. Martine: I love it. Elizabeth: We ran this with Saint Anne's in Alderney, a year 10 group. It was all very exciting. Just to put it in perspective, normally when I used to go to Saint Anne's as the school librarian, I was the school librarian, nobody took any notice of me whatsoever as I walked down the corridors. I went in, I did my job, I worked with the teachers. It was all very similar to what it normally was. This day that I arrived in Alderney there was a buzz about the school, the whole school had heard that this game was going to take place. Everybody wanted to know what was going on. I was a little bit scared because it was actually our first attempt and wasn't sure that everything was going to work, but it thankfully worked beautifully. The game itself gave them good communication skills, it gave them research skills because they had to look at maps and atlases, and think about the questions that they were asking. The big deal for me from that one session was at the end where they were asked to share some information about where they lived, and the American students were very used to doing this kind of thing. They've never done it internationally before, but they'd obviously done Mystery Hangouts with other states in the US. These students had written lists of information about where they lived. We hadn't prepared our students that way. I did worry at that moment where there were lots of arms crossing and there's nothing to tell you about Alderney here. I thought, "Oh dear," you know? "This is where it all falls flat." 
Until one American student asked the Alderney students what they did after school. Their response was very negative, but it was, "We just go to the beach." They were the perfect words because, again, it was Arkansas; they are 13 hours away from any beach. Martine: Oh wow. Elizabeth: They were just so amazed that Alderney students had a beach on their doorstep. The opportunity to pick up the laptop and take the laptop to the window and show the Arkansas students the beach just suddenly made the Alderney students understand that they had a place in the world. Understand that they had something worth sharing. Martine: And how lucky they are to live in such a beautiful place. Elizabeth: Absolutely. Absolutely. It was a pivotal moment in my understanding of why we do what we do. Martine: Wow. Elizabeth: If I do nothing else in my career, it was a turning point, it was this is why this is so important. We live on a small island, you're right, Alderney is even smaller, but there are children who live in villages, there are children who live in cities, and actually seeing how other children live and it's a way of learning, it has huge potential, doesn't it? It is just an opportunity for us to open the world to them without them having to leave their classrooms, and to share their understanding of their place in the world is something that's really important. The more I can do with that the better as far as I'm concerned. Martine: Brilliant. The working title for this episode and I think I've just decided I'm going to stick with it, is Why It's Time to Get to Know Your School Librarian, and there it is. That's why it's time to get to know your school librarian because your school librarian can help you make amazing learning happen. Thank you, Elizabeth for sharing all of the things you've shared in this episode. That's been fab. Where can people find you online?
Elizabeth: The Guernsey Schools' Library Service Blog, Elizabeth on Twitter, and Elizabeth's blog. Martine: Thank you so much, Elizabeth, that was excellent. You are welcome back on the show anytime. Elizabeth: Thank you, I really enjoyed it.

A Life & Death Conversation with Dr. Bob Uslander
Dealing With Loss, Elizabeth Semenova Ep. 6


Dec 27, 2017 · 30:16


Elizabeth Semenova is the Director of Operations at Integrated MD Care. She shares her insights and personal stories about dealing with loss. The holidays can be an especially difficult time; listen to how Elizabeth handled her own loss and how she and Dr. Bob help others. Transcript Dr. Bob: Welcome to A Life and Death Conversation with Dr. Bob Uslander. I'm here with a guest who I'm excited to introduce everybody to, and somebody who has a wealth of experience and insights. And I'm very pleased to have her as part of my expanding team here at Integrated MD Care. So you're going to get to know quite a bit about my new director of operations for the practice, Elizabeth Semenova. Elizabeth, say hello to our listeners. Elizabeth: Hello. Dr. Bob: So Elizabeth came to us a few months back. And the way that we initially met was through a referral that she had made to us for a gentleman who was struggling with Parkinson's disease and was really at the tail end of his life, and Elizabeth made a recommendation that he contact us. And it was a real blessing for us to be able to meet this gentleman and guide him through the last weeks of his life. After that, we just had a few more encounters. And, Elizabeth, maybe you can share what it was about what we do that drew you in and kind of encouraged you to reach out and try to become part of the tribe. Elizabeth: Well, after I referred friends, clients to you, I looked more into what it is that you do and how you do it, and explored information that I received from other sources about your work, and I was inspired by your openness to life and death and your perspective on the importance of accepting and talking about death as a part of life. I was particularly intrigued by your willingness to support patients and families who are looking for resources, education, and services regarding the End of Life Option Act in California. So that's how I came to connect with your practice. Dr. Bob: Cool.
Well, we're very happy that you did, and just to kind of summarize, Elizabeth came on, and we didn't have a social worker who was working with us. Elizabeth has a master's in social work and had been working as a social worker within the hospice world for several years. And we were really blessed to have her come and go out. She went out on a handful of patient visits when I was doing initial evaluations for people who were looking at aid in dying. And it was a real blessing to have her expertise and just her presence there to support those patients and families. Then we just had some changes at the office, and it became very clear that Elizabeth had a strong leadership ... had some strong leadership experience and genes. And everybody in the practice really felt comfortable with her guidance, and I offered her the position to help lead the practice, which has been great. So it's just been a short time, but the difference in our efficiency and just getting things done has jumped quite a bit. So we appreciate your very wise counsel and leadership, and it will continue to be a blessing for all of us for a long time to come. Elizabeth: I'm very humbled by your confidence and appreciation. Dr. Bob: Well, there's more to come. So let's talk a little bit ... We've had some conversations, many conversations around our individual kind of perspectives and feelings about death and how to work with people through those challenges. I know that you've had some very personal experience with loss and death in your life, and I'd like to hear a bit about that if you're comfortable sharing. And let's see how we can provide some valuable guidance, comfort, wisdom for some other people who might need that at this point. Elizabeth: Sure. I first encountered grief and loss and bereavement when I was in seminary, and I took a class on the subject.
I remember being very inspired by everything that we read and discussed, but feeling a little disconnected from it, not really knowing how to understand it or contextualize it. Dr. Bob: Had you had any personal loss up until that point? Elizabeth: I had lost grandparents, but no unexpected losses, no tragic losses at that point. And several years later, I was living in Colorado with my daughter, who was nine at the time, and we received a phone call from my brother-in-law, who was my daughter's father's brother. So my daughter's father and I were married when she was a baby and had since separated but stayed very, very close as family and friends. And his brother called me to let me know that he had died suddenly in a car accident. That was my first real experience with death and loss. And at the time, as I said, my daughter was nine. So my purpose was to make the process as comfortable and manageable for her as I could, to do what I could to contribute to her healing and resilience in dealing with the loss of her father. Dr. Bob: So you were dealing with it on your own and then having to understand, learn how to navigate that for her as well. Elizabeth: Yes, and I think that I didn't deal much with it on my own at first because I was so focused on caring for her. The initial loss was devastating. I mean, the pain in my body and the tears were endless. And I remember reaching out to friends and just feeling so lost and unable to think or function or grapple with the pain that was physical as well as spiritual and emotional, which really surprised me. I didn't realize that that was something that could happen. But I turned my attention to making sure that she was okay. So it was really a few years before I started to deal with my own experience of the loss. Dr. Bob: Had you had at that point training in ... Had you been through the social work training or had been involved in any way with hospice? Elizabeth: No. 
At that point, I hadn't had any experience with end-of-life care, palliative care, hospice care. I went into my master's program in social work later, so I had been involved in social services but not in any official certified capacity and not with this field at all. I'd worked a lot with homeless populations, mental health recovery, addiction recovery and really didn't have any context for dealing with loss other than what I had touched upon briefly in seminary. Dr. Bob: So now several years later, you're in a very different place. You have a whole different set of experiences and knowledge base. And so it's interesting because you can probably look back at how you managed and how you responded to things and helped your daughter, and see it through a different lens because you would probably ... I'm assuming that that experience helped educate you about how to support others who might find themselves in similar circumstances going forward. Is that a fair assessment? Elizabeth: I think that's right, although I would say that the experience of a sudden tragic loss that is unexpected is very different from the experience of being with someone on hospice or someone who is more naturally at the end of their life. My father-in-law died several years later on hospice of cancer, and we had the opportunity to be with him, and to say goodbye, and to share love and memory with the family. I would say that that educated me more on how to be a hospice social worker than the experience of losing Natalia's father. Dr. Bob: I get that. Yeah, for me, the loss of my parents, neither of which was completely unexpected--they each had their struggles in different capacities, but it wasn't sudden and traumatic, which adds, I imagine, multiple layers of complexity to the grieving process. So can you share ...
Do you have some thoughts that you'd like to share for people who might be in circumstances like that, who might still be grieving after a traumatic loss, especially with respect to children? Elizabeth: Sure. Dr. Bob: Not to put you on the spot, but I just- Elizabeth: I would say that the first most important thing is to reach out to people, to stay connected because it's an extremely isolating emotional experience. It's rare, and it can feel uncommon and lonely, so in order to stay stabilized, especially on behalf of my daughter, reaching out was really an important part of making things work. In the context of helping my daughter, I had never experienced that kind of loss as a child, so I didn't know what she might need from first-hand experience. So I reached out to friends of mine who had lost parents at a very young age, and I had two friends in particular who were very helpful in sharing with me their experience, what was important to them, what they felt was missing from care that could've been provided for them. The thing that stood out the most to me was they talked a great deal about people shying away from the subject and how that was detrimental to their recovery, to their healing, to their resiliency. So I made efforts to be very open and communicative with my daughter about the circumstances of the loss, the experience of the loss both for her and for other family members, and to share vulnerability of my own sorrow with her. And I think that that openness has been helpful to her. I think that she would say that we've created a safe space for her to be however she is, and to feel however she feels, and to share that, and to not feel alone with it. Dr. Bob: I think that's probably really critical to not feel like there is ... 
just to feel like it's okay to feel however you feel and not to have any expectation or to feel like, "Oh my goodness, it's been four years or five years, and I should be over it, but it's still painful," but for you to allow that and to help them see that this too shall pass. Things cycle and the feelings will come, and they will go, and to be able to freely express that has got to be critical. Elizabeth: Yeah, and I think another thing that really stood out was that everybody's grief experience is different, so allowing her to know and accept that my experience would be different from hers and that she doesn't have to match my emotional experience with the loss of her father, that she doesn't have to expect anything of herself, that I don't expect anything of her, and that it's okay to be however she is with it at the time of the loss and going forward, because I don't know what her life will hold in terms of how she integrates this into her world, into her emotional experience. I don't know how it's going to impact her, and I just want her to know that whatever it is that she needs, she has access to the support that I can provide and that others can provide, and that it's always okay to let that experience be a part of who she is, and that it can shape her, but it doesn't have to overwhelm her. Dr. Bob: It's beautiful. Elizabeth: Thank you. Dr. Bob: You said something I wanted to touch on a little bit, in that people tend to shy away from the subject. And I see this all the time after someone dies, I think especially when it's someone younger or it's unexpected, sudden, is that the people around who might be very well-meaning who would want to provide comfort are afraid that because they don't know what to say, they don't want to make things worse. They don't want to say something that will be offensive or painful. So they probably instead don't say anything, don't call. That discomfort creates this distance. Do you have thoughts about how people ...
because not so many people ... Like you said, it's rare for somebody to experience a sudden traumatic loss in their own life, but it's not as rare for people to know somebody who they care about who is in this position. So can we try to provide some guidance for people who are wanting to comfort or connect with someone who's had a loss? Elizabeth: Yeah. I would say that there are no words that make sense at that time, and to have the expectation that there's the right thing to say or that something you can do will make it better, will solve the problem or somehow fix something, is an unrealistic expectation. I think that death is such a part of life that it can't be ignored, and being willing to be simply present with people as they experience loss and grieve that loss at the time of the loss and ongoing because it becomes a part of their life, is the most you can offer. I don't think that there is anything that a person should do to help support someone other than just be there for them and with them. Dr. Bob: Yeah, I mean, I agree. I think that there are ... It's a challenge because you don't want to push yourself on somebody, and I know when people say--they're very well-meaning--"Call me if there's anything I can do, if there's anything you need." But in that situation, most people aren't going to call on people other than a select few and say, "Oh, I need someone to be with me," or, "I need meals prepared because I can't function enough to cook for my family." Elizabeth: And I think that's a factor of our society's unwillingness to be comfortable with death. It's not considered acceptable to be in deep sorrow, and to need support, and to reach out to a friend or a loved one. I've heard a lot of people, especially spouses, share that their family members, after a certain number of months or years say, "It's time to move on," and that, to me, doesn't make any sense. If someone needs support around grief and loss, it could be at any time.
It could be immediately after the death. It could be months later. It could be years later, to be available to offer a cup of tea, to just show up with a small gift, to send flowers to let them know you're thinking about them. I think small gestures that aren't intrusive but are thoughtful can make a really big difference. And those small gestures will let someone know more than just saying, "Call me if you need anything," that I'm really here with you, I'm thinking about you. And it opens a door that people might not realize is even there. Dr. Bob: At the time of this recording we're coming up towards the holidays, and I'm wondering if you have thoughts about ... We're talking about children. We were focusing a bit on children, and there are a lot of children who are facing their first Christmas, their first Hanukkah, their first New Year's without somebody. It could be a grandparent. It could be a parent. It could be a sibling. Do you have anything you'd like to share about how to support the families, especially children, through those holiday times after a loss? Elizabeth: I'm getting a little emotional as I'm remembering our first holidays without Natalia's father. Something that we've done that she has expressed to me has been really helpful is finding different ways of memorializing him and making him a part of new traditions. So we still have a stocking for him on the fireplace. We have made crafts, little ornaments for the Christmas tree that she and I made together in remembrance of him. We make sure to spend holiday time with his family who is still very much our family and to really include him in the things that we do either through memories, or through creating small things that we can carry with us, or through creating new traditions that he can be a part of.
And since his passing, we have found new family members and welcomed other people into our world, and I think that it would be really interesting to get their perspective on this, but they have been very open to him being a part of our traditions and our family, and I think that it can be maybe hard to balance the loss of a loved one with the integration of new loved ones. And it's a different kind of blended family. But, again, I think that open communication is the thing that has really made a difference for us, being willing to openly share our love for someone who is gone and at the same time share love for people who are here and know that they're not mutually exclusive, and know that we can all be a family together, and offering that knowledge and experience to my daughter, who has to learn to live with both the loss of her past and the future that awaits her. Dr. Bob: And partly the future that in some ways was created through that loss. Elizabeth: Yes. Dr. Bob: So we talk about silver linings. And after the death of someone who's young and vital, who we expected to be part of our life for decades to come, it's hard to think about silver linings in those circumstances, but sometimes we don't know ultimately what the purpose of our life is. We don't know what the meaning, the reason for our sometimes premature departure. But I know that there are many instances where a death has resulted in new relationships developing and new understandings developing, which wouldn't have happened otherwise. And we don't get to decide whether ... You don't get to weigh the consequences of one versus the other, but we have to appreciate that there are these positive outcomes. And, like you said, you have to reconcile that because I would imagine especially children, they would never want to think that it's okay that this happened, that death occurred because this happened. That would be very I think hard for someone to reconcile. 
But we have to somehow be okay with all of that, right? We have to learn to be okay with all of it. Elizabeth: Yeah. I at one point in my life received the label of the queen of the silver lining because of my [infallible 00:24:53] optimism. I think that that is not mutually exclusive with the experience of sorrow, and teaching my daughter that being happy with the life that we've built since the loss and being deeply wounded by the loss are not mutually exclusive--that they are something we can reconcile and live with simultaneously. It's difficult, and it takes a long time I think to bring those things together, to integrate them, but I think that like anything in life, there's a gray area that balances the life and the death, the light and the dark. And being able to live with that unknown, the in-between, I think that's a goal that I've encountered since losing someone that I loved. Dr. Bob: And I'm sure that that understanding has been extremely valuable for others that you've been able to counsel and engage with in your capacity as a social worker, as a friend. I do. The other thing that you mentioned that I completely, wholeheartedly agree with is the value of communication. I think the families, the people who have the most difficulty and struggle and have the most negative impact throughout their lives are those who can't communicate, who don't know how to communicate when they're in this, reeling through these circumstances that they didn't bring on, that they have no control over. Communication is so critical. Elizabeth: Absolutely, and I think that noticing that has been a huge part of what has inspired me to become an advocate for education in this field and for working to create those conversations and allow people to be a little bit more comfortable with acknowledging and experiencing the difficulty and the discomfort that surrounds conversations about life and death. Dr.
Bob: Wow, a little light morning conversation topic, but this is really valuable. This is wonderful, and I think that there's so much more that we could tap into and touch on. And I'm going to ask if you're willing to come back and have an additional conversation or two with me? Elizabeth: I would be honored. Dr. Bob: Yeah, I think we have a lot more to discuss. We've been together and with some patients and families, and there will be many other opportunities for us to have these Life and Death Conversations, which I hope others will find interesting and valuable. So thank you for sitting with me and having this conversation today. It was really informative, and really I'm sure valuable for many of our listeners. Elizabeth: Thank you for the invitation. Dr. Bob: Alright. Signing off now. We'll be back and chatting with you again soon.