Grant Welcome, everybody. In this episode we take a look at four pitfalls of AI ethics and ask whether they're solvable. Hey everybody, welcome to another episode of ClickAI Radio. So glad to have Plainsight AI in the house today. What a privilege to have Elizabeth Spears with us. Hi, Elizabeth.

Elizabeth Hey, Grant. Thanks for having me back.

Grant Thanks for coming back. When we were talking last time, you threw out this wonderful topic of pitfalls around AI ethics. It's such a common drop phrase: everyone says, "Oh, there are ethics issues around AI, let's shy away from it, therefore it's got a problem," right? And I loved how you came back after our episode and, metaphorically, pulled me aside in the hallway: "Grant, let's do a topic on the pitfalls around some of these ethical issues." You hooked me. I thought, perfect, that's a wonderful idea.

Elizabeth Typically there are so many high-level conversations about ethics and AI, but I feel like we don't dig into the details very often of when these problems actually happen and how to deal with them. Like you said, the common pitfalls.

Grant It is. And it's interesting that in the AI world in particular, so many of the ethical arguments come up around the image style of AI, ways in which people have misused or abused it, either for bad use cases or for secretive approaches. So you are the perfect person to talk about this and cast a dagger into the heart of some of these mythical ethical issues, or maybe not, right? All right, so let's talk through some of these common pitfalls. There were four areas you and I bantered about. You came back and said: let's talk about bias, let's talk about inaccuracy in models, a bit about fraud, and then perhaps something around legal or ethical consent violations. Those are the four we started with, and we don't have to stay on them. But let's tee up bias first. Let's talk about ethical problems around bias.

Elizabeth There are really several types of bias, and often bias and inaccuracy conflate because they can cause each other. I have examples of both, and some where it's really bias and inaccuracy happening together. One type is not modeling your problem correctly, so I'll start with an example. Say you want to detect safety in a crosswalk, a relatively simple kind of thing, and you want to make sure that no one is sitting in the crosswalk, because that would generally be a problem. So you do body pose detection, and if you aren't thinking about the problem holistically, you say, all right, I'm going to classify sitting versus standing. The problem with that is: what about a person in a wheelchair? You would be flagging a perceived problem because you think someone is sitting in the middle of the crosswalk. It's really about accurately defining the problem and then making sure that's reflected in your labeling process. And that flows into another whole set of problems, which is when your test data and your labeling process are a mismatch with your production environment.
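To make the crosswalk pitfall concrete, here is a minimal sketch (not Plainsight's implementation, and not any specific pose model) of the kind of naive "sitting" heuristic Elizabeth is warning about: a rule built only from joint geometry also fires on a person in a wheelchair, so the problem definition, not the detector, is what mislabels them. The keypoint format and thresholds are illustrative assumptions.

```python
# Illustrative sketch only: a naive "seated" rule over 2D pose keypoints.
# Keypoints are (x, y) tuples in image coordinates; thresholds are made up.
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def looks_seated(hip, knee, ankle):
    """Naive rule: sharply bent knee plus a roughly level thigh => 'sitting'."""
    bent_knee = joint_angle(hip, knee, ankle) < 120
    level_thigh = abs(hip[1] - knee[1]) < 0.3 * abs(knee[1] - ankle[1])
    return bent_knee and level_thigh

# A wheelchair user's pose satisfies both conditions, so treating every
# "seated" detection as a safety violation produces a false alert for them.
```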
Elizabeth So one of the things that we really encourage for our customers is collecting data as close to production as possible, or ideally the actual production data that you'll be running your models on, instead of having very different test datasets that you then deploy into production. There can be these mismatches, and sometimes that's a really difficult thing to avoid.

Grant Yeah, I was going to ask you about that. In the world of generative AI, where that's becoming more and more of a thing, there's an appetite for generating that test data. I've heard some argue, "Wait, generative AI actually helps me overcome and avoid some of the bias issues," but it sounds like you might be proposing just the opposite.

Elizabeth It actually works both ways. Creating synthetic data can really help when you want to avoid data bias and you don't have enough production data to do that well. You can do that in a number of different ways. Data augmentation is one: taking your original data and, say, flipping it or changing the colors in it, so you take an original dataset and try to make it more diverse and cover more cases than it would on its own, to make your model more robust. Another way is synthetic data creation. An example there: you have a 3D environment in one of these game-engine-type tools like Unreal or Blender, and say you want to detect something that usually appears in a residential setting. You can build a whole environment of different housing types, and it would be really hard to get that data without generating it, because you don't have cameras in everybody's houses. In those cases, what we encourage is pilots. Before really deploying this thing and letting it loose in the world, you use that synthetic data, but then you make sure you're piloting it in your real-world setting for as long as possible to suss out any issues you might come across.

Grant So let's go back to that first example you shared: you've got the crosswalk, you have the pedestrians, and now you need to make sure you've got different poses, like you said, someone sitting down on the road or lying on the road, and certainly you can use generative AI to create different postures of those. But what about when the introduction is something brand new, such as, like you said, the wheelchair or some other foreign object? Is generative AI going to help you solve for that, or do you need to lead it a bit?

Elizabeth It absolutely can. It's basically anything that you can model in a 3D environment, and you can definitely model someone in a wheelchair in a 3D environment. Tesla uses this method a lot, because it's hard to have real data from every kind of crash scenario. They're trying to model their problem as robustly as possible, so in some of those cases they say, all of these types of things could happen, let's get more data around that, and the most efficient, and sometimes the only feasible, way of doing that is with synthetic data.
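As a rough illustration of the augmentation side of what Elizabeth describes, flipping images and shifting colors to widen coverage of a small dataset, here is a minimal Pillow sketch. The file names and the particular transforms are assumptions for illustration, not a prescribed recipe.

```python
# Minimal data-augmentation sketch: generate flipped and color-shifted
# variants of one image to broaden a small training set.
from PIL import Image, ImageEnhance, ImageOps

def augment(path: str) -> list:
    original = Image.open(path).convert("RGB")
    return [
        original,
        ImageOps.mirror(original),                       # horizontal flip
        ImageOps.flip(original),                         # vertical flip
        ImageEnhance.Color(original).enhance(0.4),       # wash out colors
        ImageEnhance.Brightness(original).enhance(1.3),  # brighter lighting
    ]

# Hypothetical usage:
# for i, img in enumerate(augment("crosswalk_frame.jpg")):
#     img.save(f"crosswalk_frame_aug_{i}.jpg")
```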
Grant Awesome. Okay, so that's a key approach for addressing this bias problem. Are there any other techniques besides this generative training-data approach? What else could you use to overcome bias?

Elizabeth Yeah. So another type is when you have, like I was saying, a mismatch between test and production data. A lot of people, even computer vision people, sometimes don't know how much this matters with things like live video. In those cases bitrate matters, FPS matters, your resizing algorithm and your image encoding matter. In many cases you're collecting your test data differently than it's going to run in production, and people can forget about that. This is a place where having a platform like Plainsight can really help, because that process is standardized: the data you're pulling in is the same data that you're labeling, and the same data that you're then inferencing on, because you're pulling live data from those cameras and it's all managed in one place, end to end. So that's another strategy. Another thing that happens is when researchers work on a model for, say, two years, and they have this corpus of test data, but something changes in the meantime. Phone imaging has advanced in that time, so the input is a little different; or the floor layout changed in the factory they were trying to model, and they didn't realize the model had somewhat memorized that layout. So you get these situations where what you think is a really robust model gets dropped into production, and you don't know you have a problem until it's in production. That's another reason we really emphasize having pilots, and also having a lot of different perspectives vetting those pilots. Ideally you can find a subject matter expert in the area outside of your company to take a look at your data and what's coming out of it, and you have a group of people really thinking deeply about the consistency of your data, how you're modeling your problem, and making sure all of those things are covered.

Grant Well, reducing the cycle time from that initial training to validating the pilot is crucial here, because as you're pointing out, even if you keep the cycle time short and do lots of iterations, some assumptions may change. What are the techniques for keeping someone looking at those assumptions? Like you said, maybe it's a change in camera phone technology, or a change in the layout. As technology people we get so focused on pushing toward the solution that we sort of forget that part. How do you get someone to do that? Is it just a cultural thing? Is it an AI engineering thing, where someone has a defined role in the process?

Elizabeth I think it's both. The first thing is that organizations really need to think deeply about their process for computer vision and AI, and some of the things I've already mentioned need to be part of that process.
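Elizabeth's point about bitrate, FPS, resizing, and encoding mismatches boils down to one rule: run a single shared preprocessing function at labeling time and at inference time. Below is a minimal OpenCV sketch of that idea; the target size, interpolation choice, and JPEG quality are assumed values, not Plainsight's actual settings. A labeling tool and the production inference service would both call the same function, so a change to camera encoding or frame size shows up in both places at once.

```python
# One shared preprocessing path for labeling data and live production frames,
# so the model never sees two different distributions.
import cv2

TARGET_SIZE = (640, 384)          # (width, height) the model expects (assumed)
INTERPOLATION = cv2.INTER_AREA    # pin the resizing algorithm
JPEG_QUALITY = 90                 # pin the encoding settings

def preprocess(frame):
    """Apply the identical resize + encode/decode step everywhere."""
    resized = cv2.resize(frame, TARGET_SIZE, interpolation=INTERPOLATION)
    ok, buf = cv2.imencode(".jpg", resized, [cv2.IMWRITE_JPEG_QUALITY, JPEG_QUALITY])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR) if ok else resized
```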
Elizabeth So you want to research your users, or your use cases, in advance and try to think through the full problem set holistically. You want to be really, really clear about your labeling, because you can introduce bias just through your labeling process if the humans doing it introduce it, if some people label something a little bit differently than others. For example, on the edge of an image, if you have a person at the edge, do you count that as a person or not? How far into the view do they have to be? There's a lot of gray area where you really need to be very familiar with your data and be really clear, as a company, about how you're going to process it.

Grant So there are labeling boundaries, but then backing up, there's the label ontology or taxonomy itself, right? That itself could be introducing bias too.

Elizabeth Yeah. And back to what we were saying about how to really think through some of these problems: you can also make sure that, as a company, you have a process where you do multiple passes on that annotated data, and then multiple passes on the actual inference data, so you're really checking. Another thing we've talked about internally recently: we have a pipeline for deploying your computer vision, and one of the things that can be really important in a lot of these cases is making sure there is a human in the loop, some human supervision, to make sure you aren't surfacing bias that you didn't anticipate, or that your model hasn't drifted over time, things like that. So something we've considered is having it built into that process that you can kick off a task for a person, so it's just built in.

Grant And so no matter what you do, that check happens; it acts as a governance function. Is that what you're getting at?

Elizabeth Kind of. It's like a processing pipeline for your data, so you can have steps like: at this step I'm going to augment my data, and at this step I'm going to run an inference on it, or flip it, or whatever it is. And within that, you could make sure you kick off a task for a human to check, or whatever the case may be.

Grant Yep. So good process maturity is another technique for overcoming bias as well as inaccurate models, and I'm assuming you're almost bundling both of those together, right?

Elizabeth Yeah, both. And like you said, another way is reducing that cycle time, and also making sure that you're working on production data whenever possible. This is where the platform can help as well, because you aren't off in research land without production data for two years; you have a system that makes it really easy to connect cameras and just work on real production data. Then two things happen: you reduce the time it takes to go full circle on labeling and training and testing, and you also have it all in one place. That's one of the problems that we solve, right?
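Here is a minimal sketch of the built-in human review step Elizabeth mentions: a pipeline stage that kicks off a task for a person when the model is unsure, plus a small random spot-check so drift can be caught even on confident predictions. The threshold, sampling rate, and queue interface are illustrative assumptions, not a description of Plainsight's pipeline.

```python
# Illustrative human-in-the-loop stage: route uncertain or randomly sampled
# detections to a review queue instead of trusting the model blindly.
import random

REVIEW_THRESHOLD = 0.6   # assumed confidence cutoff
SPOT_CHECK_RATE = 0.01   # assumed fraction audited regardless of confidence

def run_stage(frame_id, frame, model, review_queue):
    detections = model(frame)   # assume the model returns dicts with a "confidence" key
    for det in detections:
        if det["confidence"] < REVIEW_THRESHOLD or random.random() < SPOT_CHECK_RATE:
            review_queue.append({"frame_id": frame_id, "detection": det})  # task for a person
    return detections
```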
Elizabeth Because in many cases, computer vision engineers or data scientists don't have the full pipeline to work on the problem. They have this test dataset, and they're working on it somewhat separately from where it will be deployed in production. So we try to join those two things.

Grant Yeah, I think that's one of the real strengths of your platform, the Plainsight platform: this reduction of the cycle, so that I can actually be testing and validating against production scenarios and then take that feedback, and then augmenting that with the governance processes you talked about. Both of those are critical. Let's talk a little bit about fraud. Certainly in computer vision, holy smokes, fraud has probably been one of the key areas the bad guys have gone after, right? What can you do to overcome and deal with this?

Elizabeth It can really become a cat-and-mouse game, and I think the conversation about fraud boils down to: is it better than the alternative? It's not clear that just because there could be some fraud in a computer vision solution, there wouldn't be more fraud in another solution. So the example is: technically, you used to be able to, and I think with some phones you still can, 3D print a face to defraud the facial recognition that unlocks your phone. They've made a lot of advancements so this is harder to do, like a liveness detector; I think they use your eyes for that. There are a few techniques, but you could still use a mask, so again it's this cat-and-mouse game. Another place: there are models that can understand text to speech, and then there are models you can put on top of that which make that speech sound like other voices. The big category here is deepfakes, but you can make your voice sound like someone else's voice, and there are banks and other institutions that use voice as a method of authentication.

Grant Right. I'm sure we've all seen the Google Duplex demos from a few years back, and that technology obviously continues to mature.

Elizabeth Exactly. So then the question is: if I can 3D print a face or a mask and unlock someone's phone, is that harder than someone just finding my four-to-six-digit numerical code to unlock my phone? I think it really becomes a balance of which thing is harder to defraud. And with fraud in general, if you think about cybersecurity and everywhere else you're trying to combat this, it's a cat-and-mouse game: people figure out the vulnerabilities in what exists, and then people have to get better at defending it. So if I can say the argument back: yes, fraud exists here, but how is this different from so many other technologies or techniques where you've got fraudsters trying to break in? This is just part of business today.

Grant Yeah, I think it becomes an evaluation of whether it causes more or less of a fraud problem.
Grant And then it's really just about evaluating the use of the technology on an even plane, right? It's not about "should you use AI, because it causes fraud"; it's "should you use any particular method or technology, given there's a fraud issue, and which one is going to cause the least fraud" for a specific use case.

Elizabeth Yeah.

Grant Yep. Okay, so fraud. You and I had talked about some potential techniques out there, like there's a Facebook/Instagram algorithm, I think it's called SEER, that came out not too long ago. It's an ultra-large vision model that takes in more than a billion parameters. Can you believe that? That's a lot. I've built some AI models, but not with a billion. That's incredible. Are you familiar with it? Have you looked into SEER at all?

Elizabeth Yeah, so this is basically a method where you try to address bias in part through distorting images. I can give you a good example of something we've actually worked on; I'm going to change the case a little bit to anonymize it. In a lab setting, we were working on some special imaging to detect whether a bacterium was present in samples or not, and we were collecting samples from many labs across the country. One thing that could differ between them was the color of the substrate the sample was kept in; it was essentially a preservative. There are a few different colors, and they're used widely, so it wasn't generally thought that this would be a problem. So the model was built, all the data was processed, and accuracy was really high. But what they found was a correlation between the substrate color and whether the bacteria was present, and it was just a chance correlation. If you had something like that image distortion, so you took the color out automatically or perturbed the color, that would have taken the bias out of the model. And then a second thing happened: when the people in the lab took the samples out of the freezer, they would take all of them at once, and they were ordered, so they would do all of the positives first and all of the negatives second. Machine learning is a really amazing pattern detector; that's what it's about. So again, the model was finding a correlation between how thawed a sample was and whether it was positive. Some of this really comes back to what you learn at the science fair: putting together a robust scientific method, handling all of your variables carefully and clearly, knowing what's going into your model, and controlling for that as much as possible. So yes, that Facebook method can be really valuable in a lot of cases to suss out correlations that you may just not know are there.

Grant Yeah, and I think what's cool is that they open-sourced it; I think it's called SwAV. Which is awesome.
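The fix Elizabeth describes for the substrate-color shortcut, taking the color out automatically or perturbing it, is easy to sketch. Below is a minimal Pillow version that either drops color entirely or randomizes saturation so color cannot carry the label; the file handling and the jitter range are illustrative assumptions.

```python
# Remove or perturb color before training so the model cannot use substrate
# color as a shortcut for "bacteria present".
import random
from PIL import Image, ImageEnhance, ImageOps

def decolor(path: str, jitter: bool = False) -> Image.Image:
    img = Image.open(path).convert("RGB")
    if jitter:
        # Randomize saturation so color becomes an unreliable cue.
        return ImageEnhance.Color(img).enhance(random.uniform(0.0, 1.5))
    # Default: drop color entirely, then restore 3 channels for the model.
    return ImageOps.grayscale(img).convert("RGB")
```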
Grant They figured that out and made it open source, so the larger community can use something like this to help deal with some of this bias challenge. Interesting, that's cool. I really wanted to ask your thoughts on that approach, so I'm glad to hear you validate it.

Elizabeth Yeah, it's great. There really has to be a process, especially with a model like that, where you try to break it in any possible way that you can; there has to be a whole separate process where you think through every variable there could be. So if there's a model that handles so many of those out of the box, that's a great place to start.

Grant Awesome. Okay, and then the last category here, around ethical violations, any thoughts on that?

Elizabeth Addressing that, overcoming that: I think it really comes down to making sure that when you need permission to be doing something, you're actually getting it. Obviously that comes up with facial recognition, making sure people know it's going on, similar to being videotaped at all. That one's fairly straightforward, but when you're putting together your ethics position, you need to make sure you're really remembering that it's there and checking every single time that you don't have an issue.

Grant Yeah, permissions. And there's this notion, I'll coin a term, of permission creep, like scope creep: you may have gotten permission to do this part of it, but you find yourself also using the data over here to solve other problems, and that's a problem in some people's minds for sure. Various articles and people out there talk about that part of it creeping along, and how do you help ensure that the data I gave you is used only for its permitted, intended purpose? That's a challenge for sure. Okay, you've been more than fair with your time here today, Elizabeth. Any conclusions? What's the top-secret answer to overcoming these four pitfalls of AI ethics?

Elizabeth One thing I have to add: we would be remiss if we talked about data bias without talking about data diversity and data balance. The simple example there is fruit: if you have a dataset with seven apples, one banana, and seven oranges, the model is going to be worse at detecting the banana. But the more real-world example happens in hospitals. In the healthcare system in general, we have a problem with being able to share data, even anonymized data. So when a hospital is building a model, there have been problems where they have bias in their dataset. In one location, if you're coming in with a cough, it may be most likely that you have a cold, but in another area, it may be more accurate to start evaluating for asthma.
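Elizabeth's fruit example can be made concrete with inverse-frequency class weights, one common way to compensate for imbalance by reweighting the loss (collecting more bananas is usually the better fix when possible). A quick sketch using her counts; the weighting rule is a standard convention, not something from the episode.

```python
# Inverse-frequency class weights for the 7 apples / 1 banana / 7 oranges set.
from collections import Counter

labels = ["apple"] * 7 + ["banana"] + ["orange"] * 7
counts = Counter(labels)
num_classes = len(counts)
total = len(labels)

# weight(c) = total / (num_classes * count(c))
weights = {cls: total / (num_classes * n) for cls, n in counts.items()}
print(weights)  # banana is weighted ~7x higher than apple or orange
```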
Grant So that kind of thing can come up if you take a model that was built at one hospital and try to apply it elsewhere. Is that kind of like a form of confirmation bias? Meaning, you have the same symptom, but you come into two different parts of the hospital: well, this person's coughing and you're in the respiratory area, so they immediately think it's one thing; but you go to another part of the hospital where a cough is a symptom of something else, and suddenly that's what they think you have.

Elizabeth That's a great point. It really is sort of the machine learning version of that.

Grant Yeah, it's a confirmation bias sort of view. But how many variables does it take for you to actually have true confirmation? With this example from Facebook, a billion; but how many do you need?

Elizabeth I think it's less about the number of variables and more about your data balance, and making sure you're training on the same data that's going to be used in production. It's less of a problem if you're only deploying that model at one hospital, but if you want to deploy it elsewhere, you need data from everywhere, or at least wherever you're planning to deploy it. So again, it really comes back to data balance and making sure your test data and your production data are in line.

Grant Are there any of these ethical biases we've talked about that are not solvable?

Elizabeth That's a good question. I think there are definitely some that can be really hard. Something we touched on: you asked whether supervised models are inherently more biased than unsupervised ones, and the answer there is probably yes, because a human is explicitly teaching the model what's important in the image. That thing can be exactly what you're looking for, you want to make sure there's not a safety issue or whatever it is, but it's a human process, so there can be things you don't catch.

Grant Yeah, that's been a question on my mind for a while, the implicit impact of bias on supervised versus unsupervised. I work with another group called Aible, have you run into Aible? They're one of the AutoML providers out there, more on the predictive analytics side of AI; they're not doing anything with computer vision. They have this capability where they'll look at, it's always supervised data, but the problem they're trying to solve is: okay, you've got a lot of data, just give me tone, give me signal. In other words, before I spend too much time training and guiding the model, do a quick look into that dataset and tell me whether there's any tone or signal where these particular supervised elements can draw an early correlation to an outcome or a predictive capability. The idea is that as the world of data keeps getting larger and larger, our time as humans doesn't, so we need to reduce the total set of stuff we're looking at, dismiss the pieces that are irrelevant to being predictive, and then focus on the things that are important.
Is there anything like that in the computer vision world?

Elizabeth So unsupervised learning is less common in computer vision. But one of the things that can happen is that the data that exists in the world is itself biased. An example: say you want to predict what a human might do at any one time, and you want to use an unsupervised method for that, so you scrape the internet for videos. If you look at the videos on YouTube, the videos people upload are inherently biased; if you look at security-camera videos, a lot of them are fights, because that's what humans find interesting enough to upload from a security camera. So there are places like that where your dataset is inherently biased just because we're human, and it's another place where you have to be pretty careful.

Grant Yeah. Okay, so it sounds like these problems are, I'm doing air quotes, "solvable," but it takes some discipline and rigor.

Elizabeth Yeah. And it's just so important for organizations to sit down and really think through their ethical use of AI, how they're going to approach it, get a policy together, and make sure they're really living those policies.

Grant Excellent. Okay, Elizabeth, thank you for your time today. Any final comments, any parting shots?

Elizabeth No, I appreciate you having me on. That was a really fun conversation, and I always enjoy chatting with you.

Grant Likewise, Elizabeth, thank you for your time. Thank you, everyone, for joining this episode. Until next time, get some ethics for your AI.

Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook: visit ClickAIRadio.com now.
Grant Welcome everybody. In this episode, we take a look at the four pitfalls to AI ethics and are they solvable? Okay, hey, everybody. Welcome to another episode of ClickAI Radio. So glad to have in the house today Plainsight AI. What a privilege. Elizabeth Spears with us here today. Hi, Elizabeth. Elizabeth Hey, Grant. Thanks for having me back. Grant Thanks for coming back. You know, when we were talking last time, you threw out this wonderful topic around pitfalls around AI ethics. And it's such a common sort of drop phrase, everyone's like, oh, there's ethics issues around AI. Let's, let's shy away from it. Therefore, it's got a problem, right? And I loved how you came back. And it was after our episode, it's like he pulled me aside in the hallway. Metaphorically like "Grant, let's have a topic on the pitfalls around these some of these ethical topics here". So I, you hooked me I was like, Oh, perfect. That's, that's a wonderful idea with that. Elizabeth So typically, I think there's, there's so many sort of high level conversations about ethics and AI, but, but I feel like we don't dig into the details very often of kind of when that happens, and how to deal with it. And like you said, it's kind of the common pitfalls. Grant It is. And, you know, it's interesting is the, in the AI world in particular, it seems like so many of the ethical arguments come up around the image, style of AI, right, you know, ways in which people have misused or abused AI, right for either bad use cases or other sort of secret or bad approaches. So this is like you are the perfect person to talk about this and, and cast the dagger in the heart of some of these mythical ethical things, or maybe not right. All right. Oh, yeah. Alright, so let's talk through some of these. So common pitfalls. So there were four areas that you and I sort of bantered about, he came back he said, Okay, let's talk about bias. Let's talk about inaccuracy in models, a bit about fraud, and then perhaps something around legal or ethical consent violations. Those were four that we started with, we don't have to stay on those. But let's tee up bias. Let's talk about ethical problems around bias. All right. Elizabeth So I mean, there's really there's several types of bias. And, and often the biased and inaccuracies can kind of conflate because they can sort of cause each other. But we have I have some examples of of both. And then again, somewhere, some where it's it's really biased and inaccuracies that are happening. But one example or one type is not modeling your problem correctly, and in particular, to simply so I'll start with the example. So you want to detect safety in a crosswalk, right, relatively simple kind of thing. And, and you want to make sure that no one is sitting in this crosswalk. Because that would be now generally be a problem. It's a problem. So, so you do body pose detection, right? And if you aren't thinking about this problem holistically, you say, All right, I'm going to do sitting versus standing. Now the problem with that is what about a person in a wheelchair? So then you would be detecting kind of a perceived problem because you think someone sitting in the middle of a crosswalk but but it's really just about accurately defining that problem. And then and then making sure that's reflected in your labeling process. And and that kind of flows. into another whole set of problems, which is when your test data and your kind of labeling process are a mismatch with your production environment. 
So one of the things that we really encourage for our customers is, is collecting as much production close as close to possible, or ideally just production data that you'll be running your models on, instead of having sort of these very different test data sets that then you'll then you'll kind of deploy into production. And there can be these mismatches. And sometimes that's a really difficult thing to accomplish. Grant Yeah, so I was gonna ask you on that, you know, in the world of generative AI, where that's becoming more and more of a thing, and in the app, the appetite for sort of generating or producing that test data is the premise that because I've heard some argue, wait, generative AI actually helps me to overcome and avoid some of the bias issues, but it sounds like you might be proposing just the opposite. Elizabeth It actually works both ways. So um, so creating synthetic data can really help when you want to avoid data bias, and you don't have enough production data to, to do that well. And so you can do, you can, you can do that in a number of different ways. data augmentation is one way so taking your original data and say, flipping it, or changing the colors in it, etc. So taking an original dataset and trying to make it more diverse and kind of cover more cases than you maybe would originally to make your model more robust. Another another kind of way of doing that is synthetic data creation. So an example there would be, you have a 3d environment, in one of these, you know, game engine type things like Unreal or blender, you know, there's, there's a few, and you have, say, I want to detect something, and it's usually in a residential setting, right. So you can have a whole environment of different, you know, housing types, and it would be really hard to get that data, you know, without having generated it, right, because you don't have cameras in everybody's houses, right. So in those cases, what we encourage is, pilots, so you before, really, you know, deploying this thing, and, and letting it free in the world, you you use that synthetic data, but then you make sure that you're piloting that in your set in your real world setting as long as possible to, you know, sets out any issues that you might come across. Grant So let's go back to that first example you shared where you got the crosswalk, you have the pedestrians, and now you need to make sure you've got different poses, like you said, someone you know, sitting down on the road or laying on the rug, certainly using generative AI to create different postures of those. But But what about, hey, if the introduction, is something brand new, such as, like you said, the wheelchair or some other sort of foreign object? Is the generative AI going to help you solve for that? Or do you need to you need to lead lead it a bit? Elizabeth It absolutely can. Right? So yeah, it's, it's basically anything that you can model in a 3d environment. And so you can definitely model someone in a wheelchair in a 3d environment. And, and Tesla uses this method really often because it's hard to simulate every kind of crash scenario, right? I mean, sorry, it's hard to have real data from every kind of crash scenario. And so they're trying to model again, they're trying to model their problem as robustly as possible. And so in some of those cases, they are like, you know, all of these types of things could happen, let's get more data around that the most efficient, and kind of most possible way of doing that is with synthetic data. 
Grant Awesome. Awesome. Okay. So that's a key approach for addressing the this bias problem. Are there any other techniques besides this generative, you know, training data approach? What else could you use to overcome the bias? Elizabeth Yeah, so. So another type kind of is when you have, like I was saying a mismatch in test and production data. So a lot of people even you know, computer vision, people sometimes don't know how much this matters. When it's things like, for example, working with a live video. So in those cases, bitrate matters, FPS matters, your resizing algorithm and your image encoding. And so you'll have, in many cases, you're collecting data in the first place for your test data differently than it's going to run in production. And people can forget about that. And so this is a place where, you know, having a platform like plain sight, can really help because that process is standardized, right? So the way you're pulling in that data, that is the same data that you're labeling, and it's the same data that you're, then you know, inferencing on, because you're pulling live data from those cameras, and it's all it's all managed in one place and to end. So that's, that's another strategy. And another thing that happens is when there are researchers that will be working on a model for like, two years, right, and they have this corpus of test data, but something happens in the meantime, right? So it's like, phone imaging has advanced in those in that time, so then your your input is a little different, or like the factory that they were trying to model, the the floor layout changed, right. And they didn't totally realize that the model had somewhat memorized that floor layout. And so you'll get these problems where you have this, you know, what you think is a really robust model, you drop it into production, and you don't know you have a problem until you drop it into production. So that's another reason that we really emphasize having pilots, and then also having a lot of different perspectives on vetting those pilots, right. So you, ideally, you can find a subject matter expert in the area outside of your company to, you know, take take a look at your data and what's coming out of it. And you have kind of a group of people really thinking deeply about, you know, the consistency of your data, how you're modeling your problem, and making sure that kind of all of those, all of those things are covered? Grant Well, in reducing cycle time from this initial set of training, to, to sort of validation of that pilot is crucial to this because as you're pointing out, even even if you even if you keep that cycle time short, and you do lots of iterations on it, some assumptions may change. How do you help? How to me what's the techniques for, you know, keeping someone looking at those assumptions? Like you said, maybe it's a change in camera phone technology, or it's a change of the layout? Like I said, as technology people, Einsteins we get so focused on oh, we're just pushing towards the solution, we sort of forget that part. How do you how do you get someone? Is that just a cultural thing? Is it a AI engineering thing, that someone's got a, you know, a role in the process? To do that? Elizabeth I think it's both. So I think the first thing is organizations really need to think deeply about their process for computer vision and AI. Right. And, and some of the things that I've already mentioned, need to be part of that process, right? 
So you want to research your users in advance, or your use cases in advance and try to think through that full Problem Set holistically. You want to you want to be really, really clear about your labeling, right? So you can introduce bias, just through your labeling process if humans themselves are introducing it, right? Exactly. If you have some people labeling something a little bit differently than other people. So like on the edge of an image, if you have a person on the edge, do you count that as a person? Or is it or you know, or as another person? Or is it not counted? How far in the view do they have to be? So there's, there's all a lot of gray area where you really just need to be very familiar with your data. And, and be really clear, as a company on how you're going to process that. Grant So this labeling boundaries, but then backing up, there's the label ontology or taxonomy itself, right, which is, yeah, that itself could just be introducing bias also, right. Elizabeth Yeah. And then back to kind of what we're saying about how to ensure how to really think through some of these problems, is you can also make sure that that as a as a company, you have a process where you, you have multi passes, multiple passes on, on that annotated data, and then multiple passes on the actual inference data, right. So you have a process where you're really checking. Another thing that we've talked about internally, recently is you know, we have a pipeline for deploying your computer vision. And one of the things that can be really, really important in a lot of these cases is making sure that there is a human in the loop that there is some human supervision. To make sure that you're, you're, again, you weren't servicing bias that you didn't under your you didn't anticipate, or your your model hasn't drifted over time, things like that. And so something we've considered is being able to kick off just in that process, have it built in that you can kick off a human, like a task for a person, right? So it's, it's just built in. Grant And so it no matter what you do that thing is this, it's just as a governance function, is that what you're getting? Elizabeth Kind of so it's like, it's like a processing pipeline for your data. And, and so you can have things like, Alright, at this step, I'm gonna augment my data, and at this step, I'm gonna, you know, run an inference on it, or flip it or whatever it is, right? And so, in that you could make sure that you kick off a task for a human to check, right, or, or whatever the case may be. Yep. Yep. So there's several good, so good process maturity, is another technique for how do we help overcome bias as well as inaccurate models? And I'm assuming you're, you're almost bundling both of those into that right? In Yeah, both right. And, and like you said, they're the another way is reducing that time, and also making sure that you're working on production data whenever possible. So reducing the this, this is where the platform can help as well. Because when you you know, you aren't off in research land, without production data for two years, but you have a system where it makes it really easy to connect cameras, and just work on on real production data, then two things, you're, you're reducing the time that it takes to kind of go full circle on on labeling and training and testing. And then also you you have it all in one place. And that's that's one of the problems that we solve, right? 
Because, in many cases, computer vision engineers or, or data scientists, they're kind of working on the they don't have the full pipeline to work on the problem. So they have this test dataset, and they're working on it somewhat separately from where it will be deployed in production. And so we try to join those two things. Grant Yeah, I think that's one of the real strengths of the platform of your platform, the plain side platform is this reduction of the cycle, so that I can actually be testing and validating against production scenarios, and then take that feedback. And then augmenting that with the great governance processes you talked about. Both of those are critical. Let's let's talk a little bit and talk about fraud is, you know, certainly in this in computer vision, holy smokes, fraud has been probably one of the key areas that, you know, the bad guys have gone after, right? All right, what what can you do to overcome this and deal with this? Elizabeth You know, it can really become a cat and mouse game. And I think the conversation about fraud boils down to, it's not clear, it boils down to is it better than the alternative? Right? So it's not clear that just because there could be some fraud in the computer vision solution, it may or may not be true that there could be more fraud and another solution, right. So so the example is, technically, you used to be able to and I think with some phones, you still can 3d print a face to defraud your facial detection to unlock your phone. Yeah. And there is and so then they've, you know, done a lot of things, advancements, so this is harder to do, which, like there's a liveliness detector, I think they use their eyes, your eyes for that. And then you know, there's a few but you could still use a mask. So again, it's it's this cat and mouse game. And another place is is you know, there are models that can understand text to speech. And then there are models that you can put on top of that, that can make that speech sound like other voices, right? So the the big category here is deep fakes. But it's, you know, you can you can make your voice sound like someone else's voice. And there are banks and other things like that, that use voice as a as a method for authentication. Right, right. Grant I'm sure I'm sure we've all seen the the Google duplex demo or scenarios right. says a few years from now, right? I mean, that technology obviously continues to mature. Elizabeth Exactly. And so, so then the question is Okay, if I can 3d print a face and or a mask and unlock someone's phone, is that is that is that harder than actually someone just finding my, you know, four to six digit phone, you know, numerical code to unlock my phone. So, you know, so I think there it really becomes a balance of which thing is is harder to defraud and in fraud in general, you know, if you think about cybersecurity, and, and everywhere that you're trying to combat this, it's a it's a cat and mouse game, right? People are getting, you know, people are figure out the vulnerabilities in what exists and then and then people have to get better at defending it. So well. So the argument is, if I if I can say back to the argument is, yeah, it exists. But hey, how's this different from so many other technologies or techniques, where again, you got fraudsters trying to break in? This is just part of the business today? Right. That's where it is? Grant Yeah, I think it becomes a, an evaluation of is it? Does it cause more or less of a fraud problem? 
And then it's, it's really just about evaluating the use of technology on an even plane? Right. So it's not it's not about should you use AI? Because it causes fraud? It's should you use any particular method or technology because there's a fraud issue and what's gonna cause the least fraud? Right, a more specific use case? Elizabeth Yeah. Grant Yep. Okay, so So fraud. So, uh, you and I had talked about some potential techniques out there. Like there's a Facebook Instagram technology algorithm. Right. I think it's called seer. I think it came out not too long ago. It's a it's an ultra large vision model. It takes in more than a billion variables. P believe that. That's, that's a lot. A lot of massive. I mean, I've built some AI models, but not with a billion. That's incredible. So are you familiar with that? Have you looked into that at all SEER itself? Elizabeth Yeah, so So this, basically, this method where you can look, basically to try to address bias through distorting of images? Yeah, yeah. So I can give you a good example of something that actually we've worked on, I'm going to chase change the case a little bit to kind of anonymize it. But so in a lab setting, we were working on some special imaging to detect whether there was a bacteria in, in in samples, or not, right. And in this case, we were collecting samples from many labs across the country. And one thing that could be different in them was the color of kind of the substrate that the sample was just in, it was essentially a preservative. Wow. And so but but those, there are a few different colors. And they were used kind of widely. And so it wasn't generally thought that, you know, this would be a problem. But so the model was built and all the data was processed. And there was a really high accuracy. But what happened, and what they found out was that the, there was a correlation with the color and whether the bacteria was present or not. And it was just a kind of a chance correlation, right. But if you had had something like that, that image distortion, so if you took the color out automatically, or you mess with the color, then that would have taken that bias out of that model. And then as a second thing happened, actually, which was when the, the the people in the lab, were taking the samples out of the freezer, they would take all of them at once. And they were just kind of bordered. And so they would do all of the positives first and all of the negative second. And machine learning is just it's a really amazing pattern detector, right? Like that is that is what it is about. Yeah. And so again, they were finding a correlation just between the weather it was hot, more thawed or not. And that was correlating with whether it was positive or not. So, you know, some of this really comes back to what you learn in science fair and putting together a really Your robust scientific method and making sure you're handling all of your very variables really carefully. And, and, and, and clearly and you know what's going into your model. And you can control for that as much as possible. So, so yeah, that I mean that Facebook method is, can be really valuable in a lot of cases to suss out some of these correlations that you may just not know are there. Grant Yeah, I think what's cool is they open source that right, I think it's called swag SwaaV. Yeah. Which is awesome. 
The they figured that out and made that open source so that obviously, the larger community needs something like this course help deal with some of this, this bias challenge. Interesting. Okay, that's cool. So all right. I was I was I really wanted to ask you about your thoughts on that approach. So I'm glad to hear you validate that. Elizabeth Yeah, no, it's great. I mean, there really has to be a process, especially in a in a model like that, where you try to break it in any possible way that you can, right, there has to be a whole separate process where you think through any variable that there could be and so if there's a model that's, that has, you know, so many just out of the box, that's a really good, great place to start. Grant Yeah, awesome. Awesome. Okay. And then the last category here, around ethical violations, any thoughts on that? Elizabeth Addressing that overcoming that, you know, I think that really just comes down to when you need permission to be doing something, I need to make sure that you're doing it right, or you're getting it. And that, you know, obviously that happens in cases where there's facial recognition and making sure that people know that that's going on, and that's similar to being kind of videotaped at all right. And so that one's fairly straightforward. But sometimes people need to, you know, when you're putting together your ethics position, you need to make sure that you're really remembering that that's there. And you're checking every single time that you don't have an issue. Grant Yeah, permissions. And there's this notion, I'll come up with a term that feels like permission creep, right. It's called scope, right? It's like, well, you may have gotten permission to do this part of it. But you kind of find yourself also using the data stuff over here right to maybe solve other other problems, and that that's a problem in some some people's minds for sure. I was very good point. Yeah, various articles, people out there talk about that part of it sort of creeping along, and how do you help ensure that what it is I gave you the data for what we're using it for? Is just for its, you know, you know, permitted intended purpose, right? That was a challenge for sure. Okay, so you've been more than fair with your time here today with us, Elizabeth, gay, dry, any conclusions? What's the top secret answer to the overcoming the four pitfalls here of AI ethical? Elizabeth So one thing I have to add, we would be remiss if we didn't talk about data bias without talking about data diversity in data balance, right. And so, you know, obviously, the, the simple example there is fruit. So if you are looking at if you have a dataset with seven apples, one banana, and seven oranges, it's going to be worse at detecting the banana. But the more real world example that happens is in hospitals, right? So they, in the healthcare system, in general, we have a problem with being able to share data, even even anonymized data. So when a hospital is doing is building a model, there have been problems where a can be they, they have bias in their dataset, right. So in in a certain location, you can have something like if you're coming in with a cough in one area, it may be most likely that you have a cold, cold, but in another area, it may be more accurate to start evaluating for asthma, right. 
Grant So that kind of thing can come up so it if you if you take a model that's done in one hospital and try to apply it elsewhere, then again, that's a place where you can visit, is that kind of like a form of confirmation bias, meaning, you know, you have the same symptom, but you come into two different parts of the hospital and, well, this person's coughing and you know, you're in the respiratory area. So they immediately think it's one thing but now you go to another part of the hospital. Well, yeah, a cough is a symptom for that to suddenly you know, that's what they think you have. Elizabeth That's a great point. It really it's sort of the machine learning version. that? Grant Yeah, that's right. Yeah, it's a confirmation bias sort of view. It's like yeah, oh, this is, uh, but it how many variables does it take for you to actually have true confirmation? Right? But with this example from Facebook a billion, but how many do you need to have? Elizabeth I think it's really it's less about the variables. And it's more about your data balance and making sure that you're training on the same data that's going to be used in production. So it you know, it's less of a problem, if you are, you know, only deploying that model at one hospital. But if you want to deploy it elsewhere, you need data from everywhere, right? Or, or wherever you're, you're planning to deploy it. So So again, it really comes back to that data balance and making sure your test data and your production data are kind of in line. Grant Are there any of these ethical biases we've talked about that are not solvable? Elizabeth Um, that's a good question. I think Ah, maybe dancer, are you? Are you running? I think there are definitely some that can be really hard. So, so something that we touched on, you talked about, you know, is there inherently a, are our supervised models more inherently more biased than unsupervised? And like, the answer there is, is probably yes. Because you're T you're a human is explicitly teaching a model what's important in that image? And so you know, that that thing can be exactly what you're looking for. Right? You want to make sure there's not a safety issue or whatever it is. But, but, but just it's a human process. So there can be things there that you don't catch. Grant Yeah, yeah. Yeah, that's that's been a question on my mind for a while, which is the implicit impact of bias on supervised versus non supervisory, or work with another group called Aible, have you run into Aible, they're one of the AutoML providers out there. And more on sort of the predictive analytics side of AI, right. They're not doing anything with with computer vision, they have this capability, where they'll look at, but it's always supervised data, but what they're trying to the problem you're trying to solve is, okay, you got a lot of data. Just give me tone, give me signal. In other words, before I spend too much time, trying to, you know, do some training and guiding the model, just do a quick look into that data set and tell me, is there any toner signal where these particular supervised elements, they can draw early correlation to outcome or predictive capabilities. And the idea is that as the world of data keeps getting larger and larger, our time as humans doesn't keep getting larger and larger. So we need to reduce what's the total set of stuff we're looking at, dismiss these other pieces, they're irrelevant to, you know, being predictive. And then you can focus on the things that are important. 
Anything like that in the computer vision world? Elizabeth So unsupervised learning is less common in computer vision. But one of the things that can happen is that the data that exists in the world is itself biased. An example: say you want to predict what a human might do at any one time, and you want to use an unsupervised method for that, so you scrape the internet for videos. The videos that people upload are inherently biased. If you look at security camera videos on YouTube, a lot of them, not almost all, but a lot, are fights, because that's what humans find interesting enough to upload from a security camera. So there are places where your dataset is inherently biased just because we're human, and again, it's another place where you have to be pretty careful. Grant Yeah. Okay, so it sounds like these problems are, I'm doing air quotes, "solvable," but it takes some discipline and rigor. Elizabeth Yeah. And it's just so important for organizations to sit down and really think through their ethical use of AI, how they're going to approach it, get a policy together, and make sure they're really living those policies. Grant Excellent. Okay. Elizabeth, thank you for your time today. Any final comments? Any parting shots? Elizabeth No, I appreciate you having me on. That was a really fun conversation, and I always enjoy chatting with you. Grant Likewise, Elizabeth, thank you for your time. Thank you, everyone, for joining this episode. Until next time, get some ethics for your AI. Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook: visit ClickAIRadio.com now.