"Fake news" is a term you've probably heard a lot in the last few years, but it's not a new phenomenon. From the ancient Egyptians to the French Revolution to Jack the Ripper and the founding fathers, fake news has been around as long as human civilization. But that doesn't mean we should just give up on the idea of finding the truth.

In True or False, former CIA analyst Cindy Otis takes readers young and old through the history and impact of misinformation over the centuries, sharing stories from the past and the insights today's readers can gain from them. She then shares lessons learned over a decade of working for the CIA, including actionable tips on how to spot fake news, how to make sense of the information we receive each day, and, perhaps most importantly, how to understand and see past our own information biases, so that we can think critically about important issues and put the events happening around us into context.

Find more information about the author at cindyotis.com

True or False, by Cindy Otis, releases on July 28, 2020. #1 new release in teen and young adult modern history.

Sites mentioned:
Snopes.com
Poynter Institute
Politifact.com
Factcheck.org
Hoaxslayer.com
Botcheck.me

Episode 43

If you've enjoyed this episode and would like to hear more, please consider signing up as a contributing patron and join the community for exclusive commentary and content. A $10 a month donation will really keep us going: https://www.patreon.com/thelivedrop

Alternatively, if you would like to help make Season Three operational, you can offer a one-time donation of any amount right here ---> https://www.paypal.me/thelivedrop

Thank you for listening and for your support,
Mark Valley
Creator/Host
In 2016 Russia defeated the United States in a cyberwar, selecting the president of the United States. Since that victory, Russia has continued its campaign in the dark corridors of cyber, colonizing us at every turn: from the Nunes memo to the Parkland shootings. Cyber-colonization can be resisted, but first it must be understood.

Here are a few things you can do now:
Check out Hamilton 68: https://dashboard.securingdemocracy.org/
Check out Botcheck and add the browser extension: Botcheck.me
Follow Sheera Frenkel: https://www.nytimes.com/by/sheera-fre... https://www.buzzfeed.com/sheerafrenkel

Timothy Snyder is a historian at Yale University, specializing in Eastern Europe, totalitarianism, and the Holocaust. His books have received widespread acclaim. His most recent book, "On Tyranny: Twenty Lessons from the Twentieth Century," explores the everyday ways a citizen can resist the authoritarianism of today. He is also the author of "Black Earth: The Holocaust as History and Warning" and, forthcoming in April, "The Road to Unfreedom: Russia, Europe, America."

Follow on Twitter: @TimothyDSnyder
The show where we discuss things we're into every other week!

In What's Happening What's Up we discuss "A Brief Inquiry Into Online Relationships" by The 1975.

We then give recommendations for the biweek:
Taylor: The TV show "The Final Table"
Jacob: The movie "Burning"
Taylor: "Here's How Much Bots Drive Conversation During News Events" by Issie Lapowsky https://www.wired.com/story/new-tool-shows-how-bots-drive-conversation-for-news-events/ (check out BotCheck.me and FactCheck.me)
Jacob: "Christmas Song" by Phoebe Bridgers

Follow us here:
instagram.com/goodtastepod
twitter.com/@jacobthewilson
twitter.com/@taylorjaywilson

Email us: goodtastepod@gmail.com

Leave a review and something you want us to check out and we'll do so! Click here: https://itunes.apple.com/us/podcast/good-taste/id1331981072?mt=2

Our friends:
Our intro song is by Koi: https://open.spotify.com/artist/6MhwQdck5uQDaUUf0wI1kj?si=vzuRLjPCSBSPoCi6wpPOOA
Rival Sports Club: https://www.spreaker.com/show/rival-sports-club
Taylor's other show: https://www.spreaker.com/user/10623863
Jacob's blog: jacobandrewwilson.com/blog
UC Berkeley students and founders of RoBhat Labs, Ash Bhat and Rohan Phadte, have launched a Twitter bot checker called Botcheck.me, using data science and machine learning to help any user identify fake news.

Transcript:

Lisa Kiefer: You're listening to Method to the Madness, a bi-weekly public affairs show on KALX Berkeley celebrating Bay Area innovators. I'm your host, Lisa Kiefer, and today I have two UC Berkeley students, Ash and Rohan, who have launched a Twitter bot checker that has really taken off. We're going to talk to them about how they're battling fake news. I'd like to welcome the UC Berkeley students. What year are you guys?

Ash Bhat: We're juniors.

Lisa Kiefer: It's Ash Bhat and Rohan Phadte. You've come to my attention because you came up with an innovative Twitter bot checker, and I assume you've probably come up with a lot of other things since then. But I wanted to talk to you about your lab, RoBhat Labs, which combines your names; that's really great. So tell me, first of all, what is a bot?

Ash Bhat: Yeah, so one of the things we've really been looking at is that on Twitter, there are a lot of accounts that try really hard to seem human but actually have bot-like behavior behind them. A lot of bots on Twitter are really harmless; a lot of them actually say that they're bots and just tweet out maybe every word in the English language as an automation exercise. But there are some other bots on Twitter that are actually pretty dangerous, and they end up pushing or retweeting a lot of these political propaganda memes or topics. A lot of other people can see these bot networks spread this information, look at the information, and think, oh look, my friends are sharing it.
There's a lot of people here.

Lisa Kiefer: It's legitimate.

Ash Bhat: It's legitimate.

Lisa Kiefer: Yeah.

Ash Bhat: In fact, it's actually been propagated by, you know, hundreds or maybe even thousands of bot-like profiles, and it's basically artificially creating this virality on Twitter.

Lisa Kiefer: When did you come to the realization that this was a problem you had to find a solution to?

Rohan Phadte: Yes, so in terms of our background, we started out trying to identify fake news computationally. The way we approached it was trying to figure out where fake news was being spread. So we actually went on Twitter and started looking at the different accounts that were spreading fake news, and we started noticing that they didn't look human at all. They were tweeting out every minute; they seemed to be tweeting at every hour of the day. And so all of a sudden we were like, this seems to not be human. And so that's sort of how we got into this entire...

Lisa Kiefer: I read that... What's the guy's name? Yiannopoulos, who was here...

Rohan Phadte: Oh yeah.

Lisa Kiefer: ...was that the impetus?

Rohan Phadte: Yeah, Milo was definitely an impetus in the sense that that's sort of how we got into the political space. We were both at the protest, and while we were there we realized that there was so much misinformation being spread about the protest. And that's how we started getting acquainted with the space.

Lisa Kiefer: So what are you studying here at UC Berkeley?

Rohan Phadte: I'm an interdisciplinary studies field major, so I'm studying quite a few different subjects, everything from sociology to philosophy to computer science.

Lisa Kiefer: Oh, that's a nice mix. What about you?

Ash Bhat: I'm studying computer science and electrical engineering. So I'm mainly on the engineering side, and I do a little bit of research as well.

Lisa Kiefer: So I read that you call yourselves data scientists. What does that mean exactly?

Rohan Phadte: Yes.
So in terms of data science, we're looking at a lot of statistics. Data science is a very sexy word for statistical analysis. We're looking at a lot of text, we're looking at a lot of numbers, and we're trying to make sense of it all. And that's essentially what we do as data scientists.

Lisa Kiefer: You started this lab. Walk me through your process. What did you need to do first after you realized you wanted to get truth into the information space?

Ash Bhat: It started off as just us working on projects, basically just being like, what can we do? We're computer scientists, we can solve any problem. Like, we can try it; we can use our technical knowledge to solve any problem and-

Lisa Kiefer: Were you going to use it for a class project, or is this outside of class?

Ash Bhat: This is completely outside of class. On nights we were like, "Hey, we should be doing something about this." In fact, one of the interesting things we saw was on Facebook. Facebook announced that they were going to be doing something about fake news, trying to detect it and stop it from spreading. And then right below that announcement we saw instances of fake news still being spread by a couple of friends and profiles. And so we were just like, "Hey, there must be something to be done here." Taking matters into our own hands, we figured we could use the data science and machine learning we learned at Berkeley to create our own algorithms to help solve this problem.

Lisa Kiefer: And so what do you do with them? I mean, explain it to me. I use Twitter very rarely. How would I use your... What do you call your bot checker?

Ash Bhat: We call it botcheck.me.

Lisa Kiefer: And tell me how I would use that. Walk me through how I can protect my account.

Ash Bhat: Botcheck.me is actually a website you can visit, and it has a couple of dashboards which talk about the ...
Basically, they talk about the statistics of the current bot network: how the bots are acting, what the most recent topics are, what they're talking about. There's also a search bar where you can enter any Twitter username. Once you enter a username, it's sent up to our server, and we run statistical analysis behind the scenes. We look at the tweets, how often they tweet, the tweet timestamps, the number of likes. We're basically looking at the profile's network, and we can accurately determine whether that profile is a bot or a human.

Lisa Kiefer: How accurately?

Ash Bhat: Yeah, so when we first launched, we were getting about 93% accuracy on high-confidence bot accounts. Since then we've gotten a bunch of feedback from the community; we've had about 50,000 users and over 500,000 accounts classified. And that number has just risen since then, because the model has been taking all this input from humans and learning. So now that number is about 96 to 97%.

Lisa Kiefer: That's not bad.

Ash Bhat: We think it's a great start for understanding the bot network on Twitter, especially since these bots are already out there, and just having all this information available can really educate a user about whether an account is actually a bot.

Lisa Kiefer: When you need this data to do your analysis, do you have to pay Twitter for it? How do you get your information?

Rohan Phadte: Yeah, so in terms of getting the information, Twitter actually has a public API, and so we're actually able-

Lisa Kiefer: And what is an API?

Rohan Phadte: An API is essentially... It's sort of like hitting a URL to get access to information in a way that we can run data analysis on. Twitter makes a bit of their service available for developers like us to take advantage of and use for statistical analysis.

Lisa Kiefer: Well, I didn't know Twitter provided that for free.
So they're taking a passive stance, it sounds like, and letting developers do it. Why do you think they're taking a passive stance and not doing this themselves?

Ash Bhat: We're actually a little bit confused, considering that we're two college students who have been able to build something that users very clearly want. The response we've gotten has been absolutely insane. But that being said, Twitter is a multi-billion dollar company with hundreds if not thousands of engineers, and we think they should totally be doing more when this problem is so, so important and a problem that we all face.

Lisa Kiefer: Well, have you talked to anyone there? What do you think is the reason? Is it because they need the advertising and promotion? They don't want to put any restraints on business, or what?

Ash Bhat: We're not entirely sure, but one thing that has been pretty fascinating is that Twitter hasn't really responded to our comments or to us reaching out to them. We recently gave a talk at Stanford, and I think Twitter was supposed to be there as well, and when they found out we were going to be speaking, I think they dropped out of the talk. So we're not entirely sure what's happening. We haven't really heard too much back. But yeah, we are definitely very curious.

Lisa Kiefer: Yeah, it seems like Facebook and Twitter and maybe others are taking a passive approach, like this problem is going to go away.

Ash Bhat: Yeah, I mean, here's one theory that we have, and we don't want to speculate too much.
It's a very complex area for them to be in, especially considering that even when Facebook tried to prevent fake news, they actually got in trouble for seeming anti-conservative. So there's a lot of pushback against Facebook or Twitter seeming politically polarized if they were to take a position one way or the other.

Lisa Kiefer: One side or the other would accuse them.

Ash Bhat: Exactly, yeah.

Lisa Kiefer: After all your research, do you think that one side of the political spectrum is using these bots more than the other, or is it pretty equal?

Ash Bhat: In terms of how these bots come out in terms of politics, we've actually noticed bots on both sides of the political spectrum. It's been very, very, very scary. We've seen examples such as the Parkland shootings, when bots were tweeting out about gun control from both sides of the debate, and what this actually creates is even more of a divide. And on top of that, the thing that again scares us is that these bots are able to sort of influence the conversation. So in the case of Parkland, where it could have been a conversation where we found unity around mental health, it became a conversation where we fought over gun control. These are the different areas where we find bot networks incredibly scary in terms of their effect on how we talk about certain issues.

Lisa Kiefer: I mean, how do we know the truth, and why is that important?

Ash Bhat: Yeah, there was recently an MIT study that came out that said fake news actually spreads much faster and much more broadly than real news. They looked at data from the 2016 elections. And I think the core thesis we took away from that was that fake news is in some ways more sexy, more interesting to read, and thus it spreads virally a lot easier. That sort of becomes this problem where it inherently spreads a lot faster, yet it comes with all these different problems that are baked into being fake. And so we're sort of struggling with that.
Like, how do we incentivize people to read the truth, and also how do we stop fake news from spreading in the first place?

Lisa Kiefer: It's an interesting question, because the education of most people in the country is not that great anymore, and so they're not learning how to think critically. Instead of doubting something or going deeper, there's just a superficial, okay, that's the truth. So if you don't have critical thinking skills, you're not going to be able to know the difference, I think. And that's scary.

Ash Bhat: Yeah, the way we see it internally, a lot of these ideas are almost like a mind virus, in the sense that they spread in the same ways viruses do. They infect a few initial people, then start spreading through their hosts, and the analogy seems to work almost perfectly.

Lisa Kiefer: Yeah.

Ash Bhat: Yeah.

Lisa Kiefer: And the other side of that is, once something is out there, you can't put it back in the bottle. Even if somebody says, no, no, no, no, it's been proven to be wrong, it's already in my head somehow.

Ash Bhat: Yeah, it's incredibly hard to quarantine, right?

Lisa Kiefer: Right. Did you have many challenges when you developed this product, and if you did, what were they?

Rohan Phadte: Just from a technical standpoint, you're looking through all these statistical models, and you have to actually teach a computer how to learn what a bot and what a human is.
And so it does take a little bit of engineering time, and a little bit of research to understand how exactly we can make this computer most effectively learn the difference between a bot and a human. And then even when you have that very powerful model, it's basically a game of figuring out how best to release this model to the public so that they can understand what's going on in their network and use it most effectively, versus it becoming a cat-and-mouse game of people just calling each other bots, you're a bot, you're not a bot, and making a flame war based on that. And that's something we really want to avoid. We just want to... make people more aware of their entire social network without people starting to call each other names and saying, hey, you're a bot, you're not a bot.

Lisa Kiefer: How did you do that?

Rohan Phadte: Yeah, so when we made the tool, we tried to be very careful here. We tried to basically say, hey, it's a very good tool to help you understand what you're talking to. Is it a bot or is it a human? Are you arguing with someone who actually has an opinion on this, versus an automated account? And our tool is very good at just giving that information, saying, "Hey, this is likely to be a bot," like, "Hey, be warned that this is likely not to be a human." And so it gives the user that information to say, "Hey, be a little more careful here." If you're arguing and nothing's really making any difference, then hey, just be warned that this could be a bot.

Lisa Kiefer: So if I'm in conversation with something that may be a bot, what would I see? Some of them are automated at the other end, so you can tell the answers are just automated answers.

Rohan Phadte: There are different levels of bots that we've seen.
So some are completely automated, and some of them are maybe a mixture: there's a human behind the scenes, and there could be some automation aspects to the account itself. And so sometimes you'll message a bot and you won't get any response back, because it's not set up with automation for that, and sometimes you will get a response back because there's someone controlling it behind the scenes, and it's kind of hard to tell. Some of the telltale signs you can see from the profile itself: there are a lot of retweets, tweets happen very quickly, every few minutes or maybe once every minute, pretty often close to 24 hours a day without really stopping, or maybe the account is specifically retweeting the same sources. Those are all telltale signs you can check as a human to see whether an account is a bot.

Lisa Kiefer: If you're just tuning in, you're listening to Method to the Madness, a bi-weekly public affairs show on KALX Berkeley celebrating Bay Area innovators. Today I'm speaking with the founders of RoBhat Labs, two UC Berkeley students using artificial intelligence to create a bot buster called Botcheck.me, battling fake news. It seems like all of these services started out with a pretty idealistic philosophy, whether it was Facebook, or Twitter and the uprising in Egypt; Twitter was great for that. I see it as a real great tool for journalists who are out in remote areas. It didn't take long, though, for it to become co-opted. What are you using now besides Twitter? What do you think is the best social media tool right now?

Ash Bhat: To answer your question, I think you brought up a really interesting word, and that's "tool." I think that's totally what these things are, right? They aren't necessarily morally right or morally wrong. It's a tool, and it can be used in a variety of different ways.
And so with Twitter, yes, it's been beautiful for things like protests-

Lisa Kiefer: The Arab Spring.

Ash Bhat: The Arab Spring. But that being said, it's also a tool that can be used to create polarization and to spread misinformation. So from that perspective, when it comes down to what the best social network is, I think all these social networks are tools, and it's about how we use them and how we receive our information from them. So it's hard to answer that question.

Lisa Kiefer: Do you feel like you were successful and you're moving on to new things, or are you still working on this Twitter bot?

Ash Bhat: So in terms of this problem, the way we've seen it is that the first step in solving a problem is identifying it. And that's sort of what we've done with Botcheck.me. Over the past several months, I think society as a whole has become a lot more aware of these problems, and we're really happy that we've played a role in helping that happen. But that being said, the next step for us, in terms of things we're working on, is publishing a report that's going to talk about and identify a lot of the different phenomena that are going on, just so that we can start becoming more aware-

Lisa Kiefer: What kind of phenomena?

Ash Bhat: For example, we see these things called DMAs, which are distributed misinformation attacks. That's sort of what these bot networks are: essentially a distributed way for a lot of these bots to try to spread misinformation. Different concepts like that we're trying to make available to the public so that we all have an understanding of them. And I guess the next step after that is, once we understand how to detect these problems, how to prevent them so that they don't happen again in the future.
And so we're working with groups like the Democratic Party, for example the Democratic National Committee, along with different groups, to make sure that these sorts of things don't happen again.

Lisa Kiefer: Are you working with the Republican Party too?

Ash Bhat: We've been talking to a couple of different campaigns. We haven't had a chance to have a similar conversation with the Republican National Committee, but we hope to.

Lisa Kiefer: So you're moving on to new areas. I read that you guys have, like, a blackboard with 20,000 apps on it. I know you're busy with school, you have to graduate too, but what is your next project, or are you just wrapped up completely in what you just told me?

Ash Bhat: Yeah, so I guess we can't comment too much on the different projects we have behind the scenes, just because we have a lot of these confidential relationships that we honestly can't talk about yet. But that being said, this is a problem that we care very deeply about and want to have a huge impact on. So every single day we spend time working on coming up with solutions to make sure that these sorts of problems don't happen again in the future.

Lisa Kiefer: I think it's so wonderful that you are so committed to this idea. How did that happen? You both grew up in the San Jose area, correct?

Ash Bhat: Yeah.

Lisa Kiefer: And you've known each other a very long time. Were you neighbors? Tell me about your backgrounds.

Rohan Phadte: Yeah, actually Ash and I were basically childhood friends; we met each other in about middle school, and since then we've been pretty good friends. In high school we saw each other as, you know, competitors, just a little competitive whenever we had tests and stuff. But for the most part we've been really, really good friends. We ran cross country together, and we used to do robotics together back in high school.

Lisa Kiefer: Robotics?
Okay.

Rohan Phadte: And so, yeah, we've always had an interest and passion in technology, and that's just pretty much grown from there. And when we both got into Berkeley, we were like, we have to be housemates, we have to be roommates. And so that happened.

Lisa Kiefer: How does that work out, where you're rooming together and you also have a business together? That must be challenging sometimes?

Ash Bhat: The thing that's really interesting about this entire thing is that Rohan and I have been building projects since we were teenagers. What sort of happened is that over the past eight or so years, every year the world just started listening a little bit more. It's been very validating to work on these projects that we used to build for our friends, and now we have tens of thousands of people who use us every single day.

Lisa Kiefer: And how did you get the word out? I know you grew up in Silicon Valley, so you've probably run into people who are in this business. How did you make the approach? How did you get your support?

Ash Bhat: We were working in a space with a huge problem and no solutions. So I think it was a byproduct of us being at the right place at the right time. We're in Berkeley, working on computer science, in one of the most interesting political periods probably in US history. So I think there's a huge component of luck to everything that's going on, and yeah, we're incredibly lucky to be where we are.

Lisa Kiefer: Do you feel like the tech field is at a crisis point right now? I mean, I deleted my Facebook account. I just feel like it's not anything but a promotional tool. It's great for marketing, you know, for companies. But for me personally, I don't know, it's not what I thought it would be. They're definitely getting pushback right now. There's room for a new company.

Ash Bhat: I think we're reevaluating our techno-optimism.
I think for the past generation, we've been very, very optimistic about what technology can do. And in many ways we've built these amazing tools that let us be connected to each other and get access to information in a way that we've never had. I have a phone in my pocket that I can ask any question, and it'll give me the answer in a couple of seconds. And that's an amazing place to be at this point in history. But that being said, with tools come the positives and the negatives, and I think at this point we've started reevaluating what technology really means to us. That being said, I don't think it's anything to blame on technology. I think it's more just the natural progression of things.

Lisa Kiefer: You're both studying artificial intelligence here?

Ash Bhat: Yeah, that's correct. Yeah.

Lisa Kiefer: What is that like? Can you explain what you're studying here in terms of artificial intelligence and how you use it in your products?

Ash Bhat: Artificial intelligence is a really fancy word for getting computers to essentially work off of heuristics and automate certain tasks. AI, or artificial intelligence, is a very broad term that covers everything from machine learning to a lot of the simple apps we use every single day.

Rohan Phadte: So maybe artificial intelligence is basically used to make decisions over very large data sets. In an instance where a human might be overwhelmed with a large amount of data, like gigabytes or, you know, petabytes of data, artificial intelligence is a very good way of organizing it, in ways that computers can understand very well, and then making high-level decisions that are statistically probable to yield the best result in the end. And this is a very powerful tool.
I mean, a lot of robots and a lot of self-driving cars, in fact, use this tool: get a lot of data and then make decisions based off it, and they can get high-accuracy results in the end, better than a human could.

Lisa Kiefer: Right. Except for those few accidents.

Rohan Phadte: Exactly. I mean, this is all a process of development. Yeah.

Lisa Kiefer: You've been recognized by Wired magazine and CBS News. What other accomplishments are you really proud of since you've gotten into this space of protecting all of us from fake news?

Ash Bhat: I think the accomplishment we're most proud of is the users we've been able to work with and the number of accounts we've classified. Twitter classified about 6,000 accounts when they came out and talked in front of Congress, and I think that was October 31st or November 1st last year. Today, we've classified over half a million accounts. Just having a sense of that scope is what we get up every morning excited about. That's sort of what makes all of this so validating.

Lisa Kiefer: If somebody approached your lab and said, we want to buy you for a billion dollars or whatever, would you do it?

Ash Bhat: I think at the end of the day we're chasing after this goal, so we evaluate all the options we have in terms of what brings us closest to achieving that goal. And so that's sort of the-

Lisa Kiefer: So that's not your goal?

Ash Bhat: Yeah, yeah, that's not our goal. Our goal is to solve the problem we're working on.

Lisa Kiefer: You're natural innovators. Do you feel like this area has a lot of natural innovators because of where you grew up, or is it your families? What do you think it takes to truly be an innovator like that? Is it the knowledge you've learned, or?

Rohan Phadte: Yeah, I think in order to be an innovator in this area, you really need to understand the problem space, where the problems are in society and how they affect people.
And then once you get a good understanding of that, you can actually start developing some interesting technology. In the case of Ash and me and Botcheck.me, we spent months just studying fake news beforehand. Before we wrote a single line of code, we studied how it spreads and what it looks like, read the previous research papers in this area, and looked specifically at how Facebook and Twitter were already trying to tackle this problem. I think it's really important to truly understand the area before you delve in, before you can just say, "Hey, tech can solve that," or, "Hey, just add a little line of code, artificial intelligence can solve that." It takes real understanding of the problem space to figure out the best way to attack the problem.

Lisa Kiefer: What about money? Did you have to get funding to do this work, or is it just your own blood, sweat, and tears?

Ash Bhat: Funny story on that: it's completely our own blood, sweat, and tears. We've been so frugal about every expense. We're supporting so many users, and we have to figure out how to make it super cost-efficient. A lot of startups don't have to worry about where the money comes from, because they raise funding. We haven't raised a single cent of capital. It's been absurd, all the different efficiencies and hacks we've put together to make this entire service run as cheaply as possible so that we can provide it for free to our users.

Lisa Kiefer: So how would you make money if you're providing it free? Would you have to run ads on it eventually? What's your model?

Ash Bhat: In terms of monetizing, we think this is a real problem. We see it as the next generation of spam. I think we've done a pretty good job of being the thought leaders and the experts in the space.
As this problem becomes larger and groups like the Democratic Party and the Republican Party run into these issues, we hope to be the group that solves them. We have access to the best data because we have the most amazing users, and we have access to the best insights. We're thinking through how to use that, and that's where we're looking to monetize.

Lisa Kiefer: So it's like a consulting fee or something like that?

Ash Bhat: Not necessarily. We want to build products with the insights we have, in a scalable way, so that all the different groups affected by problems like misinformation can take our products and solve those pain points.

Lisa Kiefer: So at some point you would put a price on that product, is that what you're saying?

Ash Bhat: Yeah--

Lisa Kiefer: Now it's free, but--

Ash Bhat: Not for the users, but we are starting to charge big groups like the Democratic Party and the Republican Party for these sorts of services.

Lisa Kiefer: At some point you'll be cut loose from UC Berkeley and you won't have access to that free data, right?

Ash Bhat: We don't actually use any of UC Berkeley's data.

Rohan Phadte: The Twitter API, for example, is completely public for gathering data. We've built this all on our own servers, on our own end, so we have access to everything from the algorithm to the data to the entire pipeline. We want to scale this out however we can and make it as accessible to all our users as we personally can.

Lisa Kiefer: Can anybody have access to this data, since you have your own servers and everything?

Ash Bhat: We're in a unique position because our users hand-classify accounts every single day for us. That's why our models are able to keep up with the changing network.
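The feedback loop Ash describes, where hand-labeled accounts continuously become new training data for a classifier, can be sketched in miniature. This is a hypothetical illustration, not RoBhat Labs' actual model: the account features (tweets per day, follower ratio, retweet fraction) and the nearest-centroid decision rule are assumptions chosen for simplicity.

```python
# Toy sketch of a crowd-labeled bot classifier (hypothetical features,
# not the actual Botcheck.me model). Each account is a feature vector:
# (tweets_per_day, followers_per_followee, fraction_of_retweets).

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class BotClassifier:
    """Nearest-centroid classifier that updates as users submit labels."""

    def __init__(self):
        self.labeled = {"bot": [], "human": []}

    def add_label(self, features, label):
        # Every hand-classified account becomes training data, which is
        # how the model can keep tracking a changing bot network.
        self.labeled[label].append(features)

    def classify(self, features):
        bot_center = centroid(self.labeled["bot"])
        human_center = centroid(self.labeled["human"])
        if distance(features, bot_center) < distance(features, human_center):
            return "bot"
        return "human"

clf = BotClassifier()
clf.add_label((400.0, 0.05, 0.95), "bot")    # hyperactive, mostly retweets
clf.add_label((350.0, 0.10, 0.90), "bot")
clf.add_label((5.0, 1.2, 0.20), "human")
clf.add_label((12.0, 0.8, 0.35), "human")

print(clf.classify((380.0, 0.07, 0.92)))  # prints "bot"
```

The design point is that each user label immediately shifts the decision boundary, so the classifier drifts along with bot behavior; a production system would use a richer model and far more features, but the retraining loop is the same idea.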
And so that's our proprietary data, and the reason we don't make it public is that we don't want the bots to learn how we're classifying them as propaganda accounts. We'd love to make it public, but we're at this limitation where we're worried that the adversaries we're going after might learn from the datasets we're working with if we published them.

Lisa Kiefer: Has anybody asked you about publishing it?

Ash Bhat: People definitely have. And in terms of adversaries, our servers get attacked every single day. We get attacked on Twitter every day. There are conspiracy theory videos about us. It's crazy.

Lisa Kiefer: Okay. So I would assume that people are going to want to know more about you guys. Do you have a website, or how do people get in touch with you?

Ash Bhat: Anyone can email us at hi@robhat.com, that's hi@r-o-b-h-a-t.com.

Lisa Kiefer: And that's a combination of both of your names, right?

Ash Bhat: Yeah, that's correct. I keep my social media incredibly public, so it's @theashbot on Twitter, and anyone can send me any question. I try to be as responsive as possible. And if you want to check out Botcheck.me, you can just go to botcheck.me; it's just a website, and then you can--

Lisa Kiefer: And you can download the app and use it?

Ash Bhat: You can download the Chrome extension or you can use the website. We try to make it as easy as possible for our users. We've gotten recognized for the work we've done, but I think from our perspective it's important to say that we're just getting started. We've opened up a lab; it's just the two of us working out of our house right now. We've just gotten started. And the technology we're working on and hope to release, we hope, will make a real impact.
We know that we're very lucky that the technology we've already released has made an impact, but we're really, really excited for what 2018 is going to bring and what we can hopefully do in 2018.

Rohan Phadte: We're not the only innovators in this space. There has been some other great work out there, and we really encourage that, because honestly the adversaries against us, and against democracy in general, are formidable. Automation has created a huge industry for adversarial attacks. In fact, there's some new research coming out on deepfakes and lip syncing, where you can use AI to modify videos and their actual content. There's already Photoshop for images, but imagine deepfakes modifying actual videos so that it looks like one person's face has been photoshopped onto someone else's, and you get all the same expressions, the exact same voice.

Lisa Kiefer: Wow.

Rohan Phadte: Content and media in the future are basically in jeopardy. It's really, really dangerous. So we want to find some way to protect all content and make sure the content you see is completely factual and 100% real, because it can be very dangerous if an adversary gets access to this kind of algorithm and photoshops one celebrity's face onto another's. You can create viral trends where fake news spreads, with really powerful consequences.

Ash Bhat: And to add to that, the thing that really scares us is that we already have people in positions of power who call real news fake news. The moment we can't tell the difference between real and fake, we run into a very slippery slope where those people can call anything fake news and we won't be able to prove them wrong.
And so we want to build the technology now so that we don't run into that problem in the future.

Rohan Phadte: Technology has made everything so accessible, the news so easy to read: getting up in the morning, just checking your phone and having the news app tell you, "Hey, these are the top headlines." That convenience, that access, is incredibly valuable, but it can also be used in a way that misleads, right? You have clickbait titles, you have headlines that are completely false, where the content is actually completely different from the headline. So yes, there are going to be ways the technology can be used adversarially, and I think it's up to technology to find ways to fix that, to make it a tool that actually helps humans and humanity move forward in getting their information, and not a disastrous tool that can be used to mislead.

Ash Bhat: I also think we're past the point where we can go back on technology. The internet and all these different services are here to stay; our generation grew up on them and doesn't know a world without them. But that being said, I think the way we should be thinking is that with these amazing technologies, we've also created problems that we should start solving now, before they become much, much worse. We're already seeing the effects of that.

Lisa Kiefer: But you're both pretty optimistic.

Ash Bhat: Yeah. We do think people need to start thinking about these problems now, and we do think there are solutions in this space. We are very optimistic. There's this amazing quote, "In the cave that you fear lies the answer that you seek." That's one of the quotes we share within RoBhat Labs. This is a very scary time in terms of technology, but that being said, we are optimistic.
By diving into solving this problem, we might create something completely new that we were unaware of.

Lisa Kiefer: Okay. Well, thank you, Ash and Rohan, for coming in today.

Ash Bhat: Yeah, definitely.

Lisa Kiefer: And I'm going to keep track of you, and I'm going to want you to come back in when you solve this problem.

Ash Bhat: Definitely, yeah.

Rohan Phadte: Yeah.

Ash Bhat: Thank you.

Lisa Kiefer: You've been listening to Method to the Madness, a biweekly public affairs show on KALX Berkeley celebrating Bay Area innovators. You can find all of our podcasts on iTunes University. We'll be back in two weeks.