In this podcast, we're going to listen in on a panel discussion hosted by the Stanford Cyber Policy Center on State Media, Social Media, and the Conflict in Ukraine. Convened by Nate Persily, Co-director of the Cyber Policy Center and James B. McClatchy Professor of Law at Stanford Law School, the panel considers the moves taken in recent days by governments and technology platforms, and the implications for how state-sponsored media and information will be regulated in the future. Guests include:
- Nathaniel Gleicher, Head of Security Policy at Meta, which operates Facebook, Instagram and WhatsApp
- Yoel Roth, Head of Site Integrity at Twitter
- Marietje Schaake, International Policy Director at the Cyber Policy Center and former Member of European Parliament
- Renée DiResta, Research Manager at the Stanford Internet Observatory
- Alex Stamos, Director of the Stanford Internet Observatory and former Chief Security Officer of Facebook
- Alicia Wanless, Director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace
- Mike McFaul, Director of the Freeman Spogli Institute for International Studies and former U.S. Ambassador to the Russian Federation
This week on Arbiters of Truth, our podcast on our online information ecosystem, Evelyn Douek and Quinta Jurecic bring you an episode they've wanted to record for a while: a conversation with Nathaniel Gleicher, the head of security policy at Facebook. He runs the corner of Facebook that focuses on identifying and tackling threats aimed at the platform, including information operations. They discussed a new report released by Nathaniel's team on “The State of Influence Operations 2017-2020.” What kinds of trends is Facebook seeing? What is Nathaniel's response to reports that Facebook is slower to act in taking down dangerous content outside the U.S.? What about the argument that Facebook is designed to encourage circulation of exactly the kind of incendiary content that Nathaniel is trying to get rid of? And, of course, they argued over Facebook's use of the term “coordinated inauthentic behavior” to describe what Nathaniel argues is a particularly troubling type of influence operation. How does Facebook define it? Does it mean what you think it means? See acast.com/privacy for privacy and opt-out information.
Facebook Inc on Thursday said it will start labeling Russian, Chinese and other state-controlled media organizations, and later this summer will block any ads from such outlets that target U.S. users. According to a partial list, Facebook, the world's biggest social network, will apply the label to Russia's Sputnik, Iran's Press TV and China's Xinhua News, covering about 200 pages at the outset. Facebook's head of cybersecurity policy, Nathaniel Gleicher, said in an interview that Facebook will not label any U.S.-based news organizations, as it determined that even U.S. government-run outlets have editorial independence. Learn more about your ad choices. Visit megaphone.fm/adchoices
Rumors, conspiracy theories and false information about the coronavirus have spread wildly on social media since the pandemic began. False claims have ranged from bogus cures to misinformation linking the virus to conspiracy theories about 5G mobile phone technology or high-profile figures such as Microsoft's Bill Gates. To combat disinformation, Twitter announced on Monday that it will start flagging misleading posts related to the coronavirus. Twitter's new labels will provide links to more information in cases where people could be confused or misled, the company said in a blog post. Warnings may also be added to say that a tweet conflicts with guidance from public health experts before a user views it. Similarly, Facebook's third-party fact-checking partners, which include Reuters, rate and debunk viral content on the site with labels. Last month, YouTube also said it would start showing information panels with third-party, fact-checked articles for US video search results. Yoel Roth, head of site integrity at Twitter, and Nathaniel Gleicher, head of cybersecurity policy at Facebook, have been working together to tackle disinformation during the pandemic. Roth and Gleicher spoke to The World's host Marco Werman about their efforts to fight fake news and the challenges they face.

Marco Werman: Given that the COVID-19 pandemic is such a global problem, what new strategies have you had to come up with to deal with disinformation on your platforms?

Nathaniel Gleicher: The truth is that disinformation or misinformation isn't something that any one platform — or quite frankly, any one industry — can tackle by itself.
We see the actors engaged here leveraging a wide range of social media platforms, and also targeting traditional media and other forms of communication. One of the key benefits here, as we think about bad guys trying to manipulate across the internet, is that we focus on the behavior they engage in: whether they're using fake accounts, or networks of deceptive pages or groups. The behavior behind these operations is very similar whether you're talking about coronavirus or the 2020 election or any other topic, or whether you're trying to sell to or scam people online. And so the tools and techniques we built to deal with political manipulation, foreign interference and other challenges actually apply very effectively, because the behaviors are the same.

You've both mentioned bad guys and malicious actors. Who are they? How much do you know about them, and how does that knowledge inform how you deal with individual threats of disinformation?

Yoel Roth: Our primary focus is on understanding what somebody might be trying to accomplish when they're trying to influence the conversation on our service. If you're thinking about somebody who's trying to make a quick buck by capitalizing on a discussion happening on Twitter, you could imagine somebody engaging in spammy behavior to try and get you to click on a link or buy a product. If you make it harder and more expensive for them to do what they're doing, then generally, that's going to be a strong deterrent. On the other hand, if you're dealing with somebody who's motivated by ideology, or somebody who might be backed by a nation-state, oftentimes you're going to need to focus not only on removing that activity from your service; we believe it's also important to be public with the world about the activity that we're seeing.

Where do most of the disinformation and conspiracy theories originate? Is it with individuals?
Are they coordinated efforts by either governmental or nongovernmental actors?

Gleicher: I think a lot of people have a lot of preconceptions about who is running influence operations on the internet. Everyone focuses, for example, on influence operations coming out of Iran or out of Russia. And we've found and removed a number of networks coming from those countries, including just last month. But the truth is, the majority of influence operations we see around the world are actually individuals or groups operating within their own country and trying to influence public debate locally. This is why, when we conduct our investigations, we focus so clearly on behavior: What are the patterns or deceptive techniques someone is using that allow us to say, that's not OK, no one should be able to do that?

Nathaniel, you said that when determining what to take down, Facebook tends to focus on the behavior and the bad actors rather than the content. But I think about the so-called "Plandemic" video, a 26-minute video produced by an anti-vaxxer. It racked up millions of views precisely because it was posted and reposted again and again. How do you deal with videos like that, which go viral?

Gleicher: That's a good question, and it gets to the fact that there's no single tool you can use to respond in this space. People talk about disinformation or misinformation, but really it's a range of different challenges that all sit next to each other. There are times when content crosses very specific lines in our community standards such that it could lead to imminent harm; it could be hate speech or otherwise violate our policies. For example, one of the things in the video you mentioned was the suggestion that wearing a mask could make you sick. That's the sort of thing that could lead to imminent harm.
So, in that case, we removed the video based on that content, even though there wasn't necessarily deceptive behavior behind its spread. And then finally, there are some actors that are consistent repeat offenders; we might take action against an actor regardless of what they're saying and regardless of the behavior they're engaged in. A really good example of this is the Russian Internet Research Agency and the organizations that still persist and trace back to it. They have engaged in enough deceptive behavior that if we see anything linked to them, we will take action on it, regardless of the content and regardless of the behaviors.

Last Friday, a State Department official said they identified "a new network of inauthentic accounts" on Twitter that are pushing Chinese propaganda, trying to spread the narrative that China is not responsible for the spread of COVID-19. State Department officials say they suspect China and Russia are behind this effort. Twitter disputes at least some of this. Can you explain what, precisely, Twitter is disputing?

Roth: Last Thursday, we were provided with more than 5,000 accounts that the State Department indicated were associated with China and were engaged in some sort of inauthentic or inorganic activity. We've started to investigate them, and much of what we've analyzed thus far shows no indication that the accounts were supportive of Chinese positions. In a lot of cases, we actually saw accounts that were openly critical of China. And so this really highlights one of the challenges of doing this type of research. Oftentimes, you need a lot of information specifically about who the threat actors are, how they're accessing your service, and what the technical indicators of their activity are in order to reach a conclusion about whether something is inauthentic or coordinated. And that's not what we've seen thus far in our investigation of the accounts we received from the State Department.
This interview has been lightly edited and condensed for clarity. Reuters contributed to this report.
It's getting harder to tell reality from fiction. Fake news and misinformation are all around us, and they're increasingly used as weapons of war. But what happens when A.I.-doctored videos are added to the mix? We meet the people fighting back against deep fakes, and even using them for good. And we visit Facebook headquarters to learn how Russian agents are trying to manipulate our behavior. In this episode: Nathaniel Gleicher of Facebook, John Micklethwait of Bloomberg News, Jose Sotelo of Lyrebird, Danielle Citron of Maryland Carey Law, Hany Farid of Dartmouth, and David Kirkpatrick of Techonomy. Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers
Brook Bello: Tech and Tech Policy Solutions to End Sex Trafficking (Ep. 160)

Brook Bello joined Joe Miller to discuss how tech policies can help end sex trafficking.

Bio

Dr. Brook Bello (@BrookBello) is Founder and CEO/ED of More Too Life, Inc., an anti-sexual violence, human trafficking and youth crime prevention organization that was named by United Way Worldwide as one of the best in the nation. A sought-after international speaker and champion against human trafficking, Dr. Bello has been recognized with countless achievement awards, fellowships and appointments. She was recently named a Google Next Gen Policy Leader, an opportunity to learn from leading Google executives and other leaders about world issues relating to tech and tech policy. She received a Lifetime Achievement Award from the 44th President of the United States and the White House in December 2016. She was also named advocate of the year in the state of Florida by Florida Governor Rick Scott and Florida Attorney General Pam Bondi's Human Trafficking Council. Dr. Bello is the author of innovative, root-cause-focused curricula such as RJEDE™ (Restorative Justice End Demand Education), a court-appointed and volunteer course for perpetrators of sexual violence, prostitution and human trafficking offenses in Miami-Dade, Sarasota and Manatee counties, as well as LATN™ and LATN D2 (Living Above the Noise), an educational mentoring curriculum to help victims prevent sexual violence and human trafficking. She holds a master's degree and a Ph.D. in pastoral clinical counseling, with accreditation in pastoral clinical and temperance-based counseling; her bachelor's degree is in biblical studies. She also holds two honorary doctorates in humane letters, theology and biblical studies, from the Covenant Theological Seminary and Richmond Virginia Seminary. Her dissertation defends the urgency of spirituality in mental health and examines the profound pain caused by shame.
Bello is also a licensed chaplain and ambassador with the Canadian Institute of Chaplains and Ambassadors (CICA), the only university accredited by the United Nations Economic and Social Council (UN-ECOSOC). She is also an alum of the Skinner Leadership Institute's Masters Series of Distinguished Leaders. Dr. Bello was chosen as one of 10 national heroes in a series by Dolphin Digital Media and United Way Worldwide called "The Hero Effect."

Resources

More Too Life
Way of the Peaceful Warrior by Dan Millman
Life is Not Complicated, You Are by Carlos Wallace

News Roundup

Tech stocks tank following earnings reports

Tech stocks led a slide on major indexes as Amazon posted a two-day decline Monday, eliminating some $127 billion from its market value, according to the Wall Street Journal. Amazon actually posted a $2.88 billion profit in the third quarter, 11 times last year's figure, but its sales increased by only 29%, falling about half a billion dollars shy of the average analyst estimate of $57.1 billion. Alphabet also missed analyst estimates, by about $310 million, coming in with $33.74 billion in revenue in the third quarter, up 21% over last year. At Twitter, active monthly users declined, but revenue was up 29% to $650 million for the third quarter; Twitter attributed the user decline to its purging of suspicious accounts. Tesla also reported strong earnings, with $312 million in profits on $6.8 billion in revenue. As for Snap, it looks like Facebook's Instagram Stories is eroding the platform, although Snap beat estimates, however slightly. Snap lost about 2 million users since the second quarter, but its net loss was two cents per share less than expected, and it also brought in more revenue than analysts expected: $297.6 million, about $14 million above expectations.

N.Y. Times reports that Trump uses iPhones spied on by Russia and China

The New York Times reported that President Trump uses unsecured iPhones to gossip with colleagues, and that Chinese and Russian spies routinely eavesdrop on the calls to gather intelligence. President Trump denies the report, saying that he only uses a government phone, and in a tweet said the New York Times report is "sooo wrong."

Facebook identifies Iranian misinformation campaign

Facebook identified an Iranian misinformation campaign, which led it to delete 82 pages the company says were engaged in "coordinated inauthentic behavior." Facebook's head of cybersecurity policy, Nathaniel Gleicher, said the pages had over 1 million followers.

Google paid 'Father of Android' $90 million to leave the company following a sexual misconduct allegation

The New York Times reported last week, in an investigative report, that Google paid Android creator Andy Rubin some $90 million in 2014 when he left the company following sexual misconduct allegations. Google released Rubin with praise from Larry Page even though an internal investigation found the allegations credible, according to the New York Times. The newspaper reports that Google similarly protected two other executives. Rubin has denied the allegations, and in a letter to Google's employees, Google CEO Sundar Pichai wrote that Google has fired some 48 employees for sexual harassment since 2016.

U.S. launches election protection cyber operation against Russia

U.S. Cyber Command has launched a first-of-its-kind mission against Russia to prevent election interference. The initiative followed a Justice Department report released Friday outlining Russia's campaign of "information warfare."
Alleged Pittsburgh shooter repeatedly posted violent content on social media prior to mass murder

Before he allegedly murdered 11 people in a Pittsburgh synagogue, including a 97-year-old Holocaust survivor, Robert Bowers allegedly posted hateful and violent content numerous times on Facebook, Twitter, and the alt-right website Gab, yet he still wasn't on the radar of law enforcement. Joyent, the web hosting platform that hosted Gab, has since banned Gab from its platform, knocking it offline. Kevin Roose has more in the New York Times.

U.S. restricts exports to Chinese semiconductor firm Fujian Jinhua

The U.S. has decided to restrict exports to Chinese semiconductor firm Fujian Jinhua. The Trump administration says the company stole intellectual property from U.S.-based Micron Technology, and the rationale is that there's a risk the Chinese-manufactured chips, built on that technology, would edge out those manufactured by American competitors.

President Trump signs U.S. spectrum strategy

President Trump has signed a memo directing the Commerce Department to develop a spectrum strategy to prepare for 5G wireless. Mr. Trump has also created a Spectrum Task Force to evaluate federal spectrum needs and how spectrum can be shared with private companies.

UK fines Facebook £500,000 for data violations

Finally, the UK has fined Facebook just £500,000 for Cambridge Analytica-related data violations. That's a little over $640,000; The Guardian notes that Facebook brought in some $40.7 billion last year. The UK's Information Commissioner's Office found that Cambridge Analytica harvested the data of some 1 million UK Facebook users via loopholes on Facebook's platform that allowed developers to access users' data without their consent.
In July of this year, Facebook took down 196 pages and 87 profiles in Brazil, alleging, in an official statement signed by its head of cybersecurity, Nathaniel Gleicher, that the removed profiles formed a "coordinated network" that "hid from people the nature and origin of its content with the purpose of sowing division and spreading disinformation," without explaining what "division" and "disinformation" meant. In August, it was the turn of some 650 pages supposedly linked to Russia and Iran that could influence the elections in the United States. On the same day, Twitter announced the removal of 284 accounts supposedly linked to Iran. In Brazil, the takedown prompted a request for information from the Federal Public Prosecutor's Office in Goiás (MPF-GO), which was answered about a month ago. It was the first time the complete list of removed pages was made public, raising even more controversy over the ideological bias of the company's actions. On August 27, prosecutor Ailton Benedito, who is responsible for the investigations, sent a formal complaint to the Prosecutor General of the Republic, Raquel Dodge, who is also the Electoral Prosecutor General, asserting that the social networks are violating the right to communication and electoral law. The Ethos Podcast spoke with Ailton Benedito about the complaint, and also sought out Caio Cabeleira, a lawyer with a doctorate in law from USP; Jacqueline Abreu, a lawyer specializing in digital law and a doctoral candidate in law at USP; and Luca Belli, professor of Internet Governance and Regulation at FGV Direito Rio, to understand the limits of the platforms' actions and the ongoing discussions about regulating the area.