Greg Olear talks to Candace Rondeaux about her book "Putin's Sledgehammer," which delves into the Wagner mercenary group and the complexities of Russia's political landscape. Rondeaux shares her background in journalism and her journey into understanding Russian geopolitics, particularly through the lens of the Wagner Group and its leader, Yevgeny Prigozhin. The discussion covers the origins of the Wagner Group, Prigozhin's rise and fall, the influence of figures like Alexander Dugin, and the implications of the Internet Research Agency's actions during the 2016 election. Rondeaux also reflects on US policy failures regarding Russia and the future of democracy in Ukraine, the US, and beyond.

Candace Rondeaux directs Future Frontlines, a public intelligence service for next-generation security and democratic resilience, and the Planetary Politics initiative at the New America Foundation. A writer and public-policy analyst, Rondeaux is a professor of practice and fellow at the Melikian Center for Russian, Eurasian, and East European Studies and the Center on the Future of War at Arizona State University. Before joining New America, Rondeaux served as a senior program officer at the U.S. Institute of Peace, where she launched the RESOLVE Network, a global research consortium on conflict and violent extremism, and as a strategic advisor to the U.S. Special Inspector General for Afghanistan Reconstruction. Rondeaux has documented and analyzed political violence in South Asia and around the world for the Washington Post and the International Crisis Group. Before going abroad for the Post in 2009, Rondeaux covered criminal justice in Maryland and Virginia, where she reported on capital punishment and was part of the Pulitzer Prize-winning team of Post reporters who covered the 2007 Virginia Tech massacre.

Buy the book: https://www.hachettebookgroup.com/titles/candace-rondeaux/putins-sledgehammer/9781541703063/?lens=publicaffairs
Follow Candace: https://x.com/CandaceRondeaux
https://bsky.app/profile/CandaceRondeaux.bsky.social
Make America Great Gatsby Again! https://bookshop.org/p/books/the-great-gatsby-four-sticks-press-centennial-edition/e701221776c88f86?ean=9798985931976&next=t
Subscribe to the PREVAIL newsletter: https://gregolear.substack.com/about
Subscribe to The Five 8: https://www.youtube.com/channel/UC0BRnRwe7yDZXIaF-QZfvhA
Check out ROUGH BEAST, Greg's new book: https://www.amazon.com/dp/B0D47CMX17
ROUGH BEAST is now available as an audiobook: https://www.audible.com/pd/Rough-Beast-Audiobook/B0D8K41S3T
Would you like to tell us more about you? http://survey.podtrac.com/start-survey.aspx?pubid=BffJOlI7qQcF&ver=short
How much of what you read online has been planted there by Russian propagandists? How many times have you followed a social media account, or reposted information from an account, that's controlled by a Russian troll farm? How aware are you of Russia's ongoing (and shockingly successful) attempts to cripple and then topple America from within? This episode is a different, much more dystopian kind of scary.

Merch and more: www.badmagicproductions.com
Timesuck Discord! https://discord.gg/tqzH89v
Want to join the Cult of the Curious private Facebook group? Go directly to Facebook and search for "Cult of the Curious" to locate whatever happens to be our most current page :)
For all merch-related questions/problems: store@badmagicproductions.com (copy and paste)
Please rate and subscribe on Apple Podcasts and elsewhere and follow the suck on social media!! @timesuckpodcast on IG and http://www.facebook.com/timesuckpodcast
Wanna become a Space Lizard? Click here: https://www.patreon.com/timesuckpodcast
Sign up through Patreon, and for $5 a month, you get access to the entire Secret Suck catalog (295 episodes) PLUS the entire catalog of Timesuck, AD FREE. You'll also get 20% off of all regular Timesuck merch PLUS access to exclusive Space Lizard merch.
For just over ten years, troll farms have been strategically influencing international, national, and social processes such as the annexation of Crimea, the 2016 US presidential election, and the Brexit referendum. Through targeted online campaigns, they play a significant role in shaping public opinion and amplifying particular political narratives. While social media platforms such as Twitter, Facebook, and TikTok see (or want to see) little need for action, data analysts such as the Dutch research institute Trollrensics can demonstrate influence operations that are as regular as they are effective; for example, when Russian troll armies apparently swayed sentiment in favour of the AfD and BSW in the last European elections. A conversation with Robert van der Noordaa and Richard Odekerken about the origins of Trollrensics, the mechanisms of disinformation campaigns, Yevgeny Prigozhin and the Internet Research Agency, how pattern recognition can be used to identify fake profiles, and why real people have long been in the minority on social media.

Info & links for this episode: Trollrensics | Robert van der Noordaa on Twitter
Info & links for the podcast
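The episode only names the topic, so as a loudly hypothetical illustration of what "pattern recognition to identify fake profiles" can mean in its simplest form, here is a minimal Python sketch of one classic coordination signal: many accounts posting near-identical text within a short time window. Nothing here reflects Trollrensics' actual methods; the record format and thresholds are invented for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_coordinated_clusters(posts, window=timedelta(minutes=10), min_accounts=5):
    """Flag groups of accounts that publish near-identical text close together.

    `posts` is an iterable of (account, timestamp, text) tuples. Real tooling
    uses many more signals (posting cadence, follower graphs, client metadata);
    this is only the simplest coordination heuristic.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))

    clusters = []
    for text, entries in by_text.items():
        entries.sort()  # chronological order
        for i in range(len(entries)):
            start = entries[i][0]
            # distinct accounts posting this exact text within the window
            accounts = {acc for ts, acc in entries[i:] if ts - start <= window}
            if len(accounts) >= min_accounts:
                clusters.append((text, sorted(accounts)))
                break  # one hit per text is enough to flag it
    return clusters

# Toy demo: eight "accounts" push the same slogan within a few minutes.
demo = [
    (f"user{i:02d}", datetime(2024, 6, 1, 12, i), "Candidate X is the only honest choice!")
    for i in range(8)
]
print(find_coordinated_clusters(demo))
```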
This fall may well be the first big national test of this "new internet," and, like the Internet Research Agency's interventions in 2016, could well lead to disaster...
Don't forget you can watch all of these on YouTube!

This week, we are discussing all things online influence operations with one of the foremost experts, Olga Belogolova. We're talking about Russians, Chinese, Iranians, and other actors who want to influence the online information environment. The title of this episode comes from one of the classes she used to teach at Georgetown.

Olga is the Director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS). She is also a lecturer at the Alperovitch Institute for Cybersecurity Studies at SAIS, where she teaches a course on disinformation and influence in the digital age. At Facebook/Meta, she led policy for countering influence operations, leading execution and development of policies on coordinated inauthentic behavior, state media capture, and hack-and-leaks within the Trust and Safety team. Prior to that, she led threat intelligence work on Russia and Eastern Europe at Facebook, identifying, tracking, and disrupting coordinated IO campaigns, in particular the Internet Research Agency investigations between 2017 and 2019. Olga previously worked as a journalist, and her work has appeared in The Atlantic, National Journal, Inside Defense, and The Globe and Mail, among others. She is a fellow with the Truman National Security Project and serves on the review board for CYBERWARCON.

Enjoy!

Get full access to Anchor Change with Katie Harbath at anchorchange.substack.com/subscribe
In this episode, we talk about misinformation, disinformation, and troll farms in the 21st century with Olga Belogolova and Regina Morales.

Olga Belogolova is the Director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS). She is also a professor at the Alperovitch Institute for Cybersecurity Studies at SAIS, where she teaches a course on disinformation and influence in the digital age. At Facebook/Meta, she led policy for countering influence operations, leading execution and development of policies on coordinated inauthentic behaviour, state media capture, and hack-and-leaks within the Trust and Safety team. Prior to that, she led threat intelligence work on Russia and Eastern Europe at Facebook, identifying, tracking, and disrupting coordinated IO campaigns, in particular the Internet Research Agency investigations between 2017 and 2019. Olga previously worked as a journalist, and her work has appeared in The Atlantic, National Journal, Inside Defense, and The Globe and Mail, among others. She is a fellow with the Truman National Security Project and serves on the review board for CYBERWARCON.

Regina Morales is the principal of Telescope Research, where she conducts investigations on behalf of law firms, multinational corporations, financial institutions, and not-for-profit organisations. She has subject matter expertise in Latin American politics, corruption issues, extremism, and disinformation. In particular, Regina specialises in investigating disinformation campaigns waged on social media platforms, forums, and certain messaging apps. These campaigns include online harassment, corporate disinformation relating to securities, conspiracy theories, and politically or ideologically driven campaigns. She has seen, often in real time, how the theoretical components of disinformation and propaganda are used in practice. Prior to founding Telescope Research, Regina worked for two top-tier, Chambers and Partners-ranked global investigative firms, where she conducted and managed complex, multi-jurisdictional investigations on behalf of white-shoe law firms and multinational companies.

Patreon: https://www.patreon.com/EncyclopediaGeopolitica
In this episode of "The AI Frontier Podcast", we delve into the invisible world of AI in social media. We explore the fundamental role of AI in shaping our social media experience, from content curation to targeted advertising. We discuss real-world examples of how AI algorithms influence content discovery and user behavior on popular platforms like Facebook, Instagram, and Twitter. We also examine the subtle ways AI shapes our online interactions and perceptions, and gaze into the future of AI in social media. Follow us on Twitter @wadieskaf for more insights into the world of AI.----------References used in this episode:Towards FATE in AI for Social Media and Healthcare: A Systematic Review [https://arxiv.org/abs/2306.05372]Excitements and Concerns in the Post-ChatGPT Era: Deciphering Public Perception of AI through Social Media Analysis [https://arxiv.org/abs/2307.05809]The Digital Architectures of Social Media: Comparing Political Campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. Election [https://arxiv.org/abs/1904.07333]Predicting Engagement with the Internet Research Agency's Facebook and Instagram Campaigns around the 2016 U.S. Presidential Election [https://arxiv.org/abs/2010.14950]Detection of Fake Users in SMPs Using NLP and Graph Embeddings [https://arxiv.org/abs/2104.13094]Capturing Humans' Mental Models of AI: An Item Response Theory Approach [https://arxiv.org/abs/2305.09064]How Mock Model Training Enhances User Perceptions of AI Systems [https://arxiv.org/abs/2111.08830]Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making [https://arxiv.org/abs/2109.05792]On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making [https://arxiv.org/abs/2209.11812]A Study of Comfortability between Interactive AI and Human [https://arxiv.org/abs/2302.14360]Social media as political party campaign in Indonesia [https://arxiv.org/abs/1406.4086]Stance Detection on Social Media: State of the Art and TrendsState of AI Ethics Report (Volume 6, February 2022) [https://arxiv.org/abs/2006.03644]Trends Prediction Using Social Diffusion Models [https://arxiv.org/abs/1111.4650]Support the Show.Keep AI insights flowing – become a supporter of the show!Click the link for details
In this episode of the Cyber Security Uncut podcast, Liam Garman and Daniel Croft unpack the social paradigm of post-truth and how Russia's Internet Research Agency has exploited it to prosecute online disinformation campaigns. The pair begin by defining post-truth and examining how individuals increasingly use emotion and affiliative sensemaking to cut through information on the internet. They then look into case studies of how Russia's Internet Research Agency prosecutes online information campaigns. Garman and Croft wrap up the podcast by examining how artificial intelligence will be used in future information environments. Enjoy the podcast, The Cyber Security Uncut team
WormGPT is a new AI threat. TeamTNT seems to be back. Chinese intelligence services actively pursue British MPs. Gamaredon's quick info theft. Russia's FSB bans Apple devices. The troll farmers of the Internet Research Agency may not yet be down for the count. Anonymous Sudan claims a "demonstration" attack against PayPal, with more to come. Carole Theriault looks at popular email lures. My conversation with N2K president Simone Petrella on the White House's National Cybersecurity Strategy Implementation Plan. And, friends, don't take this typo to Timbuktu.

For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/134

Selected reading:
WormGPT, an "ethics-free" text generator. (CyberWire)
TeamTNT (or someone a lot like them) may be preparing a major campaign. (CyberWire)
Chinese government hackers 'frequently' targeting MPs, warns new report (Record)
Gamaredon hackers start stealing data 30 minutes after a breach (BleepingComputer)
Russia-linked APT Gamaredon starts stealing data from victims between 30 and 50 minutes after the initial compromise (Security Affairs)
Armageddon in Ukraine – how one Russia-backed hacking group operates (CyberSecurity Connect)
Russian hacking group Armageddon increasingly targets Ukrainian state services (Record)
Russia bans officials from using iPhones in U.S. spying row (Apple Insider)
Prigozhin's Media Companies May Resume Work As Mutiny Fallout Dissipates, FT Reports (Radio Free Europe | Radio Liberty)
Anonymous Sudan claims it hit PayPal with 'warning' DDoS cyberattack (Tech Monitor)
Typo leaks millions of US military emails to Mali web operator (Financial Times)
In this episode of "Disinformation," Paul Brandus explores the extent of Russian influence operations and disinformation efforts beyond just elections and social issues. He discusses the role of the Internet Research Agency, led by Yevgeny Prigozhin, in spreading false narratives. Meredith Wilson, CEO of Emergent Risk International, provides analysis on how the private sector and business community are also targeted by Russian disinformation. Tune in to gain insights into the insidious craft of disinformation and its impact on various sectors. Got questions, comments or ideas or an example of disinformation you'd like us to check out? Send them to paulb@emergentriskinternational.com. Thanks to our sound designer and editor Noah Foutz, audio engineer Nathan Corson, and executive producers Michael DeAloia and Gerardo Orlando. Thanks so much for listening. Learn more about your ad choices. Visit megaphone.fm/adchoices
Anatsa Trojan reveals new capabilities. Airlines report employee data stolen in a third-party breach. Canadian energy company Suncor reports a cyberattack. What of the Internet Research Agency? Microsoft warns of a rising threat to infrastructure. Joe Carrigan describes an ill-advised phishing simulation. Mr. Security Answer Person John Pescatore takes on zero days. And DDoS grows more sophisticated.

For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/122

Selected reading:
Anatsa banking Trojan hits UK, US and DACH with new campaign (ThreatFabric)
Anatsa Android trojan now steals banking info from users in US, UK (BleepingComputer)
Thousands of American Airlines and Southwest pilots impacted by third-party data breach (Bitdefender)
American Airlines, Southwest Airlines disclose data breaches affecting pilots (BleepingComputer)
American Airlines, Southwest Airlines Impacted by Data Breach at Third-Party Provider (SecurityWeek)
Recruitment portal exposes data of US pilot candidates (Register)
Suncor Energy says it experienced a cybersecurity incident (Reuters)
Suncor Energy cyberattack impacts Petro-Canada gas stations (BleepingComputer)
Canadian oil giant Suncor confirms cyberattack after countrywide outages (Record)
Wagner and the troll factories (POLITICO)
Cyber risks to critical infrastructure are on the rise (CEE Multi-Country News Center)
The lowly DDoS attack is showing signs of being anything but (Washington Post)
Juxtaposition is the placement of two ideas in very close proximity so as to imply a direct connection between them. At no point is it required that they be actually connected in meaning. Any conclusion that is false falls squarely on the audience and gives the person doing the juxtaposition an automatic out (I didn't *explicitly say* that).

Some examples:
https://en.wikipedia.org/wiki/Tuvia_Grossman
https://thecjn.ca/news/international/activists-distribute-thousands-of-fake-anti-israel-new-york-times-papers/

Internet memes are usually a very good example of this form of expression. More and more, people think that memes are actual logical arguments for or against something when usually they are just indications of which biases they hold.

Music is often used for juxtapositional purposes in movies. We should *always* be wary of internet videos that attempt to use music to enhance the themes being spoken. This is an appeal to emotion and should always be regarded with suspicion.

Juxtaposition is very commonly used in the spread of disinformation. A picture or video is shown with words appearing on the screen. The words are assumed to be describing the picture or video, but there are a *lot* of examples of this not being the case. This technique is now often coupled with false images to further distort and alarm people for clicks and outrage harvesting.

I notice juxtaposition a lot when it is used just with words. This happens a lot with misinformation online when two ideas are meant to be linked in the minds of the audience.

Links:
https://www.dictionary.com/browse/juxtaposition
https://en.wikipedia.org/wiki/Internet_Research_Agency
Mike Isaacson: Lügenpresse!

[Theme song]

Nazi SS UFOs
Lizards wearing human clothes
Hinduism's secret codes
These are nazi lies

Race and IQ are in genes
Warfare keeps the nation clean
Whiteness is an AIDS vaccine
These are nazi lies

Hollow earth, white genocide
Muslim's rampant femicide
Shooting suspects named Sam Hyde
Hitler lived and no Jews died

Army, navy, and the cops
Secret service, special ops
They protect us, not sweatshops
These are nazi lies

Mike: Welcome to another episode of The Nazi Lies Podcast. Today, we're talking about the lying press with Jonathan Hardy, professor of communications and media at the University of the Arts London. His most recent book, Branded Content: The Fateful Merging of Media and Marketing, explores the world of branded content, particularly native advertising or sponsored content–longform marketing copy made to look like news items. Welcome to the podcast, Dr. Hardy.

Jonathan Hardy: Thank you, Mike. It's a pleasure to be here.

Mike: It's great to have you. So I'm really excited to talk about marketing with you because that's the industry I'm in now, and I do have some ethical issues with some of the techniques that we use. Now I write in the B2B space, selling services to business owners and officers, so I don't super have a problem with what I do–you know, manipulating business owners into buying things. So reading your book, what comes up again and again is that most of these marketing techniques aren't new, but the digital age has made them more invasive and persistent. Can you talk a bit about how digitization has changed the advertising world?

Jonathan: Sure. Well, it's done so definitely in a great many ways, but I'll talk about some key ones that really relate to the work I've been doing on branded content. Through most of the 20th century, we had a model that I call advertising integration with separation, which means that the advertising appeared in the same vehicles as media. When you looked at a magazine or a newspaper, you turned the page and it's editorial; you turn the page, it's advertising. Or the adverts that appeared between programs on television and radio. So we had integration, but often some quite strict rules and strict practices that kept advertising and media separate. So what we're seeing in the digital age is an intensification of two tendencies which face in opposite directions. One is towards integration: advertising getting baked into media content and integrated with it, product placement all the way through to influencer marketing, branded content, and so on. But the other trend is disaggregation, advertising getting decoupled from media. Because essentially in the digital age, advertisers didn't need–as some of them put it–to pay the premium prices to put their ads in media content. They could track users around the internet. So these are trends going in opposite directions, obviously, right? One is about integration, the other one is about disaggregation. But I argue that they have one really important thing in common, which is that they indicate the growing strength of marketers over media. Media that rely on advertising revenue are having to become more and more dependent, satisfying advertisers who want to integrate their content so that people will engage with it. And they're also desperate, because of these other trends of losing ad revenue coming from disaggregation, to kind of, again, appeal as much as they can. So what we're seeing is a strengthening of marketer power in the digital age.
Mike: So my intention with this episode was to give a deep dive into how things like the Cambridge Analytica scandal could have happened. To start, let's get some technical details. We're talking mostly about inbound marketing today. So before we get into advertising techniques and stuff, what is the difference between inbound and outbound marketing?

Jonathan: Sure. Well, I'll talk about that, Mike. But we should acknowledge there's some confusion here, because these terms are not always used to talk about the same things. I think one really valuable aspect is this idea of push and pull, right? If you're pushing out messages, this is known as outbound marketing. You're sort of pushing your message out to reach people. If you, on the other hand, create great content that people come to you to engage with, that's pulling. And that's known as inbound. So, so far, so good. That makes sense to me. But this is used in other ways too, and I think that illustrates actually a broader point, which is that marketers, not surprisingly, are often in a competitive struggle to be on the side of the new and the innovative, and not the old and the tired. So some versions of inbound and outbound marketing, I think, get a bit problematic here. Because outbound in some versions is kind of associated with scattergun marketing, right? The opposite of inbound as highly targeted, aiming at particular people. And I don't really buy that. You know, marketers sometimes talk about spray and pray, for instance–chucking out messages. But quite honestly, most of the time modern marketers don't do that, because they can't afford to. So I don't really buy the argument that outbound is untargeted; I think that's misleading. What's a bit more helpful from all of this, and actually quite a crucial issue, is, if you like, the challenge for a thing called push marketing: the challenge when people are not engaging with traditional advertising forms being pushed out, and the need to come up with more engaging content, either because it's more entertaining or it's more informative. And I think that aspect of inbound is important.

Mike: So when it comes to inbound marketing, it's all about the buyer journey or the marketing funnel. Can you talk a bit about the theory behind the marketing funnel?

Jonathan: Yeah, sure. I often test this out on students, but if you were studying advertising in the 20th century, you might have come across a model called AIDA, a mnemonic that helps you remember some important fundamentals. AIDA stands for Awareness, Interest, Desire, and Action. And it kind of summed up this idea of what's called in modern terms a marketing funnel or a customer journey: how, if you're a brand, people start off with awareness and then become more interested and motivated all the way up to purchase. That's essentially what the marketing funnel means. Just to relate it to branded content for a moment, it was often argued in the past that branded content–which means content that's produced or funded by brands–was particularly associated with that early stage, building brand awareness. But if you speak to people in the industry, they say that's not really true. Branded content is content that serves people right across the customer journey. So if you think someone becomes more interested and they want to find more information about the product, for example, I think they're right, and I think that's– We're often thinking about a new world where brands are involved in kind of thinking, "What are the information needs? What are the communication needs of consumers at every point?" and engaging with that. And amongst other things, that's breaking down some old divisions between what we might call advertising and customer services. And as an academic, I'm really interested. I'm critical of a lot of what's going on, but I'm interested in how that speaks to a changing world and convergence across communications.
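[Ed. note: As a toy sketch of the funnel idea, here is the AIDA model above expressed as code. The stage names come from the mnemonic; the stage-to-content mapping is hypothetical, illustrating the practice discussed next of matching content type to where the buyer sits in the journey.]

```python
from enum import Enum

class AidaStage(Enum):
    """The AIDA funnel from the interview: Awareness, Interest, Desire, Action."""
    AWARENESS = 1
    INTEREST = 2
    DESIRE = 3
    ACTION = 4

# Hypothetical mapping: informative content early in the journey,
# salesy content near the point of purchase.
CONTENT_FOR_STAGE = {
    AidaStage.AWARENESS: "educational explainer, branded magazine piece",
    AidaStage.INTEREST: "comparison guide, newsletter",
    AidaStage.DESIRE: "case study, product demo",
    AidaStage.ACTION: "pricing page, limited-time offer",
}

def pick_content(stage: AidaStage) -> str:
    return CONTENT_FOR_STAGE[stage]

for stage in AidaStage:
    print(f"{stage.name:>9}: {pick_content(stage)}")
```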
Mike: Where I work, we definitely use branded content across the buyer journey, and we use different kinds of content for different points along the journey. So for instance, we do more informative content for when you're in the awareness stage, whereas when you're in the purchasing stage, we hit you more with salesy content, because that's the point where you're trying to just hear about the benefits and decide upon a final product.

Jonathan: Yeah, exactly.

Mike: Can you talk a little bit about the software that's used to track customers? Because that's something that I don't think most people are aware of: the CRM software.

Jonathan: Yeah. CRM means customer relationship management software. Some of your listeners might be aware of software like Salesforce, which tracks relations between a company and its clients, including its prospects. So yeah, customer relationship management is a huge area. One of the things I looked at, interestingly, was the annual reports of what are called the holding companies. These are the really big groups that own advertising agencies and PR agencies. And they've been in a battle for survival and for their presence in companies, and they're often fighting alongside companies like Accenture, who are offering companies all sorts of other data services. So it's a kind of interesting world in which the traditional advertisers are maneuvering to cover more ground, because that ground's becoming more and more important to companies. And definitely, all the data around customers and other people in the chain is a really important battleground for these firms.

Mike: Okay, and we'll talk a bit about what gets fed into the CRM in a bit. So the company I work for, we do exclusively owned media and digital ads, pretty much all inbound with an occasional email campaign here or there. But there's other forms of digital advertising, too. Let's start by talking about what owned media is. What do advertisers mean when they talk about owned media?

Jonathan: Okay. This is content that's produced and published by the brand or the marketer themselves. It's got a really long history. In the United States at the end of the 19th century, the farm implements company John Deere had a magazine called The Furrow, for example–so, what we now call contract publishing by a brand. Lots of other examples: the Michelin guide to restaurants, the Guinness Book of Records, and so on. Brands have been involved in producing their own content for a long time, but this really got turbocharged in the internet age. With the early internet, brands started to create their own websites and web pages. They've now moved right across social media, for example. And some brands have become essentially media companies.
So a brand like Red Bull, which is involved right across music, sports, etc., is producing content of all kinds to support the brand. Again, one of the models that's really helpful for students and might be of interest to your listeners is called PESO. PESO stands for paid, which is a term for advertising essentially, right? The brand pays and controls. Earned, which stands for traditional public relations: you work in PR, you write a great story or a feature, it gets carried by the media, you didn't pay. That's called earned media. The S is for shared–it used to be called viral, but shared is a much nicer word for things that get moved and amplified across the internet and social media. And then the O is owned. And what PESO tells us is that these things are still separate, but they're overlapping and converging in the middle.

Mike: Right. So the problem with owned media is that you have to get it in front of people. What are the various ways that advertisers try to get their owned media to an audience?

Jonathan: Well, I'm just gonna... If you don't mind, I'll just pick up this word "problem," Mike, because it might help to explain where I come from on these issues. I think the industry is essentially looking for how to do marketing better, right? And quite a lot of people who are in academia, in universities like myself, are really asking and answering the same question. Their aim is really to help marketers do better and do research on it. And I call all of that affirmative. So the problem from that framing is: how can we do this better? How can we learn how to be more effective? But I would self-describe myself as coming from a critical tradition, a tradition of critical political economy. And we ask a different question about "problem." We say, "Are there problems in the way communications are organized and delivered? Are there problems for communication users? Are there problems for societies? And if there are, if things aren't great out there, let's identify them, understand them, and think about how to change them." So when I come to questions of problems, that's really the dominant lens through which I look at them. But obviously, like anyone, in order to understand things better, you've got to listen to everyone in this space, including industry practitioners, and I work a lot with them. So that's just a wider framing. But actually, to answer your question: well, it's interesting, because historically they've struggled, right? Brands have kind of invested in great content and then found–surprise, surprise!–people aren't always interested in going to corporate websites and finding this stuff. So part of this story has been brands producing content and then needing advertising, social media advertising, to say to people, "Hey, we've done this. Here's a snippet, but come and look at the full amount." That's an interesting feature. But essentially, in this space brands would say, "Well, you've got to produce material of value"–back to this language of pull. People have to be engaged, entertained, and/or informed. Those are the key things you need to do to solve the problem. But the other thing we'll come on to is when the marketing messages get disguised and buried. Just to give you another take on problems: I think there are problems with brands' own content. Sometimes it can be really entertaining and I enjoy it like anyone else, but there are problems essentially because it's a brand voice. And sometimes that brand voice can be louder than other voices.
And that essentially is an issue. But actually, I see fewer problems with brands' own content compared to the material that's weaved into media content: sponsored editorial, native advertising, and so on.

Mike: Okay. What about things like SEO, SMO, paid search, display ads? That sort of thing.

Jonathan: Sure. SEO, search engine optimization, is the practice of trying to improve your ranking, traditionally in search results but in wider areas of content too, so that it gets visibility and people engage with it. Right? Because we all know people mostly don't turn past the first page of search ranking results. And as I know you know, this divides into what are called relatively good practices and bad practices, sometimes referred to as white hat–in other words, everyone does this to try and be effective–and black hat, which is nefarious, "don't do this." What that sums up is a cat-and-mouse game between marketers and agencies and the platforms, because the platforms are concerned to ensure the integrity and quality of search results–they depend on that trust–and therefore want to move some of these black practices off to the margins, if not get rid of them entirely. But we should remember, of course, these platforms are not just there to serve the consumer. They're there to generate ad revenue. And some of the tensions that play out in that space are important to note, too. But I'd say, for me, again, there's a whole literature on how to do search engine optimization, and if you were teaching people how to be marketers, I'd certainly say they need to understand that. One of the bigger concerns for me is about awareness: how aware are consumers of things like sponsored search results? There was some really important research done by the UK regulator for communications, Ofcom, which looked at young people and found that a majority of them couldn't recognize the difference between sponsored listings and so-called organic ones. Only a third of young people aged 12 to 15, for example, knew which search results on Google were sponsored, were adverts, or organic. That's a really important issue, I think, and an ongoing one.

Mike: Yeah, especially when it comes to children. Let's dig a little deeper into SEO. What kind of techniques do content producers, both in media and advertising, use to boost their search engine results?

Jonathan: Oh, wow. There's a lot of terms and some great names out there to describe some of this stuff: keyword stuffing, cloaking, bait and switch. What they really have in common is artificially enhancing the value of your content without the intrinsic worth and value that would come from people's clicks and engagement. Okay? So there is a whole series of techniques, from mildly artificial through to downright criminal and exploitative. One of the more serious, for example, goes by the great term brandjacking, where someone acquires or otherwise assumes the online identity of a brand for the purpose of acquiring their followers–their brand equity, as they say.

Mike: It can be less than that, too. It can just be, for instance, putting a brand's name as one of your keywords in paid search. That's brandjacking too.

Jonathan: Exactly. Yeah, exactly.

Mike: Yeah. So keyword stuffing–this idea of throwing search terms into content. One other thing, though, that bothers me a little bit where I work is the way that we go after keywords. The content that we write is pretty much exclusively based on whether there is search for it.
And so as a result–I guess in the aggregate–you end up with huge patches of knowledge that just are not covered by free media.

Jonathan: Yeah, I agree. I think one of the fundamental questions here is, "What about brand voice in a world where that voice comes with resources that are not widely shared?" Right? In order to be a marketer, you have resources of money. And money buys you the chance to speak. Not everyone in our world gets the chance to speak and be heard, but brands can do it through their money. Now, of course there are small brands, there are radical organizations who advertise. But we also know that the concentration of voice is often in the hands of the concentration of wealth. Which means some people, some brands, some interests, some ideas get privileged over others. And that is a really fundamental concern, and it drives, for me, this issue of saying, "Well, what's the settlement for society between communications and brands?" In the old world–I mentioned the 20th century–we had some settlements. We had some rules which said, "We're going to really make sure that you know this is an advert, and we're going to keep some controls on where advertising appears, how much appears, and what's advertised." And the digital age is throwing up challenges all the time, because new spaces, new opportunities emerge for brands. And the rules are often some way behind. So those are the kind of fundamental issues. I think voice is a really good term to use to get into that.

Mike: Right. So in addition to the black hat and white hat, there are gray hat techniques which kind of straddle the boundaries of marketing ethics. One example is the subject of your book, which is native ads, sponsored content, advertorials. So, what are these? What is sponsored content? We've talked about it a bit; we haven't really defined it.

Jonathan: Sure. Well, lots of different forms. But what's common to a lot of the forms I examined is, in the way the industry would describe it, that the advertising is blended into the media environment in which it appears. Okay? The advertising is integrated and blended in. And I think a good way in–building on what I was just saying to you, really–is to start by asking some questions about payment and control. Those are really key elements in tracking this story. In the old world, we had advertorials in newspapers and magazines. We still do, of course, but they're a feature of the old world. And the brand paid for and controlled the content. It was an ad, but it was an ad that started to blend in to its surroundings. But what's happened in the digital age is that's taken off across all media. So we have native advertising as a term for adverts which are also paid for and controlled by the brand, but are coming into your newsfeed on mobile social media and so on. Then we have sponsored content. And here, things get a bit more complicated, because these questions of payment and control get widened. Because sometimes the brand pays and controls; sometimes the brand pays and the media–the publisher, or an influencer, for example–says, "No, we control the content." And sometimes it's a blend of both. And fundamentally, across that spectrum, we don't have clear and consistent labeling that is readily understood by people to know exactly what's happening here. So we don't always know when a brand paid for and shaped content in this space, and that's a fundamental problem.
Sorry, but can I just put in–I don't know if this will be helpful or not–an example I was going to give from the UK: we have a London paper called The Evening Standard. And an investigation by an online publication, openDemocracy, discovered that Syngenta, which is a US agribusiness firm, was paying for favorable editorial in that newspaper. But those stories weren't being clearly labeled as paid for and sponsored by Syngenta. And obviously, that's a big deal, because Syngenta was at the time being sued by a large number of American farmers, which of course didn't feature in this more positive coverage. So here we have some problems of labeling and identifying content, we have some problems of what kind of story gets shown, but we also have an issue which goes to the heart of this, where the brand could pay but the publication could say, "No, we're in control. So we don't have to label that as an ad."

Mike: Right. And there's also the other problem of advertisers' control over media in general, where if there's an unfavorable story they could have it pulled. And we've seen instances of this, too.

Jonathan: Yeah, it's funny. And just to share with you–sometimes when you're talking to students, particularly as a professor, it's good to show them that you may make mistakes, too. So I shared the fact that, you know, I'm in a tradition which has seen advertiser influence on the media as essentially a negative force, right? And looked at, kind of, "Well, when does this happen? And how does it happen? And how is it resisted?" You know, sometimes it's resisted because journalists say, "We're not going to have it." The Chrysler company told American magazine editors it wanted to be told when they were putting its ads next to content it thought was controversial. The American Society of Magazine Editors said, "We're not doing that. We stand up for free media." So, those kinds of stories. But I said to the students, I have to update this. Because we're in an era where advertisers are using their power and clout sometimes for positive and progressive ends–ends that many of you might agree with. So, you know, Unilever doing an ad ban on Facebook. Or the current ban, or semi-ban if you like, in which one of these major holding companies, Omnicom, is, quote, "advising its clients"–so it's not quite a ban, but it's advice–not to advertise on Twitter, because look what Musk is doing; who knows how this is going to play out. So in its language, it's concerned with brand safety. It's advising marketers to produce a boycott. So what I'm saying is, I come from a tradition which sees advertising influence as negative. You could argue–and it's important to recognize–there's some positive things happening in these stories, brands doing good, right? Calling out hate speech and racism and xenophobia. That story, of course, isn't just because those brands are angelic. It's because they've been put under powerful pressure from campaigns: from #StopHateForProfit in the US and Sleeping Giants, and we have Stop Funding Hate in the UK. And also, frankly, there's still a problem. Because however much good they do, they still have enormous power and they can still use it in unaccountable ways. But anyway, there's a story that just acknowledges that it's sometimes complicated.

Mike: So native advertising's gone beyond traditional news media in the digital age. Where else do we find sponsored content?

Jonathan: Well, we find it right across what we could call audio-visual.
We've had a long history of product placement in films and television programs, but, you know, there's some big questions about where that's going next. Amazon is a company that sells things, but it's chock full of audiovisual content, sponsored brand videos, and so on. So as this world evolves, as we get Amazon's Alexa and audio marketing, we're going to have more and more content in which there's a brand role and a brand presence. Another big example is the metaverse. I was at a recent conference with advertising lawyers, and they were kind of half-jokingly saying, "What's going to happen in this world? Are people going to walk around in T-shirts with #ad on them if they're sponsored? How is the brand presence going to be seen and identified?" And again, just on this, I'd like to go back to something that was written in 1966. The code of the International Chamber of Commerce is kind of the big international self-regulatory code for marketers. And it said, at the time, "Advertisements should be clearly distinguishable as such, whatever their form and whatever the medium used." Again, I like to share with you and my students: that's great language. That includes TikTok. It was written in 1966, and it's really clear what it's asking for. And it went on to say that when published in a medium that also contains news and editorial opinion, an advertisement should be so presented that the consumer can readily distinguish it from editorial matter. That's interesting, because it didn't even need to add that second sentence. It really underscores the importance, in some of our media like news and editorial, that it really matters that we can trust the content and know it's not an ad. That was 1966. I don't think that describes the world today; I don't think that rule, even in its current form, holds. But it does exist to call on.

Mike: Yeah, I know. We now have companies that are flooding their own reviews with positive reviews to boost their rankings on Google and stuff. I do want to talk about something that skeeves me out in what I do, and that's ad retargeting. So, what is ad retargeting?

Jonathan: Retargeting ads are a form of online targeted advertising that is served to people because they visited a particular website. We all know this: you go to a website, look at a pair of shoes, go on to some other websites, and you're being flooded by adverts for those shoes. What on earth is going on? And the answer has been third-party cookies. So, to introduce another term, cookies are bits of data that get put onto your browser so they can then follow you as you move around the rest of the internet. And those so-called third-party cookies are sold for advertising purposes; they build up a profile of you so that you can be advertised to. And that's essentially what's gone on in retargeting. Now, the world of cookies is undergoing a change at the moment, which is interesting. But all your listeners will know this experience, as you say, of ad targeting. And it's still very much present in our experience of the internet.

Mike: Yeah. So basically the cookies originally were intended, as I understood it, to allow websites to remember what you have, like in your shopping cart, on a digital storefront. And they kind of morphed into this weird thing where they can now track you across the internet and add things to your profile so they have more and more information about you. Okay.

Jonathan: Yeah. Well, there's an important difference, Mike.
The first type you're talking about is called first-party cookies. And the important thing is–again, many of your listeners will say–"Actually, some of what they provide is quite helpful to me." You know, you go to a website, you put something in a shopping basket, you don't pay for it. But when you come back to that site, it's still in your shopping basket, right? That's a cookie that's controlled by the website itself. And often, frankly, that can be a help to us. It's still collecting data, it still raises privacy issues, but it's often helpful. Third-party is different. For example, you go to a publisher who's signed up to Google's AdSense. You go there because you want to read a story, but what gets put onto your browser is a third-party cookie. And that is being used to sell advertising to reach you.

Mike: The third party being AdSense, right?

Jonathan: Yeah.

Mike: Okay. So let's talk a little bit about market research. How have market research techniques advanced in the digital age?

Jonathan: I mentioned there's this challenge to third-party cookies. And that's been driven by a number of factors. It's been driven partly because, with more use of mobile, people are on different devices and it's harder to track them. It's been driven by privacy pressures, which have led to important new regulation, particularly for us in Europe. And I'd say that from the UK, we don't know exactly what's going to happen next; in fact, we have a government that's probably going to relax rules that apply in Europe. But from 2018, Europe said, "You need permission to collect cookies." And there was a really deep intake of breath across the advertising, marketing, and platform industry, saying, "This is going to destroy the model of internet advertising." So you need permission, and we have strong rules now that demand it. As I understand it, in the US there's no federal-level regulation, but there are states–California is an example–which have brought in new rules for consent, to kind of strengthen privacy and protection. So, third-party cookies are on the slide. And to answer your question about data, one of the things that is becoming more and more important is so-called first-party data. So companies, brands, are collecting as much material as they can about their customers so that they can market to them. And we're seeing a huge industry growing up around digital data in the areas of customer data, financial data, and operational data.

Mike: In addition to collecting their own market research data, businesses can also pay for data. So, what kind of marketing data are businesses and ad agencies buying?

Jonathan: What marketers are interested in–as I say, customer, financial, operational–derives from different sources. So yeah, they're buying it up to create a richer tapestry of their clients and potential clients, from their own first-party data and from third-party data. And we're seeing the whole ecology of advertising, marketing, and media changing with the growth of these firms that are basically data harvesters and data brokers.

Mike: And are advertisers the only ones buying these data?

Jonathan: Certainly not. Political movements and organizations who want richer data on consumers to target them are absolutely buying up this data too.
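[Ed. note: To make the first-party/third-party distinction and the retargeting mechanism described above concrete, here is a minimal simulation in Python. No real ad network works exactly like this, and the domains are invented; the point is only that a tracker embedded on many sites reads back the same cookie everywhere, which is all it takes to assemble a cross-site profile.]

```python
import uuid

class Browser:
    """Toy browser with a cookie jar keyed by the domain that set the cookie."""

    def __init__(self):
        self.cookies = {}

    def visit(self, page_domain, embedded_trackers, tracker_logs):
        # First-party activity (the page itself) is omitted; we only model
        # the embedded third-party resources, e.g. an ad network's pixel.
        for tracker in embedded_trackers:
            if tracker not in self.cookies:
                # First encounter anywhere on the web: the tracker sets
                # a third-party cookie holding a random user ID.
                self.cookies[tracker] = str(uuid.uuid4())
            # On every later page that embeds the same tracker, the browser
            # sends that cookie back, so the tracker can log "this user was
            # on this site" and build a cross-site profile.
            user_id = self.cookies[tracker]
            tracker_logs.setdefault(user_id, []).append(page_domain)

logs = {}  # what ads.tracker.example accumulates server-side
me = Browser()
me.visit("shoe-shop.example", ["ads.tracker.example"], logs)
me.visit("news-site.example", ["ads.tracker.example"], logs)
me.visit("recipes.example", ["ads.tracker.example"], logs)

# A single ID now links all three visits: enough to chase the shoe
# shopper around the internet with retargeted ads.
print(logs)
```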
Mike: Okay, so now I think we've discussed everything you'd need to know to understand how the Cambridge Analytica scandal worked. So let's talk about it. Unlike the UK, the US did not have widely publicized hearings regarding Cambridge Analytica, so a lot of my US audience will probably be unfamiliar with what happened. So before we get into the details of how the scandal worked–big picture–what was the Cambridge Analytica scandal?

Jonathan: Well, I like to think of this as kind of a bundle of scandals, actually, because it involved failures across quite a range of organizations: Cambridge Analytica, this company that gathered and used data and sold it on to political campaigns, but other players too. I mean, it's one of the biggest scandals for Facebook. So essentially what happens–and this as a practice goes back to 2015–is a Cambridge-based researcher puts out an app which collects the data of US Facebook users. But not just the people who willingly took part: it accesses the profiles of all their friends and family. So in the end, data on about 87 million Americans–about a quarter of the whole Facebook audience in the US–were collected.

Mike: Can you describe the app that they put out?

Jonathan: Yeah, sure, Mike. The researcher was called Aleksandr Kogan, and he put out an app called This Is Your Digital Life, a psychological profile app, in June 2014. Either way, one group that comes out of this story reasonably well–and I'm particularly pleased about this because it is close to my heart–was the Ethics Committee at Cambridge University, because it rejected an application by this academic and also made the damning judgment that Facebook's approach to consent fell far below the ethical expectations of the university. In other words, it was deeply unimpressed with Facebook's provision. But of course, having said that, we could say Cambridge University has questions to answer, because this was still an academic who undertook this work. So it was an app; people who took part gave consent, but they didn't give consent for their entire network to be data-scraped in this way. The crucial thing about the scandal is that the data was then used and sold on to right-wing politicians in the US in various forms–to Ted Cruz for his presidential campaign, and later for Donald Trump–because it produced rich, detailed profiles of American voters, which allowed micro-targeting. And we've seen this more and more, but it's a kind of early example of what kinds of micro-targeting can be done. In other words, you identify a voter who's going to be particularly triggered by rights to own and carry a gun, for example, but you trigger a different message to a different voter to mobilize them. And often those messages can actually be in flat contradiction, can be at odds, but it doesn't matter. It's whatever works to build your political coalition. I think the other thing just to highlight from this is that it's often framed as a digital story, but it's older and broader than that. It's about power and money. We've had lots of lobbyists who engage in political campaigns, and again, we might all agree it's okay to promote your candidate using marketing techniques. But it's not okay to do the dark arts of demolishing a candidate through fake news and misinformation, for example.
Some of your listeners might be interested: I'm in the UK, and I have a great shoutout for Channel 4 News, a public service news channel, which did, amongst other things, an undercover investigation in which executives from Cambridge Analytica are sort of bragging–because they don't think they're being filmed–about how they've intervened in democratic elections. It's a deeply disturbing portrait of how money and power can be used to undermine democratic processes.

Mike: Okay. And Cambridge Analytica wouldn't have been nearly as successful with what they did without the plethora of right-wing content farms pumping out slanted and misleading news content. Talk about the online ecosystem that existed in 2015-2016 that allowed these websites to thrive.

Jonathan: Yeah, one kind of crystallizing example, which again some of your listeners will remember, was the infamous Russian organization called the Internet Research Agency, which spent thousands of dollars on social media ads and promoted posts in an effort to influence the US elections in 2016. So: misinformation, fake authors, pretending to be Texans when you're actually in a content farm inside the kind of quasi-state corporate world of Russia. How did that all happen? It partly happens because of the deeper logics and business models of the internet, right? You know, promoting controversy and hate, driving traffic and engagement. It happened because of lax rules on who's the source and sponsor of marketing messages. Lots of things caused it, but yeah, that was the ecosystem at the time. And I think, again, before we just jump to the digital: this happens because of money. And so much of the right, which can often appear to be kind of grassroots, is, as we know, funded by very rich corporate donors who often don't like to be particularly transparent about who they are and how they operate. And the left, progressive forces, which are more rooted in popular movements, in the end have less resource. We don't have the power of capital. We have the power of trade unions and collective work, but relatively weakly resourced. And that's a key issue.

Mike: And the content farming–it wasn't just from the Russian state, it was also private sector too. I mean, there was money to be made here. So can you talk a bit about how that was profitable?

Jonathan: Yeah. Well, if you generate clicks, if you produce clickbait, then the algorithmic world recognizes success at the level of engagement and eyeballs, and that can be monetized. We should remember that's often not the primary motivation for political campaigns–that was information, disinformation, and mobilizing people to vote for candidates. But yeah, there's an economy built around it as well, which meant advertisers became very aware that they were often supporting right-wing publications without choosing to, because of the way the algorithms were driving traffic towards popular and shared content. And that's one of the reasons we saw the first wave, if you like, of boycotts and withdrawals from big brands–big companies, rather–like Unilever, who were being advised that their brand safety was being compromised by the sites they were appearing on, and that many of their consumers were deeply unhappy about hate speech being connected with their advertising and advertising dollars.

Mike: Yeah. So one of the things that happened as a result of these boycotts was that major social media and search platforms kind of reformed their algorithms to try to suppress this misinformation from proliferating.
So, how has the digital media landscape changed since the 2016 presidential election and since Brexit? Jonathan: Well, as I say, I think we should recognize that it's often been civil society power, political power, these campaigns, that have forced marketers to divest. This hasn't just come from corporate voices; it's come from popular campaigns which absolutely deserve recognition. But as I say, I think marketers using their power for good is all well and good, if you like, but it's still an exercise in a marketer's power. And that power is ultimately private and, in my view, unaccountable. I mean, a defender would say, "What are you talking about? The market decides that consumers don't like it. That's a powerful force on brands." To which you could say, "Well, consumer power does matter." Right? Ad blocking is a really good example of consumer power in this world. But consumer power is dispersed, it's not concentrated. And it's very often not sufficient to challenge corporate power and interests. So these are all arguments, essentially, for a much stronger public regulation of communications, because it shouldn't be left to private power to regulate itself. But nor, however important it is, can we rely on consumers only, you know? Like other people, I believe in the importance of media literacy and better education so we can find our way through this world and decode it, but I also don't think the burden of responsibility should lie on consumers. It should be a principle: if you're big and you're in a communications space, then you act responsibly, and public regulation is the only way to underpin that this is actually done. Relying on self-regulation from powerful forces in this world is not enough. Mike: Yeah. Especially when the advertising techniques are constantly changing and evolving, you can't expect consumers to be privy to new ways of reaching them. So we've talked about various advertising techniques, let's talk a bit about their social implications. What are the consequences we're seeing from the proliferation of owned media? Jonathan: Sure. Well, I like to sum up the whole world of what I call branded content around three problem areas. The first key problem area is around consumer awareness–this principle that we should know when we're being sold to. And that gets the lion's share of attention, actually, from all parties to the discussion. And that's important. It's about labeling and disclosure and identification. But I argue that that attention tends to displace two others. The second big area of concern is around the quality and integrity of the media. I don't think there are enough people in this world speaking up for the importance of having media spaces that are free from commercial influence and interference. So that's the second area. And then the third area, which I think is really where the radical voices come in, where the critical tradition I'm part of comes in, is this notion of the marketer's power of voice. You know, the significance of a world in which the ability to pay can give you a louder voice. It's not to say we can wish that away, but it is to say that, historically, societies have put limits on that. They've said, "This is where advertising can appear. This is how it can appear." And I think we need a conversation about what those rules should be for the 21st century, because at the moment we're in a bit of a hybrid of old rules that are weak and don't work, and new spaces that are opening up.
So for me, that's the call of my book, really, to say, "These are deep problems. This isn't just about surface-level techniques; this isn't just about new tools in the marketing toolbox. This is a much deeper reconfiguration of the space between commercial voices, advertising, and communication space, and we need to work out what the rules should be." I put in a call for a discussion about what a 21st century version of separation–keeping media and advertising apart–would look like. And I say that because of course we can't put it all back in the box; we can't come up with a solution that would have worked in 1960 and say that's going to do it. It isn't going to do it. But I think that's a really key discussion to be had. Where should we be seeking a world which is free or freer from commercial influence and interference? How are we going to create that? How should it be configured and organized? Mike: Yeah. Going back to owned media, owned media dominates search results now. It's basically impossible to look something up online without having to use corporate blogs, unless you're finding it through Wikipedia. And there's always a limitation to that, right? There's always a wall where they will not give you more information than is necessary to hook you to their services, right? When I farm out my content to freelancers, I actually specifically instruct them that the reader should come away knowing what to do, but not how to do it. So there's a technique to writing instructional articles that makes the reader feel more helpless, and that's definitely what we aim for in our copy; I take particular pleasure in making business owners feel helpless. So let's talk about native. Jonathan: Can I just say, I think that's such an important point and I agree entirely, and it shows that, kind of, you know, this isn't a simple change where we can easily identify the before and after. What you're describing is a kind of world where more and more content comes from an interested party and is underpinned by money, with monetization as a driver. And we know historically we've relied on content to come from other quarters, right? I'm very proud to work in a university world because that's a world that defends the idea of, "Well, actually we should ask questions that are important for society, not the sponsor, not the company." So that's one side. We've traditionally had media in various traditions, you know, a free press in the US standing up for the idea of independent and impartial journalism, knowing the advertisers can't call the tune, because otherwise we lose something really precious about what it means to do journalism. And all of those alternative sites are weak because, for me, this all comes down to questions of resource, money, and power, and they have relatively less. What are we going to do about it? Well, in Europe some of us defend and advocate for public service media, but also for new forms of public service community media, non-profit, hyperlocal, because those are really important spaces where that other content gets produced. I don't know about you, but sometimes it's depressing that we don't link up the networks more effectively. Why don't we have publications that pull together all the non-governmental organizations and civil society groups who are producing great content but can't always get it out to wide audiences?
We don't have a great tradition, amongst the left and progressive causes, of connecting the content with the vehicle to promote it. But plenty of people are thinking about how to do it. And yeah, absolutely, that leads through to other solutions. We need to defend and extend public media–what I call, in Europe, public service media. And do that in new ways, too, because some of the old ways have been– Well, PBS in the States and all the problems of corporate funding kind of shrinking what gets said in that space, so a lot to fix too. But I think that's a really important part of the solution. We need non-commercial media, and we have to work out how to support and develop it to create those other kinds of information. Mike: By that same token, open-access journals I think are also really important. The fact that so much media now is putting up paywalls, all these academic journals charging $30-$40 to rent an article, means there's really no way to get information that hasn't been paid for in some way. So let's talk about native. What is the effect of sponsored content on the public? Jonathan: Let me answer that with an example I show my students, which is an Exxon advertorial in the New York Times. Exxon paid for an advertorial which said, "Guess what? The solution to the climate crisis isn't the removal of fossil fuels. It's smarter use of our assets." That sums up for me one of the greatest dangers: sponsored content amplifies voices who can speak with partiality, because they're advocating for themselves, but undermines independent journalism in the process. To give another example, Facebook, as you know, has poured huge sums into lobbying and influencing politicians in Europe because it senses danger, right? Europe has created some quite strong rules on data privacy and on cookies, as we discussed earlier. Facebook took out 20 items of ad-sponsored content in the British newspaper The Daily Telegraph. So it sets up stories with charity bosses who say Facebook is great, without disclosing that they're financed by Facebook. It has people saying what great things it's doing to take down harmful content, even though this was pushed out just after the Christchurch massacre, which of course was relayed for hours on Facebook and other social platforms before it was taken down. That's the problem with sponsored content: it strengthens and amplifies voices. And of course there are other problems; it's disguised; it's hidden; people aren't aware of it. We should know who the source of our content is. In fact, I'd just be interested to talk to you, because you're working in journalism. One of the things I grapple with, but would really like to see more debated, is the disclosure of sources. Now, I know from the human rights tradition and so on the absolute importance of protecting a journalist's sources, because we don't get good stories if journalists can't protect whistleblowers and others. But we need something which protects that important public interest right while also giving readers better guidance on the provenance of what's behind the story. We have ingredient labels on food and drink; why not something similar for sources? And in particular, we definitely need to know when there's been a paid source underlying a piece of content. What drives me in that debate: one of the things that happened in the UK was a debate about political advertising on Facebook, which said we should be told more clearly when there is political advertising.
But that was running alongside another debate about how to save the British press, which was saying let's have more native advertising. So we've had contradictions and gaps in the way these issues have been treated. And I think we should recognize what's happening underneath, which is that we don't always know the interests and sources behind our content. And we should, particularly when it's either a political voice or a commercial voice. Mike: Yeah. And I want to give a shoutout here to Corey Pein and his book, Live Work Work Work Die, where he talks about how the tech world typically doesn't really concern itself with following rules and regulations. They just do what they do, and by the time regulators catch up to them, they hope they've made enough money that the fines or penalties are insignificant compared to the profits they've made. And we saw that with Facebook, and I guess Twitter to some extent, where they weren't regulating political advertisements at all. At least in the United States, political advertising has certain financing rules and reporting requirements. And in the 2016 election, that was just out the window. That's since been fixed. Facebook now requires that political advertisements be registered as such, and they only get served in certain ways. All right, so there are regulations in place regarding advertising. What safeguards exist to protect the public from nefarious advertisers? Jonathan: Just to respond to what you were saying, these are kind of the deeper myths, the deeper stories that have been told: the story that internet innovation was somehow natural, inescapable, has-to-be-done-this-way. You want change and all these great services? This is what comes with it. It's going to be driven in these ways, we're going to move fast, we're going to trip over the old rules. I don't know about you, but I think that is a myth in the making; it doesn't stack up, and it's already fragmenting and under pressure. So when Facebook's Mark Zuckerberg gets hauled into US hearings in the wake of Cambridge Analytica, he has to say something different at that point. He has to say we do stand up for privacy and consumer protection. The problem is he doesn't fully deliver, and perhaps the bigger problem is that the grand-sounding statements are there to reassure investors and markets and other stakeholders, while behind the scenes Facebook carries on paying millions to lobbyists who go and influence politicians to make sure the rules are kept as weak as possible. So that would be my summary. In the space that I've looked at, native advertising and so on, we see a kind of mixed progress. Just taking the United States: in 2015 the Federal Trade Commission comes in with new rules and guidance on native advertising. And the rules are certainly an improvement: they're sharper; they're clearer. But what happens? Compliance by the industry remains low. Some early studies found that 70% of marketers, within I think a year of the new guidance, weren't compliant. It got a bit better. But all the latest studies show that, right across publishing and influencer marketing, there's a compliance problem. Then there's that lobbying problem I mentioned. So the big marketers say, "Yes, we want to be responsible and transparent, it's in our interests that consumers know they've got ads," but then actually go and lobby. And the kind of thing they lobby over is to say, "Leave it to us what the disclosures should be."
So what happens is consumer awareness is very low. Lots of the academic studies in this area have found awareness rates of about 10%, right? That's people being able to fully identify ad-sponsored content in news publications, for example. And it remains very low. So these industry people are saying, "Well, leave it to us. The disclosures need to be fitting for the platform." And the result is consumers have low awareness and are confused. And people like me in this debate, and in my book, say, "We should call this out. If the objective really was consumer awareness, then we should move to clearer and more consistent labeling." And while I perfectly accept that Instagram and Facebook are not the same thing, and TikTok is not the same thing, if we had much more consistent labeling, we'd be in a better place. One of the things I've argued for: in Europe, for example, when we have product placement on television, unlike in the US, a sign has to be shown–a P sign–to tell you that there's product placement. And not just at the end of programs, as you're used to, where the credits roll very quickly, but before and after each ad break. So why don't we have a sign–a hashtag ad, or a B sign for branded content–across all branded material? I think that's an important argument to have, because I think we're going into a world which is going to become even less recognizable as these new forms and formats emerge. Mike: Okay, so we've talked about some of them already, but what kind of policy gaps do you see with respect to marketing and media? And what do you think we should do to patch them? Jonathan: Well, I must just say it's a lovely time to speak to you and your audience about this, because we've just started–I'm very proud of this–a three-year research project which is looking into the rules and regulation of branded content. So we have what's called the Branded Content Governance Project, and we're looking at the United States, Canada, Mexico, the UK, every country in the European Union, and Australia to track what the rules are and what we can learn from them to do better. When I map this, I see the forces sitting in four areas. There's public regulation. There's industry self-regulation, when it makes its own rules. There's the power of the market–ad blocking, for example. And there's the power of civil society arguing for better. And I think we're at a point where self-regulation by the industry is failing. And that's becoming recognized not just by activists, if you like, but by governments too. So we need a new settlement. And I think that needs a strengthening of public regulation, as I've outlined. But I think all the elements need to work together. And that means putting pressure on companies to actually do as they say and strengthen their own self-regulation. Mike: Okay. Let's talk a bit about the stakes. Given the current digital landscape, what do you see the internet looking like if policy does not catch up with advertisers? Jonathan: Yeah. Well, that's a great question. Pretty chastening one, isn't it? There's a famous moment in 1994 when the chairman of Procter & Gamble, Edwin Artzt, gets up and gives a speech to the American Association of Advertising Agencies. And he basically says, "Hold your nerve. Things are happening, digitalization is about to happen. You could get slaughtered. The digital world could help people bypass ads and evade them. But if you keep your nerve, you can dominate this space." And I don't know about you, Mike, but I feel he was right.
[chuckles] We knew this was happening in the early internet, the commercialization of the internet. And that corporate model, that corporate dominance, is now entrenched. It's strong. However, I think we always need to look for sources of hope. And if it's dominant, it's also contested. There are forces challenging it, whether those forces are carving out space for public media, as we discussed, or whether, like I am with others, they're arguing for the rules to be improved on behalf of consumers and society. So I think we're losing, but classic Gramscian optimism of the will is required. And we should recognize all the things that are being done to highlight the problem and think through solutions. Whether that's very local ones like– I mean, something we haven't mentioned that I think is very important is kitemarking, right? Small publications, non-profit or low-profit, saying, "We're going to signal to readers what our standards are." And that's good for the publication, but I think it's also good for awareness. It says, "Well, yeah, why is this publication different from these other commercial ones?" Because this is how it engages with advertisers. So I think that's all really important, too. Mike: All right. Well, cool. Well, hopefully, we can save the internet. Thanks, Dr. Hardy, for coming onto the Nazi Lies Podcast to talk about the lying press. The book again is Branded Content, out from Routledge. Thanks again, Dr. Hardy. Jonathan: Thank you. Mike: If you liked what you heard and want to help us pay our guests and transcriptionist, consider subscribing to The Nazi Lies Patreon. Subscriptions start as low as $2, and some levels come with merch. If you don't want to commit to monthly donations, you can give a one-time donation via PayPal.me/NaziLies or CashApp to $NaziLies. [Theme song]
The First Amendment prohibits the U.S. government from censoring speech. In this episode, drawing from internal Twitter documents known as "the Twitter files" and Congressional testimony from tech executives, former Twitter employees, and journalists, we examine the shocking, formal system of censorship in which government employees use their influence over private companies to indirectly censor speech in a way that they are clearly prohibited from doing directly. Please Support Congressional Dish – Quick Links Contribute monthly or a lump sum via PayPal Support Congressional Dish via Patreon (donations per episode) Send Zelle payments to: Donation@congressionaldish.com Send Venmo payments to: @Jennifer-Briney Send Cash App payments to: $CongressionalDish or Donation@congressionaldish.com Use your bank's online bill pay function to mail contributions to: 5753 Hwy 85 North, Number 4576, Crestview, FL 32536. Please make checks payable to Congressional Dish Thank you for supporting truly independent media! View the shownotes on our website at https://congressionaldish.com/cd270-the-twitter-files Background Sources Recommended Congressional Dish Episodes CD224: Social Media Censorship CD141: Terrorist Gifts & The Ministry of Propaganda (2017 NDAA) CD113: CISA is Law The Twitter Files "Capsule Summaries of all Twitter Files Threads to Date, With Links and a Glossary." Matt Taibbi. Jan 4, 2023. Racket News. Matt Taibbi "The Democrats' Disastrous Miscalculation on Civil Liberties." Matt Taibbi. Mar 12, 2023. Racket News. "#1940 - Matt Taibbi." Feb 13, 2023. The Joe Rogan Experience. Hunter Biden Laptop Story "Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad." Oct 14, 2020. New York Post. "13. They did the same to Facebook, according to CEO Mark Zuckerberg. 'The FBI basically came to us [and] was like, "Hey... you should be on high alert. We thought that there was a lot of Russian propaganda in the 2016 election. There's about to be some kind of dump similar to that"'" [tweet]. Michael Shellenberger [@ShellenbergerMD]. Dec 19, 2022. Twitter. Influence, Propaganda, and Censorship "From the Twitter Files: Pfizer board member Scott Gottlieb secretly pressed Twitter to hide posts challenging his company's massively profitable Covid jabs." Alex Berenson. Jan 9, 2023. Unreported Truths. "Twitter Aided the Pentagon in Its Covert Online Propaganda Campaign." Lee Fang. Dec 20, 2022. The Intercept. "Facebook, Twitter dismantle a U.S. influence campaign about Ukraine." Aug 24, 2022. The Washington Post. Angus King Takedown Request Spreadsheet Audio Sources Hearing on the Weaponization of the Federal Government, the Twitter Files March 9, 2023 House Judiciary Committee, Subcommittee on the Weaponization of the Federal Government Witnesses: Matt Taibbi, Journalist Michael Shellenberger, Author, Co-founder of the Breakthrough Institute and the California Peace Coalition Clips 17:20 Rep. Jim Jordan (R-OH): In the run-up to the 2020 Presidential election, FBI Special Agent Elvis Chan, in his deposition in Missouri v. Biden, said that he repeatedly, repeatedly, informed Twitter and other social media platforms of the likelihood of a hack and leak operation in the run-up to that Presidential election. He did it even though there was no evidence. In fact, he said in his deposition that we hadn't seen anything, no intrusions, no hack, yet he repeatedly told them something was coming.
Yoel Roth, Head of Trust and Safety at Twitter, testified that he had had regular meetings with the Office of the Director of National Intelligence, the Department of Homeland Security, the FBI, and other folks regarding election security. During these weekly meetings, federal law enforcement agencies communicated that they expected a hack and leak operation. The expectations of a hack and leak operation were discussed throughout 2020. And he was told they would occur in a period shortly before the 2020 Presidential election, likely in October. And finally, he said, "I also learned in these meetings that there were rumors that a hack and leak operation would involve Hunter Biden." So what did the government tell him? A hack and leak operation was coming. How often did the government tell him this? Repeatedly, for a year. When did the government say it was going to happen? October of 2020. And who did the government say it would involve? Hunter Biden. 19:35 Rep. Jim Jordan (R-OH): How did they know? Maybe it's because they had the laptop and they had had it for a year. 21:50 Rep. Jim Jordan (R-OH): Finally, as if on cue, five days later on October 19, 51 former intelligence officials signed a letter with a now famous sentence: "the Biden laptop story has all the classic earmarks of a Russian information operation." Something that was absolutely false. 25:25 Rep. Stacey Plaskett (D-VI): And the Republicans have brought in two of Elon Musk's public scribes to release cherry-picked, out-of-context emails and screenshots designed to promote his chosen narrative, Elon Musk's chosen narrative, that is now being parroted by the Republicans, because the Republicans think that these witnesses will tell a story that's going to help them out politically. 25:50 Rep. Stacey Plaskett (D-VI): On Tuesday, the majority released an 18-page report claiming to show that the FTC is, quote, "harassing" Twitter -- oh my poor Twitter -- including by seeking information about its interactions with individuals before us today. How did the report reach this conclusion? By showing two single paragraphs from a single demand letter, even though the report itself makes clear that there were numerous demand letters with numerous requests, none of which we've been able to see. 28:05 Rep. Stacey Plaskett (D-VI): Mr. Chairman, Americans can see through this. Musk is helping you out politically and you're going out of your way to promote and protect him and to praise him for his work. 28:15 Rep. Stacey Plaskett (D-VI): This isn't just a matter of what data was given to these so-called journalists before us now. 31:35 Rep. Stacey Plaskett (D-VI): Mr. Chairman, I'm not exaggerating when I say that you have called before you two witnesses who pose a direct threat to people who oppose them. 32:30 Rep. Stacey Plaskett (D-VI): We know this because, at the first hearing, the Chairman claimed that big government and big tech colluded to shape and mold the narrative and suppress information and censor Americans. This is a false narrative. We're engaging in false narratives here and we are going to tell the truth.
37:35 Michael Shellenberger: I recognize that the law allows Facebook, Twitter, and other private companies to moderate content on their platforms, and I support the right of governments to communicate with the public, including to dispute inaccurate information. But government officials have been caught repeatedly pushing social media platforms to censor disfavored users and content. Often these acts of censorship threaten the legal protection social media companies need to exist, Section 230. If government officials are directing or facilitating such censorship, then, as one law professor has noted, it raises serious First Amendment questions. It is axiomatic that the government cannot do indirectly what it is prohibited from doing directly. 41:50 Matt Taibbi: My name is Matt Taibbi. I've been a reporter for 30 years and a staunch advocate of the First Amendment. Much of that time was spent at Rolling Stone magazine. Ranking Member Plaskett, I'm not a "so-called" journalist. I've won the National Magazine Award, the I.F. Stone Award for Independent Journalism, and I've written 10 books, including four New York Times bestsellers. 45:35 Matt Taibbi: Ordinary Americans are not just being reported to Twitter for deamplification or deplatforming, but to firms like PayPal, digital advertisers like Xandr, and crowdfunding sites like GoFundMe. These companies can and do refuse service to law-abiding people and businesses whose only crime is falling afoul of a distant, faceless, unaccountable, algorithmic judge. 44:00 Matt Taibbi: Again, Ranking Member Plaskett, I would note that the evidence of the Twitter-government relationship includes lists of tens of thousands of names on both the left and right. The people affected include Trump supporters, but also left-leaning sites like Consortium and Truthout, the leftist South American channel TeleSUR, and the Yellow Vest movement. That, in fact, is a key point of the Twitter files: it's neither a left nor a right issue. 44:40 Matt Taibbi: We learned Twitter, Facebook, Google, and other companies developed a formal system for taking in moderation requests from every corner of government: the FBI, the DHS, the HHS, the DOD, the Global Engagement Center at [the Department of] State, even the CIA. For every government agency scanning Twitter, there were perhaps 20 quasi-private entities doing the same thing, including Stanford's Election Integrity Partnership, NewsGuard, the Global Disinformation Index, and many others, many taxpayer funded. A focus of this fast-growing network, as Mike noted, is making lists of people whose opinions, beliefs, associations, or sympathies are deemed misinformation, disinformation, or malinformation. That last term is just a euphemism for true but inconvenient. Undeniably, the making of such lists is a form of digital McCarthyism. 1:01:00 Matt Taibbi: So, a great example of this is a report that the Global Engagement Center sent to Twitter and to members of the media and other platforms about what they called "the Pillars of Russian Disinformation." Now, part of this report is what you would call, I think, traditional hardcore intelligence gathering, where they made a reasoned, evidence-based case that certain sites were linked to Russian influence or linked to the Russian government. In addition to that, however, they also said that sites that, quote, "generate their own momentum," and have opinions that are in line with those accounts, are part of a propaganda ecosystem. Now, this is just another word for guilt by association.
And this is the problem with the whole idea of trying to identify which accounts are actually the Internet Research Agency and which ones are just people who follow those accounts or retweeted them. Twitter initially did not find more than a handful of IRA accounts. It wasn't until they got into an argument with the Senate Select Intelligence Committee that they came back with a different answer. 1:06:00 Rep. Debbie Wasserman Schultz (D-FL): Before you became Elon Musk's handpicked journalists, and pardon the oxymoron, you stated this on Joe Rogan's podcast about being spoon-fed information. And I quote, "I think that's true of any kind of journalism," and you'll see it behind me here. "I think that's true of any kind of journalism. Once you start getting handed things, then you've lost. They have you at that point and you got to get out of that habit. You just can't cross that line." Do you still believe what you told Mr. Rogan? Yes or no? Yes or no? Matt Taibbi: Yes. Rep. Debbie Wasserman Schultz (D-FL): Good. Now, you crossed that line with the Twitter files. Matt Taibbi: No. Rep. Debbie Wasserman Schultz (D-FL): Elon Musk -- It's my time, please do not interrupt me. Crowd: [laughter] Rep. Debbie Wasserman Schultz (D-FL): Elon Musk spoon-fed you his cherry-picked information, which you must have suspected promotes a slanted viewpoint, or at the very least generates another right-wing conspiracy theory. 1:11:20 Matt Taibbi: That moment on the Joe Rogan show, I was actually recounting a section from Seymour Hersh's book, Reporter, where he described a scene where the CIA gave him a story and he was very uncomfortable. He said that "I, who had always gotten the secrets, was being handed the secrets." Again, I've done lots of whistleblower stories. There's always a balancing test that you make when you're given material: you're always balancing newsworthiness versus the motives of your sources. In this case, the newsworthiness clearly outweighed any other considerations. I think everybody else who worked on the project agrees. 1:14:45 Rep. Dan Bishop (R-NC): Richard Stengel, you know who that is? Matt Taibbi: Yes, he's the former, the first head of the Global Engagement Center. Rep. Dan Bishop (R-NC): I want the American people to hear from him for 30 seconds. Richard Stengel: Basically, every country creates their own narrative story. And, you know, my old job at the State Department was what people used to joke as the "chief propagandist" job. We haven't talked about propaganda. Propaganda. I'm not against propaganda. Every country does it, and they have to do it to their own population. 1:24:20 Rep. Jim Jordan (R-OH): December 13, the very first letter that the FTC sends to Twitter after the Twitter files, 11 days after the first Twitter file -- there have been five of them come out -- the FTC's first demand in that first letter after the Twitter files come out is identify all journalists. I'm quoting: "identify all journalists and other members of the media" with whom Twitter worked. You find that scary, Mr. Taibbi, that you've got a federal government agency asking a private company who in the press they're talking with? Matt Taibbi: I do find it scary. I think it's none of the government's business which journalists a private company talks to and why. I think every journalist should be concerned about that. And the absence of interest in that issue by my fellow colleagues in the mainstream media is an indication of how low the business has sunk.
There was once a real esprit de corps and camaraderie within the media. Whenever one of us was gone after, we all kind of rose to the challenge and supported -- Rep. Jim Jordan (R-OH): It used to be, used to be the case. Matt Taibbi: Yeah, that is gone now. 1:28:50 Rep. Stacey Plaskett (D-VI): How many emails did Mr. Musk give you access to? Michael Shellenberger: I mean, we went through thousands of emails. Rep. Stacey Plaskett (D-VI): Did he give you access to all of the emails for the time period in which? Michael Shellenberger: We never had a single, I never had a single request denied. And not only that, but the amount of files that we were given was so voluminous that there was no way that anybody could have gone through them beforehand. And we never found an instance where there was any evidence that anything had been taken out. Rep. Stacey Plaskett (D-VI): Okay. So you would believe that you have probably millions of emails and documents, right? That's correct, would you say? Michael Shellenberger: I don't know if -- I think the number is less than that. Matt Taibbi: Millions sounds too high. Rep. Stacey Plaskett (D-VI): Okay. 100,000? Matt Taibbi: That's probably closer. Michael Shellenberger: Probably, yeah. Rep. Stacey Plaskett (D-VI): So 100,000 that both of you were seeing. 1:37:10 Matt Taibbi: There were a couple of very telling emails that we published. One was by a lawyer named [Stacia Cardille], where the company was being so overwhelmed by requests from the FBI that they gave each other a sort of digital high five after one batch, saying "that was a monumental undertaking to clear all of these." But she noted that she believed the FBI was essentially doing word searches keyed to Twitter's Terms of Service, looking for violations of the Terms of Service, specifically so that they could make recommendations along those lines, which we found interesting. 1:48:15 Michael Shellenberger: And we haven't talked about Facebook, but we now know that we have the White House demanding that Facebook take down factual information, and Facebook doing that. 1:48:25 Michael Shellenberger: And with Matt [Taibbi]'s thread this morning, we saw the government contractors demanding the same thing of Twitter: accurate information, they said, that needed to be taken down in order to advance a narrative. 1:49:55 Matt Taibbi: You know, in conjunction with our own research, there's a foundation, the Foundation for Freedom Online, which uncovered a very telling video where the Director of Stanford's Election Integrity Partnership (EIP) talks about how CISA, the DHS agency, didn't have the capability to do election monitoring, and so they kind of stepped in to "fill the gaps" legally before that capability could be amped up. And what we see in the Twitter files is that Twitter executives did not distinguish between DHS or CISA and this group EIP; for instance, we would see a communication that said, from CISA, escalated by EIP. So they were essentially identical in the eyes of the company. EIP, by its own data, and this is in reference to what you brought up, Mr. Congressman, significantly targeted more of what they call disinformation on the right than on the left, by a factor I think of about ten to one. And I say that as not a Republican at all; it's just the fact of what we're looking at.
So yes, we have come to the realization that this bright line that we imagine exists between, say, the FBI or the DHS or the GEC and these private companies is illusory, and that what's more important is this constellation of kind of quasi-private organizations that do this work. 1:52:10 Rep. Sylvia Garcia (D-TX): What was the first time that Mr. Musk approached you about writing the Twitter files? Matt Taibbi: Again, Congresswoman, that would — Rep. Sylvia Garcia (D-TX): I just need a date, sir. Matt Taibbi: But I can't give it to you, unfortunately, because this is a question of sourcing, and I don't give up... I'm a journalist, I don't reveal my sources. Rep. Sylvia Garcia (D-TX): It's a question of chronology. Matt Taibbi: No, that's a question of sourcing — Rep. Sylvia Garcia (D-TX): Earlier you said that someone had sent you, through the internet, some message about whether or not you would be interested in some information. Matt Taibbi: Yes. And I refer to that person as a source. Rep. Sylvia Garcia (D-TX): So you're not going to tell us when Musk first approached you? Matt Taibbi: Again, Congresswoman, you're asking me, you're asking a journalist, to reveal a source. Rep. Sylvia Garcia (D-TX): You consider Mr. Musk to be the direct source of all this? Matt Taibbi: No, now you're trying to get me to say that he is the source. I just can't answer — Rep. Sylvia Garcia (D-TX): Either he is or he isn't. If you're telling me you can't answer because it's your source, well, then the only logical conclusion is that he is, in fact, your source. Matt Taibbi: Well, you're free to conclude that. Rep. Sylvia Garcia (D-TX): Well, sir, I just don't understand. You can't have it both ways. But let's move on because -- Unknown Representative 1: No, he can. He's a journalist. Unknown Representative 2: He can't, because either Musk is the source and he can't talk about it, or Musk is not the source. And if Musk is not the source, then he can discuss [unintelligible] Rep. Jim Jordan (R-OH): No one has yielded, the gentlelady is out of order, you don't get to speak — Multiple speakers: [Crosstalk] Rep. Jim Jordan (R-OH): The gentlelady is not recognized...[crosstalk]...he has not said that, what he has said is he's not going to reveal his source. And the fact that Democrats are pressuring him to do so is such a violation of the First Amendment. Multiple speakers: [Crosstalk] Rep. Sylvia Garcia (D-TX): I have not yielded time to anybody. I want to reclaim my time. And I would ask the chairman to give me back some of the time because of the interruption. Mr. Chairman, I am asking you, if you will give me the seconds that I lost. Rep. Jim Jordan (R-OH): We will give you that 10 seconds. Rep. Sylvia Garcia (D-TX): Thank you. Now let's talk about another item. When you responded to the ranking member, you said that you had free license to look at everything, and yet you yourself posted on your...I guess it's kind of like a web page...I don't quite understand what Substack is, but what I can say is that "in exchange for the opportunity to cover a unique and explosive story, I had to agree to certain conditions." What were those conditions? She asked you that question and you said you had none. But you yourself posted that you had conditions?
In fact, you used the word licensed, that you were free to look at all of them. All 100,000 emails. Matt Taibbi: The question was posed, was I free to write about — Rep. Sylvia Garcia (D-TX): Sir, did you have any conditions? Matt Taibbi: The condition was that we publish — Rep. Sylvia Garcia (D-TX): Sir, did you have any conditions? Yes or no? A simple question. Matt Taibbi: Yes. Rep. Sylvia Garcia (D-TX): All right. Could you tell us what conditions those were? Matt Taibbi: The conditions were an attribution of sources at Twitter and that we break any news on Twitter. Rep. Sylvia Garcia (D-TX): But you didn't break it on Twitter. Did you send the file that you released today to Twitter first? Matt Taibbi: Did I send the...actually I did, yes. Rep. Sylvia Garcia (D-TX): Did you send it to Twitter first? Matt Taibbi: The Twitter files thread? Rep. Sylvia Garcia (D-TX): That was one of the conditions? Yes or no, sir. Matt Taibbi: The Twitter files thread actually did come out first. Rep. Sylvia Garcia (D-TX): But sir, you said earlier that you had to attribute all the sources to Twitter first. What you released today, did you send that to Twitter first? Matt Taibbi: No, no, no, I posted it on Twitter. Rep. Sylvia Garcia (D-TX): First. First, sir, or did you give it to the Chairman of the Committee or the staff of the Committee first? Matt Taibbi: Well, that's not breaking the story, that's giving...I did give — Rep. Sylvia Garcia (D-TX): So you gave all the information that you did not give to the Democrats, you gave it to the Republicans first, then you put it on Twitter? Matt Taibbi: Actually, no, the chronology is a little bit confused. Rep. Sylvia Garcia (D-TX): Well then tell us what the chronology was. Matt Taibbi: I believe the thread came out first. Rep. Sylvia Garcia (D-TX): Where? Matt Taibbi: On Twitter. Rep. Sylvia Garcia (D-TX): On Twitter. So then you afterwards gave it to the Republicans, and not the Democrats? Matt Taibbi: Yes, because I'm submitting it for the record as my statement. Rep. Sylvia Garcia (D-TX): Did you give it to him in advance? Matt Taibbi: I gave it to them today. Rep. Sylvia Garcia (D-TX): You gave it to them today, but you still have not given anything to the Democrats. Well, I'll move on. 1:57:20 Rep. Sylvia Garcia (D-TX): Now in your discussion, in your answer, you also said that you were invited by a friend, Bari Weiss? Michael Shellenberger: My friend, Bari Weiss. Rep. Sylvia Garcia (D-TX): So this friend works for Twitter, or what is her....? Matt Taibbi: She's a journalist. Rep. Sylvia Garcia (D-TX): Sir, I didn't ask you a question. I'm now asking Mr. Shellenberger a question. Michael Shellenberger: Yes, ma'am, Bari Weiss is a journalist. Rep. Sylvia Garcia (D-TX): I'm sorry, sir? Michael Shellenberger: She's a journalist. Rep. Sylvia Garcia (D-TX): She's a journalist. So you work in concert with her? Michael Shellenberger: Yeah. Rep. Sylvia Garcia (D-TX): Do you know when she was first contacted by Mr. Musk? Michael Shellenberger: I don't know. Rep. Sylvia Garcia (D-TX): You don't know. So you're in this as a threesome? 2:00:10 Michael Shellenberger: Reading through the whole sweep of events, I do not know the extent to which the influence operation aimed at "pre-bunking" the Hunter Biden laptop was coordinated. I don't know who all was involved. But what we saw was, you saw Aspen and Stanford, many months before then, saying don't cover the material in the hack and leak without emphasizing the fact that it could be disinformation.
Okay, so they're priming journalists not to cover a future hack and leak in the way that journalists have long been trained to, in the tradition of the Pentagon Papers, made famous by the Steven Spielberg movie. They were saying [to] cover the fact that it probably came from the Russians. Then you have the former General Counsel to the FBI, Jim Baker, and the former Deputy Chief of Staff to the FBI both arriving at Twitter in the summer of 2020, which I find, what an interesting coincidence. Then, when the New York Post publishes its first article on October 14, it's Jim Baker who makes the most strenuous argument within Twitter, multiple emails, multiple messages, saying this doesn't look real. There's people, there's intelligence experts, saying that this could be Russian disinformation. He is the most strenuous person inside Twitter arguing that it's probably Russian disinformation. The internal evaluation by Yoel Roth, who testified in front of this committee, was that it was what it looked to be, which was that it was not the result of a hack and leak operation. And why did he think that? Because the New York Post had published the FBI subpoena taking the laptop in December of 2019. And they published the agreement that the computer store owner had with Hunter Biden that gave him permission, after he abandoned the laptop, to use it however he wanted. So there really wasn't much doubt about the provenance of that laptop. But you had Jim Baker making a strenuous argument. And then, of course, a few days after the October 14 release, you have the president of the United States echoing what these former intelligence community officials were saying, which is that it looked like a Russian influence operation. So they were claiming that the laptop was made public through a conspiracy theory that somehow the Russians got it. And basically, they convinced Yoel Roth of this wild hack and leak story that somehow the Russians stole it, got the information, gave us the computer; it was bizarre. So you read that chain of events, and it appears as though there is an organized influence operation to pre-bunk.... Rep. Jim Jordan (R-OH): Why do you think they could predict the time, the method, and the person? Why could the FBI predict it? Not only did they predict it, so did the Aspen Institute; it seemed like everyone was in the know saying, here's what's gonna happen, we can read the future. Why do you think, how do you think they were able to do that? Michael Shellenberger: I think the most important fact to know is that the FBI had that laptop in December 2019. They were also spying on Rudy Giuliani when he got the laptop and when he gave it to the New York Post. Now, maybe the FBI agents who were going to Mark Zuckerberg at Facebook and to Twitter executives and warning of a hack and leak, potentially involving Hunter Biden, maybe those guys didn't have anything to do with the guys that had the laptop. We don't know that. I have to say, as a newcomer to this, as somebody that thought it was Russian disinformation in 2020, everybody I knew thought it was Russian disinformation, I was shocked to see that series of events going on. It looks to me like a deliberate influence operation. I don't have the proof of it, but the circumstantial evidence is pretty disturbing.
2:14:30 Matt Taibbi: We found, just yesterday, a Tweet from the Virality Project at Stanford, which was partnered with a number of government agencies, and Twitter, where they talked explicitly about censoring stories of true vaccine side effects and other true stories that they felt encouraged hesitancy. Now the imp— Unknown Representative: So these were true. Matt Taibbi: Yes. So they use the word truth three times in this email, and what's notable about this is that it reflects the fundamental misunderstanding of this whole anti-disinformation complex. They believe that ordinary people can't handle difficult truths. And so they think that they need minders to separate out things that are controversial or difficult for them, and again, that's totally contrary to what America is all about, I think. 2:17:30 Rep. Dan Goldman (D-NY): Of course we all believe in the First Amendment, but the First Amendment applies to government prohibition of speech, not to private companies. 2:33:00 Rep. Dan Goldman (D-NY): And even with Twitter, you cannot find actual evidence of any direct government censorship of any lawful speech. 2:33:20 Rep. Jim Jordan (R-OH): I'd ask unanimous consent to enter into the record the following email from Clarke Humphrey, Executive Office of the Presidency, White House Office, January 23, 2021. That's the Biden Administration. 4:39am: "Hey folks," this goes to Twitter, "Hey folks, wanted..." they used the term Mr. Goldman just used, "wanted to flag the below Tweet, and I'm wondering if we can get moving on the process for having it removed ASAP." 2:35:40 Rep. Mike Johnson (R-LA): He said the First Amendment applies to government censorship of speech and not private companies, but what we're talking about, and what the Chairman just illustrated, is that what we have here, and what your Twitter files show, is the Federal government partnering with private companies to censor and silence the speech of American citizens. 2:29:20 Matt Taibbi: In the first Twitter files, we saw an exchange between Representative Ro Khanna and Vijaya Gadde, where he's trying to explain the basics of speech law in America and she seems completely unaware of what, for instance, New York Times v. Sullivan is. There are other cases, like Bartnicki v. Vopper, which legalized the publication of stolen material; that's very important for any journalist to know. I think most of these people are tech executives, and they don't know what the law is around speech and around reporting. And in this case, and in 2016, you are dealing with true material. There is no basis to restrict the publication of true material, no matter who the source is and how you got it. And journalists have always understood that, and this has never been a controversial issue until very recently. 2:44:40 Rep. Kat Cammack (R-FL): Would you agree that there was a blacklist created in 2021? Michael Shellenberger: Sorry, yes, Jay Bhattacharya, the Stanford professor, who I don't think anybody considers a fringe epidemiologist, was indeed -- I'm sorry, I couldn't, I didn't piece it together -- he was indeed visibility filtered. Rep. Kat Cammack (R-FL): Correct.
And so this blacklist that was created, that really was used to de-platform, reduce visibility, create lists internally, where people couldn't even see their profiles, that was used against doctors and scientists who produced information that was contrary to what the CDC was putting out, despite the fact that we now know that what they were publishing had a scientific basis and was in fact valid. Michael Shellenberger: Absolutely. And not only that, but these were secret blacklists, so Professor Bhattacharya had no idea he was on one. 43:05 Matt Taibbi: The original promise of the internet was that it might democratize the exchange of information globally. A free internet would overwhelm all attempts to control information flow, its very existence a threat to anti-democratic forms of government everywhere. What we found in the Files was a sweeping effort to reverse that promise and use machine learning and other tools to turn the Internet into an instrument of censorship and social control. Unfortunately, our own government appears to be playing a lead role. We saw the first hints in communications between Twitter executives before the 2020 election, when we read things like "flagged by DHS," or "please see attached report from FBI for potential misinformation." This would be attached to an Excel spreadsheet with a long list of names, whose accounts were often suspended shortly after. #1940 - Matt Taibbi February 13, 2023 The Joe Rogan Experience Clips Matt Taibbi: So this is another topic that is fascinating because it hasn't gotten a ton of press. But if you go back all the way to the early 70s, the CIA and the FBI got in a lot of trouble for various things, the CIA for assassination schemes involving people like Castro, the FBI for, you know, COINTELPRO and other programs, domestic surveillance, and they made changes after Congressional hearings, the Church Committee, that basically said: FBI, from now on, you have to have some kind of reason to be following somebody or investigating somebody, you have to have some kind of criminal predicate, and we want you mainly to be investigating cases. But after 9/11 they peeled all this back. There was a series of Attorney General memos that essentially re-fashioned what the FBI does, and now they don't have to be doing crimefighting all the time. Now they can be doing basically 100% intelligence gathering all the time. They can be infiltrating groups for no reason at all, not to build cases, but just to get information. And so that's why they're there. They're in these groups, they're posted up outside of the homes of people they find suspicious, but they're not building cases and they're not investigating crimes. It's sort of like Minority Report, right? It's pre-crime. Matt Taibbi: We see reports in these files of government agencies sending lists of accounts that are accusing the United States of vaccine corruption. Now, what they're really talking about is pressuring foreign countries to not use generic vaccines. Right. And, you know, that's a liberal issue, that's a progressive issue. The progressives want generic vaccines to be available to poor countries, okay? But, you know, you can use this tool to eliminate speech about that if you want to, right? I think that's what they don't get: the significance is not who [it's used against], the significance is the tool. What is it capable of doing, right? How easily is it employed, and, you know, how often is it used? And they don't focus on that.
Joe Rogan: Has anything been surprising to you? Matt Taibbi: A little bit. I think going into it, I thought that the relationship between the security agencies like the FBI and the DHS and companies like Twitter and Facebook, I thought it was a little bit less formal. I thought maybe they had kind of an advisory role. And what we find is that it's not that; it's very formalized. They have a really intense structure that they've worked out over a period of years where they have regular meetings. They have a system where the DHS handles censorship requests that come up from the States and the FBI handles international ones, and it all flows to all these companies, and it's a big bureaucracy. I don't think we expected to see that. Matt Taibbi: I was especially shocked by an email from a staffer for Adam Schiff, the California Congressman. And they're just outright saying we would like you to suspend the accounts of this journalist and anybody who retweets information about this Committee. You know, I mean, this is a member of Congress. Joe Rogan: Yeah. Matt Taibbi: Right? Most of these people have legal backgrounds. They've got lawyers in the office for sure. And this is the House Intelligence Committee. Protecting Speech from Government Interference and Social Media Bias, Part 1: Twitter's Role in Suppressing the Biden Laptop Story February 8, 2023 House Committee on Oversight and Accountability Witnesses: Vijaya Gadde, Former Chief Legal Officer, Twitter James Baker, Former Deputy General Counsel, Twitter Yoel Roth, Former Global Head of Trust & Safety, Twitter Annika Collier Navaroli, Former Policy Expert for Content Moderation, Twitter Clips 14:50 Rep. Jamie Raskin (D-MD): What's more, Twitter's editorial decision has been analyzed and debated ad nauseam. Some people think it was the right decision. Some people think it was the wrong decision. But the key point here is that it was Twitter's decision. Twitter is a private media company. In America, private media companies can decide what to publish or how to curate content however they want. If Twitter wants to have nothing but Tweets commenting on New York Post articles run all day, it can do that. If it wants to make sure tweets mentioning the New York Post never see the light of day, it can do that too. That's what the First Amendment means. 16:05 Rep. Jamie Raskin (D-MD): Officially, Twitter happens to think they got it wrong about that day-or-two period. In hindsight, Twitter's former CEO Jack Dorsey called it a mistake. This apology might be a statement of regret about the company being overly cautious about the risks of publishing potentially hacked or stolen materials, or it may reflect craven surrender to a right-wing pressure campaign. But however you interpret the apology, it just makes the premise of this hearing all the more absurd. The professional conspiracy theorists who are heckling and haranguing this private company have already gotten exactly what they want: an apology. What more do they want? And why does the US Congress have to be involved in this nonsense when we have serious work to do for the American people? 26:20 James Baker: The law permits the government to have complex, multifaceted, and long-term relationships with the private sector.
Law enforcement agencies and companies can engage with each other regarding, for example, compulsory legal process served on companies, criminal activity that companies, the government, or the public identify, such as crimes against children, cybersecurity threats, and terrorism, and instances where companies themselves are victims of crime. When done properly, these interactions can be beneficial to both sides and in the interest of the public. As you, Mr. Chairman, Mr. Jordan, and others have proposed, a potential workable way to legislate in this area may be to focus on the actions of federal government agencies and officials with respect to their engagement with the private sector. Congress may be able to limit the nature and scope of those interactions in certain ways, require enhanced transparency and reporting by the executive branch about its engagements, and require higher level approvals within the executive branch prior to such engagements on certain topics, so that you can hold Senate-confirmed officials, for example, accountable for those decisions. In any event, if you want to legislate, my recommendation is to focus first on reasonable and effective limitations on government actors. Thank you, Mr. Chairman. 31:05 Vijaya Gadde: On October 14, 2020, The New York Post tweeted articles about Hunter Biden's laptop with embedded images that looked like they may have been obtained through hacking. In 2018, we had developed a policy intended to prevent Twitter from becoming a dumping ground for hacked materials. We applied this policy to the New York Post tweets and blocked links to the articles embedding those sorts of materials. At no point did Twitter otherwise prevent tweeting, reporting, discussing or describing the contents of Mr. Biden's laptop. People could and did talk about the contents of the laptop on Twitter or anywhere else, including other much larger platforms, but they were prevented from sharing the primary documents on Twitter. Still, over the course of that day, it became clear that Twitter had not fully appreciated the impact of that policy on the free press and others. As Mr. Dorsey testified before Congress on multiple occasions, Twitter changed its policy within 24 hours and admitted its initial action was wrong. This policy revision immediately allowed people to tweet the original articles with the embedded source materials. Relying on its long-standing practice not to retroactively apply new policies, Twitter informed the New York Post that it could immediately begin tweeting if it deleted the original tweets, which would have freed them to tweet the same content again. The New York Post chose not to delete its original tweets, so Twitter made an exception after two weeks to retroactively apply the new policy to the Post's tweets. In hindsight, Twitter should have reinstated the Post's account immediately. 35:35 Yoel Roth: In 2020, Twitter noticed activity related to the laptop that at first glance bore a lot of similarities to the 2016 Russian hack and leak operation targeting the DNC, and we had to decide what to do. And in that moment, with limited information, Twitter made a mistake. 36:20 Yoel Roth: It isn't obvious what the right response is to a suspected, but not confirmed, cyber attack by another government on a Presidential Election. I believe Twitter erred in this case because we wanted to avoid repeating the mistakes of 2016. 38:41 Annika Collier Navaroli: I joined Twitter in 2019 and by 2020 I was the most senior expert on Twitter's U.S. 
Safety Policy Team. My team's mission was to protect free speech and public safety by writing and enforcing content moderation policies around the world. These policies cover things like abuse, harassment, hate speech, violence and privacy. 41:20 Annika Collier Navaroli: With January 6 and many other decisions, content moderators like me did the very best that we could. But far too often there are far too few of us and we are being asked to do the impossible. For example, in January 2020, after the US assassinated an Iranian General and the US president decided to justify it on Twitter, management literally instructed me and my team to make sure that World War III did not start on the platform. 1:08:20 Rep. Nancy Mace (R-SC): Did the US government ever contact you or anyone at Twitter to censor or moderate certain Tweets, yes or no? Vijaya Gadde: We receive legal demands to remove content from the platform from the US government and governments all around the world. Those are published on a third party website. 1:12:00 Yoel Roth: The number one most influential part of the Russian active measures campaign in 2016 was the hack and leak targeting John Podesta. It would have been foolish not to consider the possibility that they would run that play again. 1:44:45 Yoel Roth: I think one of the key failures that we identified after 2016 was that there was very little information coming from the government and from intelligence services to the private sector. The private sector had the power to remove bots and to take down foreign disinformation campaigns, but we didn't always know where to look without leads supplied by the intelligence community. That was one of the failures highlighted in the Senate Intelligence Committee's report and in the Mueller investigation, and that was one of the things we set out to fix in 2017. Rep. Gerry Connolly (D-VA): On September 8, 2019, at 11:11pm, Donald Trump heckled two celebrities on Twitter -- John Legend and his wife Chrissy Teigen -- and referred to them as "the musician John Legend and his filthy mouth wife." Ms. Teigen responded to that email [Tweet] at 12:17am. And according to notes from a conversation with your counsel, Ms. Navaroli, the White House almost immediately thereafter contacted Twitter to demand the tweet be taken down. Is that accurate? Annika Collier Navaroli: Thank you for the question. In my role, I was not responsible for receiving any sort of request from the government. However, what I was privy to was my supervisors letting us know that we had received something along those lines or something of a request. And in that particular instance, I do remember hearing that we had received a request from the White House to make sure that we evaluated this tweet, and that they wanted it to come down because it was a derogatory statement towards the President. Rep. Gerry Connolly (D-VA): They wanted it to come down. They made that request. Annika Collier Navaroli: To my recollection, yes. Rep. Gerry Connolly (D-VA): I thought that was an inappropriate action by a government official, let alone the White House. But it wasn't Joe Biden, about his son's laptop. It was Donald Trump, because he didn't like what Chrissy Teigen had to say about him. Is that correct? Annika Collier Navaroli: Yes, that is correct. Rep. Gerry Connolly (D-VA): My, my, my. 1:45:15 Rep. Shontel Brown (D-OH): Mr. Roth, were those communication channels useful to Twitter as they work to combat foreign influence operations? 
Yoel Roth: Absolutely, I would say they were one of the most essential pieces of how Twitter prepared for future elections. 2:42:35 Rep. Becca Balint (D-VT): Ms. Gadde, did anyone from the Biden campaign or the Democratic National Committee direct Twitter to remove or take action against the New York Post story? Vijaya Gadde: No. 4:15:45 Rep. Kelly Armstrong (R-ND): And now we fast forward to 2020. And earlier you had testified that you were having regular interactions with National Intelligence, Homeland Security and the FBI. Yoel Roth: Yes, I did. Rep. Kelly Armstrong (R-ND): And primarily to deal with foreign interference? Yoel Roth: Primarily, but I would say -- Rep. Kelly Armstrong (R-ND): But you had said earlier your contact with Agent Chang was primarily about foreign interference? Yoel Roth: Yes, that's right. Rep. Kelly Armstrong (R-ND): And these were emails... were there meetings? Yoel Roth: Yes, Twitter met quarterly with the FBI Foreign Interference Task Force and we had those meetings running for a number of years to share information about malign foreign interference. Rep. Kelly Armstrong (R-ND): Agents from Homeland Security or Intelligence, or just primarily the FBI? Yoel Roth: Our primary contacts were with the FBI, and those quarterly meetings were, I believe, exclusively with FBI personnel. 4:18:05 Rep. Kelly Armstrong (R-ND): Earlier today you testified that you were following national security experts on Twitter as a reason to take down the New York Post story on Hunter Biden's laptop. Yoel Roth: Yes, sir, I did. Rep. Kelly Armstrong (R-ND): So after 2016, you set up all these teams to deal with Russian interference, foreign interference, you're having regular meetings with the FBI, you have connections with all of these different government agencies, and you didn't reach out to them once? Yoel Roth: Is that question in reference to the day of the New York Post article? Rep. Kelly Armstrong (R-ND): Yeah. Yoel Roth: That's right. We generally did not reach out to the FBI to consult on content moderation decisions, especially where they related to domestic activity. It's not that we wouldn't have liked that information, we certainly would have. It's that I don't believe it would have been appropriate for us to consult with the FBI. Rep. Kelly Armstrong (R-ND): In December of 2020, you did a declaration to the Federal Election Commission that the intelligence community expected a leak and a hack operation involving Hunter Biden. Recently, Mark Zuckerberg confirmed that the FBI warned Meta that there was a high effort of Russian propaganda, including language specific enough to fit the Hunter Biden laptop story. You're talking to these people for weeks and months, years prior to this leaking. They have specifically told you in October that there's going to be a leak potentially involving Hunter Biden's laptop. They legitimately and literally prophesied what happened. And you didn't contact any of them? Yoel Roth: No, sir, I did not. Rep. Kelly Armstrong (R-ND): Did they reach out to you? Yoel Roth: On and around that day, to the best of my recollection, no, they did not. Rep. Kelly Armstrong (R-ND): After the story was taken down, and you guys did it, and you personally disagreed with it, Ms. Gadde, did you contact them and say, "Hey, is this what you were talking about?" Yoel Roth: If that question was directed to me: no, I did not. Rep. Kelly Armstrong (R-ND): Ms. Gadde, did you talk to anybody from the FBI? Vijaya Gadde: Not to the best of my recollection. 
Rep. Kelly Armstrong (R-ND): So I guess my question is, what is the point of this program? You have constant communication, they're set up for foreign interference. They've legitimately warned you about this very specific thing. And then all of a sudden, everybody just walks away? 5:18:55 Rep. Melanie Stansbury (D-NM): We are devoting an entire day to this conspiracy theory involving Twitter. Now, the mission of this committee is to root out waste, fraud and abuse and to conduct oversight on behalf of the American people. And if you need any evidence of waste, fraud and abuse, how about the use of this committee's precious time, space and resources to commit to this hearing? 5:58:25 Rep. Eric Burlison (R-MO): Back to Mr. Roth, is it true that Twitter whitelisted accounts for the Department of Defense to spread propaganda about its efforts in the Middle East? Did they give you a list of accounts that were fake accounts and ask you to whitelist those accounts? Yoel Roth: That request was made of Twitter. To be clear, when I found out about that activity, I was appalled by it. I undid the action, and my team publicly exposed activity originating from the Department of Defense's campaign. We've shared that data with the world and research about it has been published. 6:07:20 Rep. Jim Jordan (R-OH): Mr. Roth, I want to go back to your statement in your declaration to the FEC, "I learned that a hack and leak operation would involve Hunter Biden." Who did you learn that from? Yoel Roth: My recollection is it was mentioned by another technology company in one of our joint meetings, but I don't recall specifically whom. Rep. Jim Jordan (R-OH): You don't know the person's name? Yoel Roth: I don't even recall what company they worked at. No, this was a long time ago. Rep. Jim Jordan (R-OH): And you're confident that it was from a tech company, not from someone from the government? Yoel Roth: To the best of my recollection, yes. Rep. Jim Jordan (R-OH): Did anyone from the government, in these periodic meetings you had, did they ever tell you that a hack and leak operation involving Hunter Biden was coming? Yoel Roth: No. Rep. Jim Jordan (R-OH): Did Hunter Biden's name come up at all in these meetings? Yoel Roth: Yes, his name was raised in those meetings, but not by the government, to the best of my recollection. 6:09:30 Rep. Jim Jordan (R-OH): Mr. Roth, why were you reluctant, based on what I read in the Twitter files, why were you reluctant to work with the GEC? Yoel Roth: It was my understanding that the GEC, or the Global Engagement Center of the State Department, had previously engaged in at least what some would consider offensive influence operations. Not that they were offensive as in bad, but offensive as in they targeted entities outside of the United States. And on that basis, I felt that it would be inappropriate for Twitter to engage with a part of the State Department that was engaged in active statecraft. We were dedicated to rooting out malign foreign interference no matter who it came from. And if we found that the American government was engaged in malign foreign interference, we'd be addressing that as well. 6:13:50 Rep. James Comer (R-KY): Twitter is a private company, but they enjoy special liability protections, Section 230. They also, according to the Twitter files, receive millions of dollars from the FBI, which is tax dollars, I would assume. And that makes it a concern of the Oversight Committee. Does Section 230's Sweeping Immunity Enable Big Tech Bad Behavior? 
October 28, 2020 Senate Commerce, Science and Transportation Committee Witnesses: Jack Dorsey, [Former] CEO, Twitter Sundar Pichai, CEO, Alphabet and Google Mark Zuckerberg, CEO, Facebook [Meta] Clips 2:20:40 Sen. Ed Markey (D-MA): The issue is not that the companies before us today are taking too many posts down. The issue is that they're leaving too many dangerous posts up. In fact, they're amplifying harmful content so that it spreads like wildfire and torches our democracy. 3:15:40 Mark Zuckerberg: Senator, as I testified before, we relied heavily on the FBI's intelligence and alerts, both through their public testimony and private briefings. Sen. Ron Johnson (R-WI): Did the FBI contact you, sir, to say that story was false? Mark Zuckerberg: Senator, not about that story specifically. Sen. Ron Johnson (R-WI): Why did you throttle it back? Mark Zuckerberg: They alerted us to be on heightened alert around a risk of hack and leak operations around a release and dump of information. Emerging Trends in Online Foreign Influence Operations: Social Media, COVID-19, and Election Security June 18, 2020 Permanent Select Committee on Intelligence Watch on YouTube Witnesses: Nathaniel Gleicher, Head of Security Policy at Facebook Nick Pickles, Director of Global Public Policy Strategy and Development at Twitter Richard Salgado, Director for Law Enforcement and Information Security at Google 1:40:10 Nathaniel Gleicher: Congressman, the collaboration within industry and with government is much, much better than it was in 2016. I think we have found the FBI, for example, to be forward leaning and ready to share information with us when they see it. We share information with them whenever we see indications of foreign interference targeting our election. The best case study for this was the 2018 midterms, where you saw industry, government and civil society all come together, sharing information to tackle these threats. We had a case on literally the eve of the vote, where the FBI gave us a tip about a network of accounts where they identified subtle links to Russian actors. We were able to investigate those and take action on them within a matter of hours. Cover Art Design by Only Child Imaginations Music Presented in This Episode Intro & Exit: Tired of Being Lied To by David Ippolito (found on Music Alley by mevio)
Trolls have been around for as long as the internet, but in recent years they've been getting professionalised. Indeed, companies have actually been set up with the specific aim of creating and spreading fake news online. Such organisations are known as troll farms or troll factories, and Russia has been home to some of the most prominent. Based in the city of St Petersburg, its Internet Research Agency is perhaps the most well-known and influential in the world. More worrying still, these keyboard armies often operate at the behest of governments, for propaganda purposes. Needless to say, since the Russian invasion of Ukraine in February, troll farms have been highly active in spreading disinformation. When did troll farms first appear? How did we find out about the Internet Research Agency's actions? How do troll farms operate? In under 3 minutes, we answer your questions! To listen to the last episodes, you can click here: What is CoreCore, the latest aesthetic taking over Tiktok? How can I meditate without meditating? What does my urine colour say about my health? A Bababam Originals podcast, written and produced by Joseph Chance. In partnership with upday UK. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, Alex starts by discussing the insanity at CPAC and why the conference should really be called TPAC (for Trump). He also notes that the straw polls that gave Trump over 60% support should not be taken completely seriously at an event that is pretty much a Trump brownnosing event. Next, Alex discusses the brutal and “hell-like” situation that is unfolding in the city of Bakhmut, Ukraine. Russian forces, aided by the Wagner Group, have surrounded the city from multiple directions and a majority of the city has been decimated. There are reports of a coming Ukrainian withdrawal; however, Russian forces have experienced an arms shortage and have started using the shovels that they dug the trenches with as weapons. In general, the situation has turned into a violent and tragic form of trench warfare. Finally, Alex goes into Yevgeny Prigozhin. He is a shadowy criminal and oligarch who founded the Wagner Group and the Internet Research Agency. As his mercenary forces have fought in Ukraine, he has continued to criticize and blame the Russian Ministry of Defense for some of the failures in Ukraine. Alex discusses some reports arguing that Prigozhin could try to become the next leader of Russia, as Russian elites have constantly criticized Putin over what they deem inaction in Ukraine.
Anand Giridharadas narrates his work in a serious tone and journalistic style. Host Jo Reed and AudioFile's Alan Minskoff discuss an audiobook that tells stories of how committed activists use persuasion in organizing to attract supporters. He reads with a storyteller's sense as he introduces the Internet Research Agency—a Russian troll farm that won both liberal and conservative Americans' hearts and minds online. He also delves into deep canvassing, a process that encourages like-mindedness as a basis for connection and political action. Read the full review of the audiobook on AudioFile's website. Published by Random House Audio. Find more audiobook recommendations at audiofilemagazine.com Support for AudioFile's Behind the Mic comes from Rob White's THE MAESTRO MONOLOGUE from PUNCH AUDIO, creators of first-class audiobooks for independent authors the world over. Learn more about your ad choices. Visit megaphone.fm/adchoices
In the years following the 2016 U.S. presidential election, much effort has been put into understanding foreign influence campaigns, and into disrupting efforts by Russia and other countries, such as China and Iran, to interfere in U.S. elections. Political scientists and other computational social scientists continue to whittle away at questions about how much influence such campaigns have on domestic politics. One such question is how much the Russian Internet Research Agency's (IRA) tweets, specifically, affected voting preferences and political polarization in the United States. A new paper in the journal Nature Communications provides an answer to that specific question. Titled "Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior," the paper matches Twitter data with survey data to study the impact of the IRA's tweets. To learn more about the paper, Justin Hendrix spoke with one of its authors, Joshua Tucker, a professor of politics at NYU, where he also serves as the director of the Jordan Center for the Advanced Study of Russia and the co-director of the NYU Center for Social Media and Politics (CSMaP). Hendrix and Tucker talked about the study, as well as what can and cannot be understood from its results about the impact of the broader campaign of the IRA, or certainly the broader Russian effort to interfere in the U.S. election.
What is a troll farm? Trolls have been around for as long as the internet, but in recent years they've been getting professionalised. Indeed, companies have actually been set up with the specific aim of creating and spreading fake news online. Such organisations are known as troll farms or troll factories, and Russia has been home to some of the most prominent. Based in the city of St Petersburg, its Internet Research Agency is perhaps the most well-known and influential in the world. More worrying still, these keyboard armies often operate at the behest of governments, for propaganda purposes. Needless to say, since the Russian invasion of Ukraine in February, troll farms have been highly active in spreading disinformation. When did troll farms first appear? How did we find out about the Internet Research Agency's actions? How do troll farms operate? In under 3 minutes, we answer your questions! To listen to the last episodes, you can click here: What is the perineum? Who is Rosalia? What is deepfake democracy? A podcast written and produced by Joseph Chance. In partnership with upday UK. Learn more about your ad choices. Visit megaphone.fm/adchoices
Hello Sheeple! This episode is going to be a little different than normal. My illustrious co-host Thom has been swamped at his day job as of late so we haven't had a chance to record new episodes. On the back of that I figured I'd do a short episode about the on-going invasion of Ukraine by Russia...it ended up being 45mins. Enjoy! We hope to return to our regular upload schedule in the next few weeks. Follow us on Twitter: @wethesheeplepod Have a topic you think we should cover? Email us: wethesheeplepod@gmail.com Music: Intro: Alien visitors Ads: The Bounty Hunter Outro: Trip with a UFO From: https://soundtrackuniverse.com Hear from an expert in online literacy: https://omny.fm/shows/factually-with-adam-conover/why-critical-thinking-can-t-beat-misinformation-an Sources: https://www.cnn.com/2022/03/02/politics/us-russia-ukraine-civilians-warning/index.html https://apnews.com/article/russia-ukraine-united-nations-general-assembly-volodymyr-zelenskyy-soccer-sports-7456cc05797d72960257a8ac75654d98 https://en.wikipedia.org/wiki/Ukraine https://en.wikipedia.org/wiki/History_of_Russia#Russian_Civil_War_(1917%E2%80%931922) https://www.cfr.org/global-conflict-tracker/conflict/conflict-ukraine https://apnews.com/article/russia-ukraine-kyiv-business-europe-media-3a131e7e0bdc006d697a3eb467a66d29 https://en.wikipedia.org/wiki/Internet_Research_Agency https://en.wikipedia.org/wiki/Assessing_Russian_Activities_and_Intentions_in_Recent_US_Elections https://www.intelligence.senate.gov/publications/committee-findings-2017-intelligence-community-assessment https://wid.world/country/usa/ https://wid.world/country/russian-federation/ https://en.wikipedia.org/wiki/Propaganda https://en.wikipedia.org/wiki/Loaded_language https://www.britannica.com/topic/propaganda
Journalist Andrei Soshnikoff, with the help of insider Lyudmila Savchuk, exposed the Internet Research Agency, a troll farm set up by a Russian oligarch close to Putin. It was a disturbing new phenomenon in the global information wars. This was the moment Americans came to know the concepts of fake news and disinformation, thanks to the groundbreaking work of the aforementioned Russian investigative journalist Soshnikoff and whistleblower Savchuk, who was disgusted by the murder of Boris Nemtsov. But the ability of Russian journalists to cover the news has now been challenged as never before. In the wake of Putin's latest and more vicious assault on Ukraine, the Kremlin has launched a harsh new crackdown on the press, threatening journalists with up to fifteen years in prison for spreading "false information" and banning them from even referring to the events in Ukraine as a war. Soshnikoff joins us to discuss the state of Russian media. And also, Maryan Zeblotskyy, a member of the Ukrainian Parliament, joins to discuss the latest from Western Ukraine. GUESTS: Maryan Zeblotskyy, Member of Ukrainian Parliament; Andrei Soshnikoff (@Soshnikoff), Current Time TV, former BBC journalist/Transparency International RU analyst. HOSTS: Michael Isikoff (@Isikoff), Chief Investigative Correspondent, Yahoo News; Daniel Klaidman (@dklaidman), Editor in Chief, Yahoo News; Victoria Bassetti (@VBass), fellow, Brennan Center for Justice (contributing co-host). RESOURCES: Yahoo News' Tom LoBianco's latest piece on Putin - Here. Yahoo News' Niamh Cavanagh's latest piece on Zelensky - Here. Follow us on Twitter: @SkullduggeryPod. Listen and subscribe to "Skullduggery" on Apple Podcasts, Spotify, Google Podcasts or wherever you get your podcasts. Email us with feedback, questions or tips: SkullduggeryPod@yahoo.com. See acast.com/privacy for privacy and opt-out information.
This time we dive into the world of Putin's troll factories. In this episode I aim to shed light on the Kremlin trolls' years-long information influence operations, as well as the Russian Federation's efforts to destabilize pro-Western states in the 21st century. Instagram: subjektiivinentodistaja
This week on Lawfare's Arbiters of Truth series on disinformation, Evelyn Douek and Quinta Jurecic spoke with Deen Freelon, an associate professor at the University of North Carolina Hussman School of Journalism and Media. Deen's work focuses on data science and political expression on social media, and they discussed research he conducted on tweets from the Internet Research Agency troll farm and their attempts to influence U.S. politics, including around the 2016 election. In a recent article, Deen and his coauthors found that IRA tweets from accounts presenting themselves as Black Americans received particularly high engagement from other users on Twitter—which raises interesting questions about the interaction of race and disinformation. They also talked about what the data show on whether the IRA actually succeeded in changing political beliefs and just how many reporters quoted IRA trolls in their news reports without realizing it. See acast.com/privacy for privacy and opt-out information.
This week on Lawfare's Arbiters of Truth series on disinformation, Alina Polyakova and Quinta Jurecic spoke with Ben Nimmo, the director of investigations at Graphika. Ben has come on the podcast before to discuss how he researches and identifies information operations, but this time, he talked about one specific information operation: a campaign linked to the Internet Research Agency “troll farm.” Yes, that's the same Russian organization that Special Counsel Robert Mueller pinpointed as responsible for Russian efforts to interfere in the 2016 election on social media. They're still at it, and Graphika has just put out a report on an IRA-linked campaign that amplified content from a fake website designed to look like a left-wing news source. Ben, Alina and Quinta discussed what Graphika found, how the IRA's tactics have changed since 2016 and whether the discovery of the network might represent the rarest of things on the disinformation beat—a good news story. See acast.com/privacy for privacy and opt-out information.
For audio only: Spotify: Aaron Wayne Podcast. Apple: Aaron Wayne Podcast. Website: aaronwayneyoga.com Instagram: @aaronwayneyoga TikTok: @aaronwayneyoga Sources: "What is the Internet Research Agency?" Calamur, Krishnadev. The Atlantic. February 16, 2018. https://www.theatlantic.com/international/archive/2018/02/russia-troll-farm/553616/
The Franco-Malian dialogue has entered a zone of turbulence, and disinformation is rushing into the breach. On social media, grave accusations are being leveled against the French force Barkhane, deployed in the Sahel, gratuitous accusations designed to feed anti-French sentiment. According to the author of the tweet flagged today, France is supposedly equipping the jihadists with weapons and vehicles. That, at any rate, is what Nathalie Yamb claimed on Twitter this week, while mocking France's delivery of riot-control equipment to the Malian authorities. Nothing, however, substantiates these accusations. To learn more, we went through the many posts of the author, who is very active online. She has more than 168,000 followers on Twitter and 127,000 on her YouTube channel. Gratuitous, hammered-home accusations In fact, no tangible evidence is ever offered on this subject. In one of her videos, Nathalie Yamb deplores Barkhane's failure to rid the Sahel of jihadist groups (a point we will not contest here), but she goes much further, accusing France of financing the terrorists it is fighting. And instead of providing proof of her accusations, she develops a conspiratorial line of argument that inverts cause and effect: in her telling, jihadists proliferate in the Sahel because of the presence of French troops, not the other way around. Conclusion: the French must leave Mali. The Swiss-Cameroonian activist hammers this home relentlessly, unmoved by the outcry from the Malian executive over the announced reduction of Barkhane's troop strength, with transitional Prime Minister Choguel Maïga accusing France of an "abandonment in mid-flight." Disinformation and a call for mercenaries In reality, Nathalie Yamb makes no secret of her Russian connections. Proud to present herself as "the lady of Sochi," she pinned to her Twitter feed her appearance at the Russia-Africa forum in Sochi in the fall of 2019, where she declared, "we want the dismantling of the French military bases," without it being entirely clear whom the "we" refers to. Nathalie Yamb styles herself a pan-Africanist "free of all tutelage," as her Twitter biography puts it, but seems above all to defend Russian interests. In another video, posted online on September 26, she can be heard openly advocating the intervention of the Russian mercenaries of the Wagner company, which she presents as "the company of Yevgeny Prigozhin, close to Vladimir Putin." Undisguised Russian propaganda These slogans, widely relayed on social networks, are also heard at demonstrations, sometimes accompanied by anti-French hate speech, and part of the street embraces them. For Nathalie Yamb is not alone in propagating this kind of pro-Russian, anti-French narrative. The businessman Yevgeny Prigozhin recruits, through the AFRIC network, agents of influence in Moscow's pay. The acronym AFRIC stands for Association for Free Research and International Cooperation. Among its most prominent members is Alexander Malkevich, known for having set up one of the outfits involved in the disinformation campaigns aimed at propelling Donald Trump to the presidency of the United States in 2016. 
The investigative work of Michael Weiss and Pierre Vaux, "The Company You Keep: Yevgeny Prigozhin's Influence Operations in Africa," lays bare the network organized by the Russian businessman and presents photos of figures including, notably, Nathalie Yamb alongside Alexander Malkevich, who has himself been flagged for his activities in the troll factories of the IRA, the Internet Research Agency. The Russian information strategy is perfectly clear: exploit resentment against the former colonial power to the benefit of installing the Russian company's men, a company that, incidentally, the Kremlin does not officially recognize.
In this special episode of Is that a fact? we explore why some people remain hesitant to get one of the COVID-19 vaccines, despite growing evidence that inoculation is the key to getting our lives and the economy back on track. We wanted to find out just how much misinformation might be to blame for that reluctance or if genuine concerns about the safety and effectiveness of the vaccines might be giving people pause. To answer this question and more, we spoke with Dr. Erica Pan, the deputy director of the California Department of Public Health Center for Infectious Diseases, and Brandy Zadrozny, a senior reporter for NBC News, who covers misinformation, extremism and the internet. Dr. Pan has served as interim health officer and director of the Division of Communicable Disease Control and Prevention at the Alameda County Public Health Department since 2011 and was director of public health emergency preparedness and response at the San Francisco Department of Public Health in 2011. She was also director of the Bioterrorism and Infectious Disease Emergencies Unit at the San Francisco Department of Public Health from 2004 to 2010 and was a medical epidemiologist trainee there from 2003 to 2004. Dr. Pan earned a Doctor of Medicine degree and a Master of Public Health degree from the Tufts University School of Medicine. Before joining NBC News, Zadrozny was a senior researcher and writer at The Daily Beast for five years, where she broke stories about Russia’s Internet Research Agency, as well as President Donald Trump and some of his associates, but she started out as a teacher and librarian. For more information on combating COVID-19 vaccine misinformation, visit newslit.org/coronavirus. There you’ll find links to reliable sources of information on the virus and vaccines, articles addressing the full spectrum of vaccine hesitancy, sites that debunk many of the myths surrounding the shots and the virus and more.
This episode we're meeting Paul Kolbe for a chat about cyber security and much more! Some subjects we touch on are the SolarWinds cyber attack, the Internet Research Agency, the Stuxnet attack on Iran, and the possible vulnerabilities of renewable energy. Paul served for 25 years in the CIA’s Directorate of Operations in a variety of foreign and domestic roles, including as Chief of Station, Chief/Central Eurasia Division, and Balkans Group Chief. His overseas assignments included operational and leadership roles in the former Soviet Union, the Balkans, Southeast Asia, Southern Africa, and Central Europe. He was a member of the Senior Intelligence Service and is a recipient of the Intelligence Medal of Merit and the Distinguished Career Intelligence Medal. Mr. Paul Kolbe is the Director of the Intelligence Project at the Belfer Center. Books recommended: We Are Bellingcat; The Billion Dollar Spy. Rotaract Talks is a project by Rotaract Sweden. Please remember to like, subscribe, and share this podcast with your friends and fellow Rotaract and Rotary members. If you want to support the podcast, please consider doing so on Patreon: https://www.patreon.com/rotaracttalks Music credit: Blues Sting by Alexander Nakarada Link: https://filmmusic.io/song/4943-blues-sting License: http://creativecommons.org/licenses/by/4.0/ Support the show (https://www.patreon.com/rotaracttalks)
A very special episode based on the Absolutely Positively True Story of how the Internet Research Agency used Pokemon Go to troll America and help get Donald Trump elected President. To read how they actually did it, check out this article: https://money.cnn.com/2017/10/12/media/dont-shoot-us-russia-pokemon-go/index.html
In this episode I speak about the trumpito-incited coup at the Capitol and the final season of Mr. Robot. The Spun Today Podcast is a Podcast that is anchored in Writing, but unlimited in scope. Give it a whirl. Twitter: https://twitter.com/spuntoday Instagram: https://www.instagram.com/spuntoday/ Website: http://www.spuntoday.com/home Newsletter: http://www.spuntoday.com/subscribe Links referenced in this episode: Renée DiResta Links: http://www.reneediresta.com/ The Tactics & Tropes of the Internet Research Agency: http://www.reneediresta.com/ira-report-4e8d0ff684.pdf Online Disinformation Could Spark Real World Wars | Joe Rogan and Renée DiResta https://youtu.be/62Hc82c4bRg Statement for the record from Renée DiResta, Director of Research, New Knowledge (to the Senate Intelligence Committee): https://www.intelligence.senate.gov/sites/default/files/documents/os-rdiresta-080118.pdf 60 Minutes Piece: What is Section 230 and why do people want it repealed? https://youtu.be/2A2e35sIelM Mr. Robot: https://www.imdb.com/title/tt4158110/?ref_=ttfc_fc_tt Check out all of the Spun Today Merch, and other ways to help support this show! https://www.spuntoday.com/support Check out my Books: Make Way for You – Tips for getting out of your own way & FRACTAL – A Time Travel Tale http://www.spuntoday.com/books/ (e-Book & Paperback are now available). Fill out my Spun Today Questionnaire if you’re passionate about your craft. I’ll share your insight and motivation on the Podcast: http://www.spuntoday.com/questionnaire/ Shop on Amazon using this link, to support the Podcast: http://www.amazon.com//ref=as_sl_pc_tf_lc?&tag=sputod0c-20&camp=216797&creative=446321&linkCode=ur1&adid=104DDN7SG8A2HXW52TFB&&ref-refURL=http%3A%2F%2Fwww.spuntoday.com%2Fcontact%2F Shop on iTunes using this link, to support the Podcast: https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewTop?genreId=38&id=27820&popId=42&uo=10 Shop at the Spun Today store for Mugs, T-Shirts and more: https://viralstyle.com/store/spuntoday/tonyortiz Outro Background Music: https://www.bensound.com Spun Today Logo by: http://pcepeda.com/ Sound effects are credited to: http://www.freesfx.co.uk Listen on: iTunes | Spotify | Stitcher | Pocket Casts | Google Play | YouTube
The book argues that the "systematic erosion" of the information ecosystem in the West began long ago and has been a point of weakness since the Cold War between the West and the Soviet Union, from the 1950s to the 1980s. Technological developments over the past thirty years have provided the platforms that leaders in Russia and elsewhere need to spread disinformation. And rather than democracy and freedom protecting people from disinformation and forgery, they can leave them more exposed to it than those living in closed societies where a central authority monopolizes information. The book argues this is one of the most important issues liberal democracies around the world will have to grapple with over the next ten years. When artificial intelligence becomes powerful enough to generate fully "synthetic media" without any human intervention, that will have major implications for how content is produced, how we communicate, and how we interpret the world. In a world of deepfakes, telling the fake from the real becomes an extremely difficult task. Written and presented by Omar Al-Saadi. Reviewed by Humam Al-Khatib. Support the program on Patreon: https://www.patreon.com/FiAlm3na Twitter: @omarksaadi Spotify: https://open.spotify.com/show/2C6PHpp... Apple Podcasts: https://podcasts.apple.com/us/podcast... The book: https://www.goodreads.com/en/book/show/54828853-deep-fakes-and-the-infocalypse AI-generated images: https://thispersondoesnotexist.com/ The fake Obama video on YouTube: https://www.youtube.com/watch?v=cQ54GDm1eL0&t=1s The Russian Internet Research Agency: https://en.wikipedia.org/wiki/Internet_Research_Agency
Everyone who uses Facebook, Google, and Twitter has probably noticed the disappearance of posts and the appearance of labels, especially during the 2020 election season. In this episode, hear the highlights from six recent House and Senate hearings where executives from the social media giants and experts on social media testified about the recent changes. The incoming 117th Congress is promising to make new laws that will affect our social media experiences; these conversations are where the new laws are being conceived. Please Support Congressional Dish – Quick Links Click here to contribute monthly or a lump sum via PayPal Click here to support Congressional Dish via Patreon (donations per episode) Send Zelle payments to: Donation@congressionaldish.com Send Venmo payments to: @Jennifer-Briney Send Cash App payments to: $CongressionalDish or Donation@congressionaldish.com Use your bank’s online bill pay function to mail contributions to: 5753 Hwy 85 North, Number 4576, Crestview, FL 32536 Please make checks payable to Congressional Dish Thank you for supporting truly independent media! Recommended Episodes CD196: The Mueller Report CD186: National Endowment for Democracy Articles/Documents Article: President Trump’s latest claims about Wis. absentee ballots debunked by election officials WTMJ-TV Milwaukee, November 24, 2020 Article: Don’t Blame Section 230 for Big Tech’s Failures. Blame Big Tech. By Elliot Harmon, Electronic Frontier Foundation, November 16, 2020 Article: Biden, the Media and CIA Labeled the Hunter Biden Emails "Russian Disinformation." There is Still No Evidence. By Glenn Greenwald, November 12, 2020 Article: Ad Library - Spending Tracker: US 2020 Presidential Race Facebook, November 3, 2020 Article: What’s the deal with the Hunter Biden email controversy? By Kaelyn Forde and Patricia Sabga, Aljazeera, October 30, 2020 Article: Congress Fails to Ask Tech CEOs the Hard Questions By Elliot Harmon and Joe Mullin, Electronic Frontier Foundation, October 29, 2020 Article: With the Hunter Biden Expose, Suppression is a Bigger Scandal Than The Actual Story, by Matt Taibbi, TK News, October 24, 2020 Article: Read the FBI's letter to Sen. 
Ron Johnson The Washington Post, October 20, 2020 Article: DNI Ratcliffe: Russia disinformation not behind published emails targeting Biden; FBI reviewing, by Kevin Johnson, USA Today, October 19, 2020 Article: Twitter changes its hacked materials policy in wake of New York Post controversy By Natasha Lomas, Tech Crunch, October 16, 2020 Article: Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad By Emma-Jo Morris and Gabrielle Fonrouge, New York Post, October 14, 2020 Article: The Decline of Organic Facebook Reach & How to Adjust to the Algorithm By Sophia Bernazzani, HubSpot, May 3, 2020 Article: Facebook launches searchable transparency library of all active ads By Josh Constine, TechCrunch, March 28, 2019 Article: MAERES Alumna Nina Jankowicz Awarded Fulbright-Clinton Fellowship to Ukraine SFS, Center for Eurasian, Russian and East European Studies, June 21, 2016 Article: Organic Reach on Facebook: Your Questions Answered By Brian Boland, Facebook for Business, June 5, 2014 Article: NSA slides explain the PRISM data-collection program The Washington Post, October 4, 2013 Additional Resources General Guidelines and policies: Distribution of hacked materials policy, Twitter, October 2020 Business Help Center: Fact-Checking on Facebook Facebook Business Business Help Center: Rating Options for Fact-Checkers Facebook Business Commit to transparency — sign up for the International Fact-Checking Network's code of principles, IFCN Code of Principles Section 230 of the Communications Decency Act, Electronic Frontier Foundation Mission Statement: OUR MISSION Open Markets About News Media Alliance Leadership News Corp Clint Watts Foreign Policy Research Institute About FPRI Foreign Policy Research Institute Nina Jankowicz Wiczipedia Sound Clip Sources Hearing: Breaking the News: Censorship, Suppression and the 2020 Election, Senate Judiciary Committee, November 17, 2020 Witnesses: Jack Dorsey, Twitter, Inc. Mark Zuckerberg, Facebook, Inc. Transcript: [30:50] Jack Dorsey: We were called here today because of an enforcement decision we made against the New York Post, based on a policy we created in 2018 to prevent Twitter from being used to spread hacked materials. This resulted in us blocking people from sharing a New York Post article, publicly or privately. We made a quick interpretation, using no other evidence, that the materials in the article were obtained through hacking, and according to our policy, we blocked them from being spread. Upon further consideration, we admitted this action was wrong and corrected it within 24 hours. We informed the New York Post of our error and policy update and how to unlock their account by deleting the original violating tweet, which freed them to tweet the exact same content and news article again. They chose not to, instead insisting we reverse our enforcement action. We did not have a practice around retroactively overturning prior enforcements; this incident demonstrated that we needed one, and so we created one we believe is fair and appropriate. [35:13] Mark Zuckerberg: At Facebook, we took our responsibility to protect the integrity of this election very seriously. In 2016, we began to face new kinds of threats and after years of preparation, we were ready to defend against them. We built sophisticated systems to protect against election interference that combined artificial intelligence, significant human review, and partnerships with the intelligence community, law enforcement and other tech platforms. 
We've taken down more than 100 networks of bad actors that were trying to coordinate and interfere globally. We established a network of independent fact checkers that covers more than 60 languages. We made political advertising more transparent on Facebook than anywhere else, including TV, radio and email. And we introduced new policies to combat voter suppression and misinformation. Still, the pandemic created new challenges: how to handle misinformation about COVID and voting by mail, how to prepare people for the reality that results would take time, and how to handle it if someone prematurely declared victory or refused to accept the result. So in September, we updated our policies again to reflect these realities of voting in 2020 and make sure that we were taking precautions given these unique circumstances. We worked with local election officials to remove false claims about polling conditions that might lead to voter suppression. We partnered with Reuters and the National Election Pool to provide reliable information about results. We attached voting information to posts by candidates on both sides and additional context to posts trying to delegitimize the outcome. We locked down new political ads in the week before the election to prevent misleading claims from spreading when they couldn't be rebutted. We strengthened our enforcement against militias and conspiracy networks like QAnon to prevent them from using our platforms to organize violence or civil unrest. Altogether, I believe this was the largest election integrity effort by any private company in recent times. [40:50] Jack Dorsey: We have transparency around our policies; we do not have transparency around how we operate content moderation, the rationale behind it, the reasoning. And as we look forward, we have more and more of our decisions and our operations moving to algorithms, which have a difficult time explaining why they make decisions, or bringing transparency around those decisions. And that is why we believe that we should have more choice in how these algorithms are applied to our content, whether we use them at all, so we can turn them on or off and have clarity around the outcomes that they're projecting and how they affect our experience. [45:39] Mark Zuckerberg: We work with a number of independent organizations that are accredited by the Poynter Institute. And they include Reuters, the Associated Press, Agence France-Presse, USA Today, factcheck.org, Science Feedback, PolitiFact, Check Your Fact, Lead Stories and The Dispatch in the United States. [48:54] Sen. Lindsey Graham (SC): Do both of you support a change to 230? Reform of Section 230? Mark Zuckerberg: Senator, I do. Sen. Lindsey Graham (SC): Mr. Dorsey? Jack Dorsey: Yes. Sen. Lindsey Graham (SC): Thank you. [54:10] Sen. Richard Blumenthal (CT): How many times is Steve Bannon allowed to call for the murder of government officials before Facebook suspends his account? Mark Zuckerberg: Senator, as you say, the content in question did violate our policies and we took it down. Having a content violation does not automatically mean your account gets taken down. And the number of strikes varies depending on the amount and type of offense. So if people are posting terrorist content or child exploitation content, then the first time they do it, we will take down their account. For other things, it's multiple; I'd be happy to follow up afterwards. We try not to disclose these... Sorry, I didn't hear that. Sen. 
Richard Blumenthal (CT): Will you commit to taking down that account? Steve Bannon? Mark Zuckerberg: Senator, no, that's not what our policies would suggest that we should do in this case. [1:07:05] Jack Dorsey: What we saw and what the market told us was that people would not put up with abuse, harassment and misleading information that would cause offline harm, and they would leave our service because of it. So our intention is to create clear policy, clear enforcement that enables people to feel that they can express themselves on our service, and ultimately trust it. Sen. John Cornyn (TX): So it was a business decision. Jack Dorsey: It was a business decision. [2:56:34] Mark Zuckerberg: We do coordinate on and share signals on security-related topics. So for example, if there is a signal around a terrorist attack or around child exploitation imagery or around a foreign government creating an influence operation, that is an area where the companies do share signals about what they see. But I think it's important to be very clear that that is distinct from the content moderation policies that we or the other companies have, where once we share intelligence or signals between the companies, each company makes its own assessment of the right way to address and deal with that information. [3:59:10] Sen. Mazie Hirono (HI): What are both of you prepared to do regarding Donald Trump's use of your platforms after he stops being president? Will he still be deemed newsworthy? And will he still get to use your platform to spread this misinformation? Mark Zuckerberg: Senator, let me clarify my last answer. We are also having academics study the effectiveness of all of our election measures, and they'll be publishing those results publicly. In terms of President Trump and moving forward: there are a small number of policies where we have exceptions for politicians, under the principle that people should be able to hear what their elected officials and candidates for office are saying. But by and large, the vast majority of our policies have no newsworthiness or political exception. So if the President or anyone else is spreading hate speech, or inciting violence, or posting content that delegitimizes the election or valid forms of voting, those will receive the same treatment as anyone else saying those things, and that will continue to be the case. Sen. Mazie Hirono (HI): Remains to be seen. Jack Dorsey: So we do have a policy around public interest, where for global leaders we do make exceptions: if a tweet violates our terms of service, we leave it up behind an interstitial, and people are not allowed to share it more broadly. So a lot of the sharing is disabled, with the exception of quoting it so that you can add your own conversation on top of it. So if an account suddenly is not a world leader anymore, that particular policy goes away. [4:29:35] Sen. Marsha Blackburn (TN): Do you believe it's Facebook's duty to comply with state-sponsored censorship so it can keep operating, doing business and selling ads in that country? Mark Zuckerberg: Senator, in general we try to comply with the laws in every country where we operate and do business. Hearing: BIG TECH AND SECTION 230 IMMUNITY, Senate Commerce, Science and Transportation Committee, October 28, 2020 Witnesses: Jack Dorsey, Twitter, Inc. Sundar Pichai, Alphabet Inc. Mark Zuckerberg, Facebook, Inc. Transcript: [10:10] Sen. 
Roger Wicker (MS): In policing conservative sites than its own YouTube platform for the same types of offensive and outrageous claims. [45:50] Jack Dorsey: The goal of our labeling is to provide more context, to connect the dots so that people can have more information, so they can make decisions for themselves. [46:20] Sen. Roger Wicker (MS): I have a tweet here from Mr. Ajit Pai. Mr. Ajit Pai is the chairman of the Federal Communications Commission. And he recounts some four tweets by the Iranian dictator, Ayatollah Ali Khamenei, on which Twitter did not place a public label. All four of them glorify violence. The first tweet says this, and I quote each time: 'the Zionist regime is a deadly cancerous growth and a detriment to the region, it will undoubtedly be uprooted and destroyed.' That's the first tweet. The second tweet: 'The only remedy until the removal of the Zionist regime is firm armed resistance,' again left up without comment by Twitter. The third: 'the struggle to free Palestine is jihad in the way of God.' I quote that in part for the sake of time. And number four: 'we will support and assist any nation or any group anywhere who opposes and fights the Zionist regime.' I would simply point out that these tweets are still up, Mr. Dorsey. And how is it that they are acceptable to be there? I'll ask unanimous consent to enter this tweet from Ajit Pai in the record at this point. That'll be done, without objection. How, Mr. Dorsey, is that acceptable based on your policies at Twitter? Jack Dorsey: We believe it's important for everyone to hear from global leaders, and we have policies around world leaders. We want to make sure that we are respecting their right to speak and to publish what they need. But if there's a violation of our terms of service, we want to label it and... Sen. Roger Wicker (MS): They're still up. Did they violate your terms of service, Mr. Dorsey? Jack Dorsey: We did not find those to violate our terms of service, because we consider them saber rattling, which is part of the speech of world leaders in concert with other countries. Speech against a country's own people, or a country's own citizens, we believe is different and can cause more immediate harm. [59:20] Jack Dorsey: We don't have a policy against misinformation. We have a policy against misinformation in three categories, which are manipulated media; public health, specifically COVID; and civic integrity, election interference and voter suppression. [1:39:05] Sen. Brian Schatz (HI): What we are seeing today is an attempt to bully the CEOs of private companies into carrying out a hit job on a presidential candidate, by making sure that they push out foreign and domestic misinformation meant to influence the election. To our witnesses today: you and other tech leaders need to stand up to this immoral behavior. The truth is that because some of my colleagues accuse you, your companies and your employees of being biased or liberal, you have institutionally bent over backwards and overcompensated. You've hired Republican operatives, hosted private dinners with Republican leaders, and in contravention of your terms of service, given special dispensation to right-wing voices, and even throttled progressive journalism. Simply put, the Republicans have been successful in this play. [1:47:15] Jack Dorsey: This one is a tough one to actually bring transparency to. Explainability in AI is a field of research, but is far out. 
And I think a better opportunity is giving people more choice around the algorithms they use, including to turn off the algorithms completely, which is what we're attempting to do. [2:15:00] Sen. Jerry Moran (KS): Whatever the numbers are, you indicate that they are significant. It's an enormous amount of money and an enormous amount of employee time and contract labor time in dealing with moderation of content. These efforts are expensive. And I would highlight for my colleagues on the committee that they will not be any less expensive, perhaps less in scale, but not less in cost, for startups and small businesses. And as we develop our policies in regard to this topic, I want to make certain that entrepreneurship, startup businesses and small business are considered in what it would cost in their efforts to meet the kind of standards to operate in this sphere. [2:20:40] Sen. Ed Markey (MA): The issue is not that the companies before us today are taking too many posts down. The issue is that they're leaving too many dangerous posts up. In fact, they're amplifying harmful content so that it spreads like wildfire and torches our democracy. [3:04:00] Sen. Mike Lee (UT): Between the censorship of conservative and liberal points of view, there's an enormous disparity. Now, you have the right, I want to be very clear about this, you have every single right to set your own terms of service and to interpret them and to make decisions about violations. But given the disparate impact of who gets censored on your platforms, it seems that you're either, one, not enforcing your terms of service equally, or alternatively, two, writing your standards to target conservative viewpoints. [3:15:40] Sen. Ron Johnson (WI): Okay, for both Mr. Zuckerberg and Mr. Dorsey, who censored New York Post stories or throttled them back: did either one of you have any evidence that the New York Post story is part of Russian disinformation, or that those emails aren't authentic? Did anybody have any information whatsoever that they're not authentic or that they are Russian disinformation? Mr. Dorsey? Jack Dorsey: We don't. Sen. Ron Johnson (WI): So why would you censor it? Why did you prevent that from being disseminated on your platform that is supposed to be for the free expression of ideas, and particularly true ideas... Jack Dorsey: We believed it fell afoul of our hacked materials policy. We judged... Sen. Ron Johnson (WI): They weren't hacked. Jack Dorsey: We judged in the moment that it looked like it was hacked material. Sen. Ron Johnson (WI): You were wrong. Jack Dorsey: And we updated our policy and our enforcement within 24 hours. Sen. Ron Johnson (WI): Mr. Zuckerberg? Mark Zuckerberg: Senator, as I testified before, we relied heavily on the FBI's intelligence and alerts, both through their public testimony and private briefings. Sen. Ron Johnson (WI): Did the FBI contact you, sir, to say that story was false? Mark Zuckerberg: Senator, not about that story specifically. Sen. Ron Johnson (WI): Why did you throttle it back? Mark Zuckerberg: They alerted us to be on heightened alert around a risk of hack and leak operations around a release and dump of information. And to be clear on this, we didn't censor the content. We flagged it for fact checkers to review. And pending that review, we temporarily constrained its distribution to make sure that it didn't spread wildly while it was being reviewed. But it's not up to us either to determine whether it's Russian interference, nor whether it's true. 
We rely on the fact checkers to do that.

[3:29:30] Sen. Rick Scott (FL): It's becoming obvious that your companies are unfairly targeting conservatives. That's clearly the perception today. Facebook is actively targeting ads by conservative groups ahead of the election, either removing the ads completely or adding their own disclosure if they claim the ads didn't pass their fact-check system.

[3:32:40] Sen. Rick Scott (FL): You can't just pick and choose which viewpoints are allowed on your platform and expect to keep the immunity granted by Section 230.

News Clip: Adam Schiff on CNN, CNN, Twitter, October 16, 2020

Hearing: MISINFORMATION, CONSPIRACY THEORIES, AND 'INFODEMICS': STOPPING THE SPREAD ONLINE, House Permanent Select Committee on Intelligence, October 15, 2020

Watch on Youtube
Hearing Transcript

Witnesses:
Dr. Joan Donovan: Research Director at the Shorenstein Center on Media, Politics, and Public Policy at Harvard Kennedy School
Nina Jankowicz: Disinformation Fellow at the Wilson Center
Cindy Otis: Vice President of the Alethea Group
Melanie Smith: Head of Analysis, Graphika Inc.

Transcript:

[41:30] Rep. Jim Himes (CT): And I should acknowledge that we're pretty careful. We understand that we shouldn't be in the business of fighting misinformation; that's probably inconsistent with the First Amendment. So what do we do? We ask that it be outsourced to people that we otherwise are pretty critical of, like Mark Zuckerberg and Jack Dorsey. We say, you do it, which strikes me as a pretty lame way to address what may or may not be a problem.

[42:00] Rep. Jim Himes (CT): Ms. Jankowicz said that misinformation is dismantling democracy. I'm skeptical of that, and that will be my question. What evidence is out there that this is dismantling democracy? I don't mean that millions of people see QAnon. I actually want to see the evidence that people are seeing this information and are, in a meaningful way, in a material way, dismantling our democracy through violence or through political organizations. Because if we're going to go down that path, I need something more than eyeballs. So I need some evidence for how this is dismantling our democracy. And secondly, if you persuade me that we're dismantling our democracy, how do we get in the business of figuring out who should define what misinformation or disinformation is?

Nina Jankowicz: To address your first question, related to evidence of the dismantling of democracy, there are two news stories that I think point to this from the last couple of weeks alone. The first is related to the kidnapping plot against Michigan Governor Gretchen Whitmer, and the social media platforms played a huge role in allowing that group to organize. It seeded the information that led them to organize. And frankly, as a woman online who has been getting harassed a lot lately with sexualized and gendered disinformation, I am very acutely aware of how those threats that are online can transfer into real-world violence. And that, make no mistake, is meant to keep women and minorities from not only participating in the democratic process by exercising our votes, but also keeping us from public life. So that's one big example. But there was another example just recently, from a Channel 4 documentary in the UK, that looked at how the Trump campaign used Cambridge Analytica data to selectively target black voters with voter suppression ads during the 2016 election.
Again, it's affecting people's participation. It's not just about fake news stories on the internet. In fact, a lot of the best disinformation is grounded in a kernel of truth. And in my written testimony, I go through a couple of other examples of how online action has led to real-world action. This isn't something that is just staying on the internet; it is increasingly in real life.

Rep. Jim Himes (CT): I don't have a lot of time. Do you think that both examples that you offered up, the plot to kidnap the Governor of Michigan and your other example, pass the but-for test? I mean, this country probably got into the Spanish-American War over 130 years ago because of the good works of William Randolph Hearst. So how do we... we've had misinformation and yellow journalism and terrible media and voter suppression forever. And I understand that these media platforms have scale that William Randolph Hearst didn't have. But are you sure that both of those examples pass the but-for test, that they wouldn't have happened without the social media misinformation?

Nina Jankowicz: I believe they do, because they allow the organization of these groups without any oversight, and they allow the targeting of these messages to the groups and people that are going to find them the most vulnerable and are most likely to take action against them. And that's what our foreign adversaries do. And increasingly, it's what people within our own country are using to organize violence against the democratic participation of many of our fellow citizens.

Rep. Jim Himes (CT): Okay, well, I'm out of time. I would love to continue this conversation and pursue what you mean by groups being formed, quote, 'without oversight.' That's language I'd like to better understand, but I'm out of time. I would like to continue this conversation into: well, if this is the problem that you say it is, what do we actually do about it?

Hearing: ONLINE PLATFORMS AND MARKET POWER, PART 2: INNOVATION AND ENTREPRENEURSHIP, Committee on the Judiciary: Subcommittee on Antitrust, Commercial, and Administrative Law, July 16, 2019

Watch on Youtube

Witnesses:
Adam Cohen: Director of Economic Policy at Google
Matt Perault: Head of Global Policy Development at Facebook
Nate Sutton: Associate General Counsel for Competition at Amazon
Kyle Andeer: Vice President for Corporate Law at Apple
Timothy Wu: Julius Silver Professor of Law at Columbia Law School
Dr. Fiona Scott Morton: Theodore Nierenberg Professor of Economics at Yale School of Management
Stacy Mitchell: Co-Director at the Institute for Local Self-Reliance
Maureen Ohlhausen: Partner at Baker Botts LLP
Carl Szabo: Vice President and General Counsel at NetChoice
Morgan Reed: Executive Director at the App Association

Transcript:

[55:15] Adam Cohen: Congresswoman, we use a combination of automated tools that can recognize copyrighted material that creators upload, instantaneously discover it, and keep it from being seen on our platforms.

[1:16:00] Rep. David Cicilline (RI): Do you use consumer data to favor Amazon products? Before you answer that: analysts estimate that between 80 and 90% of sales go to the Amazon Buy Box. So you collect all this data about the most popular products and where they're selling, and you're saying you don't use that in any way to change an algorithm to support the sale of Amazon-branded products?
Nate Sutton: Our algorithms, such as the Buy Box, are aimed at predicting what customers want to buy, and we apply the same criteria whether you're a third-party seller or Amazon, because we want customers to make the right purchase, regardless of whether it's a seller or Amazon.

Rep. David Cicilline (RI): But the best purchase to you is an Amazon product.

Nate Sutton: No, that's not true.

Rep. David Cicilline (RI): So you're telling us, under oath, that Amazon does not use any of that data collected, with respect to what is selling and where, to inform the decisions you make, or to change algorithms to direct people to Amazon products and prioritize Amazon and de-prioritize competitors?

Nate Sutton: The algorithms are optimized to predict what customers want to buy, regardless of the seller. We apply the same criteria, and with respect to popularity, that's public data: on each product page, we provide the ranking of each product.

[3:22:50] Dr. Fiona Scott Morton: As is detailed in the report that I submitted as my testimony, there are a number of characteristics of platforms that tend to drive them toward concentrated markets: very large economies of scale, and consumers exacerbate this with their behavioral biases. We don't scroll down to the second page; we accept defaults; we follow the framing the platform gives us instead of searching independently. And what that does is make it very hard for small companies to grow and for new ones to get traction against the dominant platform. And without the threat of entry from entrepreneurs and growth from existing competitors, the dominant platform doesn't have to compete as hard. If it's not competing as hard, then there are several harms that follow from that. One is higher prices for advertisers; many of these platforms are advertising supported. Then there are higher prices to consumers, who may think that they're getting a good deal by paying a price of zero, but the competitive price might well be negative: consumers might well be able to be paid for using these platforms in a competitive market. Other harms include low quality in the form of less privacy, more advertising, and more exploitative content that consumers can't avoid because, as Tim just said, there isn't anywhere else to go. And lastly, without competitive pressure, innovation is lessened, and in particular it's channeled in the direction the dominant firm prefers, rather than being creatively spread across directions chosen by entrants. And this is what we learned from AT&T and IBM and Microsoft: when the dominant firm ceases to control innovation, there's a flowering, and it's very creative and market driven. So the solution to this problem of insufficient competition is complementary steps forward in both antitrust and regulation. Antitrust must recalibrate the balance it strikes between the risk of over-enforcement and under-enforcement. The evidence now shows we've been under-enforcing for years and consumers have been harmed.

[3:22:50] Stacy Mitchell: I hope the committee will consider several policy tools as part of this investigation. In particular, we very much endorse the approach that Congress took with regard to the railroads: if you operate essential infrastructure, you can't also compete with the businesses that rely on that infrastructure.

[3:45:00] Morgan Reed: Here on the table, I have a copy of OmniPage Pro. This was the software you bought if you needed to scan documents.
If you wanted to turn them into text, you could look at it in a word processor. I've also got this great review from PC World; they loved it back in 2005. But the important fact here in this review is that it says the street price of this software in 2005 was $450. Now, right here, I've got an app from a company called Readdle that is nearly the same product, has a bunch of features that this one doesn't, and it's $6. Basically, consumers now pay less than 1% of what they used to pay for some of the same capability. And what's even better, even though I love the product from Readdle, is that there are dozens of competitors in the app space. So when you look at it from that perspective, consumers are getting a huge win. How have platforms made this radical drop in price possible? Simply put, they've provided three things: a trusted space, reduced overhead, and nearly instant access for my developers to a global marketplace with billions of customers. Before the platforms, to get your software onto a retail store shelf, companies had to spend years and thousands of dollars to get to the point where a distributor would handle their product. Then you'd agree to a cut of sales revenue, write a check for upfront marketing, agree to refund the distributor the cost of any unsold boxes, and then spend tens of thousands of dollars to buy an endcap. Digging a little bit into this: I don't know how many of you know or are aware that the products you see on your store shelf or in the Sunday flyer aren't there because the manager thought it was a cool product. Those products are displayed at the end of an aisle, or endcap, because the software developer or consumer goods company literally pays for the shelf space. In fact, for many retailers, the sale of floor space and flyers makes up a huge chunk of the profitability of their store. And none of this takes into consideration printing boxes, manuals, and CDs, dealing with credit cards if you go direct, or translation services and customs authorities if you want to sell abroad. In the 1990s, it cost a million dollars to start up a software company. Now it's $100,000 in sweat equity. And thanks to these changes, the average cost of consumer software has dropped from $50 to $3. For developers, our cost to market has dropped enormously, and the size of our market has expanded globally.

[3:48:55] Stacy Mitchell: I've spent a lot of time interviewing and talking with independent retailers and manufacturers of all sizes. Many of them are very much afraid of speaking out publicly because they fear retaliation. But what we consistently hear is that Amazon is the biggest threat to their businesses. We just did a survey of about 550 independent retailers nationally. Amazon ranked number one in terms of being what they said was the biggest threat to their business, above rising health care costs, access to capital, government red tape, anything else you can name. Among those who are actually selling on the platform, only 7% reported that it was actually helping their bottom line. Amazon has a kind of godlike view of a growing share of our commerce, and it uses the data that it gathers to advantage its own business and its own business interests in lots of ways.
A lot of this, as I said, comes from its ability to leverage the interplay between these different business lines to maximize its advantage, whether that's promoting its own product because that's lucrative, or using its manufacture of a product to squeeze a seller or vendor into giving it bigger discounts.

[3:53:15] Rep. Kelly Armstrong (ND): I come from a very rural area; the closest thing to what you would consider a big box store is in Minneapolis or Denver. So when we're talking about competition and all of this, I also think we've got to remember that at no point in time, from my house in Dickinson, North Dakota, have I had more access to more diverse and cheap consumer products. I mean, things that often would require a plane ticket or a nine-hour car ride to buy can now be brought to our house. So I think when we're talking about consumers, we need to remember that side of it, too.

Hearing: EMERGING TRENDS IN ONLINE FOREIGN INFLUENCE OPERATIONS: SOCIAL MEDIA, COVID-19, AND ELECTION SECURITY, Permanent Select Committee on Intelligence, June 18, 2020

Watch on Youtube
Hearing transcript

Witnesses:
Nathaniel Gleicher: Head of Security Policy at Facebook
Nick Pickles: Director of Global Public Policy Strategy and Development at Twitter
Richard Salgado: Director for Law Enforcement and Information Security at Google

Transcript:

[19:16] Nathaniel Gleicher: Facebook has made significant investments to help protect the integrity of elections. We now have more than 35,000 people working on safety and security across the company, with nearly 40 teams focused specifically on elections and election integrity. We're also partnering with federal and state governments, other tech companies, researchers, and civil society groups to share information and stop malicious actors. Over the past three years, we've worked to protect more than 200 elections around the world. We've learned lessons from each of these, and we're applying these lessons to protect the 2020 election in November.

[21:58] Nathaniel Gleicher: We've also been proactively hunting for bad actors trying to interfere with the important discussions about injustice and inequality happening around our nation. As part of this effort, we've removed isolated accounts seeking to impersonate activists, and two networks of accounts tied to organized hate groups that we've previously banned from our platforms.

[26:05] Nick Pickles: Firstly, Twitter shouldn't determine the truthfulness of tweets. And secondly, Twitter should provide context to help people make up their own minds in cases where the substance of a tweet is disputed.

[26:15] Nick Pickles: We prioritize interventions regarding misinformation based on the highest potential for harm, and we're currently focused on three main areas of content: synthetic and manipulated media, elections and civic integrity, and COVID-19.

[26:30] Nick Pickles: Where content does not break our rules and warrant removal, in these three areas we may label tweets to help people come to their own views by providing additional context. These labels may link to a curated set of tweets posted by people on Twitter, including factual statements, counterpoint opinions and perspectives, and the ongoing public conversation around the issue. To date, we've applied these labels to thousands of tweets around the world across these three policy areas.

[31:10] Richard Salgado: In Search, ranking algorithms are an important tool in our fight against disinformation.
Ranking elevates the information that our algorithms determine is the most authoritative above information that may be less reliable. Similarly, our work on YouTube focuses on identifying and removing content that violates our policies, and elevating authoritative content when users search for breaking news. At the same time, we find and limit the spread of borderline content that comes close to, but just stops short of, violating our policies.

[53:28] Rep. Jackie Speier (CA): Mr. Gleicher, you may or may not know that Facebook is headquartered in my congressional district. I've had many conversations with Sheryl Sandberg, and I'm still puzzled by the fact that Facebook does not consider itself a media platform. Are you still espousing that kind of position?

Nathaniel Gleicher: Congresswoman, we're first and foremost a technology company.

Rep. Jackie Speier (CA): You may be a technology company, but your technology company is being used as a media platform. Do you not recognize that?

Nathaniel Gleicher: Congresswoman, we're a place for ideas across the spectrum. We know that there are people who use our platforms to engage, and in fact that is the goal of the platform: to encourage and enable people to discuss the key issues of the day and to talk to family and friends.

[54:30] Rep. Jackie Speier (CA): How long, or maybe I should ask this: when there was a video of Speaker Pelosi that had been tampered with, slowed down to make her look like she was drunk, YouTube took it down almost immediately. What did Facebook do, and what went into your thinking to keep it up?

Nathaniel Gleicher: Congresswoman, for a piece of content like that, we work with a network of third-party fact checkers, more than 60 third-party fact checkers around the world. If one of them determines that a piece of content like that is false, we will down-rank it, and we will put an interstitial on it so that anyone who looks at it will first see a label over it saying that there's additional information and that it's false. That's what we did in this context. When we down-rank something like that, we see the shares of that video radically drop.

Rep. Jackie Speier (CA): But you won't take it down when you know it's false.

Nathaniel Gleicher: Congresswoman, you're highlighting a really difficult balance, and we've talked about this amongst ourselves quite a bit. What I would say is, if we simply take a piece of content like this down, it doesn't go away. It will exist elsewhere on the internet. People who weren't looking for it will still find it.

Rep. Jackie Speier (CA): But, you know, there will always be bad actors in the world. That doesn't mean that you don't do your level best to show the greatest deal of credibility. I mean, if YouTube took it down, I don't understand how you couldn't have taken it down, but I'll leave that where it lays.

[1:40:10] Nathaniel Gleicher: Congressman, the collaboration within industry and with government is much, much better than it was in 2016. I think we have found the FBI, for example, to be forward leaning and ready to share information with us when they see it. We share information with them whenever we see indications of foreign interference targeting our election. The best case study for this was the 2018 midterms, where you saw industry, government, and civil society all come together, sharing information to tackle these threats. We had a case on literally the eve of the vote, where the FBI gave us a tip about a network of accounts in which they identified subtle links to Russian actors.
We were able to investigate those and take action on them within a matter of hours.

[1:43:10] Rep. Jim Himes (CT): I tend to be kind of a First Amendment absolutist. I really don't want Facebook telling me what's true and what's not true, mainly because most statements are some combination of both.

[1:44:20] Nathaniel Gleicher: Certainly people are drawn to clickbait. They're drawn to explosive content. I mean, it is the nature of clickbait to make people want to click on it. But what we found is that, if you separate it out from the particular content, people don't want a platform or experience that is just clickbait. They will click it if they see it, but they don't want it prioritized; they don't want their time to be drawn into that appeal to emotional frailty. And so we are trying to build an environment where that isn't the focus, where they have the conversations they want to have. But I agree with you: a core piece of this challenge is that people seek out that type of content wherever it is. I should note that, as we're thinking about how we prioritize this, one of the key factors is who your friends are, the pages and accounts that you follow, and the assets that you engage with. That's the most important factor in what you see, and so people have direct control over that, because they are choosing the people they want to engage.

Hearing: ONLINE PLATFORMS AND MARKET POWER, PART 1: THE FREE AND DIVERSE PRESS, Committee on the Judiciary: Subcommittee on Antitrust, Commercial, and Administrative Law, June 11, 2019

Watch on Youtube

Witnesses:
David Chavern: President of the News Media Alliance
Gene Kimmelman: President of Public Knowledge
Sally Hubbard: Director of Enforcement Strategy at the Open Markets Institute
Matthew Schruers: Vice President of Law and Policy at the Computer and Communications Industry Association
David Pitofsky: General Counsel at News Corp
Kevin Riley: Editor at the Atlanta Journal-Constitution

Transcript:

[55:30] David Chavern: Platforms' and news organizations' mutual reliance would not be a problem if not for the fact that concentration among the platforms means a small number of companies now exercise an extreme level of control over the news. In fact, a couple of dominant firms act as regulators of the news industry, only these regulators are not constrained by legislative or democratic oversight. The result has been to siphon revenue away from news publishers. This trend is clear if you compare the growth in Google's total advertising revenue to the decline in the news industry's ad revenue. In 2000, Google's US revenue was $2.1 billion, while the newspaper industry accounted for $48 billion in advertising revenue. In 2017, in contrast, Google's US revenue had increased over 25 times, to $52.4 billion, while the newspaper industry's ad revenue had fallen 65%, to $16.4 billion.

[56:26] David Chavern: The effect of this revenue decline on publishers has been terrible; they've been forced to cut back on their investments in journalism. That is a reason why newsroom employment has fallen nearly a quarter over the last decade. One question that might be asked is: if the platforms are, on balance, having such a negative impact on the news media, then why don't publishers do something about it? The answer is they cannot, at least under the existing antitrust laws. News publishers face a collective action problem. No publisher on its own can stand up to the tech giants. The risk of demotion or exclusion from the platform is simply too great.
And the antitrust laws prevent news organizations from acting collectively. So the result is that publishers are forced to accept whatever terms or restrictions are imposed on them.

[1:06:20] Sally Hubbard: Facebook has repeatedly acquired rivals, including Instagram and WhatsApp. And Google's acquisitions cemented its market power throughout the ad ecosystem as it bought up the digital ad market spoke by spoke, including Applied Semantics, AdMob, and DoubleClick. Together, Facebook and Google have bought 150 companies in just the last six years. Google alone has bought nearly 250 companies.

[1:14:17] David Pitofsky: Unfortunately, in the news business, free riding by dominant online platforms, which aggregate and then re-serve our content, has led to the lion's share of online advertising dollars generated off the back of news going to the platforms. Many in Silicon Valley dismiss the press as old media failing to evolve in the face of online competition. But this is wrong. We're not losing business to an innovator who has found a better or more efficient way to report and investigate the news. We're losing business because the dominant platforms deploy our news content to target our audiences, and then turn around and sell those audiences to the same advertisers we're trying to serve.

[1:15:04] David Pitofsky: The erosion of advertising revenue undercuts our ability to invest in high quality journalism. Meanwhile, the platforms have little if any commitment to accuracy or reliability. For them, a news article is valuable if viral, not if verified.

[1:16:12] David Pitofsky: News publishers have no good options to respond to these challenges. Any publisher that tried to withhold its content from a platform as part of a negotiating strategy would starve itself of reader traffic. In contrast, losing one publisher would not harm the platforms at all, since they would have ample alternative sources for news content.

[1:36:56] Rep. Pramila Jayapal (WA): So, Ms. Hubbard, let me start with you. You were an Assistant Attorney General for New York State's antitrust division, and you've also worked as a journalist. Which online platforms would you say are most impacting the public's access to trustworthy sources of journalism, and why?

Sally Hubbard: Thank you for the question, Congresswoman. I think in terms of disinformation, the platforms that are having the most impact are Facebook and YouTube, and that's because of their business models, which are to prioritize engaging content. Because of human nature, that survival instinct, we tend to tune into things that make us fearful or angry. So by prioritizing engagement, these platforms are actually prioritizing disinformation as well. It serves their profit motives to keep people on the platforms as long as possible, to show them ads and collect their data. And because they don't have any competition, they're free to pursue these destructive business models without any competitive constraint. They've also lacked regulation. Normally, corporations are not permitted to just pursue profits without regard to the consequences.

[1:38:10] Rep. Pramila Jayapal (WA): The Federal Trade Commission has repeatedly declined to interfere as Facebook and Google have acquired would-be competitors. Since 2007, Google has acquired Applied Semantics, DoubleClick, and AdMob. And since 2011, Facebook has acquired Instagram and WhatsApp. What do these acquisitions mean for consumers of news and information?
I think sometimes antitrust and regulation are seen as something that's out there, but this has very direct impact on consumers. Can you explain what that means as these companies have acquired more and more?

Sally Hubbard: Sure. In my view, all of the acquisitions that you just mentioned were illegal under the Clayton Act, which prohibits mergers that may lessen competition. Looking back, it's clear that all of those mergers did lessen competition. And when you lessen competition, the harms to consumers are not just higher prices, which are harder to see in the digital age, but loss of innovation, loss of choice, and loss of control. So when we approve anticompetitive mergers, consumers are harmed.

[1:55:48] Rep. Matt Gaetz (FL): Section 230, as I understand it, and I'm happy to be corrected by others, would say that if a technology platform is a neutral public platform, they enjoy certain liability protections that newspapers don't enjoy, that News Corp doesn't enjoy with its assets. And so does it make the anticompetitive posture of technology platforms more pronounced that they have access to this special liability protection that the people you represent don't have access to?

David Chavern: Oh, absolutely. There's a huge disparity. Frankly, when our content is delivered through these platforms, we get the liability and they get the money. So that's a good deal, from that end. We are responsible for what we publish; we publishers can and do get sued. On the other hand, the platforms are allowed to deliver and monetize this content with a complete lack of responsibility.

Hearing: Election Interference: Ensuring Law Enforcement is Equipped to Target Those Seeking to Do Harm, Senate Judiciary Committee, June 12, 2018

Watch on C-SPAN

Witnesses:
Adam Hickey - Deputy Assistant Attorney General for the National Security Division at the Department of Justice
Matthew Masterson - National Protection and Programs Directorate at the Department of Homeland Security
Kenneth Wainstein - Partner at Davis Polk & Wardwell, LLP
Prof. Ryan Goodman - New York University School of Law
Nina Jankowicz - Global Fellow at the Wilson Center

Transcript:

[9:00] Sen. Dianne Feinstein (CA): We know that Russia orchestrated a sustained and coordinated attack that interfered in our last presidential election. And we also know that there's a serious threat of more attacks in our future elections, including this November. As the United States Intelligence Community unanimously concluded, the Russian government's interference in our election, and I quote, "blended covert intelligence operations, such as cyber activity, with overt efforts by the Russian government agencies, state-funded media, third-party intermediaries, and paid social-media users or trolls." Over the course of the past year and a half, we've come to better understand how pernicious these attacks were. Particularly unsettling is that we were so unaware. We were unaware that Russia was sowing division through mass propaganda, cyber warfare, and working with malicious actors to tip the scales of the election. Thirteen Russian nationals and three organizations, including the Russian-backed Internet Research Agency, have now been indicted for their role in Russia's vast conspiracy to defraud the United States.
Hearing: Facebook, Google and Twitter Executives on Russian Disinformation, Senate Judiciary Subcommittee on Crime and Terrorism, October 31, 2017

Watch on Youtube

Witnesses:
Colin Stretch - Facebook Vice President and General Counsel
Sean Edgett - Twitter Acting General Counsel
Richard Salgado - Google Law Enforcement & Information Security Director
Clint Watts - Foreign Policy Research Institute, National Security Program Senior Fellow
Michael Smith - New America, International Security Fellow

Transcript:

[2:33:07] Clint Watts: Lastly, I admire those social-media companies that have begun working to fact-check news articles in the wake of last year's elections. These efforts should continue, but they will be completely inadequate. Stopping false information, the artillery barrage landing on social-media users, comes only when those outlets distributing bogus stories are silenced. Silence the guns, and the barrage will end. I propose the equivalent of nutrition labels for information outlets: a rating icon for news-producing outlets, displayed next to their news links in social-media feeds and search engines. The icon provides users an assessment of the news outlet's ratio of fact versus fiction and opinion versus reporting. The rating system would be opt-in. It would not infringe on freedom of speech or freedom of the press. It should not be part of the U.S. government; it should sit separate from the social-media companies but be utilized by them. Users wanting to consume information from outlets with a poor rating wouldn't be prohibited. If they are misled about the truth, they have only themselves to blame.

Cover Art Design by Only Child Imaginations

Music Presented in This Episode
Intro & Exit: Tired of Being Lied To by David Ippolito (found on Music Alley by mevio)
Ludmilla used to be in Human Resources at Gulag, but now she works at the Internet Research Agency.
This episode covers the structure of the Internet Research Agency, the funding and oversight of the IRA, and the methods by which the IRA targeted the 2016 U.S. election, as part of the "Report on the Investigation into Russian Interference in the 2016 Presidential Election (Volume 1)." The November 2020 update to this section uncovers significant amounts of previously redacted material from pages 14 to 35 of the report.

Russian "Active Measures" Social Media Campaign (1:14)
Structure of the Internet Research Agency (4:16)
Funding and Oversight from Concord and Prigozhin (7:13)
The IRA Targets U.S. Elections (8:52)
The IRA Ramps Up U.S. Operations As Early As 2014 (8:58)
U.S. Operations Through IRA-Controlled Social Media Accounts (12:16)
U.S. Operations Through Facebook (15:57)
U.S. Operations Through Twitter (20:11)
Individualized Accounts (21:03)
IRA Botnet Activities (24:03)
U.S. Operations Involving Political Rallies (25:39)
Targeting and Recruitment of U.S. Persons (28:16)
Interactions and Contacts with the Trump Campaign (31:32)
Trump Campaign Promotion of IRA Political Materials (32:15)
Contact with Trump Campaign Officials in Connection to Rallies (34:02)

Mueller Report Audio - muellerreportaudio.com
Presented by Timberlane Media - patreon.com/timberlanemedia
Donate anonymously - glow.fm/insider
Or Donate with Crypto
Music by Lee Rosevere
Yelena has a job interview at the Internet Research Agency, but she's not so sure she wants the gig.
This week we talk about COINTELPRO, Occupy Wall Street, and the Internet Research Agency. We also discuss botnets, social media warnings, and antitrust. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
Since Russia's stunning influence operations during the 2016 United States presidential race, state and federal officials, researchers, and tech companies have been on high alert for a repeat performance. With the 2020 election now just seven months away, though, newly surfaced social media posts indicate that Russia's Internet Research Agency is adapting its methods to circumvent those defenses.
It is 2014 in St. Petersburg, Russia. In the heart of the city, a small nondescript office building sits beside the Bolshaya Nevka River. Inside, workers stare at computer screens open to Facebook and Twitter, furiously typing. Their task: sow discord, disinformation, and doubt. Their target: the United States of America. Through fake social media accounts and armies of bots, they are flooding online media with disinformation. This is a Troll Farm. Its name: The Internet Research Agency. Our GDPR privacy policy was updated on August 8, 2022. Visit acast.com/privacy for more information.
Sam Harris speaks with Renée DiResta about Russia’s “Internet Research Agency” and its efforts to amplify conspiracy thinking and partisan conflict in the United States. Renée DiResta is the Director of Research at New Knowledge and Head of Policy at the nonprofit organization Data for Democracy where she investigates the spread of malignant narratives across social networks. She regularly writes and speaks about the role that tech platforms and curatorial algorithms play in the proliferation of disinformation and conspiracy theories. She is the author of The Hardware Startup: Building your Product, Business, and Brand. Website: www.reneediresta.com Twitter: @noUpside
The week started with bombshell Senate reports on the Russian campaign to influence the 2016 presidential election. We dived deep to explain how Russians used meme warfare to divide America, why Instagram was the Internet Research Agency's go-to social media platform for spreading misinformation, and how Russians specifically targeted black Americans in an effort to exploit racial wounds.
Twitter dropped an almost unfathomably large archive of tweets connected to two alleged influence campaigns on Wednesday. The trove included over 9 million tweets associated with 3,841 accounts connected to Russia's notorious Internet Research Agency, or IRA, as well as more than a million tweets attributed to a network of 770 Iranian propaganda-pushing accounts. Twitter has never before released an archive of this size.
Facebook has taken down 32 fake pages and accounts that it says were involved in coordinated campaigns on both Facebook and Instagram. Though the company has not yet attributed the accounts to any group, it says the campaign does bear some resemblance to the propaganda campaign run by Russia's Internet Research Agency in the run-up to the 2016 presidential election. Facebook is now working with law enforcement to determine where the campaign originated.
An army of professional Russian trolls tried to change the outcome of the 2016 US election from a four-storey building in St Petersburg. Did the dirty trickster Roger Stone help them? And was the Troll Factory ultimately successful? Language warning: This episode contains some swearing. You can get in touch at russia@abc.net.au.
Earlier this week, the Democrats on the House Intelligence Committee released roughly 3,500 Facebook and Instagram ads purchased by the Internet Research Agency, a notorious Russian troll farm. Among them: Ads purchased in May of 2016 that promoted a suspicious Chrome extension that gained wide access to the Facebook accounts and web browsing behavior of those who installed it.
When Young Mie Kim began studying political ads on Facebook in August of 2016—while Hillary Clinton was still leading the polls—few people had ever heard of the Russian propaganda group, the Internet Research Agency. Not even Facebook itself understood how the group was manipulating the platform's users to influence the election.
Robert Mueller's indictment of Russia's Internet Research Agency—also known as the "troll factory"—feels like years ago at this point. It's only been a week! And we took a deep dive into what it really says about Russia's propaganda efforts during the 2016 presidential campaign and beyond. Trump campaign advisor Rick Gates has also copped a plea deal with Mueller's team—which could have big implications for the investigation going forward.
Special counsel Robert Mueller's indictment against Russia's Internet Research Agency contains a number of striking moments, from the inflammatory ads bought by the so-called “troll factory” to the rampant identity theft against US citizens. But what stands out most may be the reminder that for Russia, subverting the foundations of US democracy was just another 9 to 5.
When it comes to Russian propaganda, things are seldom what they seem. Consider the case of the Internet Research Agency. The shadowy St. Petersburg-based online-influence operation came under fresh scrutiny this week after Facebook disclosed that entities linked to Russia had placed some 5,000 phony political ads on its platform during the 2016 election cycle.