This is Part 2 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the New York Times and the Guardian published articles on March 17, 2018. The Times headline was “How Trump Consultants Exploited the Data of Millions,” while the Guardian went with “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach.” That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we'll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what the unresolved questions that emerged from the scandal mean for the future. In this second episode, we'll hear a panel discussion hosted by the Bipartisan Policy Center that I helped moderate at the end of March. The panel featured Katie Harbath, a former Facebook executive who is now a Fellow in the Digital Democracy Project at the Bipartisan Policy Center; Alex Lundry, co-founder of Tunnl and Deep Root Analytics; and Matthew Rosenberg, a Washington-based correspondent for the New York Times and one of the reporters on the byline of that first story on Cambridge Analytica.
This is Part 1 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the New York Times and the Guardian published articles on March 17, 2018. The Times headline was “How Trump Consultants Exploited the Data of Millions,” while the Guardian went with “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach.” That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we'll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what the unresolved questions that emerged from the scandal mean for the future. In this first episode, Justin Hendrix speaks with David Carroll, a professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School. Carroll legally challenged Cambridge Analytica in the UK courts to recapture his 2016 voter profile using European data protection law, events that were chronicled in the 2019 Netflix documentary The Great Hack.
The Cybercrime Magazine Podcast brings you a rundown of the latest breaking news stories we're following in technology. On this episode, Amazon has begun delivering orders by drone, Facebook parent company Meta will pay $725 million to settle a class-action lawsuit, and more. Airs weekly on WCYB Digital Radio and our podcast. For more on technology and cybersecurity, visit us at https://cybersecurityventures.com/
This week we bring you part 2 of our exploration of the Cambridge Analytica Scandal — cast in a very different light. In part 1 we took a look at Christopher Wylie's origin story and the sketchy science behind the supposed “psychological warfare mindfuck tool” Steve Bannon employed within Cambridge Analytica on behalf of Robert Mercer, which, the story goes, won the election for Donald Trump by manipulating the masses. In part 2, we're gonna be taking a look at Wylie's attempt to create his own Cambridge Analytica, how he turned into the supposed do-gooder whistleblower in the first place, and what it means for the accepted narrative about the scandal. We are joined again by guest writer Anthony Mansuy, a French reporter for Society Magazine. For this two-parter, Anthony conducted months of research and forty exclusive interviews. You'll be hearing from Cambridge Analytica employees, data scientists, former Obama, Trump and Cruz campaign staffers, as well as friends and associates of Chris Wylie. The evidence lays out how Wylie spread numerous fabrications and exaggerations to minimize his contribution to the development of Cambridge Analytica's tools and conceal the true causes of his departure from the organization. More importantly, Wylie capitalized on the deepest fears held by the liberal media about the far-right, social media, and Russia; allowing him to craft the perfect narrative to fit the political moment — one that persists to this day. Subscribe for $5 a month to get an extra episode of QAA every week + access to ongoing series like 'Manclan' and 'Trickle Down': http://www.patreon.com/QAnonAnonymous Anthony Mansuy: https://twitter.com/AnthonyMansuy Les Dissidents (Anthony's book): https://bit.ly/3jgCFfK Merch: http://merch.qanonanonymous.com Music by Pontus Berghe, Nick Sena, DJ Death. Editing by Corey Klotz.
The Cambridge Analytica Scandal involves data science being described as potent black magic, claims of psychological voter manipulation in swing states, micro-targeting, secret meetings, massive amounts of stolen data and a seemingly nefarious plan to use Brexit as a blueprint and catapult Donald Trump into the White House. But we're here to tell you that the existing and broadly accepted narrative often reads more like a conspiracy theory. We are joined by guest writer Anthony Mansuy, a French reporter for Society Magazine, who in this first of two parts helps us tell the tale of a social media behemoth, a libertarian hedge fund billionaire, a sketchy political consulting firm, Donald Trump, Steve Bannon, and a host of Russian figures. For this two-parter, Anthony conducted months of research and forty exclusive interviews. You'll be hearing from Cambridge Analytica employees, data scientists, former Obama, Trump and Cruz campaign staffers, as well as friends and associates of Chris Wylie. There's also a long interview with Wylie himself and even one with Steve Bannon, who Anthony tracked down in Rome in 2018, just after the scandal broke. The evidence lays out how Wylie spread numerous fabrications and exaggerations to minimize his contribution to the development of Cambridge Analytica's tools and conceal the true causes of his departure from the organization. More importantly, Wylie capitalized on the deepest fears held by the liberal media about the far-right, social media, and Russia; allowing him to craft the perfect narrative to fit the political moment — one that persists to this day. Subscribe for $5 a month to get an extra episode of QAA every week + access to ongoing series like 'Manclan' and 'Trickle Down': http://www.patreon.com/QAnonAnonymous Anthony Mansuy: https://twitter.com/AnthonyMansuy Les Dissidents (Anthony's book): https://bit.ly/3jgCFfK Merch: http://merch.qanonanonymous.com Music by Pontus Berghe and Nick Sena. Editing by Corey Klotz.
Remember Cambridge Analytica? The British company allegedly behind "influencing" US voters to vote for Trump in 2016? Now, Washington DC's Attorney General is suing Mark Zuckerberg, founder of Facebook (now Meta), personally, saying that he managed the whole deal. A must-watch! #MarkZuckerberg #KarlRacine #cambridgeanalytica #technews
English Learning for Curious Minds | Learn English with Podcasts
In March of 2018, it was revealed that 50 million Facebook profiles had been "hacked" and used to target American voters. The story involved Donald Trump, a supposed Russian spy, a Cambridge researcher and a political consulting company. The only problem was, how much of it was actually true?
In this episode:
- The breaking of the story on March 18th 2018
- A summary of the Cambridge Analytica report in The Guardian
- The academic: Aleksandr Kogan
- The role of Facebook in 2012
- Quiz apps on Facebook (and their real purpose)
- The role of Cambridge Analytica (the company)
- Building "This Is Your Digital Life"
- How many people took the quiz?
- Was there really a "data breach"?
- What was Aleksandr Kogan's involvement in the story?
- The results come in: was the Facebook data effective at guessing someone's personality?
- Cambridge Analytica's message to its clients: "5,000 data points on over 220 million Americans"
- Working for Ted Cruz
- Working for the Donald Trump Campaign
- The news breaking in 2018
- Was Cambridge Analytica involved in Brexit?
- Conclusion: Did Cambridge Analytica actually have any involvement in the US election?
Full transcript, subtitles and key vocabulary available on the website: https://www.leonardoenglish.com/podcasts/cambridge-analytica
How is the new “Cold War 2.0”, which currently characterizes the ever less collaborative relations between the United States and China, developing? Join us for a brief history of CYBER WARFARE and the silent battle: USA Vs. China (and Russia). Nick P. is a Cyber Security Specialist who investigated the Cambridge Analytica Scandal among other wild security breaches. He will take us through the immensely powerful and underground world of international espionage and cyber hacking. Join us as we discuss this modern warfare and how the USA, China, and Russia are throwing away all the rules. Financially Support the Show with Patreon: https://www.patreon.com/thesingularitylab ►Link Tree: https://linktr.ee/michaelmataluni ►Apple Podcast: https://podcasts.apple.com/jp/podcast/the-singularity-lab/id1578521813?l=en ►Spotify: https://open.spotify.com/show/1gE7LEI8R8v4iExhWVnT7w?si=EIjmEDQ-TeadsC_-IReFqQ&dl_branch=1 ►Twitter: https://twitter.com/singularitymike ►Instagram: https://www.instagram.com/thesingularitylab/ ►Facebook: https://www.facebook.com/mataluni/ ►Page: https://www.facebook.com/michaelmataluni ►Discord: https://discord.gg/J2AQxqjusv
In this episode, we talk about the Cambridge Analytica scandal that influenced the 2016 American presidential election. Definitely a podcast filled with knowledge about how much power big tech companies have over your data.
In this episode I talk with my friend Alex Kogan (formerly Alex Spectre), the former professor of psychology from Cambridge University who was embroiled in the Cambridge Analytica scandal in 2018. If you want to know what really happened with Cambridge Analytica, or if you've ever wondered how a happy-go-lucky pro-sociality researcher could end up crashing Facebook's market value by $100 billion, this is the episode for you.
By Walt Hickey
Welcome to the Numlock Sunday edition. This week, another podcast edition! This week, I spoke to MIT Technology Review editor Karen Hao, who frequently appears in Numlock and wrote the bombshell story “How Facebook Got Addicted to Spreading Misinformation.”
The story was a fascinating look inside one of the most important companies on the planet and their struggles around the use of algorithms on their social network. Facebook uses algorithms for far more than just placing advertisements, but has come under scrutiny for the ways that misinformation and extremism have been amplified by the code that makes their website work. Karen's story goes inside Facebook's attempts to address that, and how their focus on rooting out algorithmic bias may ignore other, more important problems related to the algorithms that print them money.
Karen can be found on Twitter, @_Karenhao, at MIT Technology Review, and at her newsletter, The Algorithm, that goes out every week on Fridays. This interview has been condensed and edited.
You wrote this really outstanding story quite recently called “How Facebook Got Addicted to Spreading Misinformation.” It's a really cool profile of a team within Facebook that works on AI problems, and ostensibly was working towards an AI solution. But as you get into the piece, it's really complicated. We talk a lot about algorithms. Do you want to go into what algorithms are in the context of Facebook?
What a question to start with! In the public conversation when people say that Facebook uses AI, I think most people are thinking, oh, they use AI to target users with ads. And that is 100 percent true, but Facebook is also running thousands of AI algorithms concurrently, not just the ones that they use to target you with ads. They also have facial recognition algorithms that are recognizing your friends in your photos. They also have language translation algorithms, the ones when someone posts something in a different language there's that little option to say, translate into English, or whatever language you speak. They also have Newsfeed ranking algorithms which are ordering what you see in Newsfeed. And other recommendation algorithms that are telling you, hey, you might like this page, or you might want to join this group. So, there's just a lot of algorithms that are being used on Facebook's platform in a variety of different ways. But essentially, every single thing that you do on Facebook is somehow supported in part by algorithms.
You wrote they have thousands of models running concurrently, but the thing that you also highlighted, and one reason that this team was thrown together, was that almost none of them have been vetted for bias.
Most of them have not been vetted for bias. In terms of what algorithmic bias is, it's this field of study that has recognized that when algorithms learn from historical data they will often perpetuate the inequities that are present in that historical data. Facebook is currently under a lawsuit from the Housing and Urban Development agency where HUD alleges that Facebook's ad targeting algorithms are showing different people different housing opportunities based on their race, which is illegal. White people more often see houses for sale, whereas minority users more often see houses for rent. And it's because the algorithms are learning from this historical data.
Facebook has a team called Responsible AI, but there's also a field of research that's called responsible AI that's all about understanding how do algorithms impact society, and how can we redesign them from the beginning to make sure that they don't have harmful unintended consequences?
And so this team, when they spun up, they were like "none of these algorithms have been audited for bias and that is an unintended consequence that can happen that can legitimately harm people, so we are going to create this team and study this issue." But what's interesting, and what my main critique is in the piece, is there are a lot of harms, unintended harms, that Facebook's algorithms have perpetuated over the years, not just bias. And it's very interesting why they specifically chose to just focus on bias and not other things like misinformation amplification, or polarization exacerbation. Or, the fact that their algorithms have been weaponized by foreign actors to disrupt our democracy. So, that's the main thrust of the piece, is that Facebook has all these algorithms and it's trying, supposedly, to fix them in ways that mitigate their unintended harmful consequences, but it's going about it in a rather narrow-minded way.
Yeah. It definitely seems to be a situation in which they're trying to address one problem and then alluding to a much larger problem in that. Can you talk a little bit about like, again, one of the issues that they have is that there's this metric that you write about called L6/7. How does their desire for engagement, or more specifically not ever undermining engagement, kneecap some of these efforts?
Facebook used to have this metric called L6/7. I'm actually not sure if it's used anymore, but the same principle holds true, that it has all of these business metrics that are meant to measure engagement on the platform. And that is what it incentivizes its teams to work towards. Now I know for a fact that some of these engagement metrics are the number of likes that users are hitting on the platform, or the number of shares, or the number of comments. Those are all monitored. There was this former engineering manager at Facebook who had actually tweeted about his experience saying that his team was on call, every few days they would get an alert from the Facebook system saying like, comments are down or likes are down, and then his team would then be deployed to figure out what made it go down so that they could fix it. All of these teams are oriented around this particular engagement maximization, which is ultimately driven by Facebook's desire to grow as a company. What's interesting is I realized, through the course of my reporting, that this desire for growth is what dictates what Facebook is willing to do in terms of its efforts around social good. In the case of AI bias, the reason why it is useful for them to be working on AI bias is actually for two reasons.
One is they're already under fire for this legally. They're already being sued by the government. But two, when this responsible AI team was created, it was in the context of big tech being under fire already from the Republican-led government about it allegedly having anti-conservative bias. This was a conversation that began in 2016 as the presidential campaign was ramping up, but then it really picked up its volume in 2018 in the lead-up to the midterm elections.
About a week after Trump had tweeted #stopthebias in reference to this particular allegation towards big tech, Mark Zuckerberg called a meeting with the head of the responsible AI team and was like, "I need to know what you know about AI bias. And I need to know how we're going to get rid of it in our content moderation algorithms."
And I'm not sure if they explicitly talked about the #stopthebias stuff, but this is the context in which all of these efforts were ramping up. My understanding is Facebook wanted to invest in AI bias so that they could definitively say, "Okay, our algorithms do not have anti-conservative bias when they're moderating content on the platform." And, use that as a way to keep regulation at bay from a Republican-led government.
On the flip side, they didn't pursue many of these other things that you would think would fall under the responsible AI jurisdiction. Like the fact that their algorithms have been shown to amplify misinformation. During a global pandemic, we now understand that that can be life and death. People are getting COVID misinformation, or people were getting election misinformation that then led to the US Capitol riots. They didn't focus on these things because that would require Facebook to fundamentally change the way that it recommends content on the platform, and it would fundamentally require them to move away from an engagement centric model. In other words, it would negatively impact its growth. It would hinder Facebook's growth. And that's what I think is the reason why they didn't do that.
One part that's interesting is Facebook was not instantaneously drawn to AI. When The Facebook was made, it didn't involve AI. AI is a solution to another suite of problems that it had in terms of how do you moderate a social network with billions of people, an order of magnitude larger than anyone has ever moderated before, I suppose.
It's interesting. At the time that Facebook started, AI was not really a thing. AI is a very recent thing, it really started to show value for companies in 2014. It's actually really young as a technology, and obviously Facebook started way before 2014. At the time they adopted AI in late 2013, early 2014, because they had this sense that Facebook was scaling really rapidly. There was all of this content on the platform, images, videos, posts, ads, all this stuff. AI, as an academic research field, was just starting to see results in the way that AI could recognize images, and it could potentially one day recognize videos and recognize text and whatever.
And the CTO of the company was like, "Hey, this technology seems like it would be useful for us in general, because we are an information rich company. And AI is on a trajectory to being really good at processing information." But then also what happened at the same time was there were people within the company as well that started realizing that AI was really great at targeting users, at learning users' preferences and then targeting them, whether it was targeting them with ads to or targeting them with groups that they like or pages that they like or targeting them with the posts from the friends that they liked the most.
They very quickly started to realize that AI is great for maximizing engagement on the platform, which was a goal that Facebook had even before they adopted AI, but AI just became a powerful tool for achieving that goal.
The fact that AI could help process all of this information on Facebook, and the fact that it could really ramp up user engagement on the platform, collided and Facebook decided we're going to heavily invest in this technology.
There's another stat in your article that really just was fascinating to see laid out. You wrote about how there's 200 traits that Facebook knows about its users, give or take. And a lot of those are estimated. The thing that's interesting about that is I feel like I've told Facebook a fairly limited amount of information about myself in the past couple of years. I unambiguously directly told them like, yes, this is my birthday. This is where I live now. This is where I went to college. And then from that, and from their algorithms, and from obviously their myriad cookie tech, they've built this out into a suite of 200 traits that you wrote about. How did AI factor into that, and how does that lead into this idea of fairness that you get at in the piece?
Yeah, totally. The 200 traits are all about, they're both estimated by AI models, and they're also used to feed AI models. It's dicey territory to ask for race data. Like when you go to a bank, that's part of the reason why they'll never ask you race data because they can't decide banking decisions based on your race. With Facebook that's the same, but they do have a capability to estimate your race by taking a lot of different factors that could highly correlate with certain races. They'll say like, if you are college educated, you like pages about traveling, and you engage a lot with videos of guys playing guitar, and you're male, and you're like within this age, and you live in this town, you are most likely white.
I was about to say, you are a big fan of the band Phish. We're kind of barking up the same tree here, I suspect.
They can do that because they have so much data on all the different things that we've interacted with on the platform. They can estimate things like your political affiliation, if you're engaging with friends' posts that are specifically pro-Bernie or whatever, you are most likely on the left of the political spectrum in the US. Or, they can estimate things like, I don't know, just random interests that you might have. Maybe they figure out that you really like healthy eating, and then they can use that to target you with ads about new vegan subscriptions, whatever it is. They use all of these AI models to figure out these traits. And then those traits are then used to measure how different demographics on Facebook, how different user groups engage with different types of content in aggregate.
The way that this ties to fairness, and then ties to this broader conversation around misinformation? So this Responsible AI team was really working on building these tools to make sure that their algorithms were more fair, and make sure that they won't be accidentally discriminating against users, such as in the HUD lawsuit case, by creating these tools to allow engineers to measure once they've trained up this AI model.
Okay, now let's like subdivide these users into different groups that we care about usually based on protected class, based off of these traits that we've estimated about them, and then see whether or not these algorithms impact one group more than another.
So, it allows them a chance to stress test algorithms that are in development against what it hypothetically would do?
Exactly.
Sick.
The issue is that even before this tool existed, the policy team, which sits separately from the Responsible AI team, they were already invoking this idea of anti-conservative bias, and this idea of fairness, to undermine misinformation efforts. So, they would say, "Oh, this AI model that's designed to detect anti-vax information and limit the distribution of anti-vax misinformation, that model, we shouldn't deploy it because we've run some tests and we see that it seems to impact conservative users more than liberal users, and that discriminates against conservatives, so unless you can create this model to make sure it doesn't impact conservative and liberal users differently, then you can't deploy it." And there was a former researcher that I spoke to who worked on this model, who had those conversations, who was then told to edit the model in a way that basically made the model completely meaningless. This was before the pandemic, but what he said to me was like, this is anti-vax misinformation. If we had been able to deploy that model at full efficacy, then it could have been quickly repurposed for anti-vax COVID misinformation. But now we're seeing that there's a lot of vaccine hesitancy around getting these COVID vaccines. And there were things that basically the policy team actively just did in the past that led to this issue not being addressed with full effectiveness.
You talk about this in the piece where instead of fairness being, "we shouldn't have misinformation on the platform, period," it's like, "well, if there's something that could happen that would disproportionately affect one side or the other, we can't do it." Even if one side — I'm making this up — but let's say that liberals were 80 percent of the people who believed in UFOs. And if we had a policy that would roll out a ban on UFO content and it would disproportionately affect liberals, then that would be stymied by this team?
Yes and no. The responsible AI team, what's really interesting is they sent me some documentation of their work. The responsible AI team, they create these tools to help these engineers measure bias in their models. But they also create a lot of educational materials to teach people how to use them. And one of the challenges of doing AI bias work is that fairness can mean many, many different things. You can interpret it to mean many different things.
They have this specific case study about misinformation and political bias, where they're like, if conservatives posted more misinformation than liberals, then fairness does not mean that this model should impact these two groups equally. And similarly, if liberals posted more misinformation than conservatives, fairness means that each piece of content is treated equally. And therefore the model would, by virtue of treating each piece of content equally, impact liberals more than conservatives. But all of these terms are really spongy. Like "fairness," you can interpret it in so many different ways. Then the policy team was like, "we think fairness means that conservatives and liberals cannot be treated differently."
And that was what they were using to dismiss, weaken, completely stop a lot of different efforts to try and tamp down misinformation and extremism on the platform.
Where are we at moving forward? It seems like you had alluded to AI being really central to Facebook's policy. And this team, even physically, was close to Mark Zuckerberg's desk. Where are we at moving forward now? Is there a chance that the policy team will lose sway here? Or is there a chance that this is just, it was what it was?
I think what I learned from this piece is that there's just a huge incentive misalignment problem at Facebook. Where as much as they publicly tell us, "we're going to fix these problems, we're going to fix these problems," they don't actually change their business incentives in a way that would allow any of the efforts trying to fix these problems to succeed. So, AI fairness sounds great, but AI fairness in service of business growth can be perverted.
And if the company is unwilling to change those incentive structures, such that truly responsible AI efforts can succeed, then the problems are just going to keep getting worse. The other thing that I realized is we should not be waiting around anymore for Facebook to be doing this stuff because they promised, after the Cambridge Analytica scandal three years ago, that they were going to fix all these things. And the responsible AI team was literally created a couple of weeks after the Cambridge Analytica scandal, as a response to a lot of the allegations that Facebook was facing then about their algorithms harming democracy, harming society.
And in three years, they've just made the problem worse. We went from the Cambridge Analytica scandal to the US Capitol riots. So, what I learned was, the way that the incentive structures change moving forward will have to come from the outside.
Yeah. Because it is also bigger than just the United States. You alluded in your piece to the genocide in Myanmar. There are much bigger stakes than just elections in a developed democracy.
That was one of the other things that I didn't really spend as much time talking about in my piece, but it is, I think, pretty awful that some of Facebook's misinformation efforts, which impact its global user population, are being filtered based off of US interests. And that's just not in the best interest of the world's population.
Karen, where can people find your work? You write about this stuff all the time and you are the senior editor for AI at MIT Technology Review. So where can folks get ahold of you and find out more about this?
They can follow me on Twitter, @_Karenhao. They can find me on LinkedIn. They could subscribe to MIT Technology Review, and once they subscribe they would get access to my subscriber-only AI newsletter, The Algorithm, that goes out every week on Fridays.
If you have anything you'd like to see in this Sunday special, shoot me an email. Comment below! Thanks for reading, and thanks so much for supporting Numlock.
Thank you so much for becoming a paid subscriber! Send links to me on Twitter at @WaltHickey or email me with numbers, tips, or feedback at walt@numlock.news. Get full access to Numlock News at www.numlock.com/subscribe
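To make two of the technical ideas from the interview above concrete, here are two minimal, hypothetical Python sketches. They are not Facebook's code or data; every feature, number, and threshold is invented purely for illustration. The first shows the "bias in, bias out" mechanism behind the HUD example Hao describes: a model trained on skewed historical outcomes reproduces that skew through a correlated proxy feature, even though the protected attribute is never given to the model.

```python
# Hypothetical sketch (not Facebook's system): skew in historical data
# resurfaces through a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)                      # protected attribute, withheld from training
proxy = rng.normal(loc=1.5 * group, scale=1.0, size=n)  # behavioral signal that correlates with group

# Invented historical outcome (e.g. "was shown a for-sale housing ad"):
# in the past it depended directly on group membership.
hist_shown = rng.random(n) < np.where(group == 1, 0.25, 0.60)

# The model is trained only on the proxy feature.
model = LogisticRegression().fit(proxy.reshape(-1, 1), hist_shown)
pred = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted probability = {pred[group == g].mean():.2f}")
# The model never sees `group`, yet its predictions differ sharply by group,
# because the proxy carries the historical skew forward.
```

The second sketch illustrates why the two readings of "fairness" in the misinformation case study pull in opposite directions once base rates differ: applying one threshold to every post ("each piece of content treated equally") produces unequal per-group impact, while demanding equal per-group impact forces either group-specific thresholds or a blunter model.

```python
# Hypothetical sketch of the two fairness readings applied to an imaginary
# misinformation classifier. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)  # two user groups (e.g. estimated political lean)
# Assume, purely for illustration, that the groups post misinformation at different base rates.
is_misinfo = rng.random(n) < np.where(group == 0, 0.05, 0.15)
# A noisy classifier score that tracks the label imperfectly.
score = np.clip(0.6 * is_misinfo + rng.normal(0.3, 0.15, size=n), 0.0, 1.0)

flagged = score > 0.6  # one threshold for every post

for g in (0, 1):
    print(f"group {g}: share of posts flagged = {flagged[group == g].mean():.2%}")
# With different base rates, equal per-post treatment yields unequal per-group
# flag rates; requiring equal per-group rates instead would mean different
# thresholds per group or a weaker model overall.
```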
This episode features Carole Cadwalladr, a journalist for the Guardian and Observer in the United Kingdom. Carole worked for a year with whistleblower Christopher Wylie to publish her report into Cambridge Analytica. The investigation resulted in Mark Zuckerberg being called before Congress and Facebook losing more than $100 billion from its share price. Carole's work has won a Polk Award and the Orwell Prize for political journalism, and she was named a Pulitzer Prize finalist for National Reporting in 2019.
Are you online? Have you ever been online? Do you have a phone? You have a digital footprint. Many of us have heard this phrase, but what actually is it, and what does it mean? In this episode, Therese Markow and Kade Crockford discuss just that, from the Cambridge Analytica scandal to the changing purpose of fusion centers and government databases, and what companies know about us from just three main sources. The government has tried to use these databases for surveillance and to build predictive models; however, many reports have disproven the effectiveness of those models, yet they remain in use. This creates additional problems for First and Fourth Amendment rights.
Key Takeaways:
- In the United States, unlike in Europe, we lack any comprehensive consumer privacy law that is sufficiently protective of our rights and our interests in an era in which digital technologies are ubiquitous.
- The idea that the FBI should be wiretapping every single person who has anti-black, anti-Jewish, or anti-gay political views is not possible, because there are so many of those people. And it wouldn't make sense from a public safety perspective, because the vast majority of people who hold even extremist political views will never commit extremist political violence.
- The Department of Homeland Security found that predictive modeling was not a useful tactic when it comes to anti-terrorism or counterterrorism.
- House Republicans put out a report finding that fusion centers had produced little to no intelligence of value, had violated civil rights and civil liberties of both people on the left and the right, and had contributed nothing to the nation's fight against terrorism.
- These are political problems. These are all infrastructures and architectures and political and legal systems that human beings created. So we can change them; we absolutely have the power to change them.
"It shouldn't be that we leave it up to the user, the end-user, the consumer of consumer technologies, to be a privacy expert, to be a lawyer, to be able to read and understand these complex terms of service agreements or privacy policies, particularly in a situation in which people don't have a lot of alternatives to using those products." — Kade Crockford
Connect with Kade Crockford: ACLUM.org Bio: Kade Crockford Twitter: @onekade Blog: PrivacySOS.org
Connect with Therese: Website: www.criticallyspeaking.net Twitter: @CritiSpeak Email: theresemarkow@criticallyspeaking.net
Audio production by Turnkey Podcast Productions. You're the expert. Your podcast will prove it.
Discussing the effects of social media on the mind and subsequent behavior. Discussed in this episode: How the environment modulates the brain, Harvard Research on self-disclosure on social networking sites, unattainable beauty standards and their potential effects on young women, possible benefits of social media, tactics used to make social media addictive, operant conditioning and its reinforcement schedules, the anxiety-inducing effects of social media, cancel culture, and the Cambridge Analytica Scandal. https://www.insightfulthinkersmedia.com/
This episode features an interview from How I Found My Voice, another podcast by Intelligence Squared. Presented by the prominent BBC journalist Samira Ahmed, the podcast explores how some of the world's greatest artists and thinkers became such compelling – and unique – communicators. In this episode Samira speaks to the Cambridge Analytica whistleblower, Chris Wylie, about data micro-targeting, leaking documents that led to the Facebook-Cambridge Analytica data scandal and the part he played in developing technologies that can help shape election results. Subscribe to How I Found My Voice and listen to more episodes here: https://bit.ly/2XpFPyM Season 2 of How I Found My Voice is proudly supported by THE OUT, innovative car rental powered by Jaguar Land Rover. See acast.com/privacy for privacy and opt-out information.
Paul-Olivier Dehaye talks with Idka's Elizabeth Perry - Behind the scenes of The Great Hack Documentary, and uncovering the Cambridge Analytica Scandal. Find the right segment:
Cambridge Analytica’s former business development director Brittany Kaiser talks with Recode’s Kara Swisher about her new book TARGETED: The Cambridge Analytica Whistleblower's Inside Story of How Big Data, Trump, and Facebook Broke Democracy and How It Can Happen Again. Kaiser talks about how she first observed the dangers of social media while working on Barack Obama's 2008 campaign, why she mistakenly believed Cambridge Analytica was using technology as a force for good, and what happened when she decided to turn on the company and testify about its abuses in the UK Parliament. She also deconstructs Facebook's excuses for the scandal and its slow response, and predicts that 2020 could be "exponentially worse" than 2016 for election interference. Featuring: Brittany Kaiser, author of TARGETED: The Cambridge Analytica Whistleblower's Inside Story of How Big Data, Trump, and Facebook Broke Democracy and How It Can Happen Again. Hosts: Kara Swisher (@karaswisher), Recode co-founder and editor-at-large More to explore: Listen to Kara's interview with the creators of The Great Hack, a Netflix documentary about Cambridge Analytica, including former COO Julian Wheatland. Read Vox's Alissa Wilkinson's review of The Great Hack. Subscribe for free to Reset, Recode's new podcast that explores why — and how — tech is changing everything. About Recode by Vox: Recode by Vox helps you understand how tech is changing the world — and changing us. Follow Us: Newsletter: Recode Daily Twitter: @Recode and @voxdotcom Learn more about your ad choices. Visit megaphone.fm/adchoices
This week's guest is Oscar-nominated filmmaker Jehane Noujaim, who, alongside her husband Karim Amer, directed the new Netflix documentary THE GREAT HACK, which premiered at the 2019 Sundance Film Festival. It's out now, and the Guardian has already given it five stars. The documentary uncovers the dark world of data exploitation online. It follows the personal stories of some key players on different sides of the explosive Cambridge Analytica/Facebook data scandal: Carole Cadwalladr from the Guardian, who has spent years investigating, and Brittany Kaiser, who went from interning on the Barack Obama campaign to working on the Cambridge Analytica Trump campaign. It's informative and entertaining, with some pretty scary revelations. Even if you know a bit about the Cambridge Analytica scandal, it's fascinating to see the people behind it, and their individual motivations. Data has surpassed oil as the world's most valuable asset. People and companies everywhere are in a battle for control of our most intimate personal details. THE GREAT HACK forces us to question the origin of the information we consume daily. What do we give up when we click the yes box on the T&Cs on social media apps? It is available to watch now on Netflix, and I recommend a viewing. See acast.com/privacy for privacy and opt-out information.
Coming up in this week's episode of the GDPR Weekly Show: Facebook Receives Record $5 Billion Fine for its part in the Cambridge Analytica Scandal, No Win No Fee Lawyers set up British Airways Data Breach Compensation Website, We Carry Out a Comparison of GDPR Penalties Across Europe, London Estate Agent Fined £80,000 for Data Breach, Listen to the Final Part of our Interview with Mark Wellins from 1Touch.io
Host: Taylor Wiegert (www.twitter.com/twiegert)
This week's headlines: Facebook Cites Big Growth for Watch and Fortnite Goes Social. The week's news quick hits: Salesforce Acquires Tableau, Snapchat Adding Shop Button to Influencer Profiles, Facebook Plans Portal Follow-up, Facebook Launches New Research App, Emails Connect Zuckerberg to Cambridge Analytica Scandal and Facebook Reaches Settlement for Video Measurement Errors with Advertisers.
Download or subscribe to this podcast at www.braveadworld.com/podcast. Find more social media marketing insights on the Brave Ad World blog at www.braveadworld.com. You can contact Brave Ad World at BraveAdWorld [at] gmail.com, or connect with Taylor on Twitter @twiegert.
In early 2018, the data analysis and marketing firm Cambridge Analytica came under fire in both the United States and the United Kingdom for different reasons. What exactly happened and how have things changed as a result? Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers
In this interview, we ask Brown University's Executive Master in Cybersecurity faculty member, Deborah Hurley, how the Facebook Cambridge Analytica scandal should be viewed from the perspective of laws in the US and beyond. Based on the current facts, Hurley tells us, Facebook probably wouldn't be liable for violating personal data and privacy protections of the American Facebook users. But, in a world where users and their data are global, US laws aren't the only ones Facebook needs to worry about. Listen to this podcast to learn more about the potential legal fallout for Facebook from the Cambridge Analytica scandal under different nations' laws, both as they exist today and after May 25, 2018, when the European Union's General Data Protection Regulation goes into effect. Hurley is an adjunct professor of the practice of computer science at Brown University, a Fellow at the Institute for Quantitative Social Science at Harvard University, and a seminal player in the development of industry standards for information technology for the past 30 years. If you'd like to hear more, listen to our other episode with Hurley where she talks about the Seven Stages of GDPR. Music - "As Colourful As Ever" by Broke For Free (CC BY 3.0)
Cambridge Analytica has moved from the shadows to headline news after it was revealed that the firm profited from misappropriated Facebook data of 50 million Americans. That data was used to help the Trump campaign win the 2016 election using experimental psychology and manipulation. In this episode, we learn about the tactics and methods used, such as psychographics, behavior modeling, segmentation, and targeting. Produced at Seedling Inc. Find me on Twitter @JasonZagami and LinkedIn
This week, investigative journalist Greg Foyster leads us over the highways and byways of social media to show that almost every aspect of what's come to be known as the Cambridge Analytica - Facebook data harvesting scandal could have been predicted from the recent history of various digital media platforms and tech companies.
As Atlantic staff writer Robinson Meyer recently wrote, Facebook “is currently embroiled in the worst crisis of trust in its 14-year history.” This week, the company’s CEO Mark Zuckerberg testified before the U.S. Congress for the first time. It’s not clear whether Congress will seek to exert more regulatory control over the company, even after revelations that as many as 87 million people unwittingly had their Facebook data given to the political firm Cambridge Analytica, which may have used some of that data to influence the 2016 U.S. election. And the questions senators asked of Zuckerberg suggest they may not yet understand Facebook well enough to regulate it effectively, even if they wanted to. In this Radio Atlantic news update, Rob shares what he learned from his exclusive interview with Zuckerberg, and from the CEO’s testimony before Congress. We discuss with Atlantic senior editor Gillian White whether Facebook can be regulated, and whether it will. Links - “Mark Zuckerberg Says He’s Not Resigning” (Robinson Meyer, April 9, 2018) - “The 3 Questions Mark Zuckerberg Hasn’t Answered” (Robinson Meyer, April 10, 2018) - “How Facebook’s Ad Tool Fails to Protect Civil Rights” (Gillian B. White, October 28, 2016) - “Facebook Lets Advertisers Exclude Users by Race” (Julia Angwin and Terry Parris Jr., ProPublica, October 28, 2016) - Sarah Jeong on Twitter - “The Most Important Exchange of the Zuckerberg Hearing” (Alexis C. Madrigal, April 11, 2018) - “Mark Zuckerberg Is Halfway to Scot-Free” (Alexis C. Madrigal, April 11, 2018) - “My Facebook Was Breached by Cambridge Analytica. Was Yours?” (Robinson Meyer, April 10, 2018) - “Can Anyone Unseat Mark Zuckerberg?” (Robinson Meyer, March 22, 2018) - “The Cambridge Analytica Scandal, in 3 Paragraphs” (Robinson Meyer, March 20, 2018) Learn more about your ad choices. Visit megaphone.fm/adchoices
In this week's episode of Intellicast, we have our first returning guest, EMI's political polling expert, Adam Dietrich! Join Adam Jolley, Brian Lamar and Adam Dietrich as they discuss:
- The Facebook / Cambridge Analytica data scandal
- How it impacts market research
- Impact to the future of political polling
- AAPOR (American Association for Public Opinion Research)
- Mount Rushmore of Chicago eateries
- And much more!
Check out Adam's blog, Facebook and Cambridge Analytica – A Turning Point in Data Collection, here:
Marketing News staff writers Sarah, Zach and Hal discuss whether users or advertisers will leave Facebook after the Cambridge Analytica scandal. Nate Elliott, principal of Nineteen Insights, weighs in with his own thoughts on the matter. PLUS: Sarah and Hal give a recap of their April Marketing News features: Inside the Walled Garden of Social Media Communities https://bit.ly/2GnlBfE A Night in the Life of the Chicago Bulls Digital Media Team https://bit.ly/2H567Oz All Marketing News April stories: https://bit.ly/1srUhjm
Beyond Social Media: The Marketing, Advertising & Public Relations Podcast
During episode 223, co-hosts B.L. Ochman and David Erickson discussed bulletproof vests for kids; the growing Cambridge Analytica Scandal involving Palantir and Peter Thiel; driving a car with VR; brands on HQ Trivia; long-form audio; Google image optimization; Google Lens comes to iOS; a Boring development; the mountain of data you gave away to Google and Facebook will blow your mind; Desperately Seeking Counsel on Craig's List; no more Lytro; a social network with no humans allowed; plus great new apps and important stats and a lot of stuff in between. Show Notes & Links: http://beyondsocialmediashow.com/223 Connect with the show on social media Subscribe to the weekly eNewsletter
This Sirius XM interview with Dr. Gleb Tsipursky discusses how to address fake news in the context of Facebook's Cambridge Analytica scandal.
We talk to cybercrime and digital forensics investigator Ryan Duquette about the Cambridge Analytica scandal and what it means for your future on Facebook. We also teach you how to become a mashup master with Rave, a machine learning tool for music lovers. Plus, we talk to @Brittlestar about which short-form content platform will rise above the rest in a post-Vine world. In Socially Speaking, we discuss why the posts you follow on Instagram are now in chronological order (once again!).
The whole 2018 Facebook scandal regarding Cambridge Analytica doesn't bother me even a tiny bit. And I think that anyone who gets bothered is mighty naive... ______________________ Here You Can Find All My Gear/Equipment: ► http://Kit.com/vingin Here Is Where You Can Contact Me: ► https://goo.gl/bw7w9x Here Is How You Can Support Me: ► https://goo.gl/8diJ1j Here You Can Subscribe To My YouTube Channel: ► https://goo.gl/9f9Sgb ______________________ NEW EPISODE IS UPLOADED EVERY DAY / SAME TIME: New York (USA) 12.00, Los Angeles (USA) 09.00, Tokyo (Japan) 02.00, Sydney (Australia) 04.00, London (UK) 17.00, Stockholm (SE) 18.00 ______________________ ps. Don't forget to subscribe and turn on notifications.
Beyond Social Media: The Marketing, Advertising & Public Relations Podcast
During episode 222, co-hosts B.L. Ochman and David Erickson discussed the global anti-gun violence #MarchForOurLives; Facebook's terrible week as news about the Cambridge Analytica data scandal finally hit mainstream media; how to live Tweet your presentation; the Marlon Bundo bunny book war between VP Pence and John Oliver; NYTimes planned TV series; YouTube's new Quickstream for Chrome; marketing of Crusoe the Celebrity Puppy; #hashtags in Instagram bios at last; animated paint chips; a "Girls Only" job ad from Outsourcely; China's social scoring; #FilmsMadeByFarmers; plus great new apps and important stats and a lot of stuff in between. Show Notes & Links: http://beyondsocialmediashow.com/222 Connect with the show on social media Subscribe to the weekly eNewsletter
Today, a Happy Hour Hustle special report. Our crack team is reporting and commentating on breaking news from last week: the Cambridge Analytica scandal. What happened? Who's to blame? Join Kim, Alex, and Kayden to find out! News Theme 1 by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) Artist: http://audionautix.com/
In this week's podcast, Ayesha Hazarika was joined by UnHerd technology editor Nigel Cameron and playwright and commentator Bonnie Greer. Each of the panellists brought along a recent story they felt had gone underreported by the mainstream media. Stories include the wider issue of big data amid the Cambridge Analytica scandal, the gender pay gap, and a world-ending asteroid.
On Party Politics this week, co-hosts Jay Aiyer and Brandon Rottinghaus are going to catch you up on the week's political news: the war on opioids, former FBI Deputy Director Andrew McCabe's firing, the spending bill caught in a partisan fight ahead of the deadline, and Vice President Mike Pence's bunny Marlon Bundo. And finally, Brandon and Jay talk about the Cambridge Analytica scandal. Don't forget about our weekly Texas-centered episode too; you can subscribe to it on Apple Podcasts. Tweet us using #PartyPoliticsPod or email partypoliticspod@houstonpublicmedia.org.
Mid-Atlantic - conversations about US, UK and world politics
Facebook exposed data on 50 million Facebook users, and a similar number of Facebook users in the UK, to a researcher who worked at Cambridge Analytica, which worked for the Trump campaign. Did Cambridge Analytica help swing the last US election and the Vote Leave campaign in the UK? Why are we more worried about being manipulated politically but not about our consumer habits? Isn't Facebook just a bunch of cookies on steroids? See acast.com/privacy for privacy and opt-out information.
Andrew Prokop, senior politics reporter for Vox.com, joins Dara and Matt to break down the scandal that’s rocking the worlds of technology and politics. References: The Jane Mayer piece on the Mercers Dara mentioned A study on Big Five personality attributes and voting behavior SCIENCE! on the Cambridge Analytica model A big five personality test you can do at home Andrew's Cambridge Analytica explainer Matt's good Facebook take Learn more about your ad choices. Visit megaphone.fm/adchoices
Nothing is as it appears to be with the Facebook Cambridge Analytica scandal. Plus, to silence dissent, the left has begun suing "conspiracy theorists." And finally, members of the deep state express their outrage over a new poll which finds that nearly three out of four Americans believe in the existence of the Deep State. If you haven't, subscribe to the Propaganda Report podcast on iTunes. Rate and review us because it helps us move up in the search rankings and we'd love to hear from you. https://itunes.apple.com/us/podcast/t… Or subscribe on your Android listening platform of choice. http://www.subscribeonandroid.com/pro… If you want to help support the show, you can do so by donating via PayPal at https://www.paypal.com/donate/?token=2TvY2zlEkKJXp_rkEgJs0B4QdCkt30MwOIxlp4vM4stbrgoTdmPqTfQ6wmEYFAkBcUgsDG&country.x=US&locale.x= or better yet, you can become a Patron. https://www.patreon.com/propagandareport Thank you to our current Patrons and everyone who's donated to the show via Paypal or supported the show in other ways. You are why we are able to continue doing the show. THANK YOU. Every little bit helps. Like the Propaganda Report Podcast on Facebook https://www.facebook.com/thepropagand…
In the wake of the Cambridge Analytica imbroglio, 50 million Facebook profiles stand compromised. One upshot: users must learn to protect themselves, experts say. See acast.com/privacy for privacy and opt-out information.
Jonathan Hackenbroich covers for Mark Leonard and speaks with Professor John Naughton about Facebook's data leak and its monetisation of data. Bookshelf:
East West Street by Philippe Sands https://www.amazon.co.uk/East-West-Street-Non-fiction-Book/dp/1474601901
95 Theses about Technology by John Naughton http://95theses.co.uk/
China's Big Data Big Brother by Mark Leonard https://www.project-syndicate.org/commentary/xi-jinping-surveillance-state-by-mark-leonard-2018-02
Picture credits: Cambridge Analytica by Thought Catalogue via Flickr https://www.flickr.com/photos/lookcatalog/26029490417/in/photolist-FE8WDx-bWN3ja-21kifhB-21htHYs-FHycfF-21kgHZp-Dcy5Jw-Dczwm1-DczwPW-21kgHBF-24iDy1d-HfhRq7-Pi78SZ, CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/)