At Donald Trump's inauguration earlier this year, the returning president made a striking break from tradition. The seats closest to the president – typically reserved for family – went instead to the most powerful tech CEOs in the world: Elon Musk, Mark Zuckerberg, Jeff Bezos and Sundar Pichai. Between them, these men run some of the most profitable companies in history. And over the past two decades, they've used that wealth to reshape our public sphere.

But this felt different. This wasn't discreet backdoor lobbying or a furtive effort to curry favour with an incoming administration. These were some of the most influential men in the world quite literally aligning themselves with the world's most powerful politician – and his increasingly illiberal ideology.

Carole Cadwalladr has been tracking the collision of technology and politics for years. She's the investigative journalist who broke the Cambridge Analytica story, exposing how Facebook data may have been used to manipulate elections. Now, she's arguing that what we're witnessing goes beyond monopoly power or even traditional oligarchy. She calls it techno-authoritarianism – a fusion of Trump's authoritarian political project with the technological might of Silicon Valley.

So I wanted to have her on to make the case for why she believes Big Tech isn't just complicit in authoritarianism, but is actively enabling it.

Mentioned:
The First Great Disruption 2016-2024, by Carole Cadwalladr
Trump Taps Palantir to Compile Data on Americans, by Sheera Frenkel and Aaron Krolik (New York Times)
This is What a Digital Coup Looks Like, by Carole Cadwalladr (TED)
The Nerve News

Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Is it part of the Moderates' new media strategy? Listen to all episodes in Sveriges Radio Play.

This week, Aftonbladet published an investigation that made the Moderates' press team lash out. Aftonbladet reported that Ulf Kristersson's daughter had been allowed to borrow Harpsund when her student association held a kick-off two years ago – a questionable use of the prime minister's residence, according to a professor interviewed by the paper. But the counterattack was not long in coming. On X, the prime minister's deputy press chief claimed that Aftonbladet had hounded the prime minister's daughter and her friends and that they should be ashamed. Aftonbladet, in turn, shot back that the Moderates' press officer was simply lying.

Erik Petersson has dived straight into the week's biggest shitstorm and interviewed Aftonbladet's head of news and investigations, Jonathan Jeppson.

A feud between Leksands IF and local media – ceasefire or peace?

It has been stormy around the big club Leksands IF – not only because things are currently going very badly in the SHL hockey league, where the team sits at the bottom after several straight losses, but also because Leksand has had a turbulent relationship with the media, above all the local paper Falukuriren. In October the club locked the doors to its practices every day except Wednesdays, which became national news. But Leksand's conflict with the media goes back further than that – to April of this year.

Tonchi Percan travelled to Dalarna during a period of ceasefire and spoke with Markus Josefsson, reporter at Falukuriren; Markus Bäckström, head of sports at Falukuriren; Thomas "Tjomme" Johansson, general manager of Leksands IF; Helena Nyman, editor-in-chief and responsible publisher of Falukuriren; and Thorbjörn Carlsson, editor-in-chief and responsible publisher of Siljan News.

New EU legislation on political advertising

On 10 October, the EU's new legislation on political advertising came into force. The regulation is meant to increase transparency, reduce the risk of foreign election interference and prevent another Cambridge Analytica scandal. But critics have described it as added bureaucracy that misses the mark, and dominant players such as Meta and Google have announced that they are therefore halting all political advertising in their ad systems.

So what does this mean for the Swedish advertising market as we head into an election year? Greater opportunities for Swedish media houses to make money, or a threat to democracy?

This week, Joanna Korbutiak has waded through the EU's tangled legal clauses and interviewed Jacob Dexe, Public Affairs Manager at IAB Sverige, and Calle Boija, Head of Adtech at NTM.
In this episode of Crazy Wisdom, host Stewart Alsop speaks with Eli Lopian, author of AICracy and founder of aicracy.ai, about how artificial intelligence could transform the way societies govern themselves. They explore the limitations of modern democracy, the idea of AI-guided lawmaking based on fairness and abundance, and how technology might bring us closer to a more participatory, transparent form of governance. The conversation touches on prediction markets, social media's influence on truth, the future of work in an abundance economy, and why human creativity, imperfection, and connection will remain central in an AI-driven world.

Check out this GPT we trained on the conversation.

Timestamps
00:00 Eli Lopian introduces his book AICracy and shares why democracy needs a new paradigm for governance in the age of AI.
05:00 They explore AI-driven decision-making, fairness in lawmaking, and the abundance measure as a new way to evaluate social well-being.
10:00 Discussion turns to accountability, trust, and Eli's idea of three AIs – government, opposition, and NGO – balancing each other to prevent corruption.
15:00 Stewart connects these ideas to non-linearity and organic governance, while Eli describes systems evolving like cities rather than rigid institutions.
20:00 They discuss decade goals, city-state models, and the role of social media in shaping public perception and truth.
25:00 The focus shifts to truth detection, prediction markets, and feedback systems ensuring "did it actually happen?" accountability.
30:00 They talk about abundance economies, AI mentorship, and redefining human purpose beyond traditional work.
35:00 Eli emphasizes creativity, connection, and human error as valuable, contrasting social media's dopamine loops with genuine human experience.
40:00 The episode closes with reflections on social currency, self-healing governance, and optimism about AI as a mirror of humanity.

Key Insights

Democracy is evolving beyond its limits. Eli Lopian argues that traditional democracy – one person, one vote – no longer fits an age where individuals have vastly different technological capacities. With AI empowering some to act with exponential influence, he suggests governance should evolve toward systems that are more adaptive, participatory, and continuous rather than episodic.

AI-guided lawmaking could ensure fairness. Lopian's concept of AICracy imagines an AI system that drafts laws based on measurable outcomes like equity and happiness. Using what he calls the abundance measure, this system would assess how proposed laws affect societal well-being – balancing freedoms, security, and fairness across all citizens.

Trust and accountability must be engineered. To prevent corruption or bias in AI governance, Lopian envisions three independent AIs – a coalition, an opposition, and an NGO – cross-verifying results and exposing inconsistencies. This triad ensures transparency and keeps human oversight meaningful.

Governance should be organic, not mechanical. Drawing inspiration from cities, Lopian and Alsop compare governance to an ecosystem that adapts and self-corrects. Like urban growth, effective systems arise from real-world feedback, where successful ideas take root and failing ones fade away naturally.

Truth requires new forms of verification. The pair discuss how lies spread faster than truth online and propose an algorithmic "speed of a lie" metric to flag misinformation. They connect this to prediction markets and feedback loops as potential ways to keep governance accountable to real-world outcomes.

The abundance economy redefines purpose. As AI reduces the need for traditional jobs, Lopian imagines a society centered on creativity, mentorship, and personal fulfillment. Governments could guarantee access to mentors – human or AI – to help people discover their passions and contribute meaningfully without economic pressure.

Human connection is the new currency. In contrast to social media's exploitation of human weakness, the future Lopian envisions values imperfection, authenticity, and shared experience. As AI automates production, what remains deeply human – emotion, error, and presence – becomes the most precious and sustaining form of wealth.
Tuesday Headlines: Trump says he 'would love' to run for a third term, the pre-sentence hearing for Hannah McGuire's killer resumes today, neo-Nazi leader Tom Sewell faces court in Victoria, Aussies can claim a share of the $50m Cambridge Analytica payout, and the BOM's new website bombs.

Deep Dive: From athlete doping to criminal syndicate match fixing, for as long as there has been professional sport, there have been people trying to cheat and make big bucks. As the Australian 2032 Olympics edge closer, police and sporting agencies are already working to prevent such threats from hindering the integrity of the games. In this episode of The Briefing, Tara Cassidy speaks with Sport Integrity Australia's Chris Butler and James Moller about how the body catches those doing the wrong thing, and how it is preparing to crack down on cheats and criminals ahead of the biggest sporting spectacle in the world.

Follow The Briefing:
TikTok: @thebriefingpod
Instagram: @thebriefingpodcast
YouTube: @TheBriefingPodcast
Facebook: @LiSTNR Newsroom

See omnystudio.com/listener for privacy information.
This episode of The New Abnormal podcast features Dr Emma Louise Briant, an internationally recognised expert and professor of information warfare & propaganda. She helps policymakers, governments, business leaders, journalists and the public understand and respond to the challenge of contemporary online threats.

Her research, testimony and solutions on the rapid evolution of surveillant propaganda and its implications for democracy were central in exposing the Cambridge Analytica scandal. Her work has informed civil society organisations, the UN, US Congress, UK Parliament, Canadian Parliament, New Zealand Parliament, and EU Parliament. Emma's books include the Routledge Handbook of the Influence Industry, Bad News for Refugees, and Propaganda & Counter-Terrorism. She also served as an advisor for the Emmy-nominated 'People You May Know' and as Senior Researcher for the Oscar-shortlisted 'The Great Hack'.

She's an absolutely fascinating person, so I really enjoyed listening to Emma's perspectives, and I hope you will too!
Nick Cleveland-Stout, a Research Associate in the Democratizing Foreign Policy program at the Quincy Institute for Responsible Statecraft, discusses his research into Israel's hiring of a conservative-aligned firm, Clock Tower X LLC, led by former Trump campaign manager Brad Parscale, to create websites and content designed to train AI models like ChatGPT with pro-Israel messaging aimed primarily at Gen Z audiences. Israel is also paying a cohort of 14 to 18 social media influencers around $7,000 per post, yet Cleveland-Stout observes that none of these influencers are registering under the Foreign Agents Registration Act (FARA) or marking their social media posts as being distributed on behalf of Israel. He notes how Parscale, who previously engaged the services of the microtargeting firm Cambridge Analytica, is influencing how GPT models like ChatGPT are trained to frame topics and respond to them on behalf of Israel.

Cleveland-Stout also notes how Larry Ellison is poised to establish a media dynasty with his recent purchase of CBS News, of which his son David has taken control, while Ellison plans a bid for CNN's parent company and Trump has tapped Ellison to purchase TikTok. All this in addition to Ellison having donated $16.6 million in 2017 to Friends of the IDF, the largest-ever donation to the organisation.

Cleveland-Stout also details his research into how tens of millions of dollars have been flooding into American think tanks directly from foreign governments and defence contractors, in the hope of influencing the analysis of those think tanks, whose output is usually rubber-stamped as objective and whose experts are frequently invited onto legacy media programmes to disseminate their research.

Get full access to Savage Minds at savageminds.substack.com/subscribe
Aliveness: Earth Medicine and Deep Inner Work to Connect us With Who We Are
This episode asks: how do we stay human inside each of the social media platforms?

I trace the ways some of the platforms undermine my values: from Meta and its documented harms, to TikTok's dopamine machine, to Substack's conflicted role in the attention economy. I bring in Shoshana Zuboff's theory of surveillance capitalism, the story of Cambridge Analytica, teen suicides linked to Instagram, and a chilling case of a man's fatal entanglement with a Meta AI.

This is an episode about radical ethics. It's about what we consent to, what we refuse, and how we choose to connect in ways that don't hollow us out. Mostly it's about responsibility. I walk through my decisions platform by platform, and why podcasting has become my chosen container.

What you will hear
What "surveillance capitalism" means, and why it threatens democracy
The blurred line between influence, manipulation, and consent online
Meta's record: Cambridge Analytica, teen mental health, and AI experiments gone wrong
Why TikTok and reels destroy our dopamine balance
Why Substack works for me – and where it fails on anti-censorship
The lure and limits of LinkedIn, BlueSky, and YouTube
Why podcasting is my chosen space: no ads, no algo, no intrusion
Web3: I forgot to mention this in the podcast! Web3 social is an area of active research for me, and while I'm not yet sharing my work on Farcaster or Base, it's in the works. This is where we own our work.

Timestamps
02:00 Why the container matters as much as the content
04:30 Surveillance capitalism (Shoshana Zuboff)
06:00 Not all data use is surveillance, but the line blurs
09:20 The escalation: clicks → biometrics → behaviour shaping
11:00 Substack as a model, but also its flaws
13:40 Meta's record: misinformation, Cambridge Analytica, teen suicides
15:17 Meta AI harms: the New Jersey stroke patient case
19:00 X as cesspool, TikTok as dopamine hijack
25:00 Why feeds are different from algorithms
28:00 LinkedIn, BlueSky, YouTube – different shadows
35:00 Substack's anti-censorship stance and my conflict
42:00 Why podcasting: intimacy without intrusion
46:00 Closing: real life is local

Links mentioned
Shoshana Zuboff, The Age of Surveillance Capitalism: Book link
Cambridge Analytica scandal (Facebook fined $5B, 2019): Wikipedia
Research on Instagram and teen mental health / suicide risk: Wall Street Journal coverage
Meta AI and the New Jersey man's death (stroke patient case): Reuters
GenAI Content Risk Standards leak (Meta, 2025): Reuters
On TikTok's dopamine effects and algorithm design: Brown University
Robert Reich's Substack ("Coffee Klatch"):
After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and the author of several books.

Taylor also joined me for this discussion more than 5 years ago now. And a lot has happened in that time.

Upcoming episodes will include guests Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers.

We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite.

As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
"How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith 00:00-00:43
Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen 00:43-00:44
It's a different world.

Taylor 00:45-00:45
In some ways.

Nate Erskine-Smith 00:45-01:14
Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?

Taylor Owen 01:14-03:06
I mean, this is part of it – the technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about sort of some of the more acute harms of like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way five years ago that we shared and consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI and particularly chatbots. And I think a big question we face in this conversation is, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are some of the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith 03:07-03:55
My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online and our laws need to reflect that reality. All of the challenges you've articulated to varying degrees exist in offline spaces, but can be incredibly hard. The rules we have can be incredibly hard to enforce at a minimum in the online space. And then some rules are not entirely fit for purpose and they need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.

Taylor Owen 03:55-04:31
Yeah, exactly. Like the commission that spent a year at the request of all political parties in parliament, at the urging of the opposition party – it spent a year looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like that is a remarkable statement and it kind of came and went. And I don't know why we moved off from that so fast.

Nate Erskine-Smith 04:31-05:17
Well, and there's a lot to pull apart there because you've got purposeful, intentional, bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook and Instagram, through Meta, blocking news in Canada. And your research, this was the stat that stood out.
Don't want to put you in and say like, what do we do? Okay. So there's – you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.

Taylor Owen 05:17-05:17
A day. Yeah.

Nate Erskine-Smith 05:18-05:18
So right.

Taylor Owen 05:18-05:27
11 million views a day. And we should – sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.

Taylor 05:27-05:29
So 11 million times a Canadian.

Taylor Owen 05:29-05:45
And what that means is 11 million times a Canadian would open one of their news feeds and see Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.

Taylor 05:45-05:45
Okay.

Taylor Owen 05:45-05:46
So that's just it.

Nate Erskine-Smith 05:46-06:04
So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, Canadians, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.

Taylor Owen 06:04-06:04
They say they get.

Nate Erskine-Smith 06:04-06:05
It doesn't make any sense.

Taylor Owen 06:06-06:23
It doesn't and it does. It's terrible. They ask Canadians, like, where do you get – people who use social media to get their news, where do they get their news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations – Traditional journalism. There is –

Taylor 06:23-06:23
There is –

Taylor Owen 06:23-06:47
Well, this is what I was going to get at, right? Like, there is – one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would be labeled probably news in people's.

Nate Erskine-Smith 06:47-06:48
Can't trust the thing we say.

Taylor Owen 06:48-07:05
Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, like that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith 07:05-07:06
It's public affairs content.

Taylor Owen 07:06-07:39
Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.

Taylor 07:39-07:39
Okay.

Taylor Owen 07:39-07:56
So all of journalism combined in the entire country, 20 percent of engagement; influencers, 50 percent in the last election.
So like we've shifted, at least on social, the actors and people and institutions that are fostering our public.

Nate Erskine-Smith 07:56-08:09
Is there a middle ground here where you take some people that play an influencer-type role but also would consider themselves citizen journalists in a way? How do you – It's a super interesting question, right?

Taylor Owen 08:09-08:31
Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it's revelatory and it's a journalistic act, right?

Taylor 08:31-08:34
Like this is kind of a journalistic act we're playing here.

Taylor Owen 08:35-08:49
So I don't think – I think these lines are gray. But I mean, there's some other underlying things here, which, like – it matters, I think, if journalistic institutions go away entirely, right? Like that's probably not a good thing. Yeah. I mean, that's why

Nate Erskine-Smith 08:49-09:30
I say it's terrifying. There's a lot of good in the digital space – there's creative destruction, there's a lot of work to provide people a direct sense of news without that filter that people may mistrust in traditional media. Having said that, so many resources and there's so much history to these institutions, and there's a real ethics to journalism, and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that is devastating for democracy. I think so.

Taylor Owen 09:30-09:49
And I think the bigger frame of that for me is a democracy needs signals of – we need – as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we're not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.

Nate Erskine-Smith 09:49-10:13
And that's what – that is really going away. Pause for a sec. So "signals of reliability" is a good phrase. What does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generated

Taylor 10:13-10:14
and something that is machine generated.

Nate Erskine-Smith 10:15-10:26
That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,

Taylor 10:27-10:27
which is probably true.

Nate Erskine-Smith 10:28-10:33
But how do you have a signal of reliability in a different kind of content?

Taylor Owen 10:34-13:12
I mean, we're getting into, like, journalism policy here to a certain degree, right? And it's a wicked problem, because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real, like, third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example and a horrible example happened last week, right? So Charlie Kirk gets assassinated.
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I've ever seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists, and like on both sides. Right. People blaming Israel, people, whatever. Right. And that isn't a function of like – Aaron Charlie Kirk to Jesus. Sure. Like – It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of, like – there was journalism being produced about that. Like New York Times, citizens were doing good content about what was happening. It was like a moment of uncertainty and journalism was doing or playing a role, but it wasn't. And so I think with all of these questions, including the online harms ones, and I think how we step into an AI governance conversation, the focus always has to be on those systems. I'm like, who and what and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? Like all of that stuff we've been talking about, the role reliability of information plays. Like these algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor 13:12-13:15
That's not a – instead of reactionary –

Taylor Owen 13:15-13:42
Or like what's most – it gets most engagement or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.

Nate Erskine-Smith 13:43-14:12
And your focus on systems and assessing risks with systems – I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen 14:12-14:23
I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?

Nate Erskine-Smith 14:23-14:33
There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well.
Exactly.

Taylor 14:33-14:35
So the sort of supply and demand side thing, right?

Nate Erskine-Smith 14:35-14:38
There's the digital service tax, which is no longer a thing.

Taylor Owen 14:40-14:52
Although it still is a piece of passed legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith 14:52-14:55
I don't take full responsibility for that one.

Taylor Owen 14:55-14:56
No, you shouldn't.

Nate Erskine-Smith 14:58-16:03
But other countries have seen more success. Yeah. And so you've got – in the UK, in Australia, the EU really has led the way. In 2018, the EU passes GDPR, which is a privacy set of rules, which we are still behind seven years later. But then in 2022, 2023, you've got the Digital Services Act that passes. You've got the Digital Markets Act. And as I understand it – and we've both been involved in international work on this, and we've heard from folks like Frances Haugen and others about the need for risk-based assessments, and you're well down the rabbit hole on this – isn't it, at a high level: you deploy a technology, you've got to identify material risks, you then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.

Taylor Owen 16:04-16:05
That's like how I have it in my head.

Nate Erskine-Smith 16:05-16:06
I mean, that's it.

Taylor Owen 16:08-16:14
Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.

Nate Erskine-Smith 16:14-16:25
Exactly. Which people – I want to get to that, because C-63 gets us a large part of the way there. I think so. And yet it has been sort of cast aside.

Taylor Owen 16:25-17:39
Exactly. Let's touch on that. But I do think what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. There's no – the digital economy and our digital lives are so vast, and the incentives and the effect they have on society are so broad, that there's no one solution. So anyone who tells you to fix privacy policy and you'll fix all the digital problems we just talked about is full of it. Anyone who says competition policy – like, break up the companies – will solve all of these problems is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've built a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So like you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all over.

Nate Erskine-Smith 17:39-17:41
Especially in a minority parliament – short periods of time, legislatively.

Taylor Owen 17:41-18:20
Different countries have taken different pieces of it.
Now, on the online harms piece, which is what the previous government took really seriously – and I think it's worth putting a point on that, right – when we talked last, that was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? Like a lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and into trying to learn from what Europe, Australia and the UK had all done. And we were kind of taking the benefit of being late, right? So they were all ahead of us.

Taylor 18:21-18:25
People you work with on that grant committee. We're all quick and do our own consultations.

Taylor Owen 18:26-19:40
Exactly. And like the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says if you're going to launch a digital product, right, like a consumer-facing product in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about – or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Like broad categories that we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong – let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids, and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?

Nate Erskine-Smith 19:40-19:40
Yeah.

Taylor 19:40-19:43
Like what could go wrong? Yeah, a lot could go wrong.

Taylor Owen 19:43-20:27
And maybe strange men will approach teenage girls. Maybe, right? Like if you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? Like you put a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used so that we can monitor – publics and researchers can monitor – whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was like a widespread problem of teenage girls being harassed by strange older men.

Taylor 20:28-20:29
Incredibly creepy.

Taylor Owen 20:29-20:37
A very easy – but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor 20:37-20:41
And this kind of mechanism would have just filtered out.

Taylor Owen 20:41-20:51
Default settings, right? And thinking a bit before you launch a product in a country about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith 20:52-21:05
Yeah, I quite like the – we, I mean, maybe you've got a better read of this, but in the UK – California has pursued this.
I was looking recently – Elizabeth Denham is now the Jersey Information Commissioner or something like that.

Taylor Owen 21:05-21:06
I know, it's just – yeah.

Nate Erskine-Smith 21:07-21:57
I don't – random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the Information Commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach, in that even outside of social media platforms, even outside of AI – take a product like Roblox, where tons of kids use it – just forcing companies to ensure that the default settings are prioritizing child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.

Taylor Owen 21:58-24:25
Well, when you say consumer safety, it's worth like referring back to what we mean. Like a duty of care can seem like an obscure concept. But – you're a lawyer – it's a real thing, right? Like you walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? Like that is a duty of care that you have for me when I walk into your public space or private space. Like that's all we're talking about here. And the age-appropriate design code, yes, was sort of developed and implemented by a Canadian in the UK. And it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? Because we think, like, kids don't have the same rights as adults. We have different duties to protect kids than adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Like kind of things that – Seem obvious. And if you're now a child in the UK and you go on a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don't have that because that bill didn't pass, right? So like there's consequences to this stuff. And I get really frustrated now when I see the conversation sort of pivoting to AI, for example, right? Like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world – which are probably real, right? Like, not discounting its power – and just move on from all of these both problems and solutions that have been developed for a set of challenges that both still exist on social platforms – like they haven't gone away, people are still using these tools and the harms still exist – and probably are applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done – to the people working in this space, and like the wide stakeholders in this country who care about this stuff and are working on it – it just, it feels like... you said deja vu at the beginning, and it is deja vu, but it's kind of worse, right?
Cause it's like deja vu and then ignoring the

Taylor 24:25-24:29
five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even

Taylor Owen 24:29-24:41
Well, yeah. I mean, hopefully – I actually am not – I'm actually optimistic, I would say, that we will, for a few reasons. Like, one, citizens want it, right? Like.

Nate Erskine-Smith 24:41-24:57
Yeah, I was surprised on the – so you mentioned there that the rules that we design, the risk assessment framework really applied to social media, could equally be applied to deliver AI safety, and it could be applied to new technology in a useful way.

Taylor Owen 24:58-24:58
Some elements of it. Exactly.

Nate Erskine-Smith 24:58-25:25
I think AI safety is a broad bucket of things. So let's get to that a little bit, because I want to pull the pieces together. So I had a constituent come in the office and he is really, like, super mad. He's super mad. Why is he mad? Does that happen very often? Do people be mad when they walk into this office? Not as often as you think, to be honest. Not as often as you think. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen 25:25-25:25
Okay.

Nate Erskine-Smith 25:25-26:36
Okay. Yep. He believes Mark Carney ripped him off – not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online: Mark Carney told him to invest money. He invested money and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. Like, how could you have been deceived? But then I go and I watch the video. And it is – okay, I'm not going to send the 200 bucks, and I've grown up with the internet, but I can see how – Absolutely. In the same way phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud. If we aren't already, we are going to see many challenges with respect to AI safety. What, over and above the risk assessment piece, do we do to address these challenges?

Taylor Owen 26:37-27:04
So that is a huge problem, right? Like AI video fraud is a huge challenge. In the election, when we were monitoring the last election, by far the biggest problem or vulnerability of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and morph them into conversations about investment strategies.

Taylor 27:05-27:07
And it was driving people to a crypto scam.

Taylor Owen 27:08-27:11
But it was torquing the political discourse.

Taylor 27:11-27:11
That's what it must have been.

Taylor Owen 27:12-27:33
I mean, there's other cases of this, but that's probably – and it was running rampant on particularly Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? Like nobody – but it torqued our political debate. It ripped off some people. And these kinds of scams are –

Taylor 27:33-27:38
It's clearly illegal. It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen 27:38-27:54
So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely.
So what do you do about that, right? And the head of the Canadian Banking Association said there's like billions of dollars of AI-based fraud in the Canadian economy right now. Right? So it's a big problem.

Taylor 27:54-27:55
Yeah.

Taylor Owen 27:55-28:46
I actually think there's like a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on, like, the AI fraud potential of your product. Responsibility for your platform. And then when it starts to circulate, we would see it. They'd be called out on it. They'd have to take it down. And like that's that, right? Like so then we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years – these online harms risk assessment models – and bringing some of the consumer-facing AI, both products and related harms, into the framework.

Nate Erskine-Smith 28:47-30:18
To put it a different way – I mean, so this is years ago now that we had this, you know, grand committee in the UK holding Facebook and others accountable. This really was created in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, and saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is they completely acknowledged the need for them to take action when it came to child pornography. And so they said, yeah, well, you know, no liability for us. But of course, there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me from there on out – I mean, there's no real intellectual consistency here. It's more just what should be in that category of things that they should take responsibility for. And obviously harmful content like that should be – that's an obvious first step, but obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they're ignoring privacy protections and everything else throughout the years – I mean, we can't leave it up to them. And setting a clear set of rules to say this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen 30:18-30:28
It does, although I think those responsibilities need to be different for different kinds of harms. Because there are different speech implications and democratic implications of sort of absolute solutions to different kinds of content.

Taylor 30:28-30:30
So like child pornography is a great example.

Taylor Owen 30:30-31:44
In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, so including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation, and that's to take it down within 24 hours. And the reason you can do it with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there's a lot of naked images on the internet that we can train AI with.
So we're actually pretty good at using AI to pull this stuff down. But the bigger one is that, I think, as a society, it's okay to be wrong in the gray area of that speech, right? Like if something is, like, debatable whether it's child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story, right? Like we do not want to suppress and over-index for that gray area on hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith 31:44-31:55
Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's, you know, very obviously illegal.

Taylor Owen 31:55-32:02
And that mechanism is like a takedown mechanism, right? Like if we see fraud, if we know it's fraud, then you take it down, right? Some of these other things we have to go with.

Nate Erskine-Smith 32:02-32:24
I mean, my last question really is – you pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government with lots of similar folks around the table, but a new government and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor 32:24-32:25
Absolutely.

Nate Erskine-Smith 32:25-33:04
You have, for the first time in this country, an AI minister – a junior minister to industry, but still a specific title and portfolio, and with his own deputy minister – who really wants to be seized with this. And in a way, I think, from every conversation I've had with him, he wants to maximize productivity in this country using AI, but is also cognizant of the risks and wants to address AI safety. So where from here? You know, you've talked in the past about sort of a grander tech accountability and sovereignty act. Do we do piecemeal – you know, a privacy bill here and an AI safety bill and an online harms bill, and we have disparate pieces? What's the answer here?

Taylor Owen 33:05-34:14
I mean, I don't have the exact answer. But I think there's some lessons from the past that this government could take. And one is that piecemeal bills that aren't centrally coordinated, or have no sort of connectivity between them, end up with piecemeal solutions that are imperfect and would benefit from some cohesiveness between them, right? So when the previous government released AIDA, the AI Act, it was really in tension in some real ways with the online harms approach. So two different departments issuing two similar bills on two separate technologies, not really talking to each other as far as I can tell from the outside, right? So like we need a coordinated, comprehensive effort on digital governance. Like that's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to that: he or that office could be that role. Because AI is cross-cutting, right? Like every department in our federal government touches AI in one way or another. And the governance of AI, and the adoption on the other side of AI by society, is going to affect every department and every bill we need.

Nate Erskine-Smith 34:14-34:35
So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
If he pulls in the online harms pieces that aren't related to the criminal code and drops those provisions – says, you know, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to.

Taylor Owen 34:35-34:37
With a frame of consumer safety, right?

Nate Erskine-Smith 34:37-34:37
Exactly.

Taylor Owen 34:38-34:39
If he wants...

Nate Erskine-Smith 34:39-34:54
Which is connected to privacy as well, right? Like these are all... So then you have thematically a bill that makes sense. And then you can pull in as well the AI safety piece. And then it becomes a consumer protection bill when it comes to living our lives online. Yeah.

Taylor Owen 34:54-36:06
And I think there's an argument whether that should be one bill or whether it's multiple ones. I actually think there's cases for both, right? There's concern about big omnibus bills that do too many things and too many committees reviewing them and whatever. That's sort of a machinery-of-government question, right? But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics, right? We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is: at the same time as they're being told by our government and by companies that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear and might take their... Why should I use this thing? When I'm seeing some harms, I don't see you guys doing anything about these harms. And I'm seeing some potential real downside for me personally and my family. So even in the adoption frame, I think thinking about data privacy, safety, consumer safety – I think to me, that's the real frame here. It's like citizen safety, consumer safety using these products. Yeah, politically, I just – I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith 36:06-36:25
Right, I agree. And really lean into child safety at the same time. Because, like, I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.

Taylor 36:25-36:28
I want to turn to government and go, do your damn job.

Taylor Owen 36:28-36:48
Or just, like, make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's going to always be a place with sort of content that I would prefer he doesn't see. But I would just like some basic safety standards on that thing. So he's not seeing the worst of the worst.

Nate Erskine-Smith 36:48-36:58
And we should expect that. Certainly that YouTube, with its promotion engine, the recommendation function, is not actively promoting terrible content to your 12-year-old.
And it's worth putting real emphasis on this: one thing we've learned in this moment of repeated deja vu, going back 20 years really, since our experience with social media through to now, is that these companies don't self-govern.
Taylor 37:31-37:31
Right.
Taylor Owen 37:32-37:39
Like we just know that indisputably. So to think that AI is going to be different is delusional. No, it'll be the pursuit of profit, not the public interest.
Taylor 37:39-37:44
Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?
Taylor Owen 37:44-38:00
We're creating something new with the scale of these companies. And to think that their commercial incentives and their broader long-term goals around AI are not going to override these safety concerns is just naive to the nth degree.
Nate Erskine-Smith 38:00-38:38
But I think you make the right point, and it's useful to close on this: these goals of realizing the productivity possibilities and potential of AI alongside AI safety are not mutually exclusive or oppositional goals. You create a sandbox to play in, and companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools (certainly, if I feel safe with my kids learning these tools, growing up with them in their classrooms and everything else), adoption rates will soar. Absolutely. And then we'll benefit.
Taylor Owen 38:38-38:43
They work in tandem, right? And I think you can't have one without the other, fundamentally.
Nate Erskine-Smith 38:45-38:49
Well, I hope I don't invite you back five years from now and we have the same conversation.
Taylor Owen 38:49-38:58
Well, I hope you do invite me back in five years, but I hope it's to think back on all the legislative successes of the previous five years. I mean, that'll be the moment.
Taylor 38:58-38:59
Sounds good. Thanks, David. Thanks. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
Charlie Kirk, the Kirk assassination, George Zinn, Zinn's arrest for child porn, June 14 SLC No Kings march, Armed Queers LLC, Utah County Sheriff's Department, David Lee Hamblin, Satanic ritual abuse (SRA), Hamblin's use of narco-hypnosis, David Leavitt, Ukraine, Gordon Bowen, Sundance, Robert Redford, M. Russell Ballard, Timothy Ballard, Operation Underground Railroad (OUR), OUR's move to Minnesota, OUR's links to Ukraine, Kash Patel, Patel's links to Ballard, Richard C. O'Brien, Kirk's links to Ballard, Medvedev's Ukraine accusations, Jason Goodman, George Webb, attempts to link Ukraine to the assassination, Cambridge Analytica and its network, Russia-Israel links, intrigues in Trump 2.0, OUR as the Sword of Damocles in Trump 2.0, the similarities between Hamblin's techniques and CIA/Pentagon behavior modification
Resources
Music by: Keith Allen Dennis
https://keithallendennis.bandcamp.com/
Hosted on Acast. See acast.com/privacy for more information.
Can cutting-edge innovation be reconciled with technological sovereignty? As European users demand performance on par with American solutions, the champions of digital sovereignty face a major technical challenge: building credible alternatives without compromising on excellence. Alexia Gaudron of ChapsVision shares the secrets of this complex equation.
The episode explores the many facets of this transformation: the impact of public policy and the France 2030 programme, the R&D and acquisition strategies used to build a complete technology ecosystem, European partnerships to pool efforts, and "privacy by design" approaches.
We recently got a new version of ChatGPT: version 5.0. And as always, the CEO of OpenAI, the company behind ChatGPT, posted a long video presentation online about the virtues of the new version. CEO Sam Altman's mission is to create an AGI, an artificial general intelligence: an artificial intelligence on a par with, or better than, human intelligence. Some AI researchers argue this cannot be done with a language model that is basically "just" trained on human communication and therefore performs a kind of probability calculation over where words belong in a text. Other AI researchers, such as Nobel laureate and grand old man of AI Geoffrey Hinton, believe we are creating an intelligence that will exceed our own, and that the consequences are unforeseeable and possibly catastrophic. Have we been handed a tool that holds the salvation of humanity and the planet, or are we headed for outright self-destruction? Or are we perhaps creating an alternative consciousness that we do not fully understand, but that will take part in the world alongside biologically created consciousness? And what are intelligence and consciousness, really? And how do you create them when we do not even fully understand how the mammalian brain works? Assistant professor Michal Kosinski of Stanford University in California believes we have already created an AGI through the large language models behind services like ChatGPT, Gemini and DeepSeek. His research interests span both human and artificial cognition, and his current work focuses on the psychological processes of large language models. Michal was the first to warn about Cambridge Analytica, which led to the scandal in which Facebook was used to exploit users' data improperly. Techtopia met him. Guest: Michal Kosinski, assistant professor, Stanford University. Also: a clip from Instagram of a talk by Geoffrey Hinton, AI developer and Nobel Prize recipient. Link: Michal Kosinski https://www.michalkosinski.com
The title of this episode is »Desinformiere Dich!« ("Disinform Yourself!"), after the book by my guest, Jakob Schirrmacher, whom I am especially pleased to welcome for this conversation. Jakob Schirrmacher is a speaker on media literacy and digitalisation, a publicist, and the founder of the NGO Free Speech Aid. He works on questions of free speech, disinformation, and the social consequences of digital technologies. In his essays, including for WELT, he critically analyses how politics and the media deal with truth and public debate. With Free Speech Aid he campaigns for greater freedom of expression, and for ways to protect it in times of censorship and regulatory pressure. In this episode we talk about truth and its supposed opposite, disinformation. But the real question, I think, is more fundamental: how do we deal with uncertainty and with differing assessments of the world? The conversation mainly covers the social and political dimensions, though the scientific dimension is just as evident and comes up as well. We begin with what makes a modern and open society, what role the individual and freedom play, and the many attacks on the open society and on democracy we are currently witnessing. What should we keep in mind as citizens, and how should we deal with this in the future? Is open discourse a precondition of a modern society? Why does a focus on the individual and on individual rights matter? What is truth? Are there essential differences between natural science and aspects of individual social life? »An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out and that the growing generation is familiarized with the idea from the beginning…«, Max Planck. This brings us to the attempt to define various terms and to trace how they become established in social structures: »Who actually decides what disinformation is?« What does the term disinformation actually mean, and what do we need it for? Is it useful, or rather an ideological weapon, and thus in a certain sense meta-disinformation itself? How does disinformation relate to the term »fake news«? »As Elias Canetti diagnosed in Crowds and Power: the more interpretations a concept admits, the more easily it can be instrumentalised politically.« Is it, then, an attempt to sow linguistic confusion? Are many people falling for a power game played through redefinition and ever new conceptual confusion? »It is an instrument of domination; we can see which measures are being taken to contain disinformation.« Is this merely an academic debate of little relevance, or does the question have concrete consequences for our society? »The remodelling of our information landscape has long been under way.« We discuss this through concrete legislation. What are the Digital Services Act and its precursor, the Netzwerkdurchsetzungsgesetz (Network Enforcement Act), both essentially Made in Germany? »In other words, the regulation we see today is really a German product.« Should Germany be proud of that?
Or are we instead witnessing a grave attack on civil liberties, one that serves as a model for numerous totalitarian states? Did the Network Enforcement Act also establish, or even invent, another term: hate speech, or Hassrede in German? What serious negative consequences, such as overblocking, do these regulations have for free expression online? Is censorship, normally a taboo in democratic societies, being outsourced to internet platforms through clever but perfidious regulation and incentive systems? Is hate speech, too, a rubber term that helps little but does much damage? How should we assess the constant crisis rhetoric used to justify supposedly tough measures and ever new laws? »Experience shows that laws and regulations are rarely repealed once power structures have consolidated.« Is fear, triggered by real or supposed crises, being used to push through ever harsher measures that undermine and erode our democracy and the open society? And it is not only long-term effects that need considering: just because something sounds good does not mean it achieves its goal, or achieves it with acceptable side effects. »Lofty goals have long distracted attention from actual consequences«, Tom Sowell. In the extreme case of cancel culture we often no longer need laws at all: »We create a social climate that reacts so emotionally to certain questions that […] you are immediately pushed into a corner. Few people want to end up there, so you'd better say nothing.« More and more is delegated, directly or indirectly, »upwards«, leading to ever greater concentrations of power. Karl Popper, the author of The Open Society, warned urgently against this decades ago: »The most important thing is to distrust all those great prophets who carry a patent solution in their pocket and tell you: if you just give me full power, I will lead you to heaven. The answer to that is: we give no one full power over us; we want power reduced to a minimum. Power is itself an evil. And we cannot drive out one evil with another.« […] »The basic idea of democracy is to limit power.« Why do so many people stand idly by while our democracy is substantially damaged? »We have made ourselves comfortable in our democracy and have settled into structures in which it has become rather unsexy to be against the state.« It becomes especially critical when one considers the role journalism ought to play. Being critical of the state is the core business of political journalists. Yet aren't large parts of journalism increasingly positioning themselves as a bulwark in front of the state, defending all manner of state overreach? What role should the state play in an open society? Have we turned into a nanny state, asking permission for anything and everything instead of developing initiative of our own? Have we been socialised as subjects and forgotten that the idea of the open society was that we are free, and that the state's task is to enable maximum individual freedom and to reduce state interference to an absolute minimum?
Have we unlearned how to deal critically with structures of power? What does that look like across generations? Where does this deference come from? What does constant crisis rhetoric do to us, especially to younger people, even when there is often little reason for it? Have we become crisis-weary? Is this deployed strategically to wear us down? Is the internet an unimaginably powerful manipulation machine? Or is all of that overblown? What is the censorship-industrial complex? Why does it have little to do with classical censorship, and yet, precisely under a supposedly liberal government in the USA, become an established means of suppressing information that did not suit state agencies or certain elites? Cambridge Analytica and its ilk are discussed as an example of the power of election manipulation; or are they rather a marketing gag worth millions? Is this disinformation hype a windfall for social media? If the mechanisms of internet services can supposedly change the outcome of an election, they should certainly suffice to sell more Coke. Are people just sheep who simply follow the next propagandist? Do we therefore need experts to guide these sheep onto the right path with the correct truth? But then why bother with democracy at all: we could just as well abolish the tiresome procedure and let the experts decide, couldn't we? What are we to make of NGOs that live to a considerable extent off state funds yet claim to act in the interest of »civil society«? What is the so-called post-factual age all about? What roles do the various actors play here, from government organisations to the media, internet services, self-appointed fact-checkers, so-called NGOs, and many more? »They manage to create the impression that certain perspectives come from the centre of society, when in fact precisely the opposite is the case.« What about lying: should it be banned, or do people even have a right to lie? Might it sometimes even be a duty to lie? »In an open society, the greatest risk is not the lie itself, but the existence of an institution that has the exclusive right to define truth. […] Whoever wants to ban lies today creates tomorrow's precedent for banning inconvenient truths.« To close: how has the media landscape changed over recent decades? Frank Schirrmacher, Jakob's father, was one of the publishers of the FAZ. On this, a quote from Hanns Joachim Friedrichs that seems to have fallen out of time: »You can recognise a good journalist by the fact that he does not make common cause with anything, not even with a good cause.« Where does that still hold today? Stance and activism seem to count for more than the search for truth, sometimes stated with astonishing openness, as by Katherine Maher, CEO of NPR, about Wikipedia: »The people who write these articles, they are not focused on the truth.
They are focused on something else: what is the best that we can know right now […] Perhaps for our most tricky disagreements, seeking the truth and seeking to convince others of the truth, might not be the right place to start.« »I think our reverence for the truth might have become a bit of a distraction that is preventing us from finding consensus and getting important things done.« Does the friction, the attempt to find truth and to engage seriously with hard topics, still take place in the former leading media, or even in public broadcasting? Or are we seeing a culture of consensus in media and politics instead of hard thematic work? Are media organisations that once took themselves seriously, and genuinely played an essential role in society, increasingly becoming polarising fringe phenomena that can no longer be taken seriously? Think of the introduction of fact-checking at the BBC. »Journalism as we knew it has been thoroughly hollowed out.« Is the growing »democratisation« of the media landscape, and with it the declining importance of traditional media, a positive or a negative development? »My father [Frank Schirrmacher] always used to tell me: the FAZ won't be around that much longer.« Where are we heading as a society, and what can we ourselves do to improve the situation?
References
More episodes:
Episode 131: Wot Se Fack, Deutschland? Ein Gespräch mit Vince Ebert
Episode 130: Populismus und (Ordo)liberalismus, ein Gespräch mit Nils Hesse
Episode 125: Ist Fortschritt möglich? Ideen als Widergänger über Generationen
Episode 117: Der humpelnde Staat, ein Gespräch mit Prof. Christoph Kletzer
Episode 111: Macht. Ein Gespräch mit Christine Bauer-Jelinek
Episode 94: Systemisches Denken und gesellschaftliche Verwundbarkeit, ein Gespräch mit Herbert Saurugg
Episode 93: Covid. Die unerklärliche Stille nach dem Sturm. Ein Gespräch mit Jan David Zimmermann
Episode 88: Liberalismus und Freiheitsgrade, ein Gespräch mit Prof. Christoph Möllers
Jakob Schirrmacher:
Jakob Schirrmacher, Desinformiere dich! Eine Streitschrift
Jakob Schirrmacher on X
Free Speech Aid NGO
Frank Schirrmacher (FAZ)
Subject references:
Thomas Sowell, Knowledge and Decisions, Basic Books (1996)
Karl Popper, Die offene Gesellschaft und ihre Feinde 1 & 2, Routledge (1945)
Max Planck quote: The Philosophy of Physics, Chapter III (p. 97), W.W. Norton & Company, New York (1936)
Whistleblower der Cambridge Analytica – Brittany Kaiser im Interview; SRF Sternstunde Philosophie (2020)
Matt Taibbi, Michael Shellenberger, Censorship-Industrial Complex, US Congress
EU surveys: what Europe thinks
Streisand effect (Reason, Unintended Consequences)
Hanns Joachim Friedrichs
Katherine Maher, CEO of NPR, What Wikipedia teaches us about balancing truth and beliefs, TED Talk (2021)
Dive into the terrifying truth of how your personal data became the weapon that manipulated global democracy. Through explosive revelations from Cambridge Analytica whistleblowers and leaked internal documents, we expose the sinister machinery that turned your digital footprint into a psychological warfare tool. From Brexit to presidential elections, discover how shadowy organizations harvested billions of data points to create detailed psychological profiles, manipulating entire populations through precisely targeted digital warfare. This episode rips open the dark underbelly of social media, revealing how tech giants, intelligence agencies, and political operatives transformed your likes, shares, and personal information into the most powerful mind control weapon in history.
Do you love A la folie pas du tout, the Bababam Originals podcast? Then you'll adore our other original creations! Today we invite you to discover Les Fabuleux Destins, the podcast that plunges you into the most incredible life stories. Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you'll be surprised to learn that these conspiracies really happened. Stories that go far beyond fiction. A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Brittany Kaiser (also known as the Cambridge Analytica whistleblower) shared her views on Bitcoin, blockchain and tokenization with us at The Bitcoin 2025 Conference. She also talked about her new project that lets you own a portion of a real-world castle. Enjoy!
Here is my Net Worth Tracker (a must-have to stay in control of your expenses): https://michaelanthonytv.com/networth...
Behavioral Science For Brands: Leveraging behavioral science in brand marketing.
In this episode we speak with Patrick Fagan, behavioral scientist and co-founder of the consultancy Capuchin Behavioral Science. Patrick is the former lead psychologist at Cambridge Analytica, author of Hooked and co-author of Free Your Mind. He shares insights into the hidden drivers of consumer behavior, how data and psychology intersect and why marketers should tap into the irrational mind.
The UK sanctions Russian military intelligence officers tied to GRU cyber units. An AI-powered malware called LameHug targets Windows systems. Google files a lawsuit against the operators of the Badbox 2.0 botnet. A pair of healthcare data breaches impact over 3 million individuals. Researchers report a phishing attack that bypasses FIDO authentication by exploiting QR codes. A critical flaw in Nvidia's Container Toolkit threatens managed AI cloud services. A secure messaging app is found exposing sensitive data due to outdated configurations. Meta investors settle their $8 billion lawsuit. Our guest is Will Markow, CEO of FourOne Insights and N2K CyberWire Senior Workforce Analyst, with a data-driven look at how AI is affecting jobs. Belgian police provide timely cyber tips, baked right in. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
CyberWire Guest: Today we have Will Markow, CEO of FourOne Insights and N2K CyberWire Senior Workforce Analyst, discussing how AI is affecting jobs.
Got cybersecurity, IT, or project management certification goals? For the past 25 years, N2K's practice tests have helped more than half a million professionals reach certification success. Grow your career and reach your goals faster with N2K's full exam prep of practice tests, labs, and training courses for Microsoft, CompTIA, PMI, Amazon, and more at n2k.com/certify.
Selected Reading:
Breaking: UK sanctions Russian cyber spies accused of facilitating murders (The Record)
Russia Linked to New Malware Targeting Email Accounts for Espionage (Infosecurity Magazine)
New "LameHug" Malware Deploys AI-Generated Commands (Infosecurity Magazine)
Google Sues Operators of 10-Million-Device Badbox 2.0 Botnet (SecurityWeek)
1.4 Million Affected by Data Breach at Virginia Radiology Practice (SecurityWeek)
Anne Arundel Dermatology Data Breach Impacts 1.9 Million People (SecurityWeek)
Phishing attack abuses QR codes to bypass FIDO keys (SC Media)
Critical Nvidia Toolkit Flaw Exposes AI Cloud Services to Hacking (SecurityWeek)
New TeleMessage SGNL Flaw Is Actively Being Exploited by Attackers (Hackread)
Meta investors, Zuckerberg settle $8 billion privacy lawsuit tied to Cambridge Analytica scandal (The Record)
Loaf and order: Belgian police launch bread-based cybersecurity campaign (Graham Cluley)
Audience Survey: Complete our annual audience survey before August 31.
Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
The U.K. and Germany sign their first friendship treaty since World War II, U.K. special forces and spy details were reportedly compromised in the 2022 Afghan data breach, The U.K. lowers the voting age to 16 for the next general election, Meta settles an $8 billion privacy lawsuit over the Cambridge Analytica scandal, Puerto Rico's governor signs a ban on transgender treatments for those under 21, Trump signs the HALT Fentanyl Act into law, Italy considers releasing 10,000 prisoners to ease overcrowding, A fire at an Iraqi mall kills dozens, Epstein prosecutor Maurene Comey is fired, and eight healthy babies are born in the U.K. using DNA from three people. Sources: www.verity.news
The Senate approved the Trump administration's rescission package today, which proposes taking back $9 billion in federal funding already allocated for programs such as public media and foreign aid. We'll get into the implications for the bipartisan nature of the budgeting process. Plus, we explain the latest development of a lawsuit against Meta CEO Mark Zuckerberg and company leaders, which stems back to the 2018 privacy scandal involving Cambridge Analytica. Then, we'll smile at a piece of new technology that claims to shoot mosquitoes using lasers. And, do you believe in supernatural forces?
Here's everything we talked about today:
"Russ Vought: Appropriations process 'has to be less bipartisan'" from POLITICO
"The Appropriations Process Was Already Broken. The Rescission Bill May Have Just Shattered It." from NOTUS
"Meta investors, Zuckerberg reach settlement to end $8 billion trial over Facebook privacy litigation" from Reuters
"This 'Iron Dome' for mosquitoes shoots down bugs with lasers" from Fast Company
"Angels, witches, crystals and black cats: How supernatural beliefs vary across different groups in the US" from The Conversation
We love hearing from you. Leave us a voicemail at 508-U-B-SMART or email makemesmart@marketplace.org.
The US Congress has passed its first major cryptocurrency legislation — the GENIUS Act, aimed at regulating stablecoins and paving the way for wider adoption of digital assets in everyday finance. Also on the programme, Meta settles an $8bn lawsuit over data privacy breaches tied to the Cambridge Analytica scandal. Roger Hearing asks what the outcome means for Big Tech. And Arsenal smash records with the world's most expensive women's football transfer.
This Day in Legal History: Residence Act
On July 16, 1790, the U.S. Congress passed the Residence Act, establishing the District of Columbia as the permanent seat of the federal government. The decision was the product of a political compromise between Alexander Hamilton and Thomas Jefferson, brokered in part by James Madison, whereby southern states would support federal assumption of state debts in exchange for locating the capital along the Potomac River. The land for the new district was ceded by both Maryland and Virginia, and the Constitution allowed for a federal district not exceeding ten miles square. President George Washington personally selected the site, which straddled the Potomac and included portions of Alexandria and Georgetown.
Pierre Charles L'Enfant was tasked with designing the city's layout, envisioning broad avenues and grand public spaces to reflect the dignity of the new republic. In the early years, however, Washington, D.C. remained underdeveloped and muddy, with many of the federal buildings still under construction. Over time, most major institutions and monuments were built on the Maryland side of the river, causing concern among residents on the Virginia side. In 1846, responding to economic neglect and the declining significance of Alexandria as a port, Congress approved Virginia's request to retrocede its portion of the district. This land, now Arlington County and part of the city of Alexandria, rejoined Virginia, reducing the size of D.C. to its current boundaries.
The Residence Act and subsequent development of Washington, D.C. created a unique legal and political entity—neither a state nor part of one. This status continues to affect the rights and representation of its residents, a legal debate that remains active today.
An $8 billion shareholder lawsuit against Meta CEO Mark Zuckerberg and other current and former company leaders began this week in Delaware's Chancery Court, focusing on alleged failures to uphold Facebook's 2012 privacy agreement with the Federal Trade Commission (FTC). The plaintiffs argue that Zuckerberg, Sheryl Sandberg, Peter Thiel, Marc Andreessen, Reed Hastings, and others knowingly allowed Facebook user data to be harvested—specifically in relation to the Cambridge Analytica scandal that surfaced in 2018. That breach led to a record $5 billion FTC fine, which shareholders now want the defendants to personally reimburse, along with additional legal costs.
The trial, presided over by Chief Judge Kathaleen McCormick, will feature testimony from several high-profile witnesses, including White House Chief of Staff Jeffrey Zients, who served on Meta's board from 2018 to 2020. Plaintiffs claim Zuckerberg profited by selling Facebook stock before the public learned of the data misuse, allegedly netting over $1 billion. Defendants deny all wrongdoing, maintaining they relied on compliance experts and were misled by Cambridge Analytica.
This is the first oversight liability case of its kind to reach trial, a notoriously difficult claim under Delaware corporate law. Meta itself is not named as a defendant, and the company has declined to comment, though it has previously stated it has invested heavily in privacy protections since 2019.
Facebook privacy practices the focus of $8 billion trial targeting Zuckerberg | Reuters
Kilmar Abrego, a Salvadoran migrant wrongly deported from the U.S. despite legal protections, is scheduled to appear in a Tennessee federal court on smuggling charges, though the future of his case remains murky.
Abrego had been living legally in Maryland with a work permit and was protected by a 2019 court order barring deportation to El Salvador due to threats of gang violence. Nonetheless, he was deported in March and imprisoned there before being returned to the U.S. after federal prosecutors indicted him for allegedly participating in a human smuggling operation.
He has pleaded not guilty, and his lawyers claim the charges are politically motivated—a cover for the Trump administration's mishandling of his case. They also challenge the credibility of prosecution witnesses, who are alleged co-conspirators seeking to avoid their own deportation or charges. A magistrate judge ordered Abrego released on bail, but prosecutors are appealing, arguing he poses a flight risk and should remain detained. Even if released from criminal custody, immigration officials have said they plan to detain him immediately for possible deportation.
Judge Waverly Crenshaw is expected to hear arguments and potentially rule on his bail status. Abrego's attorneys have asked to delay any release until Wednesday to prevent sudden removal before he can challenge deportation. The administration has signaled it may try to deport him to a third country—possibly Mexico or South Sudan—instead of El Salvador. His case has become emblematic of broader concerns over due process in the Trump administration's aggressive immigration enforcement agenda.
Returned deportee Abrego due in Tennessee court; future of smuggling case uncertain | Reuters
Milbank, a prominent New York-based law firm, is representing the cities of Newark and Hoboken in a lawsuit brought by President Donald Trump's administration over their immigration policies. The federal suit, filed in May, accuses the cities of obstructing immigration enforcement by acting as "sanctuary jurisdictions." Milbank's defense team includes notable figures like former acting U.S. Solicitor General Neal Katyal and ex-New Jersey Attorney General Gurbir Grewal, now both partners at the firm.
Milbank's involvement in the case comes just months after it agreed to a deal with the Trump administration to avoid being targeted by executive orders aimed at major law firms. Trump had accused those firms of politicizing legal work and using racial diversity initiatives improperly. In response, several firms—including Milbank—committed to providing nearly $1 billion in pro bono legal services to mutually agreed-upon causes. Milbank set aside $100 million as part of its agreement, though it was not formally sanctioned.
Despite that arrangement, Milbank has taken on multiple high-profile cases opposing the Trump administration. In addition to defending Newark and Hoboken, Katyal is leading two other cases challenging Trump policies, including import tariffs and alleged wrongful termination of a federal board member. The firm's role in these cases highlights its continued legal pushback against the administration, even while navigating its negotiated settlement with the White House.
Law firm Milbank defends NJ cities in Trump immigration lawsuit | Reuters
A piece I wrote for Inside Higher Ed this week argues that tax policy deserves a central place in the undergraduate liberal arts curriculum—not as a technical specialty but as a cornerstone of civic education. I open by reflecting on the tax legislation passed under President Trump, that is the One Big Beautiful Bill Act, noting how its complexity served political ends by shielding its full implications from public understanding.
That opacity, I suggest, is not accidental—and it's exactly why we need to teach tax more broadly.
In my course at Drexel University, "Introduction to Tax Theory and Policy," I approach tax not as accounting or law but as a form of civic infrastructure. The course welcomes students from all majors, encouraging them to think about taxation in terms of fairness, power, and values. We explore how tax policy shapes economic behavior, redistributes resources, and reflects assumptions about what and whom government should support. Students analyze topics ranging from estate taxes to digital surveillance and propose their own reforms grounded in ethical and civic reasoning.
By demystifying the tax code and framing it as a site of public decision-making, I aim to empower students to see themselves not just as subjects of tax policy but as potential shapers of it. Their engagement—often surprisingly enthusiastic—reveals a hunger for this kind of critical, values-based education. Ultimately, I argue that tax belongs in the liberal arts because it teaches students not just how the world works, but how it's been made to work—and how it could be remade more equitably.
Tax Policy Belongs in Liberal Arts Curriculum (opinion) This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
This Day in Legal History: Sedition Act Passed
On this day in legal history, July 14, 1798, the United States Congress passed the Sedition Act, one of the most controversial laws in the nation's early political history. Part of the broader Alien and Sedition Acts, this law made it a crime to publish "any false, scandalous and malicious writing" against the federal government, Congress, or the President with the intent to defame or bring them into disrepute. Ostensibly aimed at quelling foreign influence and internal subversion during the quasi-war with France, the Act was also a clear weapon against domestic political opposition—particularly the Democratic-Republicans led by Thomas Jefferson.
Federalist lawmakers, who dominated Congress and the presidency under John Adams, justified the law as necessary for national security. However, it was widely criticized as an assault on First Amendment rights and a means of silencing dissent. The law resulted in the prosecution of several Republican editors and even members of Congress, including Representative Matthew Lyon of Vermont, who was sentenced to four months in jail.
The Sedition Act provoked a fierce backlash and spurred Jefferson and James Madison to draft the Kentucky and Virginia Resolutions, which introduced the doctrine of nullification—the idea that states could declare federal laws unconstitutional. Public outrage over the Act played a significant role in the Federalists' defeat in the election of 1800 and the subsequent repeal or expiration of most provisions of the Alien and Sedition Acts.
The Sedition Act expired on March 3, 1801, the day before Jefferson assumed the presidency. Its legacy remains a cautionary tale about the tension between national security and civil liberties, and it is frequently cited in debates over the limits of free speech in times of political crisis.
California tax authorities have flagged over 1,500 high-end vehicles sold by 500 dealerships as likely being registered through Montana LLCs in an attempt to avoid California sales tax and vehicle registration fees. These vehicles—worth more than $300 million collectively—are tied to a long-running strategy used by buyers of luxury assets like exotic cars, yachts, and RVs to exploit Montana's zero percent sales tax and minimal registration costs. Dealers and buyers now face possible penalties, audits, and investigations as California intensifies enforcement.
The scheme works like this: a buyer sets up a Montana LLC, purchases and registers the vehicle under that entity, and keeps the car out-of-state on paper—even if it's garaged and driven daily in a state like California. That regulatory fiction is precisely what states are cracking down on. Bloomberg Tax recently highlighted the scale of the problem, noting that more than 600,000 vehicles are likely registered in Montana but used elsewhere, costing states billions annually in uncollected taxes.
Montana LLCs have become a go-to workaround for the wealthy looking to sidestep their home-state tax obligations. While technically legal under Montana law, when the vehicle is used in another state without proper registration or tax payment, it becomes a form of tax evasion. States like Illinois and Utah are following California's lead, passing laws to "look through" LLCs and hold in-state beneficial owners accountable.
This isn't just a niche tax dodge—it's a broader challenge to state tax enforcement.
As wealthier individuals increasingly exploit differences between state tax codes, it's prompting legal reforms and inter-agency cooperation to close loopholes once thought too obscure or dispersed to address. California's latest enforcement push suggests these Montana LLC schemes are no longer flying under the radar—and that other states may soon follow with penalties and structural reforms of their own.
California Finds 1,500 Vehicles Linked to Montana Tax Shelters
Nearly two-thirds of the U.S. Department of Justice's Federal Programs Branch—the unit charged with defending Trump administration policies in court—has resigned or announced plans to leave since Donald Trump's reelection. Out of roughly 110 attorneys, 69 have exited, according to a list reviewed by Reuters. The exodus includes nearly half the section's supervisors and is far greater than typical turnover seen in prior administrations. While the Trump administration maintains its legal actions are within constitutional bounds, current and former DOJ lawyers cite an overwhelming workload and ethical concerns as key drivers of the departures.
Many career lawyers reportedly struggled to defend policies they saw as legally dubious or procedurally flawed, including efforts to revoke birthright citizenship and claw back federal funding from universities. Several feared they'd be pressured to make misleading or unethical arguments in court. In some cases, lawyers were expected to defend executive orders with minimal input from the agencies involved. A recent whistleblower complaint even alleged retaliation against a supervisor who refused to make unsupportable claims in immigration cases.
Despite the mass departures, the Trump administration continues to rely heavily on the unit as it seeks to expand executive power following favorable Supreme Court rulings. The DOJ has reassigned attorneys from other divisions, brought in over a dozen political appointees, and exempted the unit from the federal hiring freeze to keep up with litigation demands. Critics argue the changes undermine DOJ independence, while supporters claim the administration is merely ensuring its policies get a fair defense in court.
Two-thirds of the DOJ unit defending Trump policies in court have quit | Reuters
An $8 billion trial kicks off this week in Delaware where Meta CEO Mark Zuckerberg and several current and former Facebook leaders are accused by shareholders of knowingly violating a 2012 FTC consent decree aimed at protecting user privacy. The lawsuit stems from the 2018 revelation that Cambridge Analytica accessed data from millions of Facebook users without their consent, ultimately leading to billions in fines and costs for Meta—including a $5 billion penalty from the FTC in 2019. Shareholders, including union pension funds like California's State Teachers' Retirement System, want Zuckerberg and others to reimburse the company, alleging they operated Facebook as a law-breaking enterprise.
Defendants in the case include Sheryl Sandberg, Marc Andreessen, Peter Thiel, and Reed Hastings. While Meta itself is not a defendant, the case focuses on the board's alleged failure to oversee privacy practices and enforce the 2012 agreement. The plaintiffs must prove what legal experts call the most difficult claim in corporate law: a total failure of oversight by directors.
Delaware law gives leeway for poor business decisions—but not illegal ones, even if they're profitable.
Zuckerberg is expected to testify, and plaintiffs argue he personally directed deceptive privacy practices and tried to offload stock ahead of the Cambridge Analytica scandal to avoid losses, allegedly netting $1 billion. Defendants deny wrongdoing, claiming the company took privacy seriously by investing in compliance and being deceived by Cambridge Analytica.
Meta investors, Zuckerberg to square off at $8 billion trial over alleged privacy violations | Reuters This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
Do you love La Traque, the Bababam Originals podcast? Then you'll adore our other original creations! Today we invite you to discover Les Fabuleux Destins, the podcast that plunges you into the most incredible stories. Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you'll be surprised to learn that these conspiracies really happened. Stories that go far beyond fiction. A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this powerful and eye-opening episode of The Military Mentor Podcast, Big Chief dives into the global chaos we're all witnessing—from the wars in Ukraine and Gaza to civil unrest in the U.S.—and unveils a truth most people don't realize: we've all become unwilling soldiers in a new kind of war. But this isn't about boots on the ground. This is information warfare—a battlefield that plays out through screens, algorithms, and psychological manipulation. Drawing from his military leadership background, Mike explains how tactics once reserved for foreign PSYOP missions are now being used to influence public opinion at home. He breaks down real-world examples like Cambridge Analytica, AI deepfakes, and algorithmic radicalization to show how society is being shaped, divided, and often weaponized without people even knowing it.
Listeners are challenged to check their biases, question their sources, and protect their mental terrain the same way soldiers guard physical ground. With wars now being won or lost in minds rather than in trenches, this episode is both a wake-up call and a guide to navigating today's volatile information environment. Whether you're a veteran or a civilian, the call to lead, think critically, and protect our shared humanity has never been more urgent.
FinTech is just lipstick on a pig. Same banks, same problems, better UI.
In today's episode, we sit down with Sid Gandhi, CEO of PolyBase Labs, who's building Payy. Their mission is to bring money you actually own and can use. While everyone chases meme coins, Sid is solving real problems: frozen Venmo accounts, inaccessible financial services, and crypto's coming Cambridge Analytica moment. We explore Payy's gasless transactions, zero-fee fiat on/off-ramping, and positioning as infrastructure for companies like Robinhood.
Let's get into it.
The Rollup
---
Newton is the trust layer for autonomous finance. Smart. Secure. Verifiable. Built for a future where AI agents replace apps and interfaces. Learn more here: https://www.magicnewton.com/
---
Website: https://therollup.co/
Spotify: https://open.spotify.com/show/1P6ZeYd9vbF3hJA2n7qoL5?si=7230787bb90947ef
Podcast: https://therollup.co/category/podcast
Follow us on X: https://www.x.com/therollupco
Follow Rob on X: https://www.x.com/robbie_rollup
Follow Andy on X: https://www.x.com/ayyyeandy
Join our TG group: https://t.me/+8ARkR_YZixE5YjBh
The Rollup Disclosures: https://therollup.co/the-rollup-discl
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you'll be surprised to learn that these conspiracies really happened. Stories that go far beyond fiction. The McLibel case: Victoria orders a meal at McDonald's, delighted by the efficiency of the service. On her way out, an activist hands her a leaflet denouncing the company's practices. That simple piece of paper draws her into the McLibel affair, a historic trial pitting two penniless activists against the fast-food giant. Despite a conviction, their fight exposes McDonald's abuses and changes British law. A battle lost on the surface, but a victory for freedom of expression. To discover more fascinating stories, click below: [INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4) [INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4) [INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4) [INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4) A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you'll be surprised to learn that these conspiracies really happened. Stories that go far beyond fiction. The Pentagon Papers affair: At the height of the Vietnam War, American soldiers trapped in the jungle face an invisible enemy while, in Washington, the government knowingly lies about the reality of the conflict. In 1971, 7,000 pages of secret documents leak: the Pentagon Papers. They reveal that successive presidents knew the war was lost but kept sending young men to die. Thanks to the courage of journalists at the Times and the Post, the truth comes out. To discover more fascinating stories, click below: [INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4) [INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4) [INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4) [INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4) A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you'll be surprised to learn that these conspiracies really happened. Stories that go far beyond fiction. The Mediator affair: Murielle, a young mother, discovers too late that the appetite suppressants that helped her lose weight have also left her gravely ill. She nearly dies, unaware that she is one of the many victims of Mediator, a drug wrongly prescribed for weight loss. In 2007, pulmonologist Irène Frachon, alone against everyone, begins linking the drug to hundreds of cases of heart damage. Her relentless fight against the Servier laboratories lasts more than ten years. The scandal finally breaks, exposing one of France's greatest public health disasters. To discover more fascinating stories, click below: [INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4) [INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4) [INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4) [INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4) A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you'll be surprised to learn that these conspiracies really happened. Stories that go far beyond fiction. The Cambridge Analytica affair: In a hearing room of the US Congress, Mark Zuckerberg faces 44 senators, impassive despite the accusations. At the heart of the affair: Cambridge Analytica, which siphoned off the data of 80 million Facebook users. That data made it possible to target undecided voters with tailor-made political content. The result: two historic votes, Brexit and the election of Trump, may have been swayed. Discover this story and plunge into one of the great state scandals of the 21st century. To discover more fascinating stories, click below: [INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4) [INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4) [INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4) [INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4) A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to Les Fabuleux Destins, the podcast for discovering true and astonishing stories. This week, discover four stories that rank among the greatest state scandals in history. Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you'll be surprised to learn that these conspiracies really happened. Stories that go far beyond fiction. Join us and explore these captivating tales, where every detail will keep you on the edge of your seat. And all week long, as always, (re)listen to our best Fabuleux Destins and our best episodes of La Traque, the podcast that plunges you into the greatest criminal manhunts in history. Learn more about your ad choices. Visit megaphone.fm/adchoices
What if an algorithm could read your personality better than your best friend? In this eye-opening episode, Oliver sits down with Dr. Sandra Matz – Columbia Business School professor, computational psychologist, and author of Mindmasters – to explore how our digital footprints reveal far more than we think.
Sandra unpacks how AI tools can accurately infer our Big Five personality traits, mental health, and values using everything from Google searches to smartphone activity. Oliver and Sandra discuss the power – and ethical challenges – of data-driven persuasion, including what brands and political campaigns really know about you, and how personalization is reshaping communication, marketing, and even internal business culture.
Drawing on her role in helping uncover the truth behind Cambridge Analytica and her work with leading global brands, Sandra provides a rare, credible look at how psychological targeting works – and where it's headed next. From mental health interventions to personalized internal comms, this conversation goes deep into the future of influence.
Plus: Why personality tests like Myers-Briggs are outdated, how to build high-performing teams using psychology, and Sandra's #1 piece of communication advice.
Find out more about Sandra Matz: https://sandramatz.com/
Make sure you're subscribed to the Speak Like a CEO newsletter to become a top 1% communicator: https://eoipsocommunications.com/newsletter/
Do you want to speak with confidence, inspire your team, and accelerate your career? You can now join the Speak Like a CEO Academy and work with Oliver over the next 90 days to become a top 1% communicator and transform your career: https://speaklikeaceoacademy.com/
Send us a text
ProPublica's investigation reveals the National Shooting Sports Foundation has been secretly sharing gun buyers' personal information, including underwear sizes, for political purposes. This privacy breach raises serious concerns about data exploitation even in industries that publicly position themselves as defenders of individual rights.
• Gun owners group demands federal investigation into firearms industry data sharing
• Personal data shared included underwear sizes and was allegedly used for political targeting
• NSSF collaborated with Cambridge Analytica to enhance voter data
• Privacy concerns should transcend political divides - "Privacy is an everybody problem"
• The gun industry publicly defends rights while quietly engaging in data exploitation
• Senator Richard Blumenthal supports investigation into these practices
If you're a privacy professional or legal expert with insights on this issue, we'd love to have you on the show to discuss this further and answer some of the questions we've raised today. Support the show
A fascinating exploration of how algorithms penetrate the most intimate aspects of our psychology—from the pioneering expert on psychological targeting. There are more pieces of digital data than there are stars in the universe. This data helps us monitor our planet, decipher our genetic code, and take a deep dive into our psychology. As algorithms become increasingly adept at accessing the human mind, they also become more and more powerful at controlling it, enticing us to buy a certain product or vote for a certain political candidate. Some of us say this technological trend is no big deal. Others consider it one of the greatest threats to humanity. But what if the truth is more nuanced and mind-bending than that? In Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior (Harvard Business Press, 2025), Columbia Business School professor Sandra Matz reveals in fascinating detail how big data offers insights into the most intimate aspects of our psyches and how these insights empower an external influence over the choices we make. This can be creepy, manipulative, and downright harmful, with scandals like that of British consulting firm Cambridge Analytica being merely the tip of the iceberg. Yet big data also holds enormous potential to help us live healthier, happier lives—for example, by improving our mental health, encouraging better financial decisions, or enabling us to break out of our echo chambers. With passion and clear-eyed precision, Matz shows us how to manage psychological targeting and redesign the data game. Mindmasters is a riveting look at what our digital footprints reveal about us, how they're being used—for good and for ill—and how we can gain power over the data that defines us. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/psychology
Mark Turnbull has spent thirty-five years as a geopolitical communications strategist, working behind the scenes to shape some of the most consequential political moments of modern times. It's a career that's taken him from the American invasion of Iraq to the final days of apartheid South Africa — from the inner sanctum of a leading KGB defector to the cockpit of Cambridge Analytica. In this episode, he reflects generously on his terminal cancer diagnosis and how, as a self-proclaimed "incurable optimist", he remains fiercely upbeat about his life and how he continues to lead it, with purpose, resilience and courage. Here is Mark's Crisis Compass.
Host: Andy Coulson
CWC team: Jane Sankey, Hana Walker-Brown, Bill Griffin
With special thanks to Ioana Barbu and the brilliant people at Global
For all PR and guest approaches please contact – podcast@coulsonpartners.com
Mark Turnbull has spent thirty-five years as a geopolitical communications strategist, working behind the scenes to shape some of the most consequential political moments of modern times. It's a career that's taken him from the American invasion of Iraq to the final days of apartheid South Africa — from the inner sanctum of a leading KGB defector to the cockpit of Cambridge Analytica. In this episode, Mark talks about the physical dangers and reputational risks of his trade — and the pressure it put on his family. From being ambushed by insurgents in Baghdad and battling Somali warlords, to taking on Putin's propaganda machine and the sting operation that would bring down Trump campaign agency Cambridge Analytica. Along the way, it cast him as a poster boy for data theft and dirty tricks. He reveals the chilling moment when Alexander Litvinenko warned him about the KGB's preferred method of assassination and lifts the lid on the Cambridge Analytica sting operation that made global headlines — explaining how it unfolded, what the media got wrong, and the cost to his professional and personal life. Finally, he reflects generously on his terminal cancer diagnosis and how, as a self-proclaimed "incurable optimist", he remains fiercely upbeat about his life and how he continues to lead it, with purpose, resilience and courage.
Host: Andy Coulson
CWC team: Jane Sankey, Hana Walker-Brown, Bill Griffin
With special thanks to Ioana Barbu and the brilliant people at Global
For all PR and guest approaches please contact – podcast@coulsonpartners.com
Today's West Coast Cookbook & Speakeasy Podcast for our especially special Daily Special, River City Hash Mondays, is now available on the Spreaker Player! Starting off in the Bistro Cafe, Canada is taking the lead to show the world how to destroy Donald Trump. Then, on the rest of the menu, the man tapped by RFK Jr to run the clinical trial to tie vaccines to autism was previously convicted of practicing medicine without a license, had given autistic children a dangerous drug not approved for use in the US, and improperly prescribed puberty blockers; previously charged with misrepresenting its involvement in the Cambridge Analytica scandal and lying about its role in the deportation system set up by Trump, Peter Thiel's Palantir is helping DOGE with a massive IRS data project; and the community is fighting back after a Florida teacher was fired for calling a trans student by their name. After the break, we move to the Chef's Table, where the Irish privacy watchdog is investigating Musk's use of personal data to train the Grok AI chatbot; and Australian academics refuse to attend US conferences for fear of being detained, or worse. Bon Appétit!
The Netroots Radio Live Player
Keep Your Resistance Radio Beaming 24/7/365!
"I was never a spy. I was with the OSS organization. We had a number of women, but we were all office help." -- Julia Child
Become a supporter of this podcast: https://www.spreaker.com/podcast/west-coast-cookbook-speakeasy--2802999/support.
Mark Zuckerberg Is Done Listening to His Users
Slate Magazine, by Nitish Pahwa, January 8, 2025
https://slate.com/technology/2025/01/mark-zuckerberg-meta-fact-checking-announcement-elon-musk-trump.html
Mark Zuckerberg's decision to replace third-party fact-checkers with community-based content moderation at Meta has sparked significant debate. Citing the inherent biases of experts, Zuckerberg claims this shift aligns with Meta's commitment to free expression. Critics argue this move opens the door to increased misinformation and erodes accountability, reminiscent of past controversies like the Cambridge Analytica scandal. Skeptics worry this decision conveniently follows political pressures, potentially enabling unchecked political propaganda. While some view this as an effort to reduce perceived censorship, others fear it could amplify hate speech and conspiracy theories. Concerns persist about whether this change genuinely promotes free speech or prioritizes profit over public safety.
The Non-Prophets, Episode 24.03.1, featuring Scott Dickie, Chris Mallard and Eli Slack
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-non-prophets--3254964/support.
The Biden administration takes its first step to retaliate against China for the Salt Typhoon cyberattack. The Feds release a draft National Cyber Incident Response Plan. Telecom Namibia suffers a cyberattack. The Australian Information Commissioner has reached a $50 million settlement with Meta over the Cambridge Analytica scandal. CISA releases its 2024 year in review. LastPass hackers nab an additional five million dollars. Texas Tech University notifies over 1.4 million individuals of a ransomware attack. Researchers discover a new DarkGate RAT attack vector using vishing. A fraudster gets 69 months in prison. On our Threat Vector segment, David Moulton speaks with Nir Zuk, Founder and CTO of Palo Alto Networks, about predictions for 2025. Surveillance tweaks our brains in unexpected ways. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
Threat Vector Segment
On our Threat Vector segment, we preview this week's episode, where host David Moulton talks with Nir Zuk, Founder and CTO of Palo Alto Networks. They talk about Palo Alto Networks' predictions for 2025, focusing on the shift to unified data security platforms and the growing importance of AI in cybersecurity. You can catch new episodes of Threat Vector every Thursday here and on your favorite podcast app.
Selected Reading
Biden Administration Takes First Step to Retaliate Against China Over Hack (The New York Times)
US Unveils New National Cyber Incident Response Plan (Infosecurity Magazine)
Telecom Namibia Cyberattack: 400,000 Files Leaked (The Cyber Express)
Landmark settlement of $50m from Meta for Australian users impacted by Cambridge Analytica incident (OAIC)
CISA Warns of New Windows Vulnerability Used in Hacker Attacks (CyberInsider)
CISA 2024 Year in review (CISA)
LastPass threat actor steals $5.4M from victims just a week before Xmas (Cointelegraph)
Texas Tech University Data Breach Impacts 1.4 Million People (SecurityWeek)
Microsoft Teams Vishing Spreads DarkGate RAT (Dark Reading)
Man Accused of SQL Injection Hacking Gets 69-Month Prison Sentence (SecurityWeek)
The psychological implications of Big Brother's gaze (SCIMEX)
Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.
Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of This Week in Tech, the panel tackles the "biggest hack in US history," the future of AI, and the role of government in tech. From the Chinese hack's implications to Microsoft's AI-powered Recall, the Supreme Court's tech-related cases, and the push for social media age verification, Leo Laporte, Patrick Beja, Wesley Faulkner, and Alex Wilhelm provide insightful analysis and lively discussion on the most pressing issues facing the industry today.
China's "Salt Typhoon" hack, dubbed the "worst hack in our nation's history," which compromised US telecommunications infrastructure and allowed surveillance of high-profile individuals
The panel debates the challenges of securing outdated infrastructure and the role of government in regulating tech companies
DOJ's push for Google to sell off Chrome to break its search monopoly, and the potential implications for competition and innovation
Alex Wilhelm's article "If you like startups, you should love anti-trust" and the importance of fostering competition in the tech industry
Microsoft's Windows 365 Link, a $349 mini PC that streams Windows from the cloud, and the potential for thin client computing
Microsoft's Recall AI feature, which records and indexes users' screen activity, raising security concerns but offering potential benefits for users
The Supreme Court's involvement in cases related to Facebook's Cambridge Analytica data breach and the fate of America's low-income broadband fund
The panel also discusses their personal experiences with parenting in the digital age and the challenges of balancing screen time, privacy, and education for children
Meta's push for Apple and Google to verify users' ages on social media platforms, and the challenges of implementing effective age verification while protecting user privacy
Amazon's talks with Instacart, Uber, Ticketmaster, and others to enhance its AI-powered Alexa assistant
Spirit Airlines filing for bankruptcy amidst financial losses and mounting debt payments
Alex laments the addition of ads to Amazon Prime Video and the panel debates the tradeoffs of bundled subscription services
Host: Leo Laporte
Guests: Patrick Beja, Wesley Faulkner, and Alex Wilhelm
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
Sponsors: Melissa.com/twit shopify.com/twit veeam.com lookout.com bitwarden.com/twit
How can big tech and a healthy democracy coexist? In this episode Rick is joined by Marietje Schaake, author of The Tech Coup: How to Save Democracy from Silicon Valley. Together, they delve into the growing influence of big tech companies on global democracy, the dangers posed by unchecked data collection, and the rise of tools like Pegasus spyware. Schaake highlights the need for greater transparency, regulation, and accountability in the tech sector, while addressing the potential threats to privacy and democratic freedoms. They also discuss the weaponization of data by companies like Palantir and Cambridge Analytica, as well as potential solutions to safeguard democratic systems in the face of advancing technology. Marietje's book, The Tech Coup: How to Save Democracy from Silicon Valley, is available now.
Timestamps:
(00:01:26) The Tech Coup
(00:05:37) Government against large corporations
(00:09:56) Which countries are doing it right?
(00:15:05) Palantir
Follow Resolute Square: Instagram Twitter TikTok
Find out more at Resolute Square
Learn more about your ad choices. Visit megaphone.fm/adchoices
We have a very special episode of Decoder today. It's become a tradition every fall to have Verge deputy editor Alex Heath interview Meta CEO Mark Zuckerberg on the show at Meta Connect. This year, before his interview with Mark, Alex got to try a new pair of experimental AR glasses the company is calling Orion. Alex talked to Mark about a whole lot more, including why the company is investing so heavily in AR, why he's shifted away from politics, Mark's thoughts on the link between teen mental health and social media, and why the Meta chief executive is done apologizing for corporate scandals like Cambridge Analytica that he feels were overblown and misrepresented.
Links:
Hands-on with Orion, Meta's first pair of AR glasses | The Verge
The biggest news from Meta Connect 2024 | The Verge
Mark Zuckerberg: publishers 'overestimate the value' of their work for training AI | The Verge
Meta extends its Ray-Ban smart glasses deal beyond 2030 | The Verge
The Ray-Ban Meta smart glasses actually make the future look cool | The Verge
Meta has a major opportunity to win the AI hardware race | The Verge
Instagram is putting every teen into a more private and restrictive new account | The Verge
Threads isn't for news and politics, says Instagram's boss | The Verge
Facebook puts news on the back burner | The Verge
Meta is losing a billion dollars on VR and AR every single month | The Verge
Transcript: https://www.theverge.com/e/24017522
Credits: Decoder is a production of The Verge and is part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt; our editor is Callie Wright. This episode was additionally produced by Brett Putman and Vjeran Pavic. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder. Learn more about your ad choices. Visit podcastchoices.com/adchoices