After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen, to discuss the ever-changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of the Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and the author of several books. Taylor also joined me for this discussion more than five years ago now, and a lot has happened in that time.

Upcoming episodes will include guest Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen - Uncommons Podcast
"How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith (00:00-00:43): Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning, and you joined this podcast in April of 2020 to talk about platform governance.
Taylor Owen (00:43-00:44): It's a different world.
Taylor (00:45-00:45): In some ways.
Nate Erskine-Smith (00:45-01:14): Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way, because there's been lots of consultation, there have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?
Taylor Owen (01:14-03:06): I mean, this is part of it - the technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about sort of some of the more acute harms, like real risks to safety, right? Like children taking their own lives, and violence erupting, political violence emerging. These things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way, five years ago, that we shared and consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI, and particularly chatbots. And I think a big question we face in this conversation is, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are some of the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?
Nate Erskine-Smith (03:07-03:55): My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online, and our laws need to reflect that reality. All of the challenges you've articulated exist to varying degrees in offline spaces, but the rules we have can be incredibly hard to enforce, at a minimum, in the online space. And then some rules are not entirely fit for purpose and need to be updated for the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.
Taylor Owen (03:55-04:31): Yeah, exactly. Like, the commission that spent a year, at the request of all political parties in parliament, at the urging of the opposition party, looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. That is a remarkable statement, and it kind of came and went. And I don't know why we moved off from that so fast.
Nate Erskine-Smith (04:31-05:17): Well, and there's a lot to pull apart there, because you've got purposeful, intentional bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook and Instagram, through Meta, block news in Canada. And your research, this was the stat that stood out.
I don't want to put you on the spot and say, like, what do we do? Okay. So you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.
Taylor Owen (05:17-05:17): A day. Yeah.
Nate Erskine-Smith (05:18-05:18): So right.
Taylor Owen (05:18-05:27): 11 million views a day. And we should - sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.
Taylor (05:27-05:29): So 11 million times a Canadian.
Taylor Owen (05:29-05:45): And what that means is, 11 million times a day, a Canadian would open one of their news feeds and see that Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.
Taylor (05:45-05:45): Okay.
Taylor Owen (05:45-05:46): So that's just it.
Nate Erskine-Smith (05:46-06:04): So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.
Taylor Owen (06:04-06:04): They say they get.
Nate Erskine-Smith (06:04-06:05): It doesn't make any sense.
Taylor Owen (06:06-06:23): It doesn't, and it does. It's terrible. They ask Canadians - people who use social media to get their news - where do they get their news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations - Traditional journalism. There is -
Taylor (06:23-06:23): There is -
Taylor Owen (06:23-06:47): Well, this is what I was going to get at, right? Like, there is - one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like, this would probably be labeled news in people's.
Nate Erskine-Smith (06:47-06:48): Can't trust the thing we say.
Taylor Owen (06:48-07:05): Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.
Nate Erskine-Smith (07:05-07:06): It's public affairs content.
Taylor Owen (07:06-07:39): Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to fill that void, and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%? Well, you're more pessimistic though. I shouldn't have led with the question. 20%.
Taylor (07:39-07:39): Okay.
Taylor Owen (07:39-07:56): So all of journalism combined, in the entire country: 20 percent of engagement. Influencers: 50 percent, in the last election.
So like we've shifted, at least on social, the actors and people and institutions that are fostering our public sphere.
Nate Erskine-Smith (07:56-08:09): Is there a middle ground here, where you take some people that play an influencer-type role but also would consider themselves citizen journalists in a way? How do you - It's a super interesting question, right?
Taylor Owen (08:09-08:31): Like, who - when are these people doing journalism? When are they doing acts of journalism? Someone can do journalism and 90% of the time do something else, right? And then maybe they reveal something, or they tell an interesting story that resonates with people, or they interview somebody and it's revelatory, and it's a journalistic act, right?
Taylor (08:31-08:34): Like this is kind of a journalistic act we're playing here.
Taylor Owen (08:35-08:49): So I don't think - I think these lines are gray. But there's some other underlying things here, which - it matters, I think, if journalistic institutions go away entirely, right? Like that's probably not a good thing. Yeah, I mean, that's why
Nate Erskine-Smith (08:49-09:30): I say it's terrifying. There's a lot of good in the digital space - there's creative destruction, there's a lot of work to provide people a direct sense of news without that filter that people may mistrust in traditional media. Having said that, there are so many resources and there's so much history to these institutions, and there's a real ethics to journalism, and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that, is devastating for democracy. I think so.
Taylor Owen (09:30-09:49): And I think the bigger frame of that for me is, a democracy needs signals of - we need, as citizens in a democracy, we need signals of reliability. We need to know broadly - and we're not always going to agree on it - but what kind of information we can trust, and how we evaluate whether we trust it.
Nate Erskine-Smith (09:49-10:13): And that's what - that is really going away. Pause for a sec. "Signals of reliability" is a good phrase. What does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generated
Taylor (10:13-10:14): and something that is machine generated.
Nate Erskine-Smith (10:15-10:26): That seems straightforward enough. It's a lot harder if you're trying to distinguish between - Taylor, what you're saying is credible, and Nate, what you're saying is not credible,
Taylor (10:27-10:27): which is probably true.
Nate Erskine-Smith (10:28-10:33): But how do you have a signal of reliability in a different kind of content?
Taylor Owen (10:34-13:12): I mean, we're getting into journalism policy here to a certain degree, right? And it's a wicked problem, because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do, and how they can and can't behave, touches on some real third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said, or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example, and a horrible example, happened last week, right? So Charlie Kirk gets assassinated.
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I've ever seen, in a decade of looking at this, of how those algorithmic feeds have become radicalized. All you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. People defending the murder, and people calling for the murder of leftists - and like, on both sides, right? People blaming Israel, people, whatever. Right. And that isn't a function of - People comparing Charlie Kirk to Jesus. Sure. Like - It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of - there was journalism being produced about that. The New York Times, citizens, were doing good content about what was happening. It was a moment of uncertainty and journalism was playing a role, but it wasn't. And so I think with all of these questions, including the online harms ones, and how we step into an AI governance conversation, the focus always has to be on those systems. Who and what, and what are the incentives and the technical decisions being made, that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? All of that stuff we've been talking about, the role reliability of information plays - these algorithms could be tweaked for reliable versus unreliable content, right? Over time.
Taylor (13:12-13:15): That's not a - instead of reactionary -
Taylor Owen (13:15-13:42): Or like what gets the most engagement, or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.
Nate Erskine-Smith (13:43-14:12): And your focus on systems and assessing risks with systems - I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.
Taylor Owen (14:12-14:23): I mean - Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the - Well, they're twofold, right?
Nate Erskine-Smith (14:23-14:33): There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well.
Exactly.
Taylor (14:33-14:35): So the sort of supply and demand side thing, right?
Nate Erskine-Smith (14:35-14:38): There's the digital services tax, which is no longer a thing.
Taylor Owen (14:40-14:52): Although it still is a piece of passed legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.
Nate Erskine-Smith (14:52-14:55): I don't take full responsibility for that one.
Taylor Owen (14:55-14:56): No, you shouldn't.
Nate Erskine-Smith (14:58-16:03): But other countries have seen more success. Yeah. And so you've got the UK, Australia - the EU really has led the way. In 2018, the EU passes GDPR, which is a privacy set of rules, which we are still behind seven years later. But in 2022, 2023, you've got the Digital Services Act that passes. You've got the Digital Markets Act. And as I understand it - and we've both been involved in international work on this, and we've heard from folks like Frances Haugen and others about the need for risk-based assessments, and you're well down the rabbit hole on this - isn't it, at a high level: you deploy a technology, you've got to identify material risks, you then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then, ideally, you've got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks, and whether you have taken reasonable steps to mitigate.
Taylor Owen (16:04-16:05): That's like how I have it in my head.
Nate Erskine-Smith (16:05-16:06): I mean, that's it.
Taylor Owen (16:08-16:14): Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.
Nate Erskine-Smith (16:14-16:25): Exactly. Which people - I want to get to that, because C-63 gets us a large part of the way there. I think so. And yet it has been sort of cast aside.
Taylor Owen (16:25-17:39): Exactly. Let's touch on that. But I do think what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space - which we talked about last time, and it's the thing that's infuriating about digital policy - is that you can't do one thing. The digital economy and our digital lives are so vast, and the incentives and the effect they have on society so broad, that there's no one solution. So anyone who tells you to fix privacy policy and you'll fix all the digital problems we just talked about is full of it. Anyone who says competition policy - like, break up the companies - will solve all of these problems is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've built a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things at once.
Nate Erskine-Smith (17:39-17:41): Especially in minority parliaments, in short legislative windows.
Taylor Owen (17:41-18:20): Different countries have taken different pieces of it.
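To make the duty-of-care loop Nate just outlined concrete (identify material risks, mitigate them, and open both steps to third-party audit), here is a minimal illustrative sketch in Python. Everything in it - the category names, the class, the methods - is hypothetical and invented for illustration; it is not drawn from Bill C-63, the DSA, or any regulator's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical harm categories a legislature might enumerate.
HARM_CATEGORIES = {
    "child_safety",
    "non_consensual_intimate_images",
    "incitement_to_violence",
    "fraud_and_scams",
}

@dataclass
class RiskAssessment:
    """Toy model of a duty-of-care filing for one consumer-facing product."""
    product: str
    identified: dict = field(default_factory=dict)   # category -> description
    mitigations: dict = field(default_factory=dict)  # category -> measure

    def identify(self, category: str, description: str) -> None:
        # Step 1: the company discloses a material risk.
        if category not in HARM_CATEGORIES:
            raise ValueError(f"unknown harm category: {category}")
        self.identified[category] = description

    def mitigate(self, category: str, measure: str) -> None:
        # Step 2: every identified risk needs a documented mitigation.
        if category not in self.identified:
            raise ValueError(f"no identified risk to mitigate for: {category}")
        self.mitigations[category] = measure

    def audit(self) -> list:
        # Step 3: a third party (regulator, researcher) lists unmitigated risks.
        return [c for c in self.identified if c not in self.mitigations]

# Usage, loosely mirroring the hypothetical example discussed below:
ra = RiskAssessment("dm_feature_for_minors")
ra.identify("child_safety", "adults can contact minors without consent")
print(ra.audit())  # ['child_safety'] -> flagged until a mitigation is filed
ra.mitigate("child_safety", "default off: adults cannot message unconnected minors")
print(ra.audit())  # [] -> obligations met; usage data shared for ongoing monitoring
```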
Now, on the online harms piece, which is what the previous government took really seriously - and I think it's worth putting a point on that - when we talked last, that was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? A lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and into trying to learn from what Europe, Australia and the UK had all done. And we were kind of taking the benefit of being late, right? So they were all ahead of us.
Taylor (18:21-18:25): People you work with on that grant committee. We're all quick and do our own consultations.
Taylor Owen (18:26-19:40): Exactly. And the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says: if you're going to launch a digital product in Canada - a consumer-facing product - you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about - or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content - broad categories of things that we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong - let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids, and it allows adults to contact kids without parental consent, or without kids opting into being a friend. What could go wrong with that?
Nate Erskine-Smith (19:40-19:40): Yeah.
Taylor (19:40-19:43): Like what could go wrong? Yeah, a lot could go wrong.
Taylor Owen (19:43-20:27): And maybe strange men will approach teenage girls. Maybe, right? If you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? You put a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used, so that publics and researchers can monitor whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was a widespread problem of teenage girls being harassed by strange older men.
Taylor (20:28-20:29): Incredibly creepy.
Taylor Owen (20:29-20:37): A very easy - but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.
Taylor (20:37-20:41): And this kind of mechanism would have just filtered it out.
Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit, before you launch a product in a country, about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.
Nate Erskine-Smith (20:52-21:05): Yeah, I quite like the - I mean, maybe you've got a better read of this, but the UK has pursued this, California has pursued this.
I was looking recently - Elizabeth Denham is now the Jersey Information Commissioner, or something like that.
Taylor Owen (21:05-21:06): I know, it's just, yeah.
Nate Erskine-Smith (21:07-21:57): I don't - random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the Information Commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach. In that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it: just forcing companies to ensure that the default settings prioritize child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto - of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.
Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth referring back to what we mean. A duty of care can seem like an obscure concept, but in law it's a real thing, right? You walk into a store; I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? That is a duty of care that you have for me when I walk into your public space or private space. That's all we're talking about here. And the age-appropriate design code, yes, sort of developed and implemented by a Canadian in the UK - it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak. What it would say is: any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? Because kids don't have the same rights as adults; we have different duties to protect kids than adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content - kind of things that - Seem obvious. And if you're now a child in the UK and you go on a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don't have that, because that bill didn't pass, right? So there's consequences to this stuff. And I get really frustrated now when I see the conversation sort of pivoting to AI, for example - like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real, right? I'm not discounting its power. And we just move on from all of these problems, and the solutions that have been developed for a set of challenges that still exist on social platforms - they haven't gone away, people are still using these tools and the harms still exist - and that are probably applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done - to the people working in this space, and the wide set of stakeholders in this country who care about this stuff and are working on it - it just, it feels like, you say deja vu at the beginning, and it is deja vu, but it's kind of worse, right?
Cause it's like deja vu, and then ignoring the
Taylor (24:25-24:29): five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even -
Taylor Owen (24:29-24:41): Well, yeah. I mean, hopefully - I'm actually optimistic, I would say, that we will, for a few reasons. Like, one, citizens want it, right?
Nate Erskine-Smith (24:41-24:57): Yeah. I was surprised - so you mentioned there that the rules that we design, the risk assessment framework really applied to social media, could equally be applied to deliver AI safety, could be applied to new technology in a useful way.
Taylor Owen (24:58-24:58): Some elements of it. Exactly.
Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that a little bit, because I want to pull the pieces together. I had a constituent come into the office, and he is really, like, super mad. He's super mad. Why is he mad? Does that happen very often? Do people get mad when they walk into this office? Not as often as you'd think, to be honest. Not as often as you'd think. And he's mad because he believes Mark Carney ripped him off.
Taylor Owen (25:25-25:25): Okay.
Nate Erskine-Smith (25:25-26:36): Okay. Yep. He believes Mark Carney ripped him off - not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online: Mark Carney told him to invest money. He invested money, and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. How could you have been deceived? But then I go and I watch the video. And, okay, I'm not going to send the 200 bucks, and I've grown up with the internet, but I can see how - Absolutely. In the same way phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud, if we aren't already. We are going to see many challenges with respect to AI safety. Over and above the risk assessment piece, what do we do to address these challenges?
Taylor Owen (26:37-27:04): So that is a huge problem, right? AI fraud - AI video fraud - is a huge challenge. When we were monitoring the last election, by far the biggest problem, or vulnerability, of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and morph them into conversations about investment strategies.
Taylor (27:05-27:07): And it was driving people to a crypto scam.
Taylor Owen (27:08-27:11): But it was torquing the political discourse.
Taylor (27:11-27:11): That's what it must have been.
Taylor Owen (27:12-27:33): I mean, there's other cases of this, but that's probably - and it was running rampant, particularly on Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? But it torqued our political debate. It ripped off some people. And these kinds of scams are -
Taylor (27:33-27:38): It's clearly illegal. It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?
Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely.
So what do you do about that, right? And the head of the Canadian Banking Association said there's billions of dollars of AI-based fraud in the Canadian economy right now. So it's a big problem.
Taylor (27:54-27:55): Yeah.
Taylor Owen (27:55-28:46): I actually think there's a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on the AI fraud potential of your product. Responsibility for your platform. And then, when it starts to circulate, we would see it. They'd be called out on it. They'd have to take it down. And that's that, right? Then we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years - these online harms risk assessment models - and bringing some of the consumer-facing AI, both products and related harms, into the framework.
Nate Erskine-Smith (28:47-30:18): To put it a different way - I mean, this is years ago now that we had this grand committee in the UK holding Facebook and others accountable. This really was in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is that they completely acknowledged the need for them to take action when it came to child pornography. And so they said, yeah, well, you know, no liability for us - but of course there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me from there on out - I mean, there's no real intellectual consistency here. It's more just: what should be in that category of things that they should take responsibility for? Obviously harmful content like that should be - that's an obvious first step, but obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they're ignoring privacy protections and everything else throughout the years - I mean, we can't leave it up to them. And setting a clear set of rules to say, this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.
Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms. Because there are different speech implications and democratic implications of sort of absolute solutions for different kinds of content.
Taylor (30:28-30:30): So like child pornography is a great example.
Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation there, and that's to take it down within 24 hours. And the reason you can do it with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there's a lot of naked images on the internet that we can train AI with.
So we're actually pretty good at using AI to pull this stuff down. But the bigger reason is that, I think, as a society, it's okay to be wrong in the gray area of that speech, right? If something is debatable - whether it's child pornography - I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story, right? We do not want to suppress and over-index on that gray area for hate speech, because that's going to capture a lot of reasonable debate that we probably want.
Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's, you know, very obviously illegal.
Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism, right? If we see fraud, if we know it's fraud, then you take it down, right? Some of these other things we have to go with.
Nate Erskine-Smith (32:02-32:24): I mean, my last question really is - you pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government with lots of similar folks around the table, but a new government, and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.
Taylor (32:24-32:25): Absolutely.
Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister - a junior minister to industry, but still a specific titled portfolio, with his own deputy minister - who really wants to be seized with this. And from every conversation I've had with him, he wants to maximize productivity in this country using AI, but is also cognizant of the risks and wants to address AI safety. So where from here? You've talked in the past about sort of a grander tech accountability and sovereignty act. Do we do piecemeal - you know, a privacy bill here, an AI safety bill, an online harms bill, disparate pieces? What's the answer here?
Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But I think there are some lessons from the past that this government could take. And one is that piecemeal bills that aren't centrally coordinated, that have no connectivity between them, end up as piecemeal solutions that are imperfect and would benefit from some cohesiveness, right? So when the previous government released AIDA, the Artificial Intelligence and Data Act, it was really in tension, in some real ways, with the online harms approach. Two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I can tell from the outside, right? So we need a coordinated, comprehensive approach to digital governance. That's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the idea that he, or that office, could be that role. Because AI is cross-cutting, right? Every department in our federal government touches AI in one way or another. And the governance of AI - and, on the other side, the adoption of AI by society - is going to affect every department and every bill we need.
Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
If he pulls in the online harms pieces that aren't related to the criminal code, and drops those provisions - says, you know, Sean Fraser, you can deal with those if you like, but these are the pieces I'm holding on to.
Taylor Owen (34:35-34:37): With a frame of consumer safety, right?
Nate Erskine-Smith (34:37-34:37): Exactly.
Taylor Owen (34:38-34:39): If he wants...
Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? Like these are all... So then you have, thematically, a bill that makes sense. And then you can pull in the AI safety piece as well. And then it becomes a consumer protection bill when it comes to living our lives online. Yeah.
Taylor Owen (34:54-36:06): And I think there's an argument about whether that should be one bill or multiple ones. I think there are cases for both, right? There's concern about big omnibus bills that do too many things, and too many committees reviewing them, and whatever. That's sort of a machinery-of-government question. But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics. We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is: at the same time as they're being told, by our government and by companies, that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear. Why should I use this thing, when I'm seeing some harms, I don't see you guys doing anything about those harms, and I'm seeing some potential real downside for me personally and my family? So even in the adoption frame, thinking about data privacy, safety, consumer safety - to me, that's the real frame here. It's citizen safety, consumer safety, using these products. Yeah, politically, I just - I mean, that is what it is. It makes sense to me.
Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because, like, I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.
Taylor (36:25-36:28): I want to turn to government and go, do your damn job.
Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's always going to be a place with some content I would prefer he doesn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.
Nate Erskine-Smith (36:48-36:58): And we should expect that - certainly that YouTube, with its promotion engine, its recommendation function, is not actively promoting terrible content to your 12-year-old.
Taylor Owen (36:59-37:31): Yeah. That's like de minimis. Can we just torque this a little bit, right? So maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube, right? Can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance.
That will not - and it's worth putting a real emphasis on this - one thing we've learned in this moment of repeated deja vu, going back 20 years really, since our experience with social media, is that these companies don't self-govern.
Taylor (37:31-37:31): Right.
Taylor Owen (37:32-37:39): We just - we know that indisputably. So to think that AI is going to be different is delusional. No, it'll be profit, not the public interest.
Taylor (37:39-37:44): Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?
Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives, and their broader long-term goals around AI, are not going to override these safety concerns is just naive in the nth degree.
Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this: these goals of realizing the productivity possibilities and potential of AI, alongside AI safety, are not mutually exclusive or oppositional goals. You create a sandbox to play in, and companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools - certainly, you know, if I feel safe with my kids learning these tools, growing up with them in their classrooms and everything else - adoption rates will soar. Absolutely. And then we'll benefit.
Taylor Owen (38:38-38:43): They work in tandem, right? And I think you can't have one without the other, fundamentally.
Nate Erskine-Smith (38:45-38:49): Well, I hope I don't invite you back five years from now to have the same conversation.
Taylor Owen (38:49-38:58): Well, I hope you do invite me back in five years - but I hope it's to think back on all the legislative successes of the previous five years. I mean, that'll be the moment.
Taylor (38:58-38:59): Sounds good. Thanks, David. Thanks.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
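To tie together the two obligation tiers discussed in the episode - the systemic assess-and-mitigate duty for most harm categories, the 24-hour takedown carve-out for child sexual abuse material and non-consensual intimate images, and Owen's proposed addition of AI-enabled fraud to the assessed list - here is a second minimal sketch. The routing, category names, and regimes are hypothetical, an illustration of the idea rather than anything in the bill text.

```python
# Hypothetical sketch of the two obligation tiers described above.
# Category names and regimes are illustrative, not Bill C-63's actual text.
TAKEDOWN_24H = {
    "child_sexual_abuse_material",
    "non_consensual_intimate_images",
}
RISK_ASSESSED = {
    "child_safety",
    "incitement_to_violence",
    "hate_speech",
    "fraud_and_scams",  # Owen's proposed extension for consumer-facing AI
}

def obligation_for(category: str) -> str:
    """Route a harm category to the stricter or the systemic regime."""
    if category in TAKEDOWN_24H:
        # Errors in the gray area are tolerable here, so removal is fast and absolute.
        return "remove within 24 hours"
    if category in RISK_ASSESSED:
        # Over-removal would chill legitimate speech, so the duty is systemic:
        # assess the risk, mitigate it, and report data for outside audit.
        return "assess, mitigate, and report"
    return "no specific obligation"

print(obligation_for("non_consensual_intimate_images"))  # -> remove within 24 hours
print(obligation_for("fraud_and_scams"))                 # -> assess, mitigate, and report
```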
Charlie Kirk, the Kirk assassination, George Zinn, Zinn's arrest for child porn, June 14 SLC No Kings march, Armed Queers LLC, Utah County Sheriff's Department, David Lee Hamblin, Satanic ritual abuse (SRA), Hamblin's use of narco-hypnosis, David Leavitt, Ukraine, Gordon Bowen, Sundance, Robert Redford, M. Russell Ballard, Timothy Ballard, Operation Underground Railroad (OUR), Minnesota and OUR's move to, OUR's links to Ukraine, Kash Patel, Patel's links to Ballard, Richard C. O'Brien, Kirk's links to Ballard, Medvedev's Ukraine accusations, Jason Goodman, George Webb, attempts to link Ukraine to the assassination, Cambridge Analytica and its network, Russia-Israel links, intrigues in Trump 2.0, OUR as the Sword of Damocles in Trump 2.0, the similarities between Hamblin's techniques and CIA/Pentagon behavior modification

Resources:
Music by: Keith Allen Dennis
https://keithallendennis.bandcamp.com/

Hosted on Acast. See acast.com/privacy for more information.
Can cutting-edge innovation and technological sovereignty be reconciled? While European users demand performance equal to American solutions, the players in digital sovereignty face a major technical challenge: creating credible alternatives without compromising excellence. Alexia Gaudron of ChapsVision shares the secrets of this complex equation. The episode explores the many facets of this transformation: the impact of public policy and the France 2030 program, R&D and acquisition strategies for building a complete technology ecosystem, European partnerships to pool efforts, and "privacy by design" approaches.
Recently we got a new version of ChatGPT - version 5.0. And as always, the CEO of OpenAI - the company behind ChatGPT - posted a long video presentation online about the virtues of the new version. CEO Sam Altman's mission is to create an AGI - an artificial general intelligence - that is, an artificial intelligence on par with or better than human intelligence. Some AI researchers claim this cannot be done with a language model, which at its core is "just" trained on human communication and therefore performs a kind of probability calculation over the placement of words in a text. Other AI researchers, such as Nobel laureate and grand old man of AI Geoffrey Hinton, believe we are creating an intelligence that will exceed our own, and that the consequences are unforeseeable and possibly catastrophic. Have we been handed a tool that holds the salvation of humanity and the planet, or are we heading toward outright self-destruction? Or are we perhaps creating an alternative consciousness that we do not fully understand, but that will take part in the world alongside biologically created consciousness? And what are intelligence and consciousness, really? And how do you create them, when we do not even fully understand how the mammalian brain works? Assistant professor Michal Kosinski of Stanford University in California believes we have already created an AGI through the large language models behind services like ChatGPT, Gemini and DeepSeek. His research interests span both human and artificial cognition. His current work focuses on examining the psychological processes of large language models. Michal was the first to warn against Cambridge Analytica, which led to the scandal in which Facebook was used to exploit users' data improperly. Techtopia has met him.
Guest: Michal Kosinski, assistant professor, Stanford University
Also: a clip from Instagram of a talk by Geoffrey Hinton, AI developer and Nobel Prize recipient.
Link: Michal Kosinski https://www.michalkosinski.com
The title of this episode is "Desinformiere Dich!" ("Disinform Yourself!"), after the book by my guest, Jakob Schirrmacher. I am especially pleased to welcome Jakob for this conversation. Jakob Schirrmacher is a speaker on media literacy and digitalization, a publicist, and founder of the NGO Free Speech Aid. He works on questions of freedom of expression, disinformation, and the social consequences of digital technologies. In his essays - including for WELT - he critically analyzes how politics and the media deal with truth and public debate. With Free Speech Aid he advocates for greater freedom of expression, and for ways to protect it in times of censorship and regulatory pressure. In this episode we talk about truth and its supposed opposite, disinformation. But really, I think, it is about the more fundamental question of how to deal with uncertainty and with differing assessments of the world. In this conversation we mainly address the social and political components, but the scientific dimension is just as obvious, and we touch on it as well. We begin with the question of what constitutes a modern and open society, what role the individual and freedom play, and the numerous attacks on the open society and on democracy that we are currently experiencing. What should we keep in mind as citizens, and how should we deal with this in the future? Is open discourse a precondition for a modern society? Why does a focus on the individual and individual rights matter? What is truth? Are there essential differences between natural science and aspects of individual social life?
"An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out and that the growing generation is familiarized with the idea from the beginning…", Max Planck
This brings us to the attempt to define various terms and trace how they become established in social structures: "Who actually decides what disinformation is?" What does the term disinformation actually mean, and what do we need it for? Is it useful, or rather an ideological battle term - itself, in a certain sense, meta-disinformation? How does disinformation relate to the term "fake news"?
"As Elias Canetti diagnosed in Crowds and Power: when a term admits too many interpretations, it becomes all the easier to instrumentalize politically."
Is it, then, an attempt to sow confusion through language? Are many people currently falling for a power game played through redefinition and ever new terminological confusion? "It is an instrument of domination - we see what measures are being taken to contain disinformation." Is this merely an academic debate of little relevance, or does the question have concrete consequences for our society? "The restructuring of our information landscape has long been under way." We discuss this using concrete legislative projects. What are the Digital Services Act and its precursor, the Netzwerkdurchsetzungsgesetz (Network Enforcement Act) - both, in essence, Made in Germany? "That is, the regulation we see today is really a German product." Should Germany be proud of that?
Or are we instead witnessing a grave attack on civil liberties, one that serves as a model for numerous totalitarian states? Did the Network Enforcement Act also establish, or even invent, another term, namely hate speech (Hassrede in German)? What serious (negative) consequences, such as overblocking, do these regulations have for free expression online? Is something that is actually taboo in democratic societies - censorship - being outsourced to internet platforms through clever but perfidious regulation and incentive systems? Is hate speech also a rubber term that does little good and much harm? How should we assess the constant crisis rhetoric used to justify supposedly tough measures and ever new laws? "Experience shows that laws and regulations are rarely repealed once power structures have solidified." Is fear (triggered by real or supposed crises) being used to push through ever harsher measures that undermine and erode our democracy and the open society? It is not only long-term effects that must be considered: just because something sounds good does not mean it achieves its goal, or achieves it with acceptable side effects. "Lofty goals have long distracted attention from actual consequences", Tom Sowell. In the extreme case of cancel culture, we often no longer need laws at all: "We create a social climate that reacts so emotionally to certain questions that […] you are immediately pushed into a corner. Very few people want to end up there, so it is better to say nothing." More and more is delegated, directly or indirectly, "upward", leading to ever greater concentrations of power. Karl Popper, the author of The Open Society, warned urgently against this decades ago: "The most important thing is to distrust all those great prophets who carry a patent solution in their pocket and tell you: if you just give me full power, I will lead you to heaven. The answer to that is: we give no one full power over us; we want power reduced to a minimum. Power is itself an evil. And we cannot drive out one evil with another." […] "The basic idea of democracy is to limit power." Why do so many people stand idly by while our democracy is substantially damaged? "We have made ourselves comfortable in our democracy and have arrived in structures in which it has become rather unsexy to be against the state." It becomes especially critical when one considers the role journalism is supposed to play. Being critical of the state is the core business of political journalists. But aren't large parts of journalism increasingly positioning themselves as a bulwark in front of the state, defending all manner of state overreach? What role should the state play in an open society? Have we developed into a nanny state, asking permission for anything and everything instead of showing initiative of our own? Have we been socialized as subjects, and forgotten that the idea of the open society was that we are free, and that the state's task is to enable maximum individual freedom and to reduce state intrusions to an absolute minimum?
Have we unlearned how to deal critically with structures of power? What does this look like across generations? Where does this deference come from? What does constant crisis rhetoric do to us, especially to younger people, even when there is often little reason for it? Have we become crisis-weary? Is this deployed strategically to wear us down? Is the internet an unimaginably powerful manipulation machine? Or is all of that exaggerated? What is the censorship-industrial complex? Why does it have little to do with classical censorship, and yet - precisely under a supposedly liberal government in the USA - became an established means of suppressing information that did not suit state agencies or certain elites? Cambridge Analytica and its ilk are discussed as an example of the power of election influence - or are they rather a multi-million-dollar marketing gag? Is this disinformation hype a financial windfall for social media? If the mechanisms of internet services can supposedly change the outcome of an election, they will surely also suffice to sell more Coke. Are people just sheep who simply follow the next propagandist? Do we therefore need experts to lead these sheep onto the right path with the correct truth? But then why democracy at all - we might as well abolish the tedious exercise and let the experts decide, right? What should we make of NGOs that live to a considerable extent off state funds but claim to act in the interest of "civil society"? What is the so-called post-factual age about? What role do the various actors play here - from government organizations to media, internet services, self-appointed fact-checkers, so-called NGOs and many more? "They manage to create the impression that certain perspectives come from the middle of society, when in the end exactly the opposite is the case." And what about lying - should it be banned, or do people even have a right to lie? Is it sometimes perhaps even a duty to lie? "In an open society, the greatest risk is not the lie itself, but the existence of an institution that has the exclusive right to define truth. […] Whoever wants to ban lies today creates tomorrow the precedent for banning inconvenient truths." Finally: how has the media landscape changed over the last decades? Frank Schirrmacher, Jakob's father, was a publisher of the FAZ. On this, a quote from Hanns Joachim Friedrichs that seems to have fallen out of time: "You can recognize a good journalist by the fact that he does not make common cause with any cause, not even a good one." Where does that still hold today? Stance and activism seem to count for more than the search for truth - sometimes stated with astonishing openness, as by Katherine Maher, CEO of NPR, about Wikipedia: "The people who write these articles, they are not focused on the truth.
They are focused on something else: what is the best that we can know right now […] Perhaps for our most tricky disagreements, seeking the truth and seeking to convince others of the truth, might not be the right place to start.« »I think our reverence for the truth might have become a bit of a distraction that is preventing us from finding consensus and getting important things done.« Findet die Reibung, der Versuch, Wahrheit zu finden, sich ernsthaft mit harten Themen auseinanderzusetzen, in den früheren Leitmedien oder gar im ÖRR noch statt? Oder erleben wir in Medien und Politik eine Konsenskultur statt harter thematischer Arbeit? Werden Medienorganisationen, die sich früher selbst ernst genommen haben und tatsächlich eine wesentliche Rolle in der Gesellschaft gespielt haben, immer mehr zu polarisierenden und nicht ernst zu nehmenden Randerscheinungen? Denken wir an das Etablieren von Fact-Checking bei der BBC? »Der Journalismus, wie wir ihn kennen, hat sich stark entkernt.« Ist die zunehmende »Demokratisierung« der Medienlandschaft – damit auch der Bedeutungsverlust klassischer Medien – eine positive oder negative Entwicklung? »Mein Vater [Frank Schirrmacher] hat mir früher immer gesagt: So lange wird es die FAZ nicht mehr geben.« Wo laufen wir als Gesellschaft hin, und was können wir selbst tun, um die Situation zu verbessern? Referenzen Weitere Episoden Episode 131: Wot Se Fack, Deutschland? Ein Gespräch mit Vince Ebert Episode 130: Populismus und (Ordo)liberalismus, ein Gespräch mit Nils Hesse Episode 125: Ist Fortschritt möglich? Ideen als Widergänger über Generationen Episode 117: Der humpelnde Staat, ein Gespräch mit Prof. Christoph Kletzer Episode 111: Macht. Ein Gespräch mit Christine Bauer-Jelinek Episode 94: Systemisches Denken und gesellschaftliche Verwundbarkeit, ein Gespräch mit Herbert Saurugg Episode 93: Covid. Die unerklärliche Stille nach dem Sturm. Ein Gespräch mit Jan David Zimmermann Episode 88: Liberalismus und Freiheitsgrade, ein Gespräch mit Prof. Christoph Möllers Jakob Schirrmacher Jakob Schirrmacher, Desinformiere dich! Eine Streitschrift Jakob Schirrmacher auf X Free Speech Aid NGO Frank Schirrmacher (FAZ) Fachliche Referenzen Thomas Sowell, Knowledge and Decision, Basic Books (1996) Karl Popper, die offene Gesellschaft und ihre Feinde 1 & 2, Routledge (1945) Max Planck Zitat: The Philosophy of Physics Chapter III (p. 97) W.W. Norton & Company, Inc. New York, New York, USA. 1936 Whistleblower der Cambridge Analytica – Brittany Kaiser im Interview; SRF Sternstunde Philosophie (2020) Matt Taibi, Michael Shellenberger, Censorship-Industrial-Complex, US Congress EU-Umfragen, was denkt Europa Streisand Effekt (Reason, Unintended Consequences) Hanns Joachim Friedrichs Katherine Maher, CEO von NPR, What Wikipedia teaches us about balancing truth and beliefs, TED Talk (2021)
Dive into the terrifying truth of how your personal data became the weapon that manipulated global democracy. Through explosive revelations from Cambridge Analytica whistleblowers and leaked internal documents, we expose the sinister machinery that turned your digital footprint into a psychological warfare tool. From Brexit to presidential elections, discover how shadowy organizations harvested billions of data points to create detailed psychological profiles, manipulating entire populations through precisely targeted digital warfare. This episode rips open the dark underbelly of social media, revealing how tech giants, intelligence agencies, and political operatives transformed your likes, shares, and personal information into the most powerful mind control weapon in history.
Do you love A la folie pas du tout, the Bababam Originals podcast? Then you'll adore our other original creations! Today, we invite you to discover Les Fabuleux Destins, the podcast that plunges you into the most incredible life stories. Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you will be surprised to learn that these very real conspiracies exist. Stories that go far beyond fiction. A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Brittany Kaiser (also known as the Cambridge Analytica whistleblower) shared with us at The Bitcoin 2025 Conference her view on Bitcoin, blockchain and tokenization. She also talked about her new project that allows you to own a portion of a real-world castle. Enjoy!
Here is my Net Worth Tracker (must have to stay in control of your expenses): https://michaelanthonytv.com/networth...
Behavioral Science For Brands: Leveraging behavioral science in brand marketing.
In this episode we speak with Patrick Fagan, behavioral scientist and co-founder of the consultancy Capuchin Behavioral Science. Patrick is the former lead psychologist at Cambridge Analytica, author of Hooked and co-author of Free Your Mind. He shares insights into the hidden drivers of consumer behavior, how data and psychology intersect and why marketers should tap into the irrational mind.
✨ W33K: the Monday that makes the difference! Roughly an hour to start the week with awareness and an extra gear. The topics in this episode:
00:02:24 Advertising arrives on WhatsApp, but not in Europe, visible in status updates.
00:05:11 Investigation into the YouTube channel "Once Were Nerd" for promoting piracy via retrogaming.
00:16:15 The dispute over VAT on users' free sign-ups reaches the European level.
00:23:52 Netflix adopts artificial intelligence to improve productions and cut costs.
00:26:23 Donald Trump shares a deepfake video showing the arrest of Barack Obama.
00:29:07 A serious security vulnerability is discovered in on-premise Microsoft SharePoint.
00:31:34 The use of AI produces faulty legal citations, threatening the American judicial system.
00:34:30 The United Kingdom suspends its demand that Apple create a backdoor.
00:37:59 France opens legal proceedings against X over algorithmic manipulation.
00:42:49 An investigation into the sale and illicit use of data stolen in data breaches.
00:55:23 The Trump administration prepares executive orders for politically neutral AI.
01:00:33 Perplexity negotiates to preinstall its AI browser on smartphones.
01:03:05 A proposal for a new Italian authority to monitor encrypted messaging.
01:07:06 Meta refuses to sign the European code of conduct on artificial intelligence.
01:09:01 Delta Air Lines introduces AI to personalize airline ticket prices.
01:12:37 The EU suspends its investigations into X pending US trade negotiations.
01:13:54 12ft.io, a site used to bypass news paywalls, is shut down.
01:15:18 The investors' lawsuit against Zuckerberg over Cambridge Analytica ends in a settlement.
01:17:08 The EU weighs tariffs on US digital services as trade retaliation.
01:22:59 Artificial intelligence threatens journalism, draining value and clicks from news outlets.
01:24:37 WeTransfer updates its terms of use on analyzing files with AI.
~~~~~ BOOKINGS AND SPONSORSHIP ~~~~~
For commercial inquiries: sales@matteoflora.com
For legal advice: info@42LawFirm.it
~~~~~ SUPPORT THE CHANNEL! ~~~~~
With the PRO Membership you can support the channel » https://link.mgpf.it/pro
If you want my gear, it's here » https://mgpf.it/attrezzatura
~~~~~ FOLLOW ME ONLINE WITH NOTIFICATIONS! ~~~~~
» WHATSAPP CHANNEL » https://link.mgpf.it/wa
» TELEGRAM CHANNEL » https://mgpf.it/tg
» (Free) COURSE "IN FUTURO" » https://mgpf.it/nl
» NEWSLETTER » https://mgpf.it/nl
~~~~~ CIAO INTERNET AND MATTEO FLORA ~~~~~
This is "Ciao Internet!", the first and most-followed TECH POLICY show in Italian, on YouTube and as a podcast. I am MATTEO FLORA and I am:
» Professor of Fundamentals of AI and SuperIntelligence Security (ESE)
» Adjunct professor of Corporate Reputation and Crisis Management (Pavia).
I am a serial digital entrepreneur and have founded:
» The Fool » https://thefool.it - Italy's leading Customer Insight company
» The Magician » https://themagician.agency - an atelier for advocacy and crisis management
» 42 Law Firm » https://42lf.it - the law firm for digital transformation
» ...and many more here: https://matteoflora.com/#aziende
I am a Future Leader (IVLP) of the US State Department under the Obama administration, in the "Combating Cybercrime (2012)" program.
I am President of PermessoNegato, the Italian association that deals with non-consensual pornography and revenge porn.
On TV, I host "Intelligenze Artificiali" on Mediaset/TgCom.
The UK sanctions Russian military intelligence officers tied to GRU cyber units. An AI-powered malware called LameHug targets Windows systems. Google files a lawsuit against the operators of the Badbox 2.0 botnet. A pair of healthcare data breaches impact over 3 million individuals. Researchers report a phishing attack that bypasses FIDO authentication by exploiting QR codes. A critical flaw in Nvidia's Container Toolkit threatens managed AI cloud services. A secure messaging app is found exposing sensitive data due to outdated configurations. Meta investors settle their $8 billion lawsuit. Our guest is Will Markow, CEO of FourOne Insights and N2K CyberWire Senior Workforce Analyst, with a data-driven look at how AI is affecting jobs. Belgian police provide timely cyber tips, baked right in.
Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
CyberWire Guest
Today we have Will Markow, CEO of FourOne Insights and N2K CyberWire Senior Workforce Analyst, discussing how AI is affecting jobs.
Got cybersecurity, IT, or project management certification goals? For the past 25 years, N2K's practice tests have helped more than half a million professionals reach certification success. Grow your career and reach your goals faster with N2K's full exam prep of practice tests, labs, and training courses for Microsoft, CompTIA, PMI, Amazon, and more at n2k.com/certify.
Selected Reading
Breaking: UK sanctions Russian cyber spies accused of facilitating murders (The Record)
Russia Linked to New Malware Targeting Email Accounts for Espionage (Infosecurity Magazine)
New “LameHug” Malware Deploys AI-Generated Commands (Infosecurity Magazine)
Google Sues Operators of 10-Million-Device Badbox 2.0 Botnet (SecurityWeek)
1.4 Million Affected by Data Breach at Virginia Radiology Practice (SecurityWeek)
Anne Arundel Dermatology Data Breach Impacts 1.9 Million People (SecurityWeek)
Phishing attack abuses QR codes to bypass FIDO keys (SC Media)
Critical Nvidia Toolkit Flaw Exposes AI Cloud Services to Hacking (SecurityWeek)
New TeleMessage SGNL Flaw Is Actively Being Exploited by Attackers (Hackread)
Meta investors, Zuckerberg settle $8 billion privacy lawsuit tied to Cambridge Analytica scandal (The Record)
Loaf and order: Belgian police launch bread-based cybersecurity campaign (Graham Cluley)
Audience Survey
Complete our annual audience survey before August 31. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
The U.K. and Germany sign their first friendship treaty since World War II, U.K. special forces and spy details were reportedly compromised in the 2022 Afghan data breach, The U.K. lowers the voting age to 16 for the next general election, Meta settles an $8 billion privacy lawsuit over the Cambridge Analytica scandal, Puerto Rico's governor signs a ban on transgender treatments for those under 21, Trump signs the HALT Fentanyl Act into law, Italy considers releasing 10,000 prisoners to ease overcrowding, A fire at an Iraqi mall kills dozens, Epstein prosecutor Maurene Comey is fired, and eight healthy babies are born in the U.K. using DNA from three people. Sources: www.verity.news
The Senate approved the Trump administration's rescission package today, which proposes taking back $9 billion in federal funding already allocated for programs such as public media and foreign aid. We'll get into the implications for the bipartisan nature of the budgeting process. Plus, we explain the latest development of a lawsuit against Meta CEO Mark Zuckerberg and company leaders, which stems back to the 2018 privacy scandal involving Cambridge Analytica. Then, we'll smile at a piece of new technology that claims to shoot mosquitoes using lasers. And, do you believe in supernatural forces?
Here's everything we talked about today:
“Russ Vought: Appropriations process ‘has to be less bipartisan'” from POLITICO
“The Appropriations Process Was Already Broken. The Rescission Bill May Have Just Shattered It.” from NOTUS
“Meta investors, Zuckerberg reach settlement to end $8 billion trial over Facebook privacy litigation” from Reuters
“This ‘Iron Dome' for mosquitoes shoots down bugs with lasers” from Fast Company
“Angels, witches, crystals and black cats: How supernatural beliefs vary across different groups in the US” from The Conversation
We love hearing from you. Leave us a voicemail at 508-U-B-SMART or email makemesmart@marketplace.org.
The US Congress has passed its first major cryptocurrency legislation — the GENIUS Act, aimed at regulating stablecoins and paving the way for wider adoption of digital assets in everyday finance. Also on the programme, Meta settles an $8bn lawsuit over data privacy breaches tied to the Cambridge Analytica scandal. Roger Hearing asks what the outcome means for Big Tech. And Arsenal smash records with the world's most expensive women's football transfer.
This Day in Legal History: Residence Act
On July 16, 1790, the U.S. Congress passed the Residence Act, establishing the District of Columbia as the permanent seat of the federal government. The decision was the product of a political compromise between Alexander Hamilton and Thomas Jefferson, brokered in part by James Madison, whereby southern states would support federal assumption of state debts in exchange for locating the capital along the Potomac River. The land for the new district was ceded by both Maryland and Virginia, and the Constitution allowed for a federal district not exceeding ten miles square. President George Washington personally selected the site, which straddled the Potomac and included portions of Alexandria and Georgetown.
Pierre Charles L'Enfant was tasked with designing the city's layout, envisioning broad avenues and grand public spaces to reflect the dignity of the new republic. In the early years, however, Washington, D.C. remained underdeveloped and muddy, with many of the federal buildings still under construction. Over time, most major institutions and monuments were built on the Maryland side of the river, causing concern among residents on the Virginia side. In 1846, responding to economic neglect and the declining significance of Alexandria as a port, Congress approved Virginia's request to retrocede its portion of the district. This land, now Arlington County and part of the city of Alexandria, rejoined Virginia, reducing the size of D.C. to its current boundaries.
The Residence Act and subsequent development of Washington, D.C. created a unique legal and political entity—neither a state nor part of one. This status continues to affect the rights and representation of its residents, a legal debate that remains active today.
An $8 billion shareholder lawsuit against Meta CEO Mark Zuckerberg and other current and former company leaders began this week in Delaware's Chancery Court, focusing on alleged failures to uphold Facebook's 2012 privacy agreement with the Federal Trade Commission (FTC). The plaintiffs argue that Zuckerberg, Sheryl Sandberg, Peter Thiel, Marc Andreessen, Reed Hastings, and others knowingly allowed Facebook user data to be harvested—specifically in relation to the Cambridge Analytica scandal that surfaced in 2018. That breach led to a record $5 billion FTC fine, which shareholders now want the defendants to personally reimburse, along with additional legal costs.
The trial, presided over by Chancellor Kathaleen McCormick, will feature testimony from several high-profile witnesses, including White House Chief of Staff Jeffrey Zients, who served on Meta's board from 2018 to 2020. Plaintiffs claim Zuckerberg profited by selling Facebook stock before the public learned of the data misuse, allegedly netting over $1 billion. Defendants deny all wrongdoing, maintaining they relied on compliance experts and were misled by Cambridge Analytica.
This is the first oversight liability case of its kind to reach trial, a notoriously difficult claim under Delaware corporate law. Meta itself is not named as a defendant, and the company has declined to comment, though it has previously stated it has invested heavily in privacy protections since 2019.
Facebook privacy practices the focus of $8 billion trial targeting Zuckerberg | Reuters
Kilmar Abrego, a Salvadoran migrant wrongly deported from the U.S. despite legal protections, is scheduled to appear in a Tennessee federal court on smuggling charges, though the future of his case remains murky. Abrego had been living legally in Maryland with a work permit and was protected by a 2019 court order barring deportation to El Salvador due to threats of gang violence. Nonetheless, he was deported in March and imprisoned there before being returned to the U.S. after federal prosecutors indicted him for allegedly participating in a human smuggling operation.
He has pleaded not guilty, and his lawyers claim the charges are politically motivated—a cover for the Trump administration's mishandling of his case. They also challenge the credibility of prosecution witnesses, who are alleged co-conspirators seeking to avoid their own deportation or charges. A magistrate judge ordered Abrego released on bail, but prosecutors are appealing, arguing he poses a flight risk and should remain detained. Even if released from criminal custody, immigration officials have said they plan to detain him immediately for possible deportation.
Judge Waverly Crenshaw is expected to hear arguments and potentially rule on his bail status. Abrego's attorneys have asked to delay any release until Wednesday to prevent sudden removal before he can challenge deportation. The administration has signaled it may try to deport him to a third country—possibly Mexico or South Sudan—instead of El Salvador. His case has become emblematic of broader concerns over due process in the Trump administration's aggressive immigration enforcement agenda.
Returned deportee Abrego due in Tennessee court; future of smuggling case uncertain | Reuters
Milbank, a prominent New York-based law firm, is representing the cities of Newark and Hoboken in a lawsuit brought by President Donald Trump's administration over their immigration policies. The federal suit, filed in May, accuses the cities of obstructing immigration enforcement by acting as “sanctuary jurisdictions.” Milbank's defense team includes notable figures like former acting U.S. Solicitor General Neal Katyal and ex-New Jersey Attorney General Gurbir Grewal, now both partners at the firm.
Milbank's involvement in the case comes just months after it agreed to a deal with the Trump administration to avoid being targeted by executive orders aimed at major law firms. Trump had accused those firms of politicizing legal work and using racial diversity initiatives improperly. In response, several firms—including Milbank—committed to providing nearly $1 billion in pro bono legal services to mutually agreed-upon causes. Milbank set aside $100 million as part of its agreement, though it was not formally sanctioned.
Despite that arrangement, Milbank has taken on multiple high-profile cases opposing the Trump administration. In addition to defending Newark and Hoboken, Katyal is leading two other cases challenging Trump policies, including import tariffs and alleged wrongful termination of a federal board member. The firm's role in these cases highlights its continued legal pushback against the administration, even while navigating its negotiated settlement with the White House.
Law firm Milbank defends NJ cities in Trump immigration lawsuit | Reuters
A piece I wrote for Inside Higher Ed this week argues that tax policy deserves a central place in the undergraduate liberal arts curriculum—not as a technical specialty but as a cornerstone of civic education. I open by reflecting on the tax legislation passed under President Trump, that is the One Big Beautiful Bill Act, noting how its complexity served political ends by shielding its full implications from public understanding. That opacity, I suggest, is not accidental—and it's exactly why we need to teach tax more broadly.
In my course at Drexel University, “Introduction to Tax Theory and Policy,” I approach tax not as accounting or law but as a form of civic infrastructure. The course welcomes students from all majors, encouraging them to think about taxation in terms of fairness, power, and values. We explore how tax policy shapes economic behavior, redistributes resources, and reflects assumptions about what and whom government should support. Students analyze topics ranging from estate taxes to digital surveillance and propose their own reforms grounded in ethical and civic reasoning.
By demystifying the tax code and framing it as a site of public decision-making, I aim to empower students to see themselves not just as subjects of tax policy but as potential shapers of it. Their engagement—often surprisingly enthusiastic—reveals a hunger for this kind of critical, values-based education. Ultimately, I argue that tax belongs in the liberal arts because it teaches students not just how the world works, but how it's been made to work—and how it could be remade more equitably.
Tax Policy Belongs in Liberal Arts Curriculum (opinion)
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
This Day in Legal History: Sedition Act Passed
On this day in legal history, July 14, 1798, the United States Congress passed the Sedition Act, one of the most controversial laws in the nation's early political history. Part of the broader Alien and Sedition Acts, this law made it a crime to publish “any false, scandalous and malicious writing” against the federal government, Congress, or the President with the intent to defame or bring them into disrepute. Ostensibly aimed at quelling foreign influence and internal subversion during the quasi-war with France, the Act was also a clear weapon against domestic political opposition—particularly the Democratic-Republicans led by Thomas Jefferson.
Federalist lawmakers, who dominated Congress and the presidency under John Adams, justified the law as necessary for national security. However, it was widely criticized as an assault on First Amendment rights and a means of silencing dissent. The law resulted in the prosecution of several Republican editors and even members of Congress, including Representative Matthew Lyon of Vermont, who was sentenced to four months in jail.
The Sedition Act provoked a fierce backlash and spurred Jefferson and James Madison to draft the Kentucky and Virginia Resolutions, which introduced the doctrine of nullification—the idea that states could declare federal laws unconstitutional. Public outrage over the Act played a significant role in the Federalists' defeat in the election of 1800 and the subsequent repeal or expiration of most provisions of the Alien and Sedition Acts.
The Sedition Act expired on March 3, 1801, the day before Jefferson assumed the presidency. Its legacy remains a cautionary tale about the tension between national security and civil liberties, and it is frequently cited in debates over the limits of free speech in times of political crisis.
California tax authorities have flagged over 1,500 high-end vehicles sold by 500 dealerships as likely being registered through Montana LLCs in an attempt to avoid California sales tax and vehicle registration fees. These vehicles—worth more than $300 million collectively—are tied to a long-running strategy used by buyers of luxury assets like exotic cars, yachts, and RVs to exploit Montana's zero percent sales tax and minimal registration costs. Dealers and buyers now face possible penalties, audits, and investigations as California intensifies enforcement.
The scheme works like this: a buyer sets up a Montana LLC, purchases and registers the vehicle under that entity, and keeps the car out-of-state on paper—even if it's garaged and driven daily in a state like California. That regulatory fiction is precisely what states are cracking down on. Bloomberg Tax recently highlighted the scale of the problem, noting that more than 600,000 vehicles are likely registered in Montana but used elsewhere, costing states billions annually in uncollected taxes.
Montana LLCs have become a go-to workaround for the wealthy looking to sidestep their home-state tax obligations. While technically legal under Montana law, when the vehicle is used in another state without proper registration or tax payment, it becomes a form of tax evasion. States like Illinois and Utah are following California's lead, passing laws to “look through” LLCs and hold in-state beneficial owners accountable.
This isn't just a niche tax dodge—it's a broader challenge to state tax enforcement. As wealthier individuals increasingly exploit differences between state tax codes, it's prompting legal reforms and inter-agency cooperation to close loopholes once thought too obscure or dispersed to address. California's latest enforcement push suggests these Montana LLC schemes are no longer flying under the radar—and that other states may soon follow with penalties and structural reforms of their own.
California Finds 1,500 Vehicles Linked to Montana Tax Shelters
Nearly two-thirds of the U.S. Department of Justice's Federal Programs Branch—the unit charged with defending Trump administration policies in court—has resigned or announced plans to leave since Donald Trump's reelection. Out of roughly 110 attorneys, 69 have exited, according to a list reviewed by Reuters. The exodus includes nearly half the section's supervisors and is far greater than typical turnover seen in prior administrations. While the Trump administration maintains its legal actions are within constitutional bounds, current and former DOJ lawyers cite an overwhelming workload and ethical concerns as key drivers of the departures.
Many career lawyers reportedly struggled to defend policies they saw as legally dubious or procedurally flawed, including efforts to revoke birthright citizenship and claw back federal funding from universities. Several feared they'd be pressured to make misleading or unethical arguments in court. In some cases, lawyers were expected to defend executive orders with minimal input from the agencies involved. A recent whistleblower complaint even alleged retaliation against a supervisor who refused to make unsupportable claims in immigration cases.
Despite the mass departures, the Trump administration continues to rely heavily on the unit as it seeks to expand executive power following favorable Supreme Court rulings. The DOJ has reassigned attorneys from other divisions, brought in over a dozen political appointees, and exempted the unit from the federal hiring freeze to keep up with litigation demands. Critics argue the changes undermine DOJ independence, while supporters claim the administration is merely ensuring its policies get a fair defense in court.
Two-thirds of the DOJ unit defending Trump policies in court have quit | Reuters
An $8 billion trial kicks off this week in Delaware where Meta CEO Mark Zuckerberg and several current and former Facebook leaders are accused by shareholders of knowingly violating a 2012 FTC consent decree aimed at protecting user privacy. The lawsuit stems from the 2018 revelation that Cambridge Analytica accessed data from millions of Facebook users without their consent, ultimately leading to billions in fines and costs for Meta—including a $5 billion penalty from the FTC in 2019. Shareholders, including union pension funds like California's State Teachers' Retirement System, want Zuckerberg and others to reimburse the company, alleging they operated Facebook as a law-breaking enterprise.
Defendants in the case include Sheryl Sandberg, Marc Andreessen, Peter Thiel, and Reed Hastings. While Meta itself is not a defendant, the case focuses on the board's alleged failure to oversee privacy practices and enforce the 2012 agreement. The plaintiffs must prove what legal experts call the most difficult claim in corporate law: a total failure of oversight by directors. Delaware law gives leeway for poor business decisions—but not illegal ones, even if they're profitable.
Zuckerberg is expected to testify, and plaintiffs argue he personally directed deceptive privacy practices and tried to offload stock ahead of the Cambridge Analytica scandal to avoid losses, allegedly netting $1 billion. Defendants deny wrongdoing, claiming the company took privacy seriously by investing in compliance and being deceived by Cambridge Analytica.
Meta investors, Zuckerberg to square off at $8 billion trial over alleged privacy violations | Reuters
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
Do you love La Traque, the Bababam Originals podcast? Then you'll adore our other original creations! Today, we invite you to discover Les Fabuleux Destins, the podcast that plunges you into the most incredible stories. Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you will be surprised to learn that these very real conspiracies exist. Stories that go far beyond fiction. A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this powerful and eye-opening episode of The Military Mentor Podcast, Big Chief dives into the global chaos we're all witnessing—from the wars in Ukraine and Gaza to civil unrest in the U.S.—and unveils a truth most people don't realize: we've all become unwilling soldiers in a new kind of war. But this isn't about boots on the ground. This is information warfare—a battlefield that plays out through screens, algorithms, and psychological manipulation. Drawing from his military leadership background, Mike explains how tactics once reserved for foreign PSYOP missions are now being used to influence public opinion at home. He breaks down real-world examples like Cambridge Analytica, AI deepfakes, and algorithmic radicalization to show how society is being shaped, divided, and often weaponized without people even knowing it.Listeners are challenged to check their biases, question their sources, and protect their mental terrain the same way soldiers guard physical ground. With wars now being won or lost in minds rather than in trenches, this episode is both a wake-up call and a guide to navigating today's volatile information environment. Whether you're a veteran or a civilian, the call to lead, think critically, and protect our shared humanity has never been more urgent.
FinTech is just lipstick on a pig. Same banks, same problems, better UI.
In today's episode, we sit down with Sid Gandhi, CEO of PolyBase Labs, who's building Payy. Their mission is to bring money you actually own and can use. While everyone chases meme coins, Sid is solving real problems: frozen Venmo accounts, inaccessible financial services, and crypto's coming Cambridge Analytica moment. We explore Payy's gasless transactions, zero-fee fiat on/off-ramping, and positioning as infrastructure for companies like Robinhood.
Let's get into it.
The Rollup
---
Newton is the trust layer for autonomous finance. Smart. Secure. Verifiable. Built for a future where AI agents replace apps and interfaces. Learn more here: https://www.magicnewton.com/
----
Website: https://therollup.co/
Spotify: https://open.spotify.com/show/1P6ZeYd9vbF3hJA2n7qoL5?si=7230787bb90947ef
Podcast: https://therollup.co/category/podcast
Follow us on X: https://www.x.com/therollupco
Follow Rob on X: https://www.x.com/robbie_rollup
Follow Andy on X: https://www.x.com/ayyyeandy
Join our TG group: https://t.me/+8ARkR_YZixE5YjBh
The Rollup Disclosures: https://therollup.co/the-rollup-discl
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you will be surprised to learn that these very real conspiracies exist. Stories that go far beyond fiction.
The McLibel affair: Victoria orders a meal at McDonald's, delighted by the speedy service. On her way out, an activist hands her a leaflet denouncing the company's practices. That simple piece of paper pulls her into the McLibel affair, a historic trial pitting two penniless activists against the fast-food giant. Despite a conviction, their fight exposes McDonald's abuses and changes British law. A battle lost on the surface, but a victory for freedom of expression.
To discover other fascinating stories, click below:
[INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4)
[INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4)
[INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4)
[INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4)
A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you will be surprised to learn that these very real conspiracies exist. Stories that go far beyond fiction.
The Pentagon Papers affair: At the height of the Vietnam War, American soldiers trapped in the jungle face an invisible enemy while, in Washington, the government knowingly lies about the reality of the conflict. In 1971, 7,000 pages of secret documents leak: the Pentagon Papers. They reveal that successive presidents knew the war was lost yet kept sending young men to die. Thanks to the courage of journalists at the Times and the Post, the truth comes out.
To discover other fascinating stories, click below:
[INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4)
[INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4)
[INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4)
[INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4)
A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you will be surprised to learn that these very real conspiracies exist. Stories that go far beyond fiction.
The Mediator affair: Murielle, a young mother, discovers too late that the appetite suppressants that helped her lose weight have also left her gravely weakened. She comes close to death, not knowing she is one of the many victims of Mediator, a drug wrongly prescribed for weight loss. In 2007, the pulmonologist Irène Frachon, alone against everyone, begins to link the drug to hundreds of cases of heart damage. Her relentless fight against the Servier laboratories would last more than ten years. The scandal finally breaks, revealing one of France's greatest public health tragedies.
To discover other fascinating stories, click below:
[INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4)
[INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4)
[INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4)
[INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4)
A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you will be surprised to learn that these very real conspiracies exist. Stories that go far beyond fiction.
The Cambridge Analytica affair: In a hearing room of the US Congress, Mark Zuckerberg faces 44 senators, impassive despite the accusations. At the heart of the affair: Cambridge Analytica, which siphoned off the data of 80 million Facebook users. That data made it possible to target undecided voters with tailor-made political content. The result: two historic votes, Brexit and Trump's election, may have been swayed. Discover this story and dive into one of the biggest state scandals of the 21st century.
To discover other fascinating stories, click below:
[INÉDIT] Paul Watson, une vie d'engagement : combat contre l'Apocalypse (1/4)
[INÉDIT] Paul Watson, une vie d'engagement : la bombe mentale (2/4)
[INÉDIT] Paul Watson, une vie d'engagement : massacres et sauvetages (3/4)
[INÉDIT] Paul Watson, une vie d'engagement : l'ennemi public des braconniers (4/4)
A Bababam Originals podcast. Written by: Clément Prévaux. Production: Bababam. Voice: Florian Bayoux. Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to Les Fabuleux Destins, the podcast for discovering true and astonishing stories. This week, discover 4 stories that rank among the biggest state scandals in history. Have you ever heard of the Cambridge Analytica affair? Or the Pentagon Papers scandal, or the Mediator scandal? If these stories mean nothing to you, you will be surprised to learn that these very real conspiracies exist. Stories that go far beyond fiction. Join us and explore these captivating tales, where every detail will keep you on the edge of your seat. And all week long, as always, (re)listen to our best Fabuleux Destins and our best episodes of La Traque, the podcast that plunges you into history's greatest criminal manhunts. Learn more about your ad choices. Visit megaphone.fm/adchoices
What if an algorithm could read your personality better than your best friend? In this eye-opening episode, Oliver sits down with Dr. Sandra Matz – Columbia Business School professor, computational psychologist, and author of Mindmasters – to explore how our digital footprints reveal far more than we think.
Sandra unpacks how AI tools can accurately infer our Big Five personality traits, mental health, and values using everything from Google searches to smartphone activity. Oliver and Sandra discuss the power – and ethical challenges – of data-driven persuasion, including what brands and political campaigns really know about you, and how personalization is reshaping communication, marketing, and even internal business culture.
Drawing on her role in helping uncover the truth behind Cambridge Analytica and her work with leading global brands, Sandra provides a rare, credible look at how psychological targeting works – and where it's headed next. From mental health interventions to personalized internal comms, this conversation goes deep into the future of influence.
Plus: Why personality tests like Myers-Briggs are outdated, how to build high-performing teams using psychology, and Sandra's #1 piece of communication advice.
Find out more about Sandra Matz: https://sandramatz.com/
Make sure you're subscribed to the Speak Like a CEO newsletter to become a top 1% communicator: https://eoipsocommunications.com/newsletter/
Do you want to speak with confidence, inspire your team, and accelerate your career? You can now join the Speak Like a CEO Academy and work with Oliver over the next 90 days to become a top 1% communicator and transform your career: https://speaklikeaceoacademy.com/
ProPublica's investigation reveals the National Shooting Sports Foundation has been secretly sharing gun buyers' personal information, including underwear sizes, for political purposes. This privacy breach raises serious concerns about data exploitation even in industries that publicly position themselves as defenders of individual rights.
• Gun owners group demands federal investigation into firearms industry data sharing
• Personal data shared included underwear sizes and was allegedly used for political targeting
• NSSF collaborated with Cambridge Analytica to enhance voter data
• Privacy concerns should transcend political divides - "Privacy is an everybody problem"
• The gun industry publicly defends rights while quietly engaging in data exploitation
• Senator Richard Blumenthal supports investigation into these practices
If you're a privacy professional or legal expert with insights on this issue, we'd love to have you on the show to discuss this further and answer some of the questions we've raised today.
Support the show
A fascinating exploration of how algorithms penetrate the most intimate aspects of our psychology—from the pioneering expert on psychological targeting. There are more pieces of digital data than there are stars in the universe. This data helps us monitor our planet, decipher our genetic code, and take a deep dive into our psychology. As algorithms become increasingly adept at accessing the human mind, they also become more and more powerful at controlling it, enticing us to buy a certain product or vote for a certain political candidate. Some of us say this technological trend is no big deal. Others consider it one of the greatest threats to humanity. But what if the truth is more nuanced and mind-bending than that? In Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior (Harvard Business Press, 2025), Columbia Business School professor Sandra Matz reveals in fascinating detail how big data offers insights into the most intimate aspects of our psyches and how these insights empower an external influence over the choices we make. This can be creepy, manipulative, and downright harmful, with scandals like that of British consulting firm Cambridge Analytica being merely the tip of the iceberg. Yet big data also holds enormous potential to help us live healthier, happier lives—for example, by improving our mental health, encouraging better financial decisions, or enabling us to break out of our echo chambers. With passion and clear-eyed precision, Matz shows us how to manage psychological targeting and redesign the data game. Mindmasters is a riveting look at what our digital footprints reveal about us, how they're being used—for good and for ill—and how we can gain power over the data that defines us. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/psychology
Mark Turnbull has spent thirty-five years as a geopolitical communications strategist, working behind the scenes to shape some of the most consequential political moments of modern times. It's a career that's taken him from the American invasion of Iraq to the final days of apartheid South Africa — from the inner sanctum of a leading KGB defector to the cockpit of Cambridge Analytica.
In this episode, he reflects generously on his terminal cancer diagnosis and how, as a self-proclaimed "incurable optimist", he remains fiercely upbeat about his life and how he continues to lead it, with purpose, resilience and courage. Here is Mark's Crisis Compass.
Host: Andy Coulson
CWC team: Jane Sankey, Hana Walker-Brown, Bill Griffin
With special thanks to Ioana Barbu and the brilliant people at Global
For all PR and guest approaches please contact – podcast@coulsonpartners.com
Mark Turnbull has spent thirty-five years as a geopolitical communications strategist, working behind the scenes to shape some of the most consequential political moments of modern times. It's a career that's taken him from the American invasion of Iraq to the final days of apartheid South Africa — from the inner sanctum of a leading KGB defector to the cockpit of Cambridge Analytica.
In this episode, Mark talks about the physical dangers and reputational risks of his trade — and the pressure it put on his family. From being ambushed by insurgents in Baghdad and battling Somali warlords, to taking on Putin's propaganda machine and the sting operation that would bring down Trump campaign agency Cambridge Analytica. Along the way, it cast him as a poster boy for data theft and dirty tricks.
He reveals the chilling moment when Alexander Litvinenko warned him about the KGB's preferred method of assassination and lifts the lid on the Cambridge Analytica sting operation that made global headlines—explaining how it unfolded, what the media got wrong, and the cost to his professional and personal life. Finally, he reflects generously on his terminal cancer diagnosis and how, as a self-proclaimed "incurable optimist," he remains fiercely upbeat about his life and how he continues to lead it, with purpose, resilience and courage.
Host: Andy Coulson. CWC team: Jane Sankey, Hana Walker-Brown, Bill Griffin. With special thanks to Ioana Barbu and the brilliant people at Global. For all PR and guest approaches please contact podcast@coulsonpartners.com
Today's West Coast Cookbook & Speakeasy Podcast for our especially special Daily Special, River City Hash Mondays, is now available on the Spreaker Player!
Starting off in the Bistro Cafe, Canada is taking the lead to show the world how to destroy Donald Trump.
Then, on the rest of the menu, the man tapped by RFK Jr to run the clinical trial to tie vaccines to autism was previously convicted of practicing medicine without a license, had given autistic children a dangerous drug not approved for use in the US, and improperly prescribed puberty blockers; previously charged with misrepresenting its involvement in the Cambridge Analytica scandal and lying about its role in the deportation system set up by Trump, Peter Thiel's Palantir is helping DOGE with a massive IRS data project; and, the community is fighting back after a Florida teacher was fired for calling a trans student by their name.
After the break, we move to the Chef's Table, where the Irish privacy watchdog is investigating Musk's use of personal data to train the Grok AI chatbot; and, Australian academics refuse to attend US conferences for fear of being detained, or worse.
Bon Appétit!
The Netroots Radio Live Player: Keep Your Resistance Radio Beaming 24/7/365!
"I was never a spy. I was with the OSS organization. We had a number of women, but we were all office help." -- Julia Child
Have our private lives become inevitably political in today's age of social media? Ray Brescia certainly thinks so. His new book, The Private is Political, examines how tech companies surveil and influence users in the age of surveillance capitalism. Brescia argues that private companies collect vast amounts of personal data with fewer restrictions than governments, potentially enabling harassment and manipulation of marginalized groups. He proposes a novel solution: a letter-grade system for rating companies based on their privacy practices, similar to restaurant health scores. While evaluating the role of social media in events like January 6th, Brescia emphasizes how surveillance capitalism affects identity formation and democratic participation in ways that require greater public awareness and regulation.
Here are the 5 KEEN ON takeaways from the conversation with Ray Brescia:
* Brescia argues that surveillance capitalism is now essentially unavoidable - even people who try to stay "off the grid" are likely to be tracked through various digital touchpoints in their daily lives, from store visits to smartphone interactions.
* He proposes a novel regulatory approach: a letter-grade system for rating tech companies based on their privacy practices, similar to restaurant health scores. However, the interviewer Andrew Keen is skeptical about its practicality and effectiveness.
* Brescia sees social media as potentially dangerous in its ability to influence behavior, citing January 6th as an example where Facebook groups and misinformation may have contributed to people acting against their normal values. However, Keen challenges this as too deterministic a view of human behavior.
* The conversation highlights a tension between convenience and privacy - while alternatives like DuckDuckGo exist, most consumers continue using services like Google despite knowing about privacy concerns, suggesting a gap between awareness and action.
* Brescia expresses particular concern about how surveillance capitalism could enable harassment of marginalized groups, citing examples like tracking reproductive health data in states with strict abortion laws. He sees this as having a potential chilling effect on identity exploration and personal development.
The Private is Political: Full Transcript. Interview by Andrew Keen.
KEEN: About 6 or 7 years ago, I hosted one of my most popular shows featuring Shoshana Zuboff talking about surveillance capitalism. She wrote "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power"—a book I actually blurbed. Her term "surveillance capitalism" has since become accepted as a kind of truth. Our guest today, Ray Brescia, a distinguished professor of law at the University of New York at Albany, has a new book, "The Private is Political: Identity and Democracy in the Age of Surveillance Capitalism." Ray, you take the age of surveillance capitalism for granted. Is that fair? Is surveillance capitalism just a given in February 2025?
RAY BRESCIA: I think that's right. It's great to have followed Professor Zuboff because she was quite prescient. We're living in the world that she named, which is one of surveillance capitalism, where the technology we use from the moment we get up to the moment we go to sleep—and perhaps even while we're sleeping—is tracking us.
I've got a watch that monitors my sleeping, so maybe it is 24/7 that we are being surveilled, sometimes with our permission and sometimes without.
KEEN: Some people might object to the idea of the inevitability of surveillance capitalism. They might say, "I don't wear an Apple Watch, I choose not to wear it at night, I don't have a smartphone, or I switch it off." There's nothing inevitable about the age of surveillance capitalism. How would you respond to that?
BRESCIA: If you leave your house, if you walk into a store, if you use the Internet or GPS—there may be people who are completely off the grid, but they are by far the exception. Even for them, there are still ways to be surveilled. Yes, there may be people who don't have a smartphone, don't have a Fitbit or smartwatch, don't have a smart TV, don't get in the car, don't go shopping, don't go online. But they really are the exception.
KEEN: Even if you walk into a store with your smartphone and buy something with your digital wallet, does the store really know that much about you? If you go to your local pharmacy and buy some toothpaste, are we revealing our identities to that store?
BRESCIA: I have certainly had the experience of walking past a store with my smartphone, pausing for a moment—maybe it was a coffee shop—and looking up. Within minutes, I received an ad pushed to me by that store. Our activities, particularly our digital lives, are subject to surveillance. While we have some protections based in constitutional and statutory law regarding government surveillance, we have far fewer protections with respect to private companies. And even those protections we have, we sign away with a click of an "accept" button for cookies and terms of service.
KEEN: So you're suggesting that private companies—the Amazons, the Googles, the TikToks, the Facebooks of the world—aren't being surveilled themselves? It's only us, the individual, the citizen?
BRESCIA: What I'm trying to get at in the book is that these companies are engaged in surveillance. Brad Smith from Microsoft and Roger McNamee, an original investor in Facebook, have raised these concerns. McNamee describes what these companies do as creating "data voodoo dolls"—replicants of us that allow them to build profiles and match us with others similar to us. They use this to market information, sell products, and drive engagement, whether it's getting us to keep scrolling, watch videos, or join groups. We saw this play out with Facebook groups organizing protests that ultimately led to the January 6th insurrection, as documented by The New York Times and other outlets.
KEEN: You live up in Hastings-on-Hudson and work in Albany. Given the nature of this book, I can guess your politics. Had you been in Washington, D.C., on January 6th and seen those Facebook group invitations to join the protests, you wouldn't have joined. This data only confirms what we already think. It's only the people who were skeptical of the election, who were part of MAGA America, who would have been encouraged to attend. So why does it matter?
BRESCIA: I don't think that's necessarily the case. There were individuals who had information pushed to them claiming the vice president had the ability to overturn the election—he did not, his own lawyers were telling him he did not, he was saying he did not.
But people were convinced he could. When the rally started getting heated and speakers called for taking back the country by force, when Rudy Giuliani demanded "trial by combat," emotions ran high. There are individuals now in jail who are saying, "I don't want a pardon. What I did that day wasn't me." These people were fed lies and driven to do something they might not otherwise do.
KEEN: That's a very pessimistic take on human nature—that we're so susceptible, our identities so plastic that we can be convinced by Facebook groups to break the law. Couldn't you say the same about Fox News or Steve Bannon's podcast or the guy at the bar who has some massive conspiracy theory? At what point must we be responsible for what we do?
BRESCIA: We should always be responsible for what we do. Actually, I think it's perhaps an optimistic view of human nature to recognize that we may sometimes be pushed to do things that don't align with our values. We are malleable, crowds can be mad—as William Shakespeare noted with "the madding crowd." Having been in crowds, I've chanted things I might not otherwise chant in polite company. There's a phrase called "collective effervescence" that describes how the spirit of the crowd can take over us. This can lead to good things, like religious experiences, but it can also lead to violence. All of this is accelerated with social media. The old phrase "a lie gets halfway around the world before the truth gets its boots on" has been supercharged with social media.
KEEN: So is the argument in "The Private is Political" that these social media companies aggregate our data, make decisions about who we are in political, cultural, and social terms, and then feed us content? Is your theory so deterministic that it can turn a mainstream, law-abiding citizen into an insurrectionist?
BRESCIA: I wouldn't go that far. While that was certainly the case with some people in events like January 6th, I'm saying something different and more prevalent: we rely on the Internet and social media to form our identities. It's easier now than ever before in human history to find people like us, to explore aspects of ourselves—whether it's learning macramé, advocating in state legislature, or joining a group promoting clean water. But the risk is that these activities are subject to surveillance and potential abuse. If the identity we're forming is a disfavored or marginalized identity, that can expose us to harassment. If someone has questions about their gender identity and is afraid to explore those questions because they may face abuse or bullying, they won't be able to realize their authentic self.
KEEN: What do you mean by harassment and abuse? This argument exists both on the left and right. J.D. Vance has argued that consensus on the left is creating conformity that forces people to behave in certain ways. You get the same arguments on the left. How does it actually work?
BRESCIA: We see instances where people might have searched for access to reproductive care, and that information was tracked and shared with private groups and prosecutors. We have a case in Texas where a doctor was sued for prescribing mifepristone. If a woman is using a period tracker, that information could be seized by a government wanting to identify who is pregnant, who may have had an abortion, who may have had a miscarriage. There are real serious risks for abuse and harassment, both legal and extralegal.
KEEN: We had Margaret Atwood on the show a few years ago.
Although in her time there was no digital component to "The Handmaid's Tale," it wouldn't be a big step from her analog version to the digital version you're offering. Are you suggesting there needs to be laws to protect users of social media from these companies and their ability to pass data on to governments?
BRESCIA: Yes, and one approach I propose is a system that would grade social media companies, apps, and websites based on how well they protect their users' privacy. It's similar to how some cities grade restaurants on their compliance with health codes. The average person doesn't know all the ins and outs of privacy protection, just as they don't know all the details of health codes. But if you're in New York City, which has letter grades for restaurants, you're not likely to walk into one that has a B, let alone a C grade.
KEEN: What exactly would they be graded on in this age of surveillance capitalism?
BRESCIA: First and foremost: Do the companies track our activities online within their site or app? Do they sell our data to brokers? Do they retain that data? Do they use algorithms to push information to us? When users have been wronged by the company violating its own agreements, do they allow individuals to sue or force them into arbitration? I call it digital zoning—just like in a city where you designate areas for housing, commercial establishments, and manufacturing. Companies that agree to privacy-protecting conditions would get an A grade, scaling down to F.
KEEN: The world is not a law school where companies get graded. Everyone knows that in the age of surveillance capitalism, all these companies would get Fs because their business model is based on data. This sounds entirely unrealistic. Is this just a polemical exercise, or are you serious?
BRESCIA: I'm dead serious. And I don't think it's the heavy hand of the state. In fact, it's quite the opposite—it's a menu that companies can choose from. Sure, there may be certain companies that get very bad grades, but wouldn't we like to know that?
KEEN: Who would get the good grades? We know Facebook and Google would get bad grades. Are there social media platforms that would avoid the F grades?
BRESCIA: Apple is one that does less of this. Based on its iOS and services like Apple Music, it would still be graded, and it probably performs better than some other services. Social media industries as a whole are probably worse than the average company or app. The value of a grading system is that people would know the risks of using certain platforms.
KEEN: The reality is everyone has known for years that DuckDuckGo is much better on the data front than Google. Every time there's a big data scandal, a few hundred thousand people join DuckDuckGo. But most people still use Google because it's a better search engine. People aren't bothered. They don't care.
BRESCIA: That may be the case. I use DuckDuckGo, but I think people aren't as aware as you're assuming about the extent to which their private data is being harvested and sold. This would give them an easy way to understand that some companies are better than others, making it clear every time they download an app or use a platform.
KEEN: Let's use the example of Facebook. In 2016, the Cambridge Analytica scandal blew up. Everyone knew what Facebook was doing. And yet Facebook in 2025 is, if anything, stronger than it's ever been. So people clearly just don't care.
BRESCIA: I don't know that they don't care. There are a lot of things to worry about in the world right now.
Brad Smith called Cambridge Analytica "privacy's Three Mile Island."
KEEN: And he was wrong.
BRESCIA: Yes, you're right. Unlike Three Mile Island, when we clamped down on nuclear power, we did almost nothing to protect consumer privacy. That's something we should be exploring in a more robust fashion.
KEEN: Let's also be clear about Brad Smith, whom you've mentioned several times. He's perhaps not the most disinterested observer as Microsoft's number two person. Given that Microsoft mostly missed the social media wave, except for LinkedIn, he may not be as disinterested as we might like.
BRESCIA: That may be the case. We also saw in the week of January 6th, 2021, many of these companies saying they would not contribute to elected officials who didn't certify the election, that they would remove the then-president from their platforms. Now we're back in a world where that is not the case.
KEEN: Let me get one thing straight. Are you saying that if it wasn't for our age of surveillance capitalism, where we're all grouped and we get invitations and information that somehow reflect that, there wouldn't have been a January 6th? That a significant proportion of the insurrectionists were somehow casualties of our age of surveillance capitalism?
BRESCIA: That's a great question. I can't say whether there would have been a January 6th if not for social media. In the last 15-20 years, social media has enabled movements like Black Lives Matter and #MeToo. Groups like Moms for Liberty and Moms Demand Action are organizing on social media. Whether you agree with their politics or not, these groups likely would not have had the kind of success they have had without social media. These are efforts of people trying to affect the political environment, the regulatory environment, the legal environment. I applaud such efforts, even if I don't agree with them. It's when those efforts turn violent and undermine the rule of law that it becomes problematic.
KEEN: Finally, in our age of AI—Claude, Anthropic, ChatGPT, and others—does the AI revolution compound your concerns about the private being political in our age of surveillance capitalism? Is it the problem or the solution?
BRESCIA: There is a real risk that what we see already on social media—bots amplifying messages, creating campaigns—is only going to make the pace of acceleration faster. The AI companies—OpenAI, Anthropic, Google, Meta—should absolutely be graded in the same way as social media companies. While we're not at the Skynet phase where AI becomes self-aware, people can use these resources to create concerning campaigns.
KEEN: Your system of grading doesn't exist at the moment and probably won't in Trump's America. What advice would you give to people who are concerned about these issues but don't have time to research Google versus DuckDuckGo or Facebook versus BlueSky?
BRESCIA: There are a few simple things folks can do. Look at the privacy settings on your phone. Use browsers that don't harvest your data. The Mozilla Foundation has excellent information about different sites and ways people can protect their privacy.
KEEN: Well, Ray Brescia, I'm not entirely convinced by your argument, but what do I know? "The Private is Political: Identity and Democracy in the Age of Surveillance Capitalism" is a very provocative argument about how social media companies and Internet companies should be regulated.
Thank you so much, and best of luck with the book.
BRESCIA: Thanks, it's been a pleasure to have this conversation.
Ray Brescia is the Associate Dean for Research & Intellectual Life and the Hon. Harold R. Tyler Professor in Law & Technology at Albany Law School. He is the author of Lawyer Nation: The Past, Present, and Future of the American Legal Profession and The Future of Change: How Technology Shapes Social Revolutions, and editor of Crisis Lawyering: Effective Legal Advocacy in Emergency Situations and How Cities Will Save the World: Urban Innovation in the Face of Population Flows, Climate Change, and Economic Inequality.
Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting the daily KEEN ON show, he is the host of the long-running How To Fix Democracy interview series. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.
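A side note on Brescia's letter-grade idea: the rubric he describes in the interview is concrete enough to sketch in code. The example below is a purely illustrative toy, not anything from The Private is Political; the five yes/no criteria mirror the questions Brescia lists, while the equal weighting and the grade cutoffs are assumptions invented for this sketch.

```python
# Toy sketch of a restaurant-style privacy grade, inspired by Brescia's
# proposal. The criteria follow his interview questions; the scoring
# scheme (equal weights, one letter drop per violation) is an assumption.

from dataclasses import dataclass

@dataclass
class PrivacyPractices:
    tracks_users: bool            # tracks activity within the site or app
    sells_to_brokers: bool        # sells user data to data brokers
    retains_data: bool            # retains user data indefinitely
    pushes_algorithmically: bool  # uses algorithms to push content
    forces_arbitration: bool      # blocks lawsuits via forced arbitration

def privacy_grade(p: PrivacyPractices) -> str:
    """Count privacy-harming practices and map the count to a letter grade."""
    violations = sum([p.tracks_users, p.sells_to_brokers, p.retains_data,
                      p.pushes_algorithmically, p.forces_arbitration])
    return "ABCDF"[min(violations, 4)]  # 0 -> A, 1 -> B, ... 4+ -> F

# A hypothetical ad-funded platform that does all five:
print(privacy_grade(PrivacyPractices(True, True, True, True, True)))  # F
```

A real scheme would need audited evidence and graded rather than yes/no criteria, which is roughly the practicality objection Keen raises in the interview.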
Mark Zuckerberg Is Done Listening to His Users
Slate Magazine, by Nitish Pahwa, January 8, 2025
https://slate.com/technology/2025/01/mark-zuckerberg-meta-fact-checking-announcement-elon-musk-trump.html
Mark Zuckerberg's decision to replace third-party fact-checkers with community-based content moderation at Meta has sparked significant debate. Citing the inherent biases of experts, Zuckerberg claims this shift aligns with Meta's commitment to free expression. Critics argue this move opens the door to increased misinformation and erodes accountability, reminiscent of past controversies like the Cambridge Analytica scandal. Skeptics worry this decision conveniently follows political pressures, potentially enabling unchecked political propaganda. While some view this as an effort to reduce perceived censorship, others fear it could amplify hate speech and conspiracy theories. Concerns persist about whether this change genuinely promotes free speech or prioritizes profit over public safety.
The Non-Prophets, Episode 24.03.1, featuring Scott Dickie, Chris Mallard and Eli Slack
The Stanford Business School professor Michal Kosinski has spent his career warning about the corrosive impact of technology, and particularly social media, on democratic institutions and individual freedom. The Polish-born academic gained notoriety for his research at Cambridge University on how social media data could predict intimate personal traits. His work became particularly relevant during the Cambridge Analytica scandal in 2016, leading to significant legal consequences for Facebook, including a $5 billion fine. In this KEEN ON conversation with Kosinski, recorded in Munich at DLD, he emphasizes that Facebook wasn't inherently malicious but failed to understand the full implications of their intrusive technology. Kosinski connects social media's rise with the growth of populism, explaining how platforms enabled figures like Trump and even Bernie Sanders to bypass traditional political gatekeepers. Kosinski also discusses his controversial 2017 research showing that AI can predict personal characteristics, including sexual orientation, from facial features. On privacy, Kosinski believes that complete privacy protection may be impossible in the modern digital age. Instead, he advocates for building social and legal systems that make privacy invasions less dangerous. Looking to the future, Kosinski expresses short-term optimism about AI's potential to improve lives but long-term concern about the risks of artificial general intelligence (AGI). He notes that while we may see increased prosperity and advancement in the near future, the exponential acceleration of technological progress means long-term risks could materialize much sooner than expected.
Michal Kosinski is an Associate Professor of Organizational Behavior at Stanford Graduate School of Business. His research interests encompass both human and artificial cognition. His current work centers on examining the psychological processes in Large Language Models and leveraging Artificial Intelligence, Machine Learning, Big Data, and computational techniques to model and predict human behavior. He co-authored Handbook of Social Psychology and Modern Psychometrics, two popular textbooks, and has published over 100 peer-reviewed papers in prominent journals such as Proceedings of the National Academy of Sciences, Nature Computational Science, Psychological Science, Journal of Personality and Social Psychology, Machine Learning, and Scientific Reports, which have been cited over 22,000 times. He is among the Top 1% of the Highly Cited Researchers according to Clarivate. His research has inspired a cover of The Economist, a 2014 theatre production titled "Privacy," several TED talks, and a video game. It has been featured in thousands of press articles, books, podcasts, and documentaries. He received a Rising Star award from the Association of Psychological Science (2015) and an Early Achievement Award from the European Association of Personality Psychology (2023). He was behind the first press article warning against Cambridge Analytica. His research exposed the privacy risks they exploited and assessed the effectiveness of their methods. More about his role in uncovering their actions can be found in Steven Levy's insightful book Facebook: The Inside Story and Sander van der Linden's article, "Weapons of Mass Persuasion." He earned a PhD in psychology from the University of Cambridge and two master's degrees in psychometrics and social psychology.
Before his current appointment, he held positions as a post-doctoral scholar in Stanford's Computer Science Department, Deputy Director of the University of Cambridge Psychometrics Centre, and a researcher in Microsoft Research's Machine Learning Group.
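The Likes-to-traits research described above rests on a simple, reproducible statistical pipeline: compress a sparse user-by-Like matrix with SVD, then fit a linear model on the components. The sketch below is a toy reconstruction on synthetic data; the dimensions, model choice, and data are assumptions for illustration, not Kosinski's actual code or dataset.

```python
# Toy version of the pipeline behind predicting traits from social media
# data: SVD-compress a sparse user x Like matrix, then fit a logistic
# regression on the components. All data here is synthetic.

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 500

likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)  # sparse 0/1 Likes
# Synthetic binary trait weakly driven by the first 25 Likes.
signal = likes[:, :25].sum(axis=1)
trait = (signal + rng.normal(0, 1.0, n_users) > signal.mean()).astype(int)

components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
X_train, X_test, y_train, y_test = train_test_split(
    components, trait, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The unsettling finding in the original research was that even machinery this modest, applied to real Likes at scale, recovered traits people never disclosed, which is why it became a reference point for the Cambridge Analytica story.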
The Biden administration takes its first step to retaliate against China for the Salt Typhoon cyberattack. The Feds release a draft National Cyber Incident Response Plan. Telecom Namibia suffers a cyberattack. The Australian Information Commissioner has reached a $50 million settlement with Meta over the Cambridge Analytica scandal. CISA releases its 2024 year in review. LastPass hackers nab an additional five million dollars. Texas Tech University notifies over 1.4 million individuals of a ransomware attack. Researchers discover a new DarkGate RAT attack vector using vishing. A fraudster gets 69 months in prison. On our Threat Vector segment, David Moulton speaks with Nir Zuk, Founder and CTO of Palo Alto Networks, about predictions for 2025. Surveillance tweaks our brains in unexpected ways. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
Threat Vector Segment: On our Threat Vector segment, we preview this week's episode where host David Moulton talks with Nir Zuk, Founder and CTO of Palo Alto Networks. They talk about Palo Alto Networks' predictions for 2025, focusing on the shift to unified data security platforms and the growing importance of AI in cybersecurity. You can catch new episodes of Threat Vector every Thursday here and on your favorite podcast app.
Selected Reading:
Biden Administration Takes First Step to Retaliate Against China Over Hack (The New York Times)
US Unveils New National Cyber Incident Response Plan (Infosecurity Magazine)
Telecom Namibia Cyberattack: 400,000 Files Leaked (The Cyber Express)
Landmark settlement of $50m from Meta for Australian users impacted by Cambridge Analytica incident (OAIC)
CISA Warns of New Windows Vulnerability Used in Hacker Attacks (CyberInsider)
CISA 2024 Year in review (CISA)
LastPass threat actor steals $5.4M from victims just a week before Xmas (Cointelegraph)
Texas Tech University Data Breach Impacts 1.4 Million People (SecurityWeek)
Microsoft Teams Vishing Spreads DarkGate RAT (Dark Reading)
Man Accused of SQL Injection Hacking Gets 69-Month Prison Sentence (SecurityWeek)
The psychological implications of Big Brother's gaze (SCIMEX)
Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.
Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.
In this episode of This Week in Tech, the panel tackles the "biggest hack in US history," the future of AI, and the role of government in tech. From the Chinese hack's implications to Microsoft's AI-powered Recall, the Supreme Court's tech-related cases, and the push for social media age verification, Leo Laporte, Patrick Beja, Wesley Faulkner, and Alex Wilhelm provide insightful analysis and lively discussion on the most pressing issues facing the industry today.
- China's "Salt Typhoon" hack, dubbed the "worst hack in our nation's history," which compromised US telecommunications infrastructure and allowed surveillance of high-profile individuals
- The panel debates the challenges of securing outdated infrastructure and the role of government in regulating tech companies
- DOJ's push for Google to sell off Chrome to break its search monopoly, and the potential implications for competition and innovation
- Alex Wilhelm's article "If you like startups, you should love anti-trust" and the importance of fostering competition in the tech industry
- Microsoft's Windows 365 Link, a $349 mini PC that streams Windows from the cloud, and the potential for thin client computing
- Microsoft's Recall AI feature, which records and indexes users' screen activity, raising security concerns but offering potential benefits for users
- The Supreme Court's involvement in cases related to Facebook's Cambridge Analytica data breach and the fate of America's low-income broadband fund
- The panel also discusses their personal experiences with parenting in the digital age and the challenges of balancing screen time, privacy, and education for children
- Meta's push for Apple and Google to verify users' ages on social media platforms, and the challenges of implementing effective age verification while protecting user privacy
- Amazon's talks with Instacart, Uber, Ticketmaster, and others to enhance its AI-powered Alexa assistant
- Spirit Airlines filing for bankruptcy amidst financial losses and mounting debt payments
- Alex laments the addition of ads to Amazon Prime Video and the panel debates the tradeoffs of bundled subscription services
Host: Leo Laporte
Guests: Patrick Beja, Wesley Faulkner, and Alex Wilhelm
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
How can big tech and a healthy democracy coexist? In this episode Rick is joined by Marietje Schaake, author of The Tech Coup: How to Save Democracy from Silicon Valley. Together, they delve into the growing influence of big tech companies on global democracy, the dangers posed by unchecked data collection, and the rise of tools like Pegasus spyware. Schaake highlights the need for greater transparency, regulation, and accountability in the tech sector, while addressing the potential threats to privacy and democratic freedoms. They also discuss the weaponization of data by companies like Palantir and Cambridge Analytica, as well as potential solutions to safeguard democratic systems in the face of advancing technology. Marietje's book, The Tech Coup: How to Save Democracy from Silicon Valley, is available now.
Timestamps:
(00:01:26) The Tech Coup
(00:05:37) Government against large corporations
(00:09:56) Which countries are doing it right?
(00:15:05) Palantir
Follow Resolute Square: Instagram, Twitter, TikTok. Find out more at Resolute Square.
We have a very special episode of Decoder today. It's become a tradition every fall to have Verge deputy editor Alex Heath interview Meta CEO Mark Zuckerberg on the show at Meta Connect. This year, before his interview with Mark, Alex got to try a new pair of experimental AR glasses the company is calling Orion. Alex talked to Mark about a whole lot more, including why the company is investing so heavily in AR, why he's shifted away from politics, Mark's thoughts on the link between teen mental health and social media, and why the Meta chief executive is done apologizing for corporate scandals like Cambridge Analytica that he feels were overblown and misrepresented.
Links:
Hands-on with Orion, Meta's first pair of AR glasses | The Verge
The biggest news from Meta Connect 2024 | The Verge
Mark Zuckerberg: publishers 'overestimate the value' of their work for training AI | The Verge
Meta extends its Ray-Ban smart glasses deal beyond 2030 | The Verge
The Ray-Ban Meta smart glasses actually make the future look cool | The Verge
Meta has a major opportunity to win the AI hardware race | The Verge
Instagram is putting every teen into more private and restrictive new accounts | The Verge
Threads isn't for news and politics, says Instagram's boss | The Verge
Facebook puts news on the back burner | The Verge
Meta is losing a billion dollars on VR and AR every single month | The Verge
Transcript: https://www.theverge.com/e/24017522
Credits: Decoder is a production of The Verge and is part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt; our editor is Callie Wright. This episode was additionally produced by Brett Putman and Vjeran Pavic. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder.
In this insightful episode, host Mark Stinson sits down with Ben Gillenwater, a technology expert with a background as a CTO, former NSA contractor, and entrepreneur. Together, they delve into the critical topic of internet safety in the home, exploring the intersection of creativity, technology, and security. Ben shares his unique perspective on how the digital age's innovations come with significant privacy and security concerns that every family should be aware of. He offers practical advice on how to protect personal information and minimize risks, emphasizing the importance of staying vigilant in the face of increasingly sophisticated online threats.
5 Key Takeaways:
1. Understanding the Stakes: Internet safety extends far beyond physical threats; it includes privacy, emotional well-being, and protecting against the monetization of our digital behaviors. Major companies like Meta, Google, and Microsoft are financially motivated to collect and utilize personal data to enhance their advertising revenue, often at the cost of user privacy.
2. The Hidden Dangers of Data Collection: Seemingly harmless online activities can lead to significant behavioral and information exposure. For instance, social media platforms like Facebook track user behavior across the internet, leading to data breaches like the Cambridge Analytica scandal, where personal data was exploited for political gain.
3. Children and Internet Safety: Giving children unfiltered access to the internet is akin to leaving your home's front door unlocked. Ben stresses the importance of monitoring and controlling internet access for children and gradually increasing their exposure as they mature and develop better judgment.
4. The Illusion of Free Services: Many free apps and services are designed to capture and monetize user data. Ben advises listeners to be skeptical of "free" offerings, as they often come with hidden costs, such as compromised privacy and personal information being sold to third parties.
5. Practical Solutions for Families: There are actionable steps that families can take to protect themselves, such as using child safety apps like Bark, which offer comprehensive monitoring of children's online activities. However, as children grow older, it's essential to balance technical controls with education on digital literacy and responsible online behavior.
Quote: "You are safer removing the locks in your house than you are giving your kids unfiltered access to the Internet." — Ben Gillenwater
Don't miss this episode if you want to learn how to safeguard your family's digital life and navigate the complexities of internet safety in the modern age.
Subscribe & Follow: Stay updated with the latest episodes of Your World of Creativity by subscribing on your favorite podcast platform and following us on social media.
In 2000, Republicans stole Florida by scrubbing voter rolls and stopping the recount. In 2004, Republicans stole Ohio by suppressing the vote through deliberately engineered long voting lines and other election day chaos. In 2024, Republicans are throwing everything they can at Georgia. Republican officials in the state are helping them, even those harassed by Trump's Big Lie. If MAGA can't steal the Electoral College outright, their fallback plan is to get close, drawing a tie. This would send the election to the House of Representatives. There, MAGA loyalist Mike Johnson, who blocked urgently needed Ukraine aid for six critical months, giving Russia the advantage on the battlefield, would likely pick Trump.
The Senate, controlled by Democrats, would pick the Vice President, likely Kamala Harris, who would then very likely be sidelined by Trump's White House as the Heritage Foundation massacres the government with Project 2025, Christian nationalism's decades-long plan to establish a dictatorship. They have 10,000 Trump loyalists already in place, ready to carry this out, according to a recent report by ProPublica. In March, the U.S. sanctioned two PR firms in Russia hired by the Kremlin to manipulate Americans into spreading disinformation to "divide and conquer" Trump's opposition in the 2024 election. So it raises the question: why aren't the DOJ, the FBI, the Department of Homeland Security, and Secretaries of State stepping up to stop homegrown threats?
Why is the Christian nationalist group Ziklag allowed to fund EagleAI, which empowers MAGA activists to challenge voter registrations, including 10,000 voters in Georgia? (Remember, Trump only needed 11k votes to win Georgia in 2020.)
Why is the Department of Homeland Security allowing voting machines in Georgia or anywhere in the country to remain in use after their data was breached by MAGA loyalist Sidney Powell, with that data already spread among MAGA and possibly to foreign adversaries, like Russia?
Why is the DOJ letting Elon Musk fund a new version of Cambridge Analytica, targeting voters in swing states?
Why are the Secretaries of State allowing around 70 MAGA loyalists with histories of refusing to certify elections to remain in positions of power in 16 counties in six swing states, including Georgia, Arizona, Michigan, Nevada, North Carolina, and Pennsylvania, especially given the history of such loyalists breaking the law, as they did in Georgia, helping Sidney Powell steal sensitive voting machine data?
This week's bonus show also looks at J.D. Vance's war on people who don't birth children. That brings us to this viral quote from a recent article published on the site of The Heritage Foundation, architects of Project 2025: "Joseph Stalin had an utter disregard for human life, and his regime claimed the lives of 9,000,000-20,000,000 of its own subjects. Yet even Stalin understood that society depended on strong, intact families." This is an actual quote from Heritage Foundation Senior Research Associate Emma Waters. Her piece was taken down after it went viral, but you can read an archived version in the show notes.
All that and more is discussed in this week's Q&A, shaped by questions from listeners subscribed at the Democracy Defender ($10/month) level and higher on Patreon. To join the conversation, subscribe at Patreon.com/Gaslit. Discounted annual memberships are available! Thank you to everyone who supports the show – we could not make Gaslit Nation without you!
RSVP to the live taping of Gaslit Nation featuring Terrell Starr of the Black Diplomats Podcast at the Ukrainian Institute of America on September 16 at 7pm. Patreon supporters at the Truth-teller level and higher get in free; message us to be added to the guest list! More details here: https://ukrainianinstitute.org/event/books-at-the-institute-chalupa/
Show Notes:
These Swing State Election Officials Are Pro-Trump Election Deniers: At least 70 pro-Trump conspiracists are election officials in key battleground counties — and they are poised to make a giant mess on Election Day https://www.rollingstone.com/politics/politics-features/trump-swing-state-officials-election-deniers-1235069692/
Russia is relying on unwitting Americans to spread election disinformation, US officials say https://apnews.com/article/russia-trump-biden-harris-china-election-disinformation-54d7e44de370f016e87ab7df33fd11c8
Inside Ziklag, the Secret Organization of Wealthy Christians Trying to Sway the Election and Change the Country https://www.propublica.org/article/inside-ziklag-secret-christian-charity-2024-election
How an Elon Musk PAC is using voter data to help Trump beat Harris in 2024 election https://www.cnbc.com/2024/08/02/elon-musk-pac-voter-data-trump-harris.html
Web Archive: The Heritage Foundation took down this piece really handing it to Stalin https://web.archive.org/web/20240306073726/https://www.heritage.org/marriage-and-family/commentary/crushing-societys-building-block
Democrats Should Remember that They Won Florida in 2000 https://theintercept.com/2018/11/10/democrats-should-remember-al-gore-won-florida-in-2000-but-lost-the-presidency-with-a-preemptive-surrender/
Ohio in 2004: None Dare Call It Stolen https://harpers.org/archive/2005/08/none-dare-call-it-stolen/
Reform the Electoral College – Learn More About the National Vote Pact https://www.gaslitnationpod.com/episodes-transcripts-20/2024/5/28/reform-the-electoral-college
"The work of Project 2025 is largely done. Under Paul Dans, the project has assembled a database of more than 10,000 names — job candidates vetted for loyalty to Trump's cause — who will be ready to deploy into federal agencies after the 2024 election." https://x.com/AlisonKodjak/status/1819335000434548958