After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of the Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and author of several books.

Taylor also joined me for this discussion more than five years ago now. And a lot has happened in that time.

Upcoming episodes will include guests Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers.

We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite.

As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
"How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith (00:00-00:43): Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen (00:43-00:44): It's a different world.

Taylor (00:45-00:45): In some ways.

Nate Erskine-Smith (00:45-01:14): Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way, because there's been lots of consultation, and there have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?

Taylor Owen (01:14-03:06): I mean, this is part of it. The technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about some of the more acute harms, like real risks to safety, right? Like children taking their own lives, and violence erupting, political violence emerging. These things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? That was the main way five years ago that we shared and consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI, and particularly chatbots. And I think a big question we face in this conversation is: how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith (03:07-03:55): My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online, and our laws need to reflect that reality. All of the challenges you've articulated exist to varying degrees in offline spaces, but the rules we have can be incredibly hard to enforce, at a minimum, in the online space. And then some rules are not entirely fit for purpose and need to be updated for the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.

Taylor Owen (03:55-04:31): Yeah, exactly. The commission spent a year, at the request of all political parties in parliament and at the urging of the opposition party, looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. That is a remarkable statement, and it kind of came and went. And I don't know why we moved off from that so fast.

Nate Erskine-Smith (04:31-05:17): Well, and there's a lot to pull apart there, because you've got purposeful, intentional bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook and Instagram, through Meta, blocking news in Canada. And your research, this was the stat that stood out.
Don't want to put you on the spot and say, like, what do we do? Okay. So you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.

Taylor Owen (05:17-05:17): A day. Yeah.

Nate Erskine-Smith (05:18-05:18): So right.

Taylor Owen (05:18-05:27): 11 million views a day. And sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.

Taylor (05:27-05:29): So 11 million times a Canadian.

Taylor Owen (05:29-05:45): And what that means is 11 million times a day, a Canadian would open one of their news feeds and see that Canadian journalism has been taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.

Taylor (05:45-05:45): Okay.

Taylor Owen (05:45-05:46): So that's just it.

Nate Erskine-Smith (05:46-06:04): So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.

Taylor Owen (06:04-06:04): They say they get.

Nate Erskine-Smith (06:04-06:05): It doesn't make any sense.

Taylor Owen (06:06-06:23): It doesn't and it does. It's terrible. They ask Canadians who use social media to get their news: where do you get your news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations— Traditional journalism. There is—

Taylor (06:23-06:23): There is—

Taylor Owen (06:23-06:47): Well, this is what I was going to get at, right? One conclusion, I think, is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like, this would probably be labeled news in people's minds.

Nate Erskine-Smith (06:47-06:48): Can't trust the thing we say.

Taylor Owen (06:48-07:05): Right. And neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith (07:05-07:06): It's public affairs content.

Taylor Owen (07:06-07:39): Exactly. So that's one thing that's happening. The other is that there's a generation of creators stepping into this ecosystem, both to fill that void and because they can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.

Taylor (07:39-07:39): Okay.

Taylor Owen (07:39-07:56): So all of journalism combined, in the entire country: 20 percent of engagement. Influencers: 50 percent in the last election.
So we've shifted, at least on social, the actors and people and institutions that are fostering our public.

Nate Erskine-Smith (07:56-08:09): Is there a middle ground here, where you take some people that play an influencer-type role but also would consider themselves citizen journalists in a way? How do you – It's a super interesting question, right?

Taylor Owen (08:09-08:31): Like, who – when are these people doing journalism? When are they doing acts of journalism? Someone can do journalism and 90% of the time do something else, right? And then maybe they reveal something, or they tell an interesting story that resonates with people, or they interview somebody and it's revelatory, and it's a journalistic act, right?

Taylor (08:31-08:34): Like, this is kind of a journalistic act we're playing here.

Taylor Owen (08:35-08:49): So I don't think – I think these lines are gray. But there are some other underlying things here, which – it matters, I think, if journalistic institutions go away entirely, right? That's probably not a good thing. Yeah, I mean, that's why

Nate Erskine-Smith (08:49-09:30): I say it's terrifying. There's a lot of good in the digital space. There's creative destruction. There's a lot of work to provide people a direct sense of news without that filter that people may mistrust in traditional media. Having said that, there are so many resources and so much history to these institutions, and there's a real ethics to journalism, and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that, is devastating for democracy. I think so.

Taylor Owen (09:30-09:49): And I think the bigger frame of that for me is that a democracy needs signals of – as citizens in a democracy, we need signals of reliability. We need to know broadly, and we're not always going to agree on it, what kind of information we can trust and how we evaluate whether we trust it.

Nate Erskine-Smith (09:49-10:13): And that's what – that is really going away. Pause for a sec. "Signals of reliability" is a good phrase. What does it mean for a legislator when it comes to putting a rule in place? Because you could imagine a Blade Runner kind of rule that says you've got to distinguish between something that is human generated

Taylor (10:13-10:14): and something that is machine generated.

Nate Erskine-Smith (10:15-10:26): That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,

Taylor (10:27-10:27): which is probably true.

Nate Erskine-Smith (10:28-10:33): But how do you have a signal of reliability in a different kind of content?

Taylor Owen (10:34-13:12): I mean, we're getting into journalism policy here to a certain degree, right? And it's a wicked problem, because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do, and how they can and can't behave, touches on some real third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said, or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example, and a horrible example, happened last week, right? Charlie Kirk gets assassinated.
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I've seen in a decade of looking at this of how those algorithmic feeds have become radicalized. All you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. People defending the murder, and people calling for the murder of leftists, and like, on both sides, right? People blaming Israel, people, whatever. Right. And that isn't a function of like – Comparing Charlie Kirk to Jesus. Sure. Like – It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of the journalism being produced about it. The New York Times, citizens, were doing good content about what was happening. It was a moment of uncertainty, and journalism was playing a role, but it wasn't – And so I think with all of these questions, including the online harms ones, and how we step into an AI governance conversation, the focus always has to be on those systems. Who and what are the incentives, and the technical decisions being made, that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? All of that stuff we've been talking about, the role the reliability of information plays – these algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor (13:12-13:15): That's not a – instead of reactionary –

Taylor Owen (13:15-13:42): Or what gets the most engagement, or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.

Nate Erskine-Smith (13:43-14:12): And your focus on systems, and assessing risks with systems – I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen (14:12-14:23): I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?

Nate Erskine-Smith (14:23-14:33): There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well.
Exactly.

Taylor (14:33-14:35): So the sort of supply and demand side thing, right?

Nate Erskine-Smith (14:35-14:38): There's the digital services tax, which is no longer a thing.

Taylor Owen (14:40-14:52): Although it still is a piece of passed legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith (14:52-14:55): I don't take full responsibility for that one.

Taylor Owen (14:55-14:56): No, you shouldn't.

Nate Erskine-Smith (14:58-16:03): But other countries have seen more success. Yeah. And so you've got the UK, Australia – the EU really has led the way. In 2018, the EU passes GDPR, which is a privacy set of rules, and we're still behind it seven years later. But then in 2022, 2023, you've got the Digital Services Act that passes. You've got the Digital Markets Act. And as I understand it – and we've both been involved in international work on this, and we've heard from folks like Frances Haugen and others about the need for risk-based assessments, and you're well down the rabbit hole on this – isn't it, at a high level: you deploy a technology, you've got to identify material risks, you then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then, ideally, you've got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks, and whether you have taken reasonable steps to mitigate.

Taylor Owen (16:04-16:05): That's like how I have it in my head.

Nate Erskine-Smith (16:05-16:06): I mean, that's it.

Taylor Owen (16:08-16:14): Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.

Nate Erskine-Smith (16:14-16:25): Exactly. Which – I want to get to that, because C-63 gets us a large part of the way there. I think so. And yet it has been sort of cast aside.

Taylor Owen (16:25-17:39): Exactly. Let's touch on that. But I do think what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. Our digital economy and our digital lives are so vast, and the incentives and the effect they have on society so broad, that there's no one solution. So anyone who tells you to fix privacy policy and you'll fix all the digital problems we just talked about is full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They built a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all at once.

Nate Erskine-Smith (17:39-17:41): Especially in a minority parliament – short periods of time, legislatively.

Taylor Owen (17:41-18:20): Different countries have taken different pieces of it.
Now, on the online harms piece, which is what the previous government took really seriously – and I think it's worth putting a point on that – when we talked last, that was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? A lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and into trying to learn from what Europe, Australia and the UK had all done. And we were kind of taking the benefit of being late, right? So they were all ahead of us.

Taylor (18:21-18:25): People you work with on that grant committee. We're all quick and do our own consultations.

Taylor Owen (18:26-19:40): Exactly. And the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen internationally, interestingly, as the new milestone that everybody else is building on, right? And what it does is it says: if you're going to launch a digital product, a consumer-facing product, in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about – or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Broad categories that we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong – let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids, and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?

Nate Erskine-Smith (19:40-19:40): Yeah.

Taylor (19:40-19:43): Like, what could go wrong? Yeah, a lot could go wrong.

Taylor Owen (19:43-20:27): And maybe strange men will approach teenage girls. Maybe, right? If you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? You put a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used, so that publics and researchers can monitor whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was a widespread problem of teenage girls being harassed by strange older men.

Taylor (20:28-20:29): Incredibly creepy.

Taylor Owen (20:29-20:37): A very easy, but not super illegal thing – not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor (20:37-20:41): And this kind of mechanism would have just filtered it out.

Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit, before you launch a product in a country, about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith (20:52-21:05): Yeah, I quite like the – I mean, maybe you've got a better read of this, but the UK – California has pursued this.
I was looking recently – Elizabeth Denham is now the Jersey Information Commissioner or something like that.

Taylor Owen (21:05-21:06): I know, it's just, yeah.

Nate Erskine-Smith (21:07-21:57): I don't – random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the Information Commissioner in the UK, and she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach, in that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it. Just force companies to ensure that the default settings prioritize child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.

Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth referring back to what we mean. A duty of care can seem like an obscure concept, but your lawyer will tell you it's a real thing, right? You walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? That is a duty of care that you have for me when I walk into your public space or private space. That's all we're talking about here. And the age-appropriate design code – yes, sort of developed and implemented by a Canadian in the UK – it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What it says is that any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? Because kids don't have the same rights as adults. We have different duties to protect kids than adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Kind of things that – Seem obvious. And if you're now a child in the UK and you go on a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don't have that, because that bill didn't pass, right? So there are consequences to this stuff. And I get really frustrated now when I see the conversation pivoting to AI, for example. Like, all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world – which are probably real, right? I'm not discounting its power. And we just move on from all of these problems, and the solutions that have been developed, to a set of challenges that still exist on social platforms. They haven't gone away. People are still using these tools, and the harms still exist, and they're probably applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done – for the people working in this space, and the wide set of stakeholders in this country who care about this stuff and are working on it – it just... you said deja vu at the beginning, and it is deja vu, but it's kind of worse, right?
Cause it's like deja vu and then ignoring the

Taylor (24:25-24:29): five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even

Taylor Owen (24:29-24:41): Well, yeah. I mean, hopefully – I'm actually optimistic, I would say, that we will, for a few reasons. Like, one, citizens want it, right?

Nate Erskine-Smith (24:41-24:57): Yeah, I was surprised on the – so you mentioned there that the rules that we design, the risk assessment framework really applied to social media, could equally be applied to deliver AI safety, and it could be applied to new technology in a useful way.

Taylor Owen (24:58-24:58): Some elements of it. Exactly.

Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that a little bit, because I want to pull the pieces together. So I had a constituent come into the office, and he is really, like, super mad. He's super mad. Why is he mad? Does that happen very often? Do people get mad when they walk into this office? Not as often as you'd think, to be honest. Not as often as you'd think. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen (25:25-25:25): Okay.

Nate Erskine-Smith (25:25-26:36): Okay. Yep. He believes Mark Carney ripped him off – not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online. Mark Carney told him to invest money. He invested money, and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. How could you have been deceived? But then I go and I watch the video. And okay, I'm not going to send the 200 bucks, and I've grown up with the internet, but I can see how – Absolutely. In the same way phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud, if we aren't already. We are going to see many challenges with respect to AI safety. Over and above the risk assessment piece, what do we do to address these challenges?

Taylor Owen (26:37-27:04): So that is a huge problem, right? AI fraud – AI video fraud – is a huge challenge. When we were monitoring the last election, by far the biggest problem or vulnerability of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and morph them into conversations about investment strategies.

Taylor (27:05-27:07): And it was driving people to a crypto scam.

Taylor Owen (27:08-27:11): But it was torquing the political discourse.

Taylor (27:11-27:11): That's what it must have been.

Taylor Owen (27:12-27:33): I mean, there are other cases of this, but that's probably – and it was running rampant particularly on Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? Nobody – but it torqued our political debate. It ripped off some people. And these kinds of scams are –

Taylor (27:33-27:38): It's clearly illegal. It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely.
So what do you do about that, right? The head of the Canadian Banking Association said there are billions of dollars in AI-based fraud in the Canadian economy right now. Right? So it's a big problem.

Taylor (27:54-27:55): Yeah.

Taylor Owen (27:55-28:46): I actually think there's a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on the AI fraud potential of your product. Responsibility for your platform. And then, when it starts to circulate, we would see it. They'd be called out on it. They'd have to take it down. And that's that, right? Then we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years – these online harms risk assessment models – and bringing some of the consumer-facing AI, both products and related harms, into the framework.

Nate Erskine-Smith (28:47-30:18): To put it a different way – I mean, this is years ago now, but we had this grand committee in the UK holding Facebook and others accountable. This really was in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, and saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is that they completely acknowledged the need to take action when it came to child pornography. So they said, yeah, well, you know, no liability for us – but of course there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me from there on out – I mean, there's no real intellectual consistency here. It's more just what should be in that category of things that they should take responsibility for. Obviously harmful content like that is an obvious first step, obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they're ignoring privacy protections and everything else throughout the years – I mean, we can't leave it up to them. And setting a clear set of rules to say, this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms, because there are different speech implications and democratic implications of sort of absolute solutions to different kinds of content.

Taylor (30:28-30:30): So child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation there, and that's to take it down within 24 hours. And the reason you can do that with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there are a lot of naked images on the internet that we can train AI with.
So we're actually pretty good at using AI to pull this stuff down. But the bigger reason is that, I think, as a society, it's okay to be wrong in the gray area of that speech, right? If it's debatable whether something is child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story, right? We do not want to suppress and over-index for that gray area on hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's, you know, very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism, right? If we see fraud, if we know it's fraud, then you take it down, right? Some of these other things we have to go with –

Nate Erskine-Smith (32:02-32:24): I mean, my last question, really, is: pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government with lots of similar folks around the table, but a new government and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister – a junior minister to industry, but still a specific titled portfolio, with his own deputy minister – who really wants to be seized with this. And, from every conversation I've had with him, he wants to maximize productivity in this country using AI, but is also cognizant of the risks and wants to address AI safety. So where from here? You've talked in the past about sort of a grander tech accountability and sovereignty act. Do we go piecemeal – you know, a privacy bill here, an AI safety bill, an online harms bill, disparate pieces? What's the answer here?

Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But I think there are some lessons from the past that this government could take. One is that piecemeal bills that aren't centrally coordinated, or that have no connectivity between them, end up with piecemeal solutions that are imperfect and would benefit from some cohesiveness, right? So when the previous government released AIDA, the AI and Data Act, it was really in tension in some real ways with the online harms approach. Two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I could tell from the outside, right? So we need a coordinated, comprehensive effort on digital governance. That's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the idea that he, or that office, could play that role. Because AI is cross-cutting, right? Every department in our federal government touches AI in one way or another. And the governance of AI, and the adoption of AI by society on the other side, is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
If he pulls in the online harms pieces that aren't related to the criminal code, and drops those provisions – says, you know, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to.

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37-34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? Like, these are all... So then you have, thematically, a bill that makes sense. And then you can pull in the AI safety piece as well. And then it becomes a consumer protection bill for living our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument about whether that should be one bill or multiple ones. I actually think there are cases for both, right? There's concern about big omnibus bills that do too many things, and too many committees reviewing them, and whatever. That's sort of a machinery-of-government question, right? But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics. We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is: at the same time as they're being told by our government and by companies that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear. Why should I use this thing, when I'm seeing some harms, I don't see you guys doing anything about those harms, and I'm seeing some potential real downside for me personally and my family? So even in the adoption frame, I think thinking about data privacy, safety, consumer safety – to me, that's the real frame here. It's citizen safety, consumer safety, using these products. Yeah, politically, I just – I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because, like, I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's always going to be a place with some content I would prefer he doesn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly that YouTube, with its promotion engine, its recommendation function, is not actively promoting terrible content to your 12-year-old.

Taylor Owen (36:59-37:31): Yeah. That's like de minimis. Can we just torque this a little bit, right? So maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube, right? Can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance.
That will not – and it's worth putting a real emphasis on this – one thing we've learned in this moment of repeated deja vus, going back 20 years really, since our experience with social media, through to now, is that these companies don't self-govern.

Taylor (37:31-37:31): Right.

Taylor Owen (37:32-37:39): We just – we know that indisputably. So to think that AI is going to be different is delusional. No, they'll pursue profit, not the public interest.

Taylor (37:39-37:44): Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?

Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives, and their broader long-term goals around AI, are not going to override these safety concerns is just naive in the nth degree.

Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this: these goals of realizing the productivity possibilities and potential of AI, alongside AI safety, are not mutually exclusive or oppositional goals. You create a sandbox to play in, and companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools – certainly, you know, if I feel safe with my kids learning these tools, growing up with them in their classrooms and everything else – adoption rates will soar. Absolutely. And then we'll benefit.

Taylor Owen (38:38-38:43): They work in tandem, right? And I think you can't have one without the other, fundamentally.

Nate Erskine-Smith (38:45-38:49): Well, I hope I don't invite you back five years from now and we have the same conversation.

Taylor Owen (38:49-38:58): Well, I hope you do invite me back in five years, but I hope it's to think back on all the legislative successes of the previous five years. I mean, that'll be the moment.

Taylor (38:58-38:59): Sounds good. Thanks, David. Thanks.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
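For listeners who want to see the shape of the duty-of-care model Owen describes, here is a minimal sketch in Python, assuming a simplified regime: a platform enumerates risks against legislated harm categories, every identified risk requires a documented mitigation, and usage data must be published so third parties can audit. The category names, product names and fields are illustrative paraphrases of the discussion, not the text of Bill C-63.

```python
from dataclasses import dataclass

# Illustrative harm categories, paraphrased from the episode.
# Bill C-63 defines its own list; "ai_enabled_fraud" is the
# extension Owen proposes, not a category in the bill itself.
HARM_CATEGORIES = {
    "child_sexual_abuse_material",
    "intimate_images_without_consent",
    "content_fomenting_violence_or_extremism",
    "design_risks_to_minors",
    "ai_enabled_fraud",
}

@dataclass
class RiskAssessment:
    product: str
    category: str                       # one of HARM_CATEGORIES
    risk_identified: bool
    mitigation: str = ""                # how the platform reduces the risk
    usage_data_published: bool = False  # enables third-party audit

def compliance_gaps(assessments: list[RiskAssessment]) -> list[str]:
    """Return unmet duty-of-care obligations: every identified risk
    needs a documented mitigation and published usage data."""
    gaps = []
    for a in assessments:
        if a.category not in HARM_CATEGORIES:
            gaps.append(f"{a.product}: unknown category {a.category!r}")
        if a.risk_identified and not a.mitigation:
            gaps.append(f"{a.product}/{a.category}: no mitigation on file")
        if a.risk_identified and not a.usage_data_published:
            gaps.append(f"{a.product}/{a.category}: no audit data published")
    return gaps

if __name__ == "__main__":
    # The Instagram example from the transcript: a feature letting
    # adults contact minors by default, launched with no evaluation.
    launch = [RiskAssessment("photo_sharing_app", "design_risks_to_minors",
                             risk_identified=True)]
    for gap in compliance_gaps(launch):
        print("regulator flag:", gap)
```

The point of the sketch is only that the obligation is procedural (assess, mitigate, publish) rather than a government ruling on individual pieces of content, which is the distinction Owen draws; takedown duties apply only to the narrow carve-outs discussed above.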
Three Buddy Problem - Episode 59: Apple drops another emergency iOS patch and we unpack what that “may have been exploited” language really means: zero-click chains, why notifications help but forensics don't, and the uncomfortable truth that Lockdown Mode is increasingly the default for high-risk users. We connect the dots from ImageIO bugs to geopolitics, discuss who's likely using these exploits, why Apple's guidance stops short, and the practical playbook (ADP on, reboot often, reduce attack surface) that actually works. Plus, we debate Microsoft throttling MAPP access for Chinese vendors, the idea of “letters of marque” for cyber (outsourced offense: smart deterrent or Pandora's box?), and dissect two case studies that blur APT and crimeware: PipeMagic's CLFS zero-day and Russia-linked “Static Tundra” riding seven-year-old Cisco bugs. Cast: Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), Ryan Naraine (https://twitter.com/ryanaraine) and Costin Raiu (https://twitter.com/craiu).
Today we're sharing an episode from the Click Here podcast from Recorded Future News and PRX. The early Internet was all about hope and utopian possibilities. But the founder of the Citizen Lab, Ron Deibert, always had an unsettled feeling about the web and its dark underbelly. So he created a team of digital sleuths to investigate. More episodes of Click Here are available at: https://podcasts.apple.com/us/podcast/click-here/id1225077306
Register for free, live webcasts & summits: https://poweredbybhis.com

00:00 - PreShow Banter™ — Government Linux
04:16 - Denmark is Done with Teams! - Talkin' Bout [infosec] News 2025-06-16
05:02 - Story # 1: 'We're done with Teams': German state hits uninstall on Microsoft
17:34 - Story # 1b: Denmark Wants to Dump Microsoft Software for Linux, LibreOffice
18:14 - Story # 2: Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot
25:50 - Story # 3: Fog ransomware attacks use employee monitoring tool to break into business networks
30:25 - Story # 4: Expired Discord Invites Hijacked for Stealthy Malware Attacks
34:00 - Story # 5: SmartAttack uses smartwatches to steal data from air-gapped systems
40:25 - Story # 6: Mirai Botnets Exploiting Wazuh Security Platform Vulnerability
44:47 - Story # 7: Google Cloud and Cloudflare hit by widespread service outages
48:04 - Story # 8: UNFI cyberattack shuts down network and leaves Whole Foods and others in limbo
50:34 - Story # 9: New SharePoint Phishing Attacks Using Lick Deceptive Techniques
51:08 - Story # 10: US-backed Israeli company's spyware used to target European journalists, Citizen Lab finds
53:32 - Story # 11: Five Zero-Days, 15 Misconfigurations Found in Salesforce Industry Cloud
The practical limits of the idea of moving to a form of remote digital voting, the latest developments in the Paragon case that emerged after the publication of The Citizen Lab's forensic analysis, and the possible backstory to Starlink's activation in Iran at the height of the conflict with Israel. Presented by Marco Schiaffino.
Three Buddy Problem - Episode 50: This week, we dissect cyber flashpoints in the Iran-Israel war, revisit the “magnet of threats” server in Iran that attracted APTs from multiple nation-states, and react to Israel's Mossad sneaking explosive drone swarms deep into Iran to support airstrikes. Plus, Stealth Falcon's new WebDAV zero-day, SentinelOne's brush with Chinese APTs, Citizen Lab's forensic takedown of Paragon's iPhone spyware, and the sneaky Meta/Yandex trick that links Android web browsing to app IDs. Cast: Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), Ryan Naraine (https://twitter.com/ryanaraine) and Costin Raiu (https://twitter.com/craiu).
We are all vulnerable to digital surveillance, as there's little protection to prevent our phones from getting hacked. Mercenary spyware products like Pegasus are powerful and sophisticated, marketed to government clients around the world. Cybersecurity expert Ron Deibert tells IDEAS, "the latest versions can be implanted on anyone's device anywhere in the world and as we speak, there is literally no defence against it.” Deibert is the founder of the Citizen Lab at the University of Toronto, a group of tech-savvy researchers who dig into the internet, looking for the bad actors in the marketplace for high-tech surveillance and disinformation. In his new book, Chasing Shadows, he shares notorious cases he and his colleagues have worked on and reveals the dark underworld of digital espionage and subversion.
Since 2001, Ron Deibert and his team at the University of Toronto's Citizen Lab have uncovered dozens of covert spy operations around the world, including the creators of the phone-hacking spyware Pegasus, made by the Israeli company NSO Group, whose clients include Saudi Arabia's Mohammed bin Salman and Rwanda's Paul Kagame. In a wide-ranging discussion, Deibert tells host Steve Paikin about his recent trip to the White House, the impact that the Trump administration's policies will have on cybersecurity worldwide, and why Canadians ought to be concerned by a bilateral agreement with the U.S. called the Cloud Act. His new book is called "Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy." See omnystudio.com/listener for privacy information.
Three Buddy Problem - Episode 39: Luta Security CEO Katie Moussouris joins the buddies to parse news around a coordinated Chinese exposure of Taiwan APT actors, Citizen Lab's report on Paragon spyware and WhatsApp exploits, an "official" Russian government exploit-buying operation shopping for Telegram exploits, the fragmentation of exploit markets, and the future of CISA in the face of budget cuts and layoffs. Cast: Katie Moussouris (https://lutasecurity.com), Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), Costin Raiu (https://twitter.com/craiu) and Ryan Naraine (https://twitter.com/ryanaraine).
The Citizen Lab said it believes several governments may be customers of spyware maker Paragon Solutions. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Three Buddy Problem - Episode 35: Juanito is live from DistrictCon with notes on discussion of an elusive iOS zero-day by a company called QuaDream and Apple's controversial removal of iCloud backup end-to-end encryption in the UK. We also cover a staggering $1.4 billion hack by the Lazarus Group against Bybit, new angles in NSA-linked cyber-espionage against China's top universities, Chinese hacking gangs moonlighting as ransomware criminals, and Russian APTs abusing Signal's “linked devices” feature. Plus, Costin explains Microsoft's quantum computing breakthrough. Cast: Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), Costin Raiu (https://twitter.com/craiu) and Ryan Naraine (https://twitter.com/ryanaraine).
As founder and director of the University of Toronto's Citizen Lab, Ron Deibert has spent his career tracking down and uncovering some of the world's most clandestine cyber espionage operations. Now, the cybersecurity expert is pulling back the curtain on this shadowy world in his new book, Chasing Shadows. Deibert tells David Common how our democracies have become vulnerable to these threats as we become more reliant on technology – and what we need to do to protect them.
Guest: Ron Deibert, founder, Citizen Lab at the University of Toronto and author of Chasing Shadows.
It's become pretty easy to spot phishing scams: UPS orders you never made, banking alerts from companies you don't bank with, phone calls from unfamiliar area codes. But over the past decade, these scams – and the technology behind them – have become more sophisticated, invasive and sinister, largely due to the rise of something called 'mercenary spyware.'

The most potent version of this tech is Pegasus, a surveillance tool developed by an Israeli company called NSO Group. Once Pegasus infects your phone, it can see your texts, track your movement, and download your passwords – all without you realizing you'd been hacked.

We know a lot of this because of Ron Deibert. Twenty years ago, he founded Citizen Lab, a research group at the University of Toronto that has helped expose some of the most high-profile cases of cyber espionage around the world.

Ron has a new book out called Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy, and he sat down with me to explain how spyware works, and what it means for our privacy – and our democracy.

Note: We reached out to NSO Group about the claims made in this episode and they did not reply to our request for comment.

Mentioned:
"Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy," by Ron Deibert
"Meta's WhatsApp says spyware company Paragon targeted users in two dozen countries," by Raphael Satter, Reuters

Further Reading:
"The Autocrat in Your iPhone," by Ron Deibert
"A Comprehensive Analysis of Pegasus Spyware and Its Implications for Digital Privacy and Security," by Karwan Kareem
"Stopping the Press: New York Times Journalist Targeted by Saudi-linked Pegasus Spyware Operator," by Bill Marczak, Siena Anstis, Masashi Crete-Nishihata, John Scott-Railton, and Ron Deibert
Guest host David Common speaks with The Washington Post's Toluse "Tolu" Olorunnipa and Semafor's Kadia Goba about Donald Trump's flurry of controversial actions this past week, The Citizen Lab's Ron Deibert takes us inside the shadowy world of cyber espionage, Microsoft co-founder Bill Gates reflects on how his early years shaped him, and political economist Mark Manger and Mathew Holmes from the Canadian Chamber of Commerce consider strategies to bolster Canada's economy amid Trump's tariff pause.

Discover more at https://www.cbc.ca/sunday
The Herle Burly was created by Air Quotes Media with support from our presenting sponsor TELUS, as well as CN Rail, and TikTok Canada.

Greetings, you ever-curious Herle Burly-ites. I'm going to get right to it today, because we've got a topic that's both pervasive and invasive, and I've wanted to explore it for a while... It's Cybersecurity and Cyber Espionage Day on the podcast!

With me is a guest whose CV in the field is as long and tall as a grain silo in my hometown of Prelate, Saskatchewan. Ron Deibert is here.

He is a professor of political science and the founder and director of the Citizen Lab – the world's foremost digital watchdog – at the Munk School of Global Affairs & Public Policy, University of Toronto. The Citizen Lab focuses on research, development, as well as strategic policy and legal engagement at the intersection of information and communication technologies, human rights, and global security.

Ron's been a principal investigator and contributing author on more than 160 reports covering cyber espionage, commercial spyware, Internet censorship and human rights. Those reports have landed over 25 front-page exclusives in the New York Times, Washington Post, Financial Times and other media outlets, and have been cited by policymakers and academics. And his brand-new book – "Chasing Shadows" – tells the story of Citizen Lab and the dozens of cyber espionage cases it's exposed.

So, we're going to find out more about Ron today, his backstory and what led him to the field. We'll dive into some of the cases he's been involved with, how governments and bad actors use these surveillance techniques, via our own computers and smartphones, and ask the question: how the hell can we be protected from all of this?

Thank you for joining us on #TheHerleBurly podcast. Please take a moment to give us a rating and review on iTunes, Spotify, or your favourite podcast app.

Watch episodes of The Herle Burly via Air Quotes Media on YouTube.
WhatsApp, used by millions of people around the world, says its users were hacked by the Paragon Solutions spyware company. The World's Host Marco Werman speaks with John Scott-Railton, a senior researcher at Citizen Lab, about the continuing threat of sophisticated spyware. The post WhatsApp identifies dozens of users hacked by Paragon spyware company appeared first on The World from PRX.
Among the people notified by Meta that they had been spied on via WhatsApp is Luca Casarini, head of mission and one of the founders of Mediterranea, the NGO dedicated to saving lives at sea. In its message, Meta advised him to change his phone immediately and to contact its consultants, a research team based at the University of Toronto, "The Citizen Lab." Almost simultaneously, international outlets and news agencies reported the breach of WhatsApp's security systems, involving 90 'targets' around the world, in particular civil society activists and journalists. The software used to carry out the spyware infiltration, the NGO notes, is called 'Paragon' and is developed by the Israeli company Paragon Solutions, which has stated that it supplied it 'to the US government and to other government intelligence agencies of allied countries.' It is not the first time that journalists and activists working on migration have ended up in a murky story of unlawful interception. Nor is it news that NGOs have long been in the crosshairs of the government and the intelligence services, as happened in past years with farcical trials that ended in nothing. Intimidating, spying on and delegitimizing those who work with migrants has long been a habit. One fundamental question remains: did the government know about this spying? Is it involved? Because it would be truly serious to fly torturers back to Libya on a state flight while illegally wiretapping those who rescue their victims. Or not? #LaSveglia for La Notizia

Become a supporter of this podcast: https://www.spreaker.com/podcast/la-sveglia-di-giulio-cavalli--3269492/support.
Ronald Deibert of Citizen Lab tells host Michael Isikoff how his research firm has uncovered the shocking abuse of commercial spyware by foreign governments and spy agencies around the world, resulting in a proliferation of "Watergate-like" scandals. Follow our guest: Ron Deibert (https://x.com/RonDeibert); Chasing Shadows (https://www.simonandschuster.ca/books/Chasing-Shadows/); The Citizen Lab (https://citizenlab.ca/ and https://x.com/citizenlab). Follow Jeff Stein on Twitter: https://twitter.com/SpyTalker. Follow Michael Isikoff on Twitter: https://twitter.com/isikoff. Follow SpyTalk on Twitter: https://twitter.com/talk_spy. Subscribe to SpyTalk on Substack: https://www.spytalk.co/. Take our listener survey where you can give us feedback: http://survey.podtrac.com/start-survey.aspx?pubid=BffJOlI7qQcF&ver=short
Tech oligarchs have risen to dominate global politics and public discourse, posing grave threats to democracy and governance. Under Donald Trump's presidency, the consolidation of power among Silicon Valley elites has exacerbated critical challenges, including the spread of misinformation, the weaponization of social media, and the unchecked development of artificial intelligence. These forces have not only deepened political polarization but also paved the way for the normalization of extremism, undermining the foundations of truth in the digital era. The intersection of technological exploitation, political radicalization, and the information war presents urgent questions for the future of democratic societies. In this episode of Beyond the Headlines, we unpack these critical dynamics with two distinguished guests. Andres Kasekamp, an expert on populist radical right movements and European governance, explores the historical and political parallels of authoritarian trends. Ron Deibert, a global authority on cybersecurity and digital rights, highlights the ways in which tech platforms enable political manipulation and disinformation campaigns. Together, they offer in-depth insights into the complex role of digital platforms in amplifying authoritarianism and discuss potential pathways for mitigating their impact on democracy. Andres Kasekamp is the Elmar Tampõld Chair of Estonian Studies and Professor of History at the University of Toronto's Department of History and the Munk School of Global Affairs and Public Policy. He is a leading scholar on Baltic politics, memory politics, and populist radical right movements. Formerly a Professor of Baltic Politics at the University of Tartu and Director of the Estonian Foreign Policy Institute in Tallinn, Kasekamp has held visiting positions at esteemed institutions such as Humboldt University in Berlin and the Norwegian Institute for International Affairs. Among his acclaimed works is A History of the Baltic States, which has been translated into multiple languages and remains a definitive text in the field. His research explores European foreign and security policy and the intricate dynamics of cooperation and conflict in the Baltic Sea region. Currently, he is editing The Oxford Handbook of Modern Baltic History. Ron Deibert is a Professor of Political Science and the Director of the Citizen Lab at the Munk School of Global Affairs & Public Policy, University of Toronto. A pioneer in cybersecurity and human rights, Deibert has led the Citizen Lab's groundbreaking investigations into cyber espionage, commercial spyware, and digital censorship, producing over 120 influential reports. These include the Tracking Ghostnet investigation into cyber-espionage and the Reckless series, which revealed spyware abuses targeting journalists and activists. Deibert is also the author of RESET: Reclaiming the Internet for Civil Society, a winner of the Shaughnessy Cohen Prize for Political Writing. His work has earned numerous accolades, including the Electronic Frontier Foundation Pioneer Award and the Order of Ontario. Beyond academia, he serves on advisory boards for organizations like Amnesty International and PEN Canada, making him a critical voice in addressing the intersection of technology, democracy, and civil liberties. Produced by: Julia Brahy
Three Buddy Problem - Episode 24: In this episode, we dig into Lumen/Microsoft's revelations on Russia's Turla APT stealing from a Pakistani APT, and issues around fourth-party espionage and problems with threat actor attribution. We also discuss Citizen Lab's findings on Monokle-like spyware implanted by Russian authorities, the slow pace of Salt Typhoon disinfection, the Solana web3.js supply chain attack affecting crypto projects, and the Romanian election crisis over Russian interference via TikTok. Cast: Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), Costin Raiu (https://twitter.com/craiu) and Ryan Naraine (https://twitter.com/ryanaraine).
Ronald Deibert, director of the Citizen Lab at the University of Toronto, spoke with La W about the 'Pegasus' spyware.
By now, most people know how to recognize the signs of a phishing e-mail – poor spelling and grammar, strange sender e-mail addresses, and of course, an instruction to click on a link, where you're asked to put in your banking or login credentials. But these scams are becoming more sophisticated and politically motivated.Last week, Citizen Lab at the University of Toronto uncovered what they're calling the River of Phish campaign, which uses sophisticated social engineering practices to target people, including a former U.S. ambassador to Ukraine. The Globe's telecom reporter Alexandra Posadzki is on the show to talk about what Citizen Lab found, how the scheme works, and what we know about the Russia-linked group behind it.Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Rebekah Brown and John Scott-Railton of the Citizen Lab join Dennis Fisher to dive into their group's new report on highly targeted spear phishing campaigns by the Russian threat actor COLDRIVER and then discuss the emergence of a new, possibly related group called COLDWASTREL.
On 30 May, a new report produced by the digital rights organisation Access Now and Citizen Lab revealed the details of the latest Pegasus attack on Russian and Belarusian journalists and activists. Pegasus, a sophisticated spyware made by Israel's NSO Group, made headlines in 2021 when it was discovered to have been targeted at thousands of people all over the globe, including human rights activists and media workers. What makes this spyware so dangerous is that it doesn't require clicking on a link, and some victims may never discover that they've been hacked. It can penetrate iOS and Android systems and gain full access to a device, including photos, passwords, emails and even the microphone. In this episode of Trouble with the Truth, Lana talks to Natalia Krapiva, the Senior Tech-Legal Counsel at Access Now, about the latest targets of Pegasus attacks. Among them: the CEO of Novaya Gazeta, Maria Epifanova; journalists Evgeny Pavlov and Evgeny Erlikh; Belarusian activist Andrei Sannikov; and the editor-in-chief of the independent Belarusian media website Charter97.org, Natallia Radzina. They discuss what makes Pegasus so hard to identify and who could be behind it - while Russian and Belarusian authorities are the most obvious suspects, the truth is more complex. Finally, Natalia shares some useful advice on how journalists can protect themselves from spyware and what steps they should take if they discover they've been hacked. Useful resources: Access Now Digital Security helpline: https://www.accessnow.org/help/ Citizen Lab Tools & Resources: https://citizenlab.ca/category/research/tools-resources/ Justice for Journalists Media Safety Academy: https://jfj.academy/en/
In episode 334, hosts Tom Eston, Scott Wright, and Kevin Johnson discuss two major topics. First, they explore the ongoing legal battle between Citizen Lab and the Israeli spyware company NSO Group. The courts have consistently blocked NSO's attempts to access Citizen Lab's documents to protect victim privacy. Second, they discuss Apple's new AI features […] The post Citizen Lab vs. NSO Group, Apple AI and Privacy appeared first on Shared Security Podcast.
(***TIMESTAMPS in description below) Jonathan Scott is a gray hat hacker. He is known for exposing the *real* story behind "Hotel Rwanda" and for his expertise on NSO Group's mysterious spyware, "Pegasus." BUY Guest's Books & Films IN MY AMAZON STORE: https://amzn.to/3RPu952 EPISODE LINKS: Julian Dorey PODCAST MERCH: https://juliandorey.myshopify.com/ - Support our Show on PATREON: https://www.patreon.com/JulianDorey - Join our DISCORD: https://discord.gg/Ajqn5sN6 JONATHAN SCOTT'S LINKS: YOUTUBE: https://www.youtube.com/c/jonathandata1 JULIAN YT CHANNELS: SUBSCRIBE to Julian Dorey Clips YT: https://www.youtube.com/@juliandoreyclips - SUBSCRIBE to Julian Dorey Daily YT: https://www.youtube.com/@JulianDoreyDaily - SUBSCRIBE to Best of JDP: https://www.youtube.com/@bestofJDP ***TIMESTAMPS*** 00:00 - Grey Hat Hacker, Pegasus, Bitcoin Controversy Case
From August 12, 2020: President Trump recently issued executive orders aimed at banning TikTok and WeChat from operating in the United States. To discuss the sanctions, Bobby Chesney sat down with Dr. Sheena Chestnut Greitens, an associate professor at the LBJ School of Public Affairs at the University of Texas at Austin and a faculty affiliate with the Strauss Center for International Security and Law and the Clements Center for National Security at UT; and Dr. Ronald Deibert, a professor of political science and the founder and director of The Citizen Lab at the University of Toronto's Munk School of Global Affairs and Public Policy. In addition to the executive orders concerning TikTok and WeChat, they also discussed the larger U.S.-China relationship and the role of technology competition in that space. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.
Around the globe, journalists, human rights activists, scholars and others are facing digital attacks from Pegasus, military-grade spyware originally developed to go after criminals. Some of the people targeted have been killed or are in prison.In this episode, Reveal partners with the Shoot the Messenger podcast to investigate one of the biggest Pegasus hacks ever uncovered: the targeting of El Faro newspaper in El Salvador.In the opening story, hosts Rose Reid and Nando Vila speak with El Faro co-founder Carlos Dada and reporter Julia Gavarrete. El Faro has been lauded for its investigations into government corruption and gang violence. The newspaper is no stranger to threats and intimidation, which have increased under the administration of President Nayib Bukele.Reid and Vila also speak with John Scott-Railton of Citizen Lab, a Toronto-based digital watchdog group. Scott-Railton worked to identify the El Faro breach, and it was one of the most obsessive cases of spying Citizen Lab has ever seen.Over the course of one year, 22 members of the newspaper's staff had their phones infected with Pegasus and were surveilled by a remote operator. Researchers suspect Bukele's government was behind the spying, though officials have denied those allegations. The breach forced El Faro's journalists to change the way they work and live and take extreme measures to protect sources and themselves. Then Reid talks with Reveal's Al Letson about growing efforts to hold the NSO Group, the company behind Pegasus, accountable for the massive digital attacks. Support Reveal's journalism at Revealnews.org/donatenow Subscribe to our weekly newsletter to get the scoop on new episodes at Revealnews.org/newsletter Connect with us on Twitter, Facebook and Instagram
Citizen Lab founder and director Ron Deibert reflects on what's changed in the world of spyware, surveillance, and social media since he delivered his 2020 CBC Massey Lectures, Reset: Reclaiming the Internet for Civil Society. *This episode is part of an ongoing series of episodes marking the 60th anniversary of Massey College, a partner in the Massey Lectures.
Explore cybersecurity threats and solutions with experts analyzing critical vulnerabilities in keyboard apps, a $5.6 million privacy breach settlement for Ring users, and the latest trends in cyber attacker dwell times. Gain insights on global security measures and personal privacy protection. Sources: https://citizenlab.ca/2024/04/vulnerabilities-across-keyboard-apps-reveal-keystrokes-to-network-eavesdroppers/ https://www.bleepingcomputer.com/news/security/ring-customers-get-56-million-in-privacy-breach-settlement/ https://www.helpnetsecurity.com/2024/04/24/2023-attacker-dwell-time/ 00:00 Intro 01:03 Deep Dive into Keyboard App Vulnerabilities and User Protection Tips 03:39 Ring's Privacy Breach: Details and Consumer Compensation 06:09 Cybersecurity Wins: Decreased Attacker Dwell Time and Enhanced Defenses 09:53 Conclusion: The Future of Cybersecurity and the Role of Large Language Models Summarized Transcript: Welcome to the Daily Decrypt, your essential guide to navigating the digital domain. In today's episode, we're uncovering critical vulnerabilities in popular Chinese pinyin keyboard apps, exploring a substantial privacy breach with Ring's camera system, and diving into the global improvements in cybersecurity detection times. Join us as we decode the digital world, keeping your data safe and your curiosity alive. Our journey begins with a startling revelation from Citizen Lab. Over 1 billion users of popular Chinese pinyin keyboard apps are at a crossroads, facing the risk of having their keystrokes decrypted. Among the inspected vendors - Baidu, Honor, Huawei, iFlytek, Oppo, Samsung, Tencent, Vivo, and Xiaomi - most apps remain a breach waiting to happen, with network eavesdroppers able to exploit vulnerabilities passively. How can users shield themselves against such invasive threats? Turning off cloud-based services and opting for a more secure keyboard ecosystem are steps in the right direction. Next, we delve into the breach that shook trust to its core - Ring's privacy debacle. A staggering $5.6 million in refunds is being distributed to affected customers, a move prompted by the Federal Trade Commission after unauthorized access to private video feeds came to light. The case brings to the forefront the critical need for robust security measures in IoT devices, especially those designed for security, like cameras. How did Ring respond to the breach, and what can consumers learn from this incident to protect their own digital footprints? On a brighter note, global security saw an inspiring leap forward in 2023. Organizations now detect intrusions in a median of 10 days, a significant improvement from the previous 16 days in 2022. This progress indicates a strengthening in defense mechanisms against cyber threats. But with ransomware and zero-day exploits on the rise, how can organizations maintain this momentum and ensure the safety of our digital realms? Additionally, the emergence of large language models like OpenAI introduces new dynamics in both defense and offense within cybersecurity.
These powerful tools aid in the development of new technologies and the fast analysis of vast datasets. However, the unrestricted usage by attackers versus the ethical constraints on defenders presents unique challenges. How will this play out in the evolving cybersecurity landscape? This has been the Daily Decrypt. If today's episode unlocked new perspectives for you, show your support with a rating on Spotify or Apple Podcasts. Follow us on Instagram, or catch our episodes on YouTube for more insights into the cyber world. Until next time, keep your data safe and your curiosity sparked.
This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as its critics argue that the law criminalizes basic human rights, such as the freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong. To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could be doing to preserve freedoms for the people of Hong Kong, Justin Hendrix spoke to three experts who are following developments there closely: Chung Ching Kwong, senior analyst at the Inter-Parliamentary Alliance on China; Lokman Tsui, a fellow at Citizen Lab at the University of Toronto; and Michael Caster, the Asia Digital Program Manager with Article 19.
The sixth episode of Season VI "Saturated": IU Edition welcomes Jason Q. Ng, author of Blocked on Weibo, where he writes about Chinese internet censorship; he currently works as a data scientist at Duolingo. In this episode Jason discusses how he has used data throughout his career to democratize information in order to help people learn about everything from incarceration rates in his work as a researcher with Citizen Lab, to helping artists connect with listeners on Spotify, to helping people worldwide learn a new language through his work at Duolingo. He talks about the use of propaganda in China to misinform citizens through social media and the complexity of who should be the arbiter of free speech on a global stage. From the moment he started a blog on Chinese censorship that would eventually lead to his book, Jason has been moved to help people learn and act on data in a way that positively impacts society. He currently lives and works in New York City with his wife and son.
In this bonus episode of Shoot the Messenger, we share a special interview host Rose Reid did with Russian journalist and founder of the media outlet Meduza, Galina Timchenko. Citizen Lab and Access Now confirmed that Galina Timchenko's phone had been infected with Pegasus — the first documented case of the use of Pegasus against a Russian journalist. Before Galina Timchenko was the editor-in-chief of Meduza, she ran one of Russia's most popular media outlets, called Lenta.ru. She was fired as Lenta.ru's chief editor in 2014, after Vladimir Putin returned to power, the same year as Russia's annexation of Crimea. Since 2014, Galina and her team have been reporting on Russia in exile. We'll launch our second season in early 2024, which investigates "Who Killed the President of Haiti?" In the meantime, we will bring you monthly bonus episodes featuring our favorite shows and updates on Pegasus.
On June 23, 2023, hours before Yevgeny Prigozhin would shock the world by staging a mutiny against the Russian military, Meduza co-founder and CEO Galina Timchenko learned that her iPhone had been infected months earlier with “Pegasus.” The spyware's Israeli designers market the product as a crimefighting super-tool against “terrorists, criminals, and pedophiles,” but states around the world have abused Pegasus to track critics and political adversaries who sometimes end up arrested or even murdered. Access to Pegasus isn't cheap: Researchers believe the service costs tens of millions of dollars, meaning that somebody — some government agency out there — paid maybe a million bucks to hijack Timchenko's smartphone. Why would somebody do that? How would somebody do that? And who could have done it? For answers, The Naked Pravda turned to two experts: Natalia Krapiva, tech-legal counsel for Access Now, a nonprofit organization committed to “defending and extending” the digital civil rights of people worldwide, and John Scott-Railton, a senior researcher at Citizen Lab, an interdisciplinary laboratory at the University of Toronto that investigates digital espionage against civil society. Timestamps for this episode: (3:39) Galina Timchenko's hacked iPhone is the first confirmed case of a Pegasus infection against a Russian journalist (6:16) NSO Group's different contract tiers for Pegasus users (9:59) How aware is NSO Group of Pegasus's rampant misuse? (12:29) Why hasn't Europe done more to restrict the use of such spyware? (15:50) Russian allies using Pegasus (17:58) E.U. members using Pegasus (21:37) Training required to use Pegasus and the spyware's technical side (27:38) The forensics needed to detect a Pegasus infection (35:46) Is Pegasus built more to find criminals or members of civil society? (40:10) Imagining a global moratorium on military-grade spyware (43:22) “A German solution” (45:14) Where the West goes from here. How to support our newsroom, even if you're in Russia and you're very scared.
A new type of mercenary spyware called Predator has come onto the radar. It'll infect a mobile phone, and then suck up all the data from it: contacts, text messages, location, and more. This malware is being sold to intelligence agencies around the world. In this episode we hear from Crofton Black at Lighthouse Reports, who spent six months with a team of journalists researching this story, which was published here: https://www.lighthousereports.com/investigation/flight-of-the-predator/. We also hear from Bill Marczak and John Scott-Railton from Citizen Lab. If you want to hear about other mercenary spyware, check out episodes 99 and 100, about NSO Group and Pegasus. To hear another episode about Greece, check out episode 64, called Athens Shadow Games.
This is a special edition of Southeast Asia Dispatches, the first of six episodes in the Pegasus series that we are co-producing with KBR. Three episodes will be in English and three in Bahasa Indonesia, which you can find at Ruang Publik at KBRPrime.id. In 2022, iLaw, Digital Reach, and The Citizen Lab discovered a large-scale espionage campaign targeting pro-democracy demonstrators and activists calling for monarchy reform in Thailand. At least 30 people were infected with NSO Group's Pegasus spyware. Thailand's parliament has so far failed to introduce regulation or countermeasures against the government's extensive and evolving surveillance powers, while government accountability mechanisms have weakened and attacks on civil society continue. You can also find this podcast on our website. Our movement needs your support. A movement is only as strong as its members. If you believe in a more democratic Southeast Asia, join New Naratif as a member here.
ChatGPT hallucinations cause turbulence in court, a riot in Wales may have been ignited on social media, and do you think .MOV is a good top-level domain for "a website that moves you"? All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Mark Stockley. Plus don't miss our featured interview with David Ahn of Centripetal. Warning: This podcast may contain nuts, adult themes, and rude language. Episode links: 8 new top-level domains for dads, grads and techies - Google. Tweet by Citizen Lab's John Scott-Railton - Twitter. File Archiver in the browser - mr.d0x. A Lawyer's Filing "Is Replete with Citations to Non-Existent Cases" - Thanks, ChatGPT? - Reason. Ely riot: Live updates as police investigate CCTV showing police van following bike moments before fatal crash - Wales Online. Cardiff riot: Police force refers itself to watchdog as CCTV shows its van following e-bike before fatal crash - Sky News. Two boys killed in Cardiff crash which was followed by riot are named - Sky News. Cardiff riots: social media rumours about crash started unrest, says police commissioner - The Guardian. Black Butterflies - Netflix. Black Butterflies trailer - YouTube. "The End of the World Is Just the Beginning: Mapping the Collapse of Globalization" by Peter Zeihan - Amazon. Science Vs - Gimlet Media Podcast. Smashing Security merchandise (t-shirts, mugs, stickers and stuff). Sponsored by: Bitwarden – Password security you can trust. Bitwarden is an open source password manager trusted by millions of individuals, teams, and organizations worldwide for secure password storage and sharing. Kolide – Kolide ensures that if your device isn't secure it can't access your cloud...
A special bonus episode from one of our favorite podcasts, Click Here. Click Here is a podcast about the world of cyber and intelligence, hosted by Dina Temple-Raston. Click Here did a special episode about Pegasus spyware in Mexico: classified documents and internal memos in a new report from digital activists in Mexico make clear the Mexican Army systematically deployed Pegasus spyware against local journalists and activists. R3D, a Mexican digital rights group, and the University of Toronto's Citizen Lab also discovered the existence of a previously unknown military intelligence unit whose sole purpose appears to be secret surveillance and the deployment of spyware. https://podcasts.apple.com/us/podcast/click-here/id1225077306
Nobel Peace Prize recipient Maria Ressa believes online disinformation could pose an existential threat to democracy — and she's not alone. Ressa joins Citizen Lab founder Ron Deibert for a conversation about how online impunity is eroding civil society and how we can fight back.
We open this episode of the Cyberlaw Podcast with some actual news about the debate over renewing section 702 of FISA. That's the law that allows the government to target foreigners for a national security purpose and to intercept their communications in and out of the U.S. A lot of attention has been focused on what happens to those communications after they've been intercepted and stored, and particularly whether the FBI should get a second court authorization—maybe even a warrant based on probable cause—to search for records about an American. Michael J. Ellis reports that the Office of the Director of National Intelligence has released new data on such FBI searches. Turns out, they've dropped from almost 3 million last year to nearly 120 thousand this year. In large part the drop reflects the tougher restrictions imposed by the FBI on such searches. Those restrictions were also made public this week. It has also emerged that the government is using section 702 millions of times a year to identify the victims of cyberattacks (makes sense: foreign hackers are often a national security concern, and their whole business model is to use U.S. infrastructure to communicate [in a very special way] with U.S. networks.) So it turns out that all those civil libertarians who want to make it hard for the government to search 702 for the names of Americans are proposing ways to slow down and complicate the process of warning hacking victims. Thanks a bunch, folks! Justin Sherman covers China's push to attack and even take over enemy (U.S.) satellites. This story is apparently drawn from the Discord leaks, and it has the ring of truth. I opine that the Defense Department has gotten a little too comfortable waging war against people who don't really have an army, and that the Ukraine conflict shows how much tougher things get when there's an organized military on the other side. (Again, credit for our artwork goes to Bing Image Creator.) Adam Candeub flags the next Supreme Court case to nibble away at the problem of social media and the law. We can look forward to an argument next year about the constitutionality of public officials blocking people who post mean comments on the officials' Facebook pages. Justin and I break down a story about whether Twitter is complying with more government demands under Elon Musk. The short answer is yes. This leads me to ask why we expect social media companies to spend large sums fighting government takedown and surveillance requests when it's much cheaper just to comply. So far, the answer has been that mainstream media and Good People Everywhere will criticize companies that don't fight. But with criticism of Elon Musk's Twitter already turned up to 11, that's not likely to persuade him. Adam and I are impressed by Citizen Lab's report on search censorship in China. We'd both kind of like to see Citizen Lab do the same thing for U.S. censorship, which somehow gets less transparency. If you suspect that's because there's more censorship than U.S. companies want to admit, here's a straw in the wind: Citizen Lab reports that the one American company still providing search services in China, Microsoft Bing, is actually more aggressive about stifling political speech than China's main search engine, Baidu. This fits with my discovery that Bing's Image Creator refused to construct an image using Taiwan's flag. (It was OK using U.S. and German flags, but not China's.)
I also credit Microsoft for fixing that particular bit of overreach: You can now create images with both Taiwanese and Chinese flags. Adam covers the EU's enthusiasm for regulating other countries' companies. It has designated 19 tech giants as subject to its online content rules. Of the 19, one is a European company, and two are Chinese (counting TikTok). The rest are American companies. I cover a case that I think could be a big problem for the Biden administration as it ramps up its campaign for cybersecurity regulation. Iowa and a couple of other states are suing to block the Environmental Protection Agency's legally questionable effort to impose cybersecurity requirements on public water systems by "interpreting" those requirements into a law that says little about cybersecurity and never included them before. Michael Ellis and I cover the story detailing a former NSA director's business ties to Saudi Arabia—and expand it to confess our unease at the number of generals and admirals moving from command of U.S. forces to a consulting gig with the countries they were just negotiating with. Recent restrictions on the revolving door for intelligence officers get a mention. Adam covers the Quebec decision awarding $500 thousand to a man who couldn't get Google to consistently delete a false story portraying him as a pedophile and conman. Justin and I debate whether Meta's Reels feature has what it takes to be a plausible TikTok competitor. Justin is skeptical. I'm a little less so. Meta's claims about the success of Reels aren't entirely persuasive, but perhaps it's too early to tell. The D.C. Circuit has killed off the state antitrust case trying to undo Meta's long-ago acquisition of WhatsApp and Instagram. The states waited too long, the court held. That doctrine doesn't apply the same way to the Federal Trade Commission (FTC), which will get to pursue a lonely battle against long odds for years. If the FTC is going to keep sending its lawyers into battle like conscripts in Bakhmut, I ask, when will the commission start recruiting in Russian prisons? That was fast. Adam tells us that the Brazil court order banning Telegram because it wouldn't turn over information on neo-Nazi groups has been overturned on appeal. But Telegram isn't out of the woods. The appeal court left in place fines of $200 thousand a day for noncompliance. And in another regulatory walkback, Italy's privacy watchdog is letting ChatGPT back into the country. I suspect the Italian government of cutting a deal to save face as it abandons its initial position on ChatGPT's scraping of public data to train the model. Finally, in policies I wish they would walk back, four U.S. regulatory agencies claimed (plausibly) that they had authority to bring bias claims against companies using AI in a discriminatory fashion. Since I don't see any way to bring those claims without arguing that any deviation from proportional representation constitutes discrimination, this feels like a surreptitious introduction of quotas into several new parts of the economy, just as the Supreme Court seems poised to cast doubt on such quotas in higher education. Download 455th Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com.
Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
The problem in both cases is not Zuckerberg or Musk, but the idea of a Zuckerberg or Musk. The idea that, simply by virtue of owning and controlling a particular technology, someone wields arbitrary or unaccountable power which can touch every aspect of our liberty and our democracy. - Jamie Susskind. Access bonus episodes on Patreon. Make a one-time donation to Democracy Paradox. Jamie Susskind is an author and barrister. He has held fellowships at Cambridge and Harvard Universities. His work is at the crossroads of technology, politics, and law. His most recent book is The Digital Republic: On Freedom and Democracy in the 21st Century. Key Highlights: Introduction (0:44); Challenges of Digital Technology (3:18); Artificial Intelligence (20:09); A Digital Republic (40:27); Possible Solutions (43:42). Key Links: The Digital Republic: On Freedom and Democracy in the 21st Century by Jamie Susskind; follow Jamie Susskind on Twitter @jamiesusskind; learn more about Jamie Susskind. Democracy Paradox Podcast: Samuel Woolley on Bots, Artificial Intelligence, and Digital Propaganda; Ronald Deibert from Citizen Lab on Cyber Surveillance, Digital Subversion, and Transnational Repression; more episodes from the podcast. More information: Democracy Group; Apes of the State created all music. Email the show at jkempf@democracyparadox.com. Follow on Twitter @DemParadox, Facebook, Instagram @democracyparadoxpodcast. 100 Books on Democracy. Democracy Paradox is part of the Amazon Affiliates Program and earns commissions on items purchased from links to the Amazon website. All links are to recommended books discussed in the podcast or referenced in the blog. Support the show.
Shoot the Messenger: Espionage, Murder and Pegasus Spyware continues with its eighth episode, a special interview with acclaimed journalist Carlos Dada about the intense targeting of him and his newsroom, El Faro, in El Salvador. El Faro is no stranger to threats and intimidation, which have increased under the administration of President Nayib Bukele. Pegasus was used to spy on Carlos Dada for more than 100 days in a row. Between June 2020 and November 2021, more than 20 members of El Faro were infected with NSO Group's Pegasus spyware. John Scott-Railton of Citizen Lab worked to identify the El Faro breach — this was one of the most obsessive cases of spying Citizen Lab has ever seen. Shoot the Messenger is hosted by Rose Reid and Nando Vila and is a production of Exile Content Studio. Guests: Carlos Dada and John Scott-Railton
Shoot the Messenger: Espionage, Murder and Pegasus Spyware continues with its seventh episode, revealing a pattern of Pegasus being used as a bargaining chip in foreign relations. Over the past decade, under the leadership of Prime Minister Benjamin Netanyahu, there has been a direct correlation between his travels and meet-and-greets with world leaders, and the proliferation of Pegasus spyware. Where Netanyahu goes, Pegasus seems to follow. As Netanyahu asserts his control over a divided Israel, should we expect to see an increase in the scope of NSO Group's capabilities in digital surveillance? This industry has boomed during Netanyahu's tenure, and he has famously said, "Don't over-regulate." Shoot the Messenger is hosted by Rose Reid and Nando Vila and is a production of Exile Content Studio. Guests: Keshet's Amitai Ziv; Financial Times' Mehul Srivastava; Citizen Lab's Scott Stedman
A new report includes classified documents and internal memos that make clear the Mexican Army bought Pegasus spyware and systematically deployed it against journalists and activists in Mexico. R3D, a Mexican digital rights group, and the University of Toronto's Citizen Lab also found evidence of a previously unknown military intelligence unit whose sole focus appears to be secret surveillance and the deployment of spyware. Some of the sensitive material published in the report came from a massive hack of the Ministry of Defense by the hacktivist group Guacamaya last year. Click Here was part of a small group of journalists given early access to the findings.
On this episode of the podcast, we sit down with Ron Deibert, who runs the University of Toronto's Citizen Lab, to discuss the "mercenary spyware" industry and its proclivity for providing "almost god-like" spyware programs to governments that have been proven to use them to surveil "opposition politicians, human rights activists, journalists, academics, embassy workers, and political dissidents."