Shocking new research reveals how anyone with $750 can intercept unencrypted satellite data, exposing everything from government secrets to in-flight Wi-Fi traffic. Find out why decades-old vulnerabilities are still open and who actually wants it that way.

Study: The World's Satellite Data Is Massively Vulnerable To Snooping
You Only Need $750 of Equipment to Pilfer Data From Satellites, Researchers Say
Hackers Dox Hundreds of DHS, ICE, FBI, and DOJ Officials
DHS says Chinese criminal gangs made $1B from US text scams
cr.yp.to: 2025.10.04: NSA and IETF
Why Signal's post-quantum makeover is an amazing engineering achievement
Court reduces damages Meta will get from spyware maker NSO Group but bans it from WhatsApp
How I Almost Got Hacked By A 'Job Interview'
New California law requires AI to tell you it's AI
The European Union issued its first fines under the AI Act, penalizing a French facial recognition startup €12 million for deploying unverified algorithms in public security contracts
Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
Texas hit with a pair of lawsuits for its app store age verification requirements
Australia shares tips to wean teens off social media ahead of ban. Will it work?
California enacts age-gate law for app stores
Meta is asking Facebook users to give its AI access to their entire camera roll
Meta poached Andrew Tulloch, co-founder of Thinking Machines Lab, with a compensation package rumored to reach $1.5 billion over six years
Even top generals are looking to AI chatbots for answers
Roku's AI-upgraded voice assistant can answer questions about what you're watching
Tesla debuts a steering wheel-less taxi for two
Waymo and DoorDash Are Teaming Up to Deliver Your Food via Robotaxi

Host: Leo Laporte
Guests: Jacob Ward, Harper Reed, and Abrar Al-Heeti

Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
Melissa.com/twit
ZipRecruiter.com/twit
deel.com/twit
zscaler.com/security
zapier.com/twit
My fellow pro-growth/progress/abundance Up Wingers,

For most of history, stagnation — not growth — was the rule. To explain why prosperity so often stalls, economist Carl Benedikt Frey offers a sweeping tour through a millennium of innovation and upheaval, showing how societies either harness — or are undone by — waves of technological change. His message is sobering: an AI revolution is no guarantee of a new age of progress.

Today on Faster, Please! — The Podcast, I talk with Frey about why societies misjudge their trajectory and what it takes to reignite lasting growth.

Frey is a professor of AI and Work at the Oxford Internet Institute and a fellow of Mansfield College, University of Oxford. He is the director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School. He is the author of several books, including the brand new one, How Progress Ends: Technology, Innovation, and the Fate of Nations.

In This Episode

* The end of progress? (1:28)
* A history of Chinese innovation (8:26)
* Global competitive intensity (11:41)
* Competitive problems in the US (15:50)
* Lagging European progress (22:19)
* AI & labor (25:46)

Below is a lightly edited transcript of our conversation.

The end of progress? (1:28)

. . . once you exploit a technology, the processes that aid that run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

Pethokoukis: Since 2020, we've seen the emergence of generative AI, mRNA vaccines, reusable rockets that have returned America to space, an ongoing nuclear renaissance including advanced technologies, maybe even fusion, geothermal, the expansion of solar — there seems to be a lot cooking. Is worrying about the end of progress a bit too preemptive?

Frey: Well, in a way, it's always a bit too preemptive to worry about the future: you don't know what's going to come. But let me put it this way: if you had told me back in 1995 — and if I was a little bit older then — that computers and the internet would lead to a decade-long streak of productivity growth and then peter out, I would probably have thought you nuts, because it's hard to think of anything more consequential. Computers have essentially given people the world's store of knowledge in their pockets. The internet has enabled us to connect inventors and scientists around the world. Few tools have aided the research process more. There should hardly be any technology that has done more to boost scientific discovery, and yet we don't see it.

We don't see it in the aggregate productivity statistics; that petered out after a decade. Research productivity is in decline. Measures of breakthrough innovation are in decline. So it's always good to be optimistic, I guess, and I agree with you that when you read about many of the things that are happening in AI now, it's very, very exciting, but I remain somewhat skeptical that we are actually going to see that lead to a huge revival of economic growth.

I would just be surprised if we don't see any upsurge at all, to be clear, but we do have global productivity stagnation right now. It's not just Europe, it's not just Britain. The US is not doing too well either over the past two decades or so.
China's productivity is probably in negative territory, or stagnant by more optimistic measures, and so we're having a growth problem.

If tech progress were inevitable, why have predictions from the '90s, and certainly earlier decades like the '50s and '60s, about transformative breakthroughs and really fast economic growth by now, consistently failed to materialize? How does your thesis account for why those visions of rapid growth and progress have fallen short?

I'm not sure if my thesis explains why those expectations didn't materialize, but I'm hopeful that I do provide some framework for thinking about why we've often seen historically rapid growth spurts followed by stagnation and even decline. The story I'm telling is not rocket science, exactly. It's basically built on the simple intuition that once you exploit a technology, the processes that aid it run into diminishing returns, you have a lot of incumbents, you have some vested interests around established technologies, and you need something new to revive growth.

So for example, the Soviet Union actually did reasonably well in terms of economic growth. A lot of it, or most of it, was centered on heavy industry, I should say. So people didn't necessarily see the benefits in their pockets, but the economy grew rapidly for about four decades or so, then growth petered out, and eventually it collapsed. For exploiting mass-production technologies, the Soviet system worked reasonably well: Soviet bureaucrats could hold factory managers accountable by benchmarking performance across factories.

But that became much harder when something new was needed, because when something is new, what's the benchmark? How do you benchmark against that? And more broadly, when something is new, you need to explore, and you often need to explore different technological trajectories. In the Soviet system, if you were an aircraft engineer and you wanted to develop your prototype, you could go to the Red Army and ask for funding. If they turned you down, you maybe had two or three other options. If those turned you down, your idea would die with you.

Conversely, in the US back in '99, Bessemer Venture Partners declined to invest in Google, which seems like a bad idea with the benefit of hindsight, but it also illustrates that Google was no safe bet at the time. Yahoo and AltaVista were dominating search. You need somebody to invest in order to know if something is going to catch on, and in a more decentralized system, you can have more people taking different bets and you can explore more technological trajectories. That is one of the reasons why the US ended up leading the computer revolution, to which Soviet contributions were basically none.

Going back to your question, why didn't those dreams materialize? I think we've made it harder to explore. Part of the reason is protective regulation. Part of the reason is lobbying by incumbents. Part of the reason is, I think, a revolving door between institutions like the US patent office and incumbents, where we see in the data that examiners tend to grant large firms patents that are of low quality and then get lucrative jobs at those places. That's creating barriers to entry. That's not good for new startups and inventors entering the marketplace.
I think that is one of the reasons that we haven't seen some of those dreams materialize.

A history of Chinese innovation (8:26)

So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided . . .

I wonder whether your analysis of pre-industrial China holds any lessons about modern China, as far as the way in which bad governance can undermine innovation and progress?

Pre-industrial China has a long history. China was the technology leader during the Song and Tang dynasties. It had a meritocratic civil service. It was building infrastructure on scales that were unimaginable in Europe at the time, and yet it didn't have an industrial revolution. So while Chinese bureaucracy enabled scale, Chinese bureaucracy did not really permit much in terms of decentralized exploration, which European fragmentation aided. And because there was lots of social status attached to becoming a bureaucrat and passing the civil service examination, if Galileo had been born in China, he would probably have become a bureaucrat rather than a scientist, and I think that's part of the reason too.

But China mostly did well when the state was strong rather than weak. A strong state was underpinned by intensive political competition, and once China had unified and there were fewer peer competitors, you see that the center begins to fade. They struggle to tax local elites in order to keep the peace. People begin to erect monopolies in their local markets and collude with guilds to protect production and their crafts from competition.

So during the Qing dynasty, China begins to decline, whereas we see the opposite happening in Europe. European fragmentation aids exploration and innovation, but it doesn't necessarily aid scaling, and that is something Europe needs to come to terms with at a later stage, when the industrial revolution starts to take off. Even before that, market integration played an important role in undermining the guilds in Europe; part of the reason the guilds persisted longer in China is that the distances between cities were so much greater, so the guilds were less exposed to competition. In the end, Europe ends up overtaking China, in large part because vested interests are undercut by governments, but also because of investments in things that spur market integration.

Global competitive intensity (11:41)

Back in the 2000s, people predicted that China would become more like the United States; now it looks like the United States is becoming more like China.

This is a great McKinsey kind of way of looking at the world: the notion that what drives innovation is sort of maximum competitive intensity. You were talking about the competitive intensity in both Europe and in China when it was not so centralized. You were talking about the competitive intensity of a fragmented Europe.

Do you think that the current level of competitive intensity between the United States and China — and I really wish I could add Europe in there. Plenty of white papers, I know, have been written about Europe's competitive state and its innovativeness, and I hope those white papers are helpful and someone reads them, but it seems to me that the real competition is between the United States and China.

Do you not think that that competitive intensity will sort of keep those countries progressing despite any of the barriers that might pop up, and that you've already mentioned a little bit?
Isn't that a more powerful tailwind than any of the headwinds that you've mentioned?

It could be, I think, if people learn the right lessons from history; at least that's a key argument of the book. Right now, what I'm seeing is the United States moving more towards protectionism, with protective tariffs. Right now, what I see is a move towards, we could even say, crony capitalism, with tariff exemptions that larger firms better connected to the president are able to navigate, but that challengers certainly cannot. You're seeing the United States embracing things like golden shares in Intel, and perhaps even extending that to a range of companies. Back in the 2000s, people predicted that China would become more like the United States; now it looks like the United States is becoming more like China.

And China today is having similar problems, on, I would argue, an even greater scale. Growth used to be the key objective in China, and for local and provincial governments competing on such targets, it was fairly easy to benchmark and measure and hold provincial governors accountable, and they would be promoted inside the Communist Party based on meeting growth targets. Now, the priorities are common prosperity and more national security-oriented concerns.

In China, most progress has been driven by private firms and foreign-invested firms. State-owned enterprises have generally been a drag on innovation and productivity. What you're seeing, though, as China shifts more towards political objectives, is that it's harder to mobilize private enterprise, where the yardsticks are market share and profitability, for political goals. That means China is increasingly relying again on state-owned enterprises, which, again, have been a drag on innovation.

So, in principle, I agree with you. Historically, you did see Prussian defeat to Napoleon lead to the Stein-Hardenberg reforms, the abolition of guild restrictions, and a more competitive marketplace for both goods and ideas. You saw Russian losses in the Crimean War lead to the abolition of serfdom. There are many times in history where defeat, in particular, led to striking reforms, but right now, the competition itself doesn't seem to be leading to the kinds of reforms I would've hoped to see in response.

Competitive problems in the US (15:50)

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.

I certainly wrote enough pieces and talked to enough people over the past decade who were worried about competition in the United States, and the story went something like this: you had these big tech companies — Google, Meta/Facebook, and Microsoft — that were what they would call "forever companies," that had such dominance in their core businesses, and were throwing off so much cash, that they were unbeatable, and this was going to be bad for America. People who made that argument just could not imagine how any other companies could threaten their dominance. And yet, at the time, I pointed out that it seemed to me that these companies were constantly in fear that they were one technological advance from being in trouble.

And then lo and behold, that's exactly what happened.
And while in AI, certainly, Google's super important, and Meta/Facebook is super important, so are OpenAI and Anthropic, and there are other companies.

So the point here, after my little soliloquy, is: can we overstate these problems, at least in the United States, when it seems like it is still possible to create a new technology that breaks the apparent stranglehold of these incumbents? Google search does not look quite as solid a business as it did in 2022.

Can we overstate the competitive problems of the United States? Or is what you're saying more forward-looking: that perhaps we overstated the competitive problems in the past, but now, with these tariffs, and executives having to travel to the White House and give the president gifts, the stage is set for the kind of competitive problems that we should really worry about?

I'm very happy to support the notion that technological changes can lead to unpredictable outcomes that incumbents may struggle to predict and respond to. Even if they predict it, they struggle to act upon it, because doing so often undermines the existing business model.

Take Google, where the transformer was actually conceived: the seven people behind it, I think, have since left the company. One of the reasons Google probably didn't launch anything like ChatGPT was the fear of cannibalizing search. So I think the most important mechanisms for dislodging incumbents are dramatic shifts in technology.

None of the legacy media companies ended up leading social media. None of the legacy retailers ended up leading e-commerce. None of the automobile leaders are leading in EVs. None of the bicycle companies, so many of which went into automobiles, ended up leading there. So there is a pattern.

At the same time, I think you do have to worry that there are anti-competitive practices going on that make it harder, and that are costly. The revolving door between the USPTO and companies is one example. We also have a reasonable amount of evidence on killer acquisitions, whereby firms buy up a competitor just to shut it down. Those things are happening. I think you need to have tools that allow you to combat that, and more broadly, the United States has a long history of fairly vigorous antitrust policy. You'd be hard pressed to suggest that it has been a tremendous drag on American business or American dynamism. So even if you don't think that American antitrust policy has contributed to innovation and dynamism, at the very least you can't really say that it's been a huge drag on it.

In Japan, for example, in its postwar history, antitrust was extremely lax. In the United States, it was very vigorous, and it was very vigorous throughout the computer revolution as well, which it wasn't at all in Japan. Take the lawsuit against IBM: you can debate to what extent it forced IBM to unbundle hardware and software, and whether Microsoft would have been the company it is today without that. With AT&T, it's both the breakup and deregulation as well, but by basically all accounts that was a good idea, particularly at the time when the National Science Foundation released ARPANET into the world.

I think what antitrust does is, at the very least, it provides a tool that means that businesses are thinking twice before engaging in anti-competitive behavior.
There's always a risk of antitrust being heavily politicized, and that's always been a bad idea, but at the same time, having tools on the books that allow you to check monopolies and steer their investments more towards innovation rather than anti-competitive practices is, broadly speaking, a good thing. In the European Union, you often hear that competition policy is a drag on productivity. I think it's the least of Europe's problems.

Lagging European progress (22:19)

If you take the postwar period, at least, Europe catches up in most key industries, and actually leads in some of them . . . but doesn't do the same in digital. The question in my mind is: Why is that?

Let's talk about Europe as we finish up. We don't have to write How Progress Ends; it seems like progress there has ended, so maybe we want to think about how progress restarts. And is the problem in Europe institutions, or is it the revealed preference of Europeans, that they're getting what they want? That they don't value progress and dynamism, that it is a cultural preference that is manifested in institutions? And if that's the case — you can tell me if it's not, but I kind of feel like it might be — how do you restart progress in Europe, since it seems to have already ended?

The most puzzling thing to me is not that Europe is less dynamic than the United States — that's not very puzzling at all — but that it hasn't even managed to catch up in digital. If you take the postwar period, at least, Europe catches up in most key industries, and actually leads in some of them. Take automobiles, electrical machinery, chemicals, pharmaceuticals: nobody would say that Europe is behind in those industries, or at least not for long. Europe has very robust catch-up growth in the postwar period, but doesn't do the same in digital. The question in my mind is: Why is that?

I think part of the reason is that the returns to innovation, the returns to scaling, in Europe are relatively muted by a fragmented market, in services in particular. The IMF estimates that if you take all the trade barriers on services inside the European Union and add them up, it's something like a 110 percent tariff: Trump Liberation Day tariffs, essentially, imposed within the European Union. That means European firms in digital and in services don't have a harmonized market to scale into, the way the United States and China have. I think that's by far the biggest reason.

On top of that, there are well-intentioned regulations like the GDPR that, by any account, have been a drag on innovation, and particularly harmful for startups, whereas larger firms that find it easier to manage compliance costs have essentially offset those costs by capturing a larger share of the market. I think the AI Act is going in the same direction, and so you have more hurdles, you have greater costs of innovating because of those regulatory barriers, and then the return to innovation is capped by a smaller, fragmented market.

I don't think that culture, or a European lust for leisure rather than work, is the key reason. There's some of that, but if you look at the most dynamic places in Europe, they tend to be the Scandinavian countries, and, being from Sweden myself, I can tell you that most people you will encounter there are not workaholics.

AI & labor (25:46)

I think AI at the moment has a real resilience problem.
It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin.

As I finish up, let me ask you: Like a lot of economists who think about technology, you've thought about how AI will affect jobs. Given what we've seen in the past few years, would it be your guess that, if we were to look at the labor force participation rates of the United States and other rich countries 10 years from now, we will look at those employment numbers and think, "Wow, we can really see the impact of AI on those numbers"? Will it be extraordinarily evident, or not so much?

Unless there's very significant progress in AI, I don't think so. I think AI at the moment has a real resilience problem. It's very good at things where there's a lot of precedent; it doesn't do very well where precedent is thin. So in most activities where the world is changing, and the world is changing every day, you can't really rely on AI to reliably do work for you.

An example of that: most people know of AlphaGo beating the world champion back in 2016. Few people will know that, back in 2023, human amateurs using standard laptops beat the best Go programs quite easily, by exposing them to new positions that they would not have encountered in training. So even in a domain where the problem is basically solved, where we had already achieved superhuman intelligence, you cannot really know how well these tools perform when circumstances change, and I think that's really a problem. So unless we solve that, I don't think it's going to have an impact that will mean labor force participation is significantly lower 10 years from now.

That said, I do think it's going to have a very significant impact on white-collar work, and on people's income and sense of status. I think of generative AI, in particular, as a tool that reduces barriers to entry in professional services. I often compare it to what happened with Uber and taxi services. With the arrival of GPS technology, knowing the name of every street in New York City was no longer a particularly valuable skill, and then, with a platform matching supply and demand, anybody with a driver's license could essentially get into their car and top up their income on the side. As a result, incumbent drivers faced more competition, and they took a pay cut of around 10 percent.

Obviously, a key difference with professional services is that they're traded. So I think it's very likely that, as generative AI reduces the productivity differential between people in, let's say, the US and the Philippines in financial modeling, in paralegal work, in accounting, in a host of professional services, more of those activities will shift abroad. Many knowledge workers who had envisioned prosperous careers may feel a loss of status and income as a consequence, and I do think that's quite significant.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Today Dominic Bowen hosts Professor Adam Chalmers on The International Risk Podcast to explore the global race for leadership in artificial intelligence. They discuss the United States' AI Action Plan and China's AI Plus Plan, how these competing strategies reveal different models of governance, regulation, and ideology, and what this competition means for innovation, global influence, and risk. Together they examine how the U.S. approach emphasizes open innovation and technological dominance, how China's plan embeds ideology and state control, and how the European Union's AI Act represents a third path prioritizing human-centric regulation.

Dominic and Adam also dive into the economic and geopolitical stakes of the AI race, from workforce disruption and re-skilling to public trust, data sovereignty, and the challenge of building safe and transparent AI systems. They explore how governments can manage risk while fostering innovation, how universities and industries must adapt to rapid change, and what it means for democracy and international stability as artificial intelligence becomes a driver of both progress and power.

Professor Adam Chalmers is an Associate Professor of Politics at the University of Edinburgh, the CEO and Founder of Resonate AI, and a leading voice on the intersection of political economy, technology, and governance. He has advised governments and organizations on AI strategy and risk, and his work bridges academic research with practical solutions for emerging technologies. Drawing on his research and field experience, Adam explains how AI is reshaping global politics, why public trust and ethical frameworks will define its future, and how democracies can respond to the rapidly evolving risks of the digital age.

The International Risk Podcast brings you conversations with global experts, frontline practitioners, and senior decision-makers who are shaping how we understand and respond to international risk. From geopolitical volatility and organised crime to cybersecurity threats and hybrid warfare, each episode explores the forces transforming our world and what smart leaders must do to navigate them. Whether you're a board member, policymaker, or risk professional, The International Risk Podcast delivers actionable insights, sharp analysis, and real-world stories that matter.

Dominic Bowen is the host of The International Risk Podcast and Europe's leading expert on international risk and crisis management. As Head of Strategic Advisory and Partner at one of Europe's leading risk management consulting firms, Dominic advises CEOs, boards, and senior executives across the continent on how to prepare for uncertainty and act with intent. He has spent decades working in war zones, advising multinational companies, and supporting Europe's business leaders. Dominic is the go-to business advisor for leaders navigating risk, crisis, and strategy, trusted for his clarity, calmness under pressure, and ability to turn volatility into competitive advantage. He equips today's business leaders with the insight and confidence to lead through disruption and deliver sustained strategic advantage.

The International Risk Podcast – Reducing risk by increasing knowledge.

Follow us on LinkedIn and Subscribe for all our updates!

Tell us what you liked!
What if the biggest obstacles to AI innovation aren't what you think? Deeploy CEO Maarten Stolk shares his controversial but effective strategies for building successful AI products and ecosystems, challenging the common wisdom around bottom-up initiatives and regulation.

In this episode, we cover:
* Why bottom-up initiatives fail without strong top-down vision.
* The surprising benefits of the EU's AI Act for innovation.
* How to build a thriving AI ecosystem from the ground up.
* The single most important metric for AI observability.

This conversation is for tech leaders, founders, and engineers who want to move beyond AI experiments and build real-world, production-ready systems.

Timestamps:
00:00:00 - Intro
00:00:45 - Why Maarten Started a Dutch AI Hub
00:02:15 - The "Flywheel" Effect Crucial for AI Success
00:04:42 - The Hard Truth: Why the Netherlands is Lagging in AI
00:07:52 - A Controversial Take: The EU AI Act is Actually Good for Everyone
00:11:26 - The Real Bottleneck to Innovation Isn't Regulation
00:14:25 - From POC to Production: Why Top-Down Vision is Non-Negotiable
00:17:13 - A Wake-Up Call for Inexperienced Leadership Teams
00:20:30 - How Winning Companies Use AI to Dominate Their Market
00:23:44 - The Right Way to Learn From Your Competitors
00:27:30 - Maarten Outsourced Core Development to an AI Company
00:31:59 - The #1 Metric You Must Track for AI Observability
00:36:03 - Open-Source vs. Closed-Source: Which AI Model Will Win?
00:40:23 - The Inevitable Crisis That Will Force Innovation
00:42:19 - The Power of Having a Long-Term Personal Vision

#AIStrategy #TechLeadership #Innovation
Robert Bateman is a Senior Partner at Privacy Partnership, which provides consultancy and training on data protection and AI regulation, as well as legal advice via its associated law firm, Privacy Partnership Law. He also hosts The Privacy Partnership Podcast.

This is Robert's third appearance on the show. We have covered three hot topics:

* How far do we take watermarking of AI-generated content under Article 50 of the AI Act?
* How do pre-defined legitimate interest scenarios work under the UK Data (Use and Access) Act?
* What is the tension between the Online Safety Act and the new data protection framework in the UK?

References:

SIGN UP NOW for the Masters of Privacy NYC LIVE recording and networking event on Nov 6 (if you happen to be in town)

* Robert Bateman on LinkedIn
* Robert Bateman on Bluesky
* The Privacy Partnership Podcast
* AI Act (EU Commission's resources)
* Data (Use and Access) Act 2025: data protection and privacy changes
* The EU approach to age verification (EU Commission)
* EU follows UK with age verification in 2026 (PPC Land)
* Wikipedia loses challenge against Online Safety Act verification rules (BBC)
* Robert Bateman: the EDPB's Opinion on auditing subprocessors and the future of Meta's unskippable ads (Masters of Privacy, Nov 2024)
* Robert Bateman: Consent or Pay (Masters of Privacy, Oct 2023)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe
A Clare MEP has questioned whether political lobbying is behind proposed amendments to new artificial intelligence laws. The Artificial Intelligence Act is a European Union regulation which came into force on August 1st, 2024 and establishes a common regulatory and legal framework for AI within the European Union. The provisions of the act are coming into effect gradually and will all be in operation within 36 months of the act's introduction. Speaking in the European Parliament, Scariff-based Independent MEP Michael McNamara said it is highly unusual to propose amending legislation in its infancy.
After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen, discussing the ever-changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety, and our democracy.

Taylor Owen's work focuses on the intersection of media, technology, and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and the author of several books. Taylor also joined me for this discussion more than five years ago now, and a lot has happened in that time.

Upcoming episodes will include guest Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
"How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith 00:00-00:43
Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues, he's been writing about them for many years, and I actually had him on this podcast more than five years ago. He's been a huge part of getting us in Canada to where we are today, and it's up to this government to get us across the finish line. That's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning, and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen 00:43-00:44
It's a different world.

Taylor 00:45-00:45
In some ways.

Nate Erskine-Smith 00:45-01:14
Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way, because there's been lots of consultation, and there have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve?

Taylor Owen 01:14-03:06
Yeah, I mean, many of them are the same, right? This is part of it: the technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these digital technologies that shape so much of our lives, many things are the same. They're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about some of the more acute harms, the real risks to safety: children taking their own lives, violence erupting, political violence emerging. These things have always been present as part of our digital lives, and that's what we were concerned about five years ago. When we talked about those harms, that was roughly the list.

Now, the technologies we were talking about at the time were largely social media platforms, because that was the main way, five years ago, that we shared and consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Those are still prominent — we're still on TikTok and Instagram and Facebook to a certain degree — but we do now have a new layer of AI, and particularly chatbots. And I think a big question we face in this conversation, in this "how do we develop policies that maximize the benefits of digital technologies and minimize the harms," which is all this is trying to do, is: do we need new tools for AI, or are the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith 03:07-03:55
My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online — my kids are growing up online — and our laws need to reflect that reality. All of the challenges you've articulated exist to varying degrees in offline spaces, but the rules we have can be incredibly hard to enforce in the online space, at a minimum, and some rules are not entirely fit for purpose and need to be updated for the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done, and this really stood out: you've got the Hogue Commission saying disinformation is the single biggest threat to our democracy. That's worth pausing on.

Taylor Owen 03:55-04:31
Yeah, exactly. The commission spent a year, at the request of all political parties in parliament and at the urging of the opposition party, looking at a wide range of threats to our democratic systems, originating in foreign countries, that everybody was concerned about. And the conclusion was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. That is a remarkable statement, and it kind of came and went. I don't know why we moved off from that so fast.

Nate Erskine-Smith 04:31-05:17
Well, there's a lot to pull apart there, because you've got purposeful, intentional bad actors and foreign influence operations, but you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Meta blocking news on Facebook and Instagram in Canada. And your research — this was the stat that stood out.
I don't want to put you on the spot and say, like, what do we do? Okay. So you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time –

Taylor Owen 05:17-05:17
A day. Yeah.

Nate Erskine-Smith 05:18-05:18
So, right.

Taylor Owen 05:18-05:27
11 million views a day. And sometimes we go through these things really fast. It's huge. Again: Facebook decides to block news, for 40 million people in Canada. Yeah.

Taylor 05:27-05:29
So 11 million times a day, a Canadian –

Taylor Owen 05:29-05:45
And what that means is, 11 million times a day, a Canadian would open one of their news feeds and see that Canadian journalism has been taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.

Taylor 05:45-05:45
Okay.

Taylor Owen 05:45-05:46
So that's just it.

Nate Erskine-Smith 05:46-06:04
So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.

Taylor Owen 06:04-06:04
They say they get –

Nate Erskine-Smith 06:04-06:05
It doesn't make any sense.

Taylor Owen 06:06-06:23
It doesn't, and it does. It's terrible. They ask Canadians who use social media to get their news: where do you get your news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations – Traditional journalism. There is –

Taylor 06:23-06:23
There is –

Taylor Owen 06:23-06:47
Well, this is what I was going to get at, right? One conclusion, I think, is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Journalism is one provider of news, but so are influencers, so are podcasts — people listening to this. This would probably be labeled news in people's –

Nate Erskine-Smith 06:47-06:48
Can't trust the thing we say.

Taylor Owen 06:48-07:05
Right. And neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, that probably gets labeled in people's minds as news, as opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith 07:05-07:06
It's public affairs content.

Taylor Owen 07:06-07:39
Exactly. So that's one thing that's happening. The other is that there's a generation of creators stepping into this ecosystem, both to fill that void and because they can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%? Well, you're more pessimistic though. I shouldn't have led with the question. 20%.

Taylor 07:39-07:39
Okay.

Taylor Owen 07:39-07:56
So all of journalism combined, in the entire country: 20 percent of engagement. Influencers: 50 percent, in the last election.
So we've shifted, at least on social, the actors and people and institutions that are fostering our public sphere.

Nate Erskine-Smith 07:56-08:09
Is there a middle ground here, where you take some people that play an influencer-type role but also would consider themselves citizen journalists in a way? How do you –

Taylor Owen 08:09-08:31
It's a super interesting question, right? Who – when are these people doing journalism? When are they doing acts of journalism? Someone can do journalism and, 90% of the time, do something else, right? And then maybe they reveal something, or they tell an interesting story that resonates with people, or they interview somebody and it's revelatory, and it's a journalistic act, right?

Taylor 08:31-08:34
Like this is kind of a journalistic act we're playing here.

Taylor Owen 08:35-08:49
So I think these lines are gray. But there are some other underlying things here. It matters, I think, if journalistic institutions go away entirely, right? That's probably not a good thing.

Nate Erskine-Smith 08:49-09:30
Yeah, I mean, that's why I say it's terrifying. There's a lot of good in the digital space — there's creative destruction, there's a lot of work to provide people a direct sense of news without the filter that people may mistrust in traditional media. Having said that, so many resources and so much history sit in these institutions, and there's a real ethics to journalism; journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility of that, is devastating for democracy. I think so.

Taylor Owen 09:30-09:49
And I think the bigger frame of that for me is that a democracy needs signals of reliability. As citizens in a democracy, we need to know broadly — and we're not always going to agree on it — what kind of information we can trust, and how we evaluate whether we trust it.

Nate Erskine-Smith 09:49-10:13
And that is really going away. Pause for a sec. "Signals of reliability" is a good phrase, but what does it mean for a legislator when it comes to putting a rule in place? Because you could imagine a Blade Runner kind of rule that says you've got to distinguish between something that is human-generated

Taylor 10:13-10:14
and something that is machine-generated.

Nate Erskine-Smith 10:15-10:26
That seems straightforward enough. It's a lot harder if you're trying to distinguish between "Taylor, what you're saying is credible" and "Nate, what you're saying is not credible,"

Taylor 10:27-10:27
which is probably true.

Nate Erskine-Smith 10:28-10:33
But how do you have a signal of reliability in a different kind of content?

Taylor Owen 10:34-13:12
I mean, we're getting into journalism policy here to a certain degree, right? And it's a wicked problem, because the primary role of journalism is to hold you personally to account, and you setting rules for what journalists can and can't do and how they can and can't behave touches on some real third rails here. It's fraught. However, I don't think it should ever be about policy determining what can and can't be said, or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. A great example, and a horrible example, happened last week, right? Charlie Kirk gets assassinated.
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I've seen, in a decade of looking at this, of how those algorithmic feeds have become radicalized. All you saw on every platform was the worst possible representation of every view, right? It was truly shocking and horrendous. People defending the murder and people calling for the murder of leftists, on both sides, right? People blaming Israel, people, whatever. And that isn't a function of like – comparing Charlie Kirk to Jesus. Sure. Like – it was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of the journalism being produced about it: the New York Times and citizens were doing good content about what was happening. It was a moment of uncertainty, and journalism was playing a role, but it wasn't driving those feeds. And so I think with all of these questions, including the online harms ones, and how we step into an AI governance conversation, the focus always has to be on those systems. Who and what, and what incentives and technical decisions, determine what we experience when we open these products? These are commercial products that we're choosing to consume, and when we open them, a whole host of business and design and technical and human decisions shape the effect they have on us as people, the effect they have on our democracy, the vulnerabilities that exist in our democracy, and the way foreign or hostile actors can take advantage of them — all that stuff we've been talking about, including the role the reliability of information plays. These algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor 13:12-13:15
That's not a – instead of reactionary –

Taylor Owen 13:15-13:42
Or what gets the most engagement, or what makes you feel the most angry, which is largely what's driving X, for example, right now. You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content: more reliable content, less polarizing content, less hateful content, less harmful content. Those dials can be incentivized to be turned, and that's where the policy space should play, I think.

Nate Erskine-Smith 13:43-14:12
And your focus on systems and assessing risks with systems — I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada: online harms; privacy, with a very kind of vague initial foray into AI regs, which we can get to; and then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen 14:12-14:23
I mean – yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?

Nate Erskine-Smith 14:23-14:33
There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well.
Exactly.

Taylor 14:33-14:35
So the sort of supply-side and demand-side things, right?

Nate Erskine-Smith 14:35-14:38
There's the digital services tax, which is no longer a thing.

Taylor Owen 14:40-14:52
Although it still is a piece of passed legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith 14:52-14:55
I don't take full responsibility for that one.

Taylor Owen 14:55-14:56
No, you shouldn't.

Nate Erskine-Smith 14:58-16:03
But other countries have seen more success. Yeah. You've got the UK, Australia, and the EU, which has really led the way. In 2018, the EU passes the GDPR, a privacy set of rules that we are still behind seven years later. Then in 2022 and 2023, you've got the Digital Services Act and the Digital Markets Act. And as I understand it — we've both been involved in international work on this, and we've heard from folks like Frances Haugen and others about the need for risk-based assessments, and you're well down the rabbit hole on this — isn't it, at a high level: you deploy a technology, you've got to identify material risks, you then have to take reasonable measures to mitigate those risks — that's effectively the duty of care built in — and then, ideally, you've got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate them?

Taylor Owen 16:04-16:05
That's like how I have it in my head.

Nate Erskine-Smith 16:05-16:06
I mean, that's it.

Taylor Owen 16:08-16:14
Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.

Nate Erskine-Smith 16:14-16:25
Exactly. I want to get to that, because C-63 gets us a large part of the way there. I think so. And yet it has been sort of cast aside.

Taylor Owen 16:25-17:39
Exactly. Let's touch on that. But what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time — and it's the thing that's infuriating about digital policy — is that you can't do one thing. The digital economy and our digital lives are so vast, and the incentives and the effects they have on society so broad, that there's no one solution. Anyone who tells you to fix privacy policy and you'll fix all the digital problems we just talked about is full of it. Anyone who says competition policy — break up the companies — will solve all of these problems is wrong. Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They built a big online harms agenda. They updated their competition regime. And they're doing some AI policy too. So you need comprehensive approaches, which is not an easy thing to do. It means doing three big things at once.

Nate Erskine-Smith 17:39-17:41
Especially in minority parliaments, with short periods of time, legislatively.

Taylor Owen 17:41-18:20
Different countries have taken different pieces of it.
Taylor Owen (17:41-18:20): Different countries have taken different pieces of it. Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that: when we talked last, it was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission. A lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and into learning from what Europe, Australia, and the UK had all done. We were taking the benefit of being late, right? They were all ahead of us.

Taylor (18:21-18:25): People you worked with on that grand committee. We were all quick to do our own consultations.

Taylor Owen (18:26-19:40): Exactly. And the model that was developed out of that, I think, was the best model of any of those countries. Interestingly, it's now seen internationally as the new milestone that everybody else is building on. What it does is say: if you're going to launch a digital product in Canada, a consumer-facing product, you need to assess risk. And you need to assess risk against broad categories of harms that we as legislators have decided we care about, or that you as legislators decided you cared about: child safety, child sexual abuse material, fomenting violence and extremist content, broad categories of things we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. So let's use a tangible example. Say you're a social media platform launching a product that's going to be used by kids, and it allows adults to contact kids without parental consent and without kids opting in as friends. What could go wrong with that?

Nate Erskine-Smith (19:40): Yeah.

Taylor (19:40-19:43): What could go wrong? Yeah, a lot could go wrong.

Taylor Owen (19:43-20:27): Maybe strange men will approach teenage girls. If you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it; you put a policy in place and demonstrate it. And then you have to share data about how these tools are used, so that publics and researchers can monitor whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was a widespread problem of teenage girls being harassed by strange older men.

Taylor (20:28-20:29): Incredibly creepy.

Taylor Owen (20:29-20:37): And a very easy case. Not super illegal, not something that would be caught by the Criminal Code, but a harm we can all admit is a problem.

Taylor (20:37-20:41): And this kind of mechanism would have just filtered it out.

Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit, before you launch a product in a country, about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith (20:52-21:05): Yeah. I quite like the... maybe you've got a better read of this, but the UK has pursued this, and California has pursued it.
I was looking recently, and Elizabeth Denham is now the Jersey Information Commissioner, or something like that.

Taylor Owen (21:05-21:06): I know. It's just... yeah.

Nate Erskine-Smith (21:07-21:57): I don't know, it's random. But she is a Canadian, for those who don't know Elizabeth Denham. She was the Information Commissioner in the UK, and she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach, in that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it: just force companies to ensure that the default settings prioritize child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In the previous, offline world of consumer protection, it would have been de facto: of course we've prioritized consumer safety first and foremost. But in the online world, it's an afterthought.

Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth referring back to what we mean, because a duty of care can seem like an obscure concept. But you're a lawyer; it's a real thing, right? I walk into your office, and I have an expectation that the bookshelves aren't going to fall off the wall and kill me. And you have to bolt them into the wall because of that. That is a duty of care that you have to me when I walk into your public or private space. That's all we're talking about here. And the age-appropriate design code, yes, developed and implemented by a Canadian in the UK, was also embedded in the Online Harms Act. If we'd passed that last year, we would be implementing an age-appropriate design code as we speak. What it would say is that any product likely to be used by a kid needs to do a set of additional things, not just these risk assessments. We think kids don't have the same rights as adults; we have different duties to protect them. So maybe companies should do an extra set of things for their digital products. And it includes things like no behavioural targeting, no advertising, no data collection, no sexual adult content. Things that seem obvious. And if you're now a child in the UK and you open a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don't have that, because that bill didn't pass. So there are consequences to this stuff. And I get really frustrated now when I see the conversation pivoting to AI, for example. All we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real; I'm not discounting its power. But we're being asked to just move on from all of the problems, and the solutions that have been developed, for a set of challenges that still exist on social platforms. They haven't gone away. People are still using these tools, the harms still exist, and they're probably applicable to this next set of technologies as well. So this moving on from what we've learned and from the work that's been done, for the people working in this space and the wide set of stakeholders in this country who care about this stuff, it just... you said deja vu at the beginning, and it is deja vu, but it's kind of worse, right? Because it's deja vu and then ignoring the...

Taylor (24:25-24:29): ...five years of work. Yeah, it would be deja vu if we were doing it again. We're not even... we're not even...
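[Editor's note: to make the design-code mechanism Taylor describes concrete, here is a small sketch. One trigger, "likely to be used by a kid", flips a stricter set of defaults. The setting names are illustrative assumptions, not language from the UK code or the Online Harms Act.]

from dataclasses import dataclass

@dataclass
class ProductDefaults:
    behavioural_targeting: bool = True
    advertising: bool = True
    data_collection: bool = True
    adult_content: bool = True
    contact_from_unconnected_adults: bool = True

def apply_design_code(defaults: ProductDefaults,
                      likely_used_by_children: bool) -> ProductDefaults:
    # The trigger is "likely to be used by a kid", not "marketed to kids",
    # which is how a product like Roblox gets caught by the code.
    if likely_used_by_children:
        defaults.behavioural_targeting = False
        defaults.advertising = False
        defaults.data_collection = False
        defaults.adult_content = False
        defaults.contact_from_unconnected_adults = False
    return defaults

print(apply_design_code(ProductDefaults(), likely_used_by_children=True))
# Everything ships off by default; the onus is no longer on parents.

[The design choice worth noticing: safety is the starting state, and anything looser is an explicit decision by the product, not the parent.]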
Taylor Owen (24:29-24:41): Well, yeah. Though I'm actually optimistic, I would say, that we will, for a few reasons. One, citizens want it, right?

Nate Erskine-Smith (24:41-24:57): Yeah. I was surprised by that. So you mentioned that the rules we designed, the risk assessment framework really built for social media, could equally be applied to deliver AI safety, applied to new technology in a useful way.

Taylor Owen (24:58): Some elements of it, exactly.

Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that, because I want to pull the pieces together. I had a constituent come into the office, and he is really, like, super mad. Why is he mad? Does that happen very often? Not as often as you'd think, to be honest. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen (25:25): Okay.

Nate Erskine-Smith (25:25-26:36): Yep. He believes Mark Carney ripped him off. Not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online in which Mark Carney told him to invest money. He invested money, and he's out the 200 bucks, or whatever it was. And I was like, how could you possibly have lost money this way? This was obviously a scam. How could you have been deceived? But then I went and watched the video. And, okay, I'm not going to send the 200 bucks, and I've grown up with the internet, but I can see how... Absolutely. In the same way phone scams and Nigerian-prince emails have their own success rate. This was a very believable video that was obviously AI-generated. So we are going to see rampant fraud, if we aren't seeing it already. We are going to see many challenges with respect to AI safety. Over and above the risk assessment piece, what do we do to address these challenges?

Taylor Owen (26:37-27:04): So that is a huge problem, right? AI video fraud is a huge challenge. When we were monitoring the last election, by far the biggest problem, or vulnerability, was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and morph them into conversations about investment strategies.

Taylor (27:05-27:07): And it was driving people to a crypto scam.

Taylor Owen (27:08-27:11): But it was torquing the political discourse.

Taylor (27:11): That's what it must have been.

Taylor Owen (27:12-27:33): There are other cases of this, but it was running rampant, particularly on Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election. And it's not the end of the world, but it torqued our political debate, and it ripped off some people.

Taylor (27:33-27:38): It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud, absolutely.
So what do you do about that? The head of the Canadian Bankers Association has said there are billions of dollars of AI-based fraud in the Canadian economy right now. So it's a big problem.

Taylor (27:54-27:55): Yeah.

Taylor Owen (27:55-28:46): I actually think there's a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, and you add fraud and AI scams as a category of harm. All of a sudden, if you're Meta and you're operating in Canada during an election, you'd have to do a risk assessment on the AI fraud potential of your product. Responsibility for your platform. Then, when this stuff starts to circulate, we would see it, they'd be called out on it, and they'd have to take it down. Then we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these online harms risk assessment models, and bringing some of the consumer-facing AI products and related harms into the framework.

Nate Erskine-Smith (28:47-30:18): To put it a different way: years ago now, we had this grand committee in the UK holding Facebook and others accountable. It really was created in the wake of the Cambridge Analytica scandal. And the platforms at the time were holding firm to this idea of Section 230 and avoiding host liability, saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument: they completely acknowledged the need to take action when it came to child pornography. So they said, yeah, no liability for us, but of course there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me, from there on out, that there's no real intellectual consistency here. It's more just a question of what should be in the category of things they take responsibility for. Obviously harmful content like that is the obvious first step, obvious to everyone. But there are other categories, and fraud is one. When they're making so much money, when they're investing so much money in AI, when they've ignored privacy protections and everything else throughout the years, we can't leave it up to them. Setting a clear set of rules that says this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms, because there are different speech implications, and democratic implications, to absolute solutions for different kinds of content.

Taylor (30:28-30:30): So child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, including child pornography, and for intimate images and videos shared without consent. For those, the platforms had a different obligation: take it down within 24 hours. And the reason you can do that with those two kinds of content is, one, the AI is actually pretty good at spotting them. It might surprise you, but there are a lot of naked images on the internet that we can train AI with.
So we're actually pretty good at using AI to pull this stuff down. But the bigger reason is that, as a society, I think it's okay to be wrong in the gray area of that speech. If it's debatable whether something is child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story. We do not want to suppress and over-index in the gray area of hate speech, because that would capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah. And I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism: if we know it's fraud, you take it down. Some of these other things we have to approach differently.

Nate Erskine-Smith (32:02-32:24): My last question, really, is to pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government with lots of similar folks around the table, but a new government and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister, a junior minister to industry, but still with a specific portfolio and his own deputy minister, who really wants to be seized with this. And from every conversation I've had with him, he wants to maximize productivity in this country using AI, but he is also cognizant of the risks and wants to address AI safety. So where from here? You've talked in the past about a grander sort of tech accountability and sovereignty act. Do we go piecemeal, a privacy bill here, an AI safety bill and an online harms bill there, disparate pieces? What's the answer?

Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But there are some lessons from the past that this government could take. One is that piecemeal bills that aren't centrally coordinated, with no connectivity between them, end up as piecemeal solutions that are imperfect and would benefit from some cohesiveness. When the previous government released AIDA, the Artificial Intelligence and Data Act, it was really in tension in some real ways with the online harms approach: two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I could tell from the outside. So we need a coordinated, comprehensive approach to digital governance. That's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the idea that he, or that office, could play that role. Because AI is cross-cutting, right? Every department in our federal government touches AI in one way or another, and the governance of AI, and the adoption of AI by society on the other side, is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces, that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
If he pulls in the online harms pieces that aren't related to the Criminal Code, and drops those provisions, says, you know, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to...

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? These are all... So then you have, thematically, a bill that makes sense. And then you can pull in the AI safety piece as well, and it becomes a consumer protection bill for living our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument about whether that should be one bill or several. I think there are cases for both. There's concern about big omnibus bills that do too many things, and too many committees reviewing them, and whatever; that's a machinery-of-government question. But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics. We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is that at the same time as they're being told, by our government and by companies, that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear. Why should I use this thing when I'm seeing some harms, I don't see you guys doing anything about those harms, and I'm seeing some real potential downside for me personally and my family? So even in the adoption frame, I think thinking about data privacy, safety, consumer safety, that's the real frame here. It's citizen safety, consumer safety, using these products. And politically, I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and set the default parental controls on each one.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's always going to be a place with some content I'd prefer he didn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly that YouTube, with its recommendation engine, is not actively promoting terrible content to your 12-year-old.
Taylor Owen (36:59-37:31): Yeah. That's de minimis. Can we just torque this a little bit? So that maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube. Can we just do something? I think that's a reasonable expectation as a citizen. But it requires governance. And it's worth putting a real emphasis on this: one thing we've learned in this moment of repeated deja vu, going back 20 years, really, through our whole experience with social media, is that these companies don't self-govern.

Taylor (37:31): Right.

Taylor Owen (37:32-37:39): We know that indisputably. So to think that AI is going to be different is delusional. No, they'll pursue profit, not the public interest.

Taylor (37:39-37:44): Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly. And the AI companies are even bigger than the last generation, right?

Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives, and their broader long-term goals around AI, are not going to override these safety concerns is naive to the nth degree.

Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this: the goals of realizing the productivity potential of AI and of delivering AI safety are not mutually exclusive or oppositional. Create a sandbox to play in, and companies will be more successful. Provide certainty in regulation, and companies will be more successful. And if people feel safe using these tools, if I feel safe with my kids learning these tools growing up in their classrooms and everything else, adoption rates will soar. Absolutely. And then we'll all benefit.

Taylor Owen (38:38-38:43): They work in tandem, right? I think you can't have one without the other, fundamentally.

Nate Erskine-Smith (38:45-38:49): Well, I hope I don't have to invite you back five years from now for the same conversation.

Taylor Owen (38:49-38:58): Well, I hope you invite me back in five years, but I hope it's to think back on all the legislative successes of the previous five years. That'll be the moment.

Taylor (38:58-38:59): Sounds good. Thanks, David. Thanks.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
Kevin Werbach interviews Dean Ball, Senior Fellow at the Foundation for American Innovation and one of the key shapers of the Trump Administration's approach to AI policy. Ball reflects on his career path from writing and blogging to shaping federal policy, including his role as Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary drafter of the Trump Administration's recent AI Action Plan. He explains how he has developed influence through a differentiated viewpoint: rejecting the notion that AI progress will plateau and emphasizing that transformative adoption is what will shape global competition. He critiques both the Biden administration's “AI Bill of Rights” approach, which he views as symbolic and wasteful, and the European Union's AI Act, which he argues imposes impossible compliance burdens on legacy software while failing to anticipate the generative AI revolution. By contrast, he describes the Trump administration's AI Action Plan as focused on pragmatic measures under three pillars: innovation, infrastructure, and international security. Looking forward, he stresses that U.S. competitiveness depends less on being first to frontier models than on enabling widespread deployment of AI across the economy and government. Finally, Ball frames tort liability as an inevitable and underappreciated force in AI governance, one that will challenge companies as AI systems move from providing information to taking actions on users' behalf. Dean Ball is a Senior Fellow at the Foundation for American Innovation, author of Hyperdimensional, and former Senior Policy Advisor at the White House OSTP. He has also held roles at the National Science Foundation, the Mercatus Center, and Fathom. His writing spans artificial intelligence, emerging technologies, bioengineering, infrastructure, public finance, and governance, with publications at institutions including Hoover, Carnegie, FAS, and American Compass. Transcript https://drive.google.com/file/d/1zLLOkndlN2UYuQe-9ZvZNLhiD3e2TPZS/view America's AI Action Plan Dean Ball's Hyperdimensional blog
Artificial intelligence: the Italian AI law, justice and the courts, and AI in schools. Also: the judicial remedies available for photographic works of art, and Measure No. 277/2025 of the Garante Privacy (the Italian Data Protection Authority) on interpreting the accountability principle. >> Read the article: https://tinyurl.com/bdcrv8b9 >> Discover all Altalex podcasts: https://bit.ly/2NpEc3w
In this episode, we dive into the evolving landscape of industrial AI, starting with a lively Oktoberfest recap before shifting gears to the latest breakthroughs in physics-informed neural networks and user interfaces. We discuss the real-world impact of Europe's AI Act, featuring insights from industry leaders and an in-depth interview with Sampo Leino of MinnaLearn on building AI literacy for enterprises. As we unpack strategic investments, robotics trends, and the challenges of compliance, we question what it means to use AI safely and competitively. Throughout the conversation, we keep it grounded in everyday experience—how regulation, technology, and practical learning are shaping the factories and workplaces of tomorrow. Tune in to hear how we're navigating this complex, fast-moving frontier and what it means for anyone working with AI today.
Why do we need an alternative to the office suites from Microsoft and Google? How do you win large corporate and government contracts in Asia and other regions? Why is Europe losing technology talent? And what impact could regulations like the AI Act or chat control have on startups and internet freedom? Adam Paclt of the technology company IceWarp discussed all this and much more with Martin Petříček, deputy editor-in-chief of the weekly Ekonom, who hosts this episode of the podcast Na vlně podnikání.
After a long parliamentary process, Italy's artificial intelligence law has been definitively approved. With Giusella Finocchiaro, professor of AI Law at the University of Bologna, we discuss the most interesting points and the innovations introduced by this legislation, which sits alongside the European AI Act. We visited the Genoa Boat Show to talk about innovation in boating, together with Prof. Andrea Ratti, who teaches Yacht Design at the Politecnico di Milano and with whom we discussed foils, electric and hydrogen propulsion, and the use of artificial intelligence in the design and industrialization of boating. With Michele Solari of Barchelettriche.com we talked about the electrification of boats, while Carlo Cauchi, CEO of the startup Kosmos, presented an electric waterbike with an innovative "pedal by wire" propulsion system and the use of foils. And, as always, Digital News rounds up the week's most important innovation and technology stories.
Last week there was a full house at the German Bar Association (Deutscher Anwaltverein). There, Schleswig-Holstein's state data protection commissioner Marit Hansen and Hamburg's state data protection commissioner Thomas Fuchs, together with Markus Wünschelbaum, presented the Bridge Blueprint, supported by a wake-up call from Brussels delivered by Leonardo Cervera-Navas, Secretary-General of the EU data protection authority (EDPS). The message of the Bridge Blueprint is simple but explosive: whoever implements the AI Act is, at the same time, acting in compliance with the GDPR. Niko Härting talks with Markus Wünschelbaum about the Bridge Blueprint and some concrete bridges: - Data minimization: if the AI Act permits or even requires the processing of personal data, the principle of data minimization can still be satisfied (Art. 5(1)(c) GDPR). - Legitimate interests: if the AI Act permits or even requires the processing of personal data, the processing can be justified under Art. 6(1)(f) GDPR. - Sensitive data: if the AI Act permits or even requires the processing of personal data, the processing can be justified on grounds of "substantial public interest" under Art. 9(2)(g) GDPR. - Automated decisions: if the AI Act permits or even requires the processing of personal data, this can support the permissibility of automated decisions under Art. 22(2)(b) GDPR. Read more: https://lnkd.in/eY9VU293 DAV press release: https://lnkd.in/e4YerNZE
Broadcasting from Florence and Los Angeles, I Had One of Those Conversations...

You know the kind—where you start discussing one thing and suddenly realize you're mapping the entire landscape of how different societies approach technology. That's exactly what happened when Rob Black and I connected across the Atlantic for the pilot episode of ITSPmagazine Europe: The Transatlantic Broadcast.

Rob was calling from what he optimistically described as "sunny" West Sussex (complete with biblical downpours and Four Seasons weather in one afternoon), while I enjoyed actual California sunshine. But this geographic distance perfectly captured what we were launching: a genuine exploration of how European perspectives on cybersecurity, technology, and society differ from—and complement—American approaches.

The conversation emerged from something we'd discovered at InfoSecurity Europe earlier this year. After recording several episodes together with Sean Martin, we realized we'd stumbled onto something crucial: most global technology discourse happens through an American lens, even when discussing fundamentally European challenges. Digital sovereignty isn't just a policy buzzword in Brussels—it represents a completely different philosophy about how democratic societies should interact with technology.

Rob Black: Bridging Defense Research and Digital Reality

Rob brings credentials that perfectly embody the European approach to cybersecurity—one that integrates geopolitics, human sciences, and operational reality in ways that purely technical perspectives miss. As UK Cyber Citizen of the Year 2024, he's recognized for contributions that span UK Ministry of Defense research on human elements in cyber operations, international relations theory, and hands-on work with university students developing next-generation cybersecurity leadership skills.

But what struck me during our pilot wasn't his impressive background—it was his ability to connect macro-level geopolitical cyber operations with the daily impossible decisions that Chief Information Security Officers across Europe face. These leaders don't see themselves as combatants in a digital war, but they're absolutely operating on front lines where nation-state actors, criminal enterprises, and hybrid threats converge.

Rob's international relations expertise adds crucial context that American cybersecurity discourse often overlooks. We're witnessing cyber operations as extensions of statecraft—the ongoing conflict in Ukraine demonstrates how narrative battles and digital infrastructure attacks interweave with kinetic warfare. European nations are developing their own approaches to cyber deterrence, often fundamentally different from American strategies.

European Values Embedded in Technology Choices

What emerged from our conversation was something I've observed but rarely heard articulated so clearly: Europe approaches technology governance through distinctly different cultural and philosophical frameworks than America. This isn't just about regulation—though the EU's leadership from GDPR through the AI Act certainly shapes global standards. It's about fundamental values embedded in technological choices.

Rob highlighted algorithmic bias as a perfect example. When AI systems are developed primarily in Silicon Valley, they embed specific cultural assumptions and training data that may not reflect European experiences, values, or diverse linguistic traditions.
The implications cascade across everything from hiring algorithms to content moderation to criminal justice applications.

We discussed how this connects to broader patterns of technological adoption. I'd recently written about how the transistor radio revolution of the 1960s paralleled today's smartphone-driven transformation—both technologies were designed for specific purposes but adopted by users in ways inventors never anticipated. The transistor radio became a tool of cultural rebellion; smartphones became instruments of both connection and surveillance.

But here's what's different now: the stakes are global, the pace is accelerated, and the platforms are controlled by a handful of American and Chinese companies. European voices in these conversations aren't just valuable—they're essential for understanding how different democratic societies can maintain their values while embracing technological transformation.

The Sociological Dimensions Technology Discourse Misses

My background in political science and sociology of communication keeps pulling me toward questions that pure technologists might skip: How do different European cultures interpret privacy rights differently? Why do Nordic countries approach digital government services so differently than Mediterranean nations? What happens when AI training data reflects primarily Anglo-American cultural assumptions but gets deployed across 27 EU member states with distinct languages and traditions?

Rob's perspective adds the geopolitical layer that's often missing from cybersecurity conversations. We're not just discussing technical vulnerabilities—we're examining how different societies organize themselves digitally, how they balance individual privacy against collective security, and how they maintain democratic values while defending against authoritarian digital influence operations.

Perhaps most importantly, we're both convinced that the next generation of European cybersecurity leaders needs fundamentally different skills than previous generations. Technical expertise remains crucial, but they also need to communicate complex risks to non-technical decision-makers, operate comfortably with uncertainty rather than seeking perfect solutions, and understand that cybersecurity decisions are ultimately political decisions about what kind of society we want to maintain.

Why European Perspectives Matter Globally

Europe represents 27 different nations with distinct histories, languages, and approaches to technology governance, yet they're increasingly coordinating digital policies through EU frameworks. This complexity is fascinating and the implications are global. When Europe implements new AI regulations or data protection standards, Silicon Valley adjusts its practices worldwide.

But European perspectives are too often filtered through American media or reduced to regulatory footnotes in technology publications. We wanted to create space for European voices to explain their approaches in their own terms—not as responses to American innovation, but as distinct philosophical and practical approaches to technology's role in democratic society.

Rob pointed out something crucial during our conversation: we're living through a moment where "every concept that we've thought about in terms of how humans react to each other and how they react to the world around them now needs to be reconsidered in light of how humans react through a computer mediated existence."
This isn't abstract philosophizing—it's the practical challenge facing policymakers, educators, and security professionals across Europe.

Building Transatlantic Understanding, Not Division

The "Transatlantic Broadcast" name reflects our core mission: connecting perspectives across borders rather than reinforcing them. Technology challenges—from cybersecurity threats to AI governance to digital rights—don't respect national boundaries. Solutions require understanding how different democratic societies approach these challenges while maintaining their distinct values and traditions.

Rob and I come from different backgrounds—his focused on defense research and international relations, mine on communication theory and sociological analysis—but we share curiosity about how technology shapes society and how society shapes technology in return. Sean Martin brings the American cybersecurity industry perspective that completes our analytical triangle.

Cross-Border Collaboration for European Digital Future

This pilot episode represents just the beginning of what we hope becomes a sustained conversation. We're planning discussions with European academics developing new frameworks for digital rights, policymakers implementing AI governance across member states, industry leaders building privacy-first alternatives to Silicon Valley platforms, and civil society advocates working to ensure technology serves democratic values.

We want to understand how digital transformation looks different across European cultures, how regulatory approaches evolve through multi-stakeholder processes, and how European innovation develops characteristics that reflect distinctly European values and approaches to technological development.

The Invitation to Continue This Conversation

Broadcasting from our respective sides of the Atlantic, we're extending an invitation to join this ongoing dialogue. Whether you're developing cybersecurity policy in Brussels, building startups in Berlin, teaching digital literacy in Barcelona, or researching AI ethics in Amsterdam, your perspective contributes to understanding how democratic societies can thrive in an increasingly digital world.

European voices aren't afterthoughts in global technology discourse—they're fundamental contributors to understanding how diverse democratic societies can maintain their values while embracing technological change. This conversation needs academic researchers, policy practitioners, industry innovators, and engaged citizens from across Europe and beyond.

If this resonates with your own observations about technology's role in society, subscribe to follow our journey as we explore these themes with guests from across Europe and the transatlantic technology community.

And if you want to dig deeper into these questions or share your own perspective on European approaches to cybersecurity and technology governance, I'd love to continue the conversation directly. Get in touch with us on LinkedIn!

Marco Ciappelli, Broadcasting from Los Angeles (USA) & Florence (IT). On LinkedIn: https://www.linkedin.com/in/marco-ciappelli
Rob Black, Broadcasting from London (UK). On LinkedIn: https://www.linkedin.com/in/rob-black-30440819
Sean Martin, Broadcasting from New York City (USA). On LinkedIn: https://www.linkedin.com/in/imsmartin

The transatlantic conversation about technology, society, and democratic values starts now.
In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).
Can Europe keep up in the global AI race, or is it falling behind? In this episode of The Agenda Podcast, Elisabeth L'Orange, Partner at Deloitte and named by Manager Magazin among the 15 leading women in artificial intelligence, talks with Sherpany podcast host Ingo Notthoff about the challenges and opportunities of artificial intelligence in Europe. Elisabeth L'Orange examines the current AI Act and its effects, discusses the innovative strength of European companies, and shares lessons from her hands-on experience. She also offers valuable recommendations for companies that want to implement AI successfully. Takeaways from this episode:
The US and China lead in AI innovation
Why Europe lags behind in AI innovation, and where its strengths lie
Acceptance of AI within companies is often low
Companies must implement AI strategically
The impact of the AI Act on corporate innovation
Supervisory boards should take an active role in the AI transformation
How the use of AI can be measured with KPIs
Learning from medical AI applications
Supervisory boards and executive boards must use AI themselves
Thank you for listening! Visit us at Sherpany.com or follow us on LinkedIn for board, board committee, and executive meetings solutions.
AI is rapidly becoming part of the healthcare system, powering everything from diagnostic tools and medical devices to patient monitoring and hospital operations. But while the potential is extraordinary, the risks are equally stark. Many hospitals are adopting AI without the safeguards needed to protect patient safety, leaving critical systems exposed to threats that most in the sector have never faced before. In this episode of Tech Talks Daily, I speak with Ty Greenhalgh, Healthcare Industry Principal at Claroty, about why healthcare's AI rush could come at a dangerous cost if security does not keep pace. Ty explains how novel threats like adversarial prompts, model poisoning, and decision manipulation could compromise clinical systems in ways that are very different from traditional cyberattacks. These are not just theoretical scenarios. AI-driven misinformation or manipulated diagnostics could directly impact patient care. We explore why the first step for hospitals is building a clear AI asset inventory. Too many organizations are rolling out AI models without knowing where they are deployed, how they interact with other systems, or what risks they introduce. Ty draws parallels with the hasty adoption of electronic health records, which created unforeseen security gaps that still haunt the industry today. With regulatory frameworks like the EU's AI Act and emerging UK AI rules approaching, Ty stresses that hospitals cannot afford to wait for legislation. Immediate action is needed to implement risk frameworks, strengthen vendor accountability, and integrate real-time monitoring of AI alongside legacy devices. Only then can healthcare organizations gain the trust and resilience needed to safely embrace the benefits of AI. This is a timely conversation for leaders across healthcare and cybersecurity. The sector is on the edge of an AI revolution, but the choices made now will determine whether that revolution strengthens patient care or undermines it. You can learn more about Claroty's approach to securing healthcare technology at claroty.com.
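[Editor's note: a hedged illustration of that first step. An AI asset inventory can start as nothing more than a structured record per deployed model, just enough to surface what is running unmonitored. The schema below is an assumption for illustration, not Claroty's product or any industry standard.]

from dataclasses import dataclass, asdict
import json

@dataclass
class AIAsset:
    name: str                # model or AI-enabled device
    location: str            # where it is deployed
    connected_systems: list  # what it interacts with
    vendor: str
    clinical_impact: str     # e.g. "diagnostic", "monitoring", "operations"
    monitored: bool          # is it under real-time monitoring?

inventory = [
    AIAsset("sepsis-risk-model", "ICU ward server",
            ["EHR", "bedside monitors"], "ExampleVendor",
            "diagnostic", monitored=False),
]

# Unmonitored clinical AI is exactly the gap an inventory is meant to surface.
gaps = [a.name for a in inventory if not a.monitored]
print(json.dumps([asdict(a) for a in inventory], indent=2))
print("needs monitoring:", gaps)

[Even a toy record like this makes the gap visible: you cannot monitor, or hold a vendor accountable for, a model you have not written down.]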
In this episode of The AI Report, an AI showdown: Musk accuses Apple and OpenAI of rigging the future, and he's suing. Artie Intel and Micheline Learning report on all things artificial intelligence. Today, the duo explores breakthrough research in neuro-symbolic systems, the rollout of the EU's AI Act, and how innovation, regulation, and everyday life are colliding in real time.
Marc talks with Dr. Constantin Rehaag, Partner at Dentons, about his varied path from reserve officer to IP litigator with a criminal-law focus. The conversation covers the fight against trademark and product piracy, international customs practice, life-threatening counterfeit medicines, and the use of AI in large law firms, including insights into internal AI guidelines, legal-tech investments, and changing training models for young lawyers. How can counterfeits be spotted just from the flood of parcels? What opportunities does the European AI Act open up for lawyers? Why does Constantin prefer applicants whose second state exam is the stronger one? Why should partners in law firms work longer? You'll get answers to these and many other questions in this episode of IMR. Enjoy!
This time: the Motorists party has a new auto body mechanic; the chairman of Stačilo, Vidlák, is by all appearances being paid by a Slovak billionaire with ties to Russia and Babiš; and we quickly run through news from the "Reich," where, among other things, arrests of Democratic lawmakers are looming in Texas. We note that the AI Act is coming into force in the EU, and that this is a good thing, and we close with a comparison of the American and Chinese AI action plans. Full episodes at https://www.herohero.co/kanarcivsiti . Or buy our t-shirts at https://www.neverenough.shop/kanarci . The podcast is brought to you by @alexalvarova and @holyj . Music and sound engineering: Psyek. Twitter Spaces hosted by @jiribulan . Find us at www.kanarci.online
In this thought-provoking episode, Cresset Group's CSO, Mark Mackey, and VP of AI, Mutlu Dogruel, dissect the emerging geopolitical battle for AI supremacy, revealing how China's DeepSeek model fundamentally disrupted the AI landscape by achieving GPT-4 level performance for just $6 million versus OpenAI's reported $100+ million investment. The discussion exposes a critical divide in global AI governance: Europe's "regulatory fortress" approach with the AI Act creates strict but vague rules that leave companies navigating a complex maze, while the US employs a "Wild West" philosophy of sector-specific oversight and red-teaming strategies. In the race for global AI dominance, China is leaping ahead of the US and the EU.
This episode is presented in partnership with Cresset.
Host: Lori Ellis, Head of Insights, BioSpace
Guests: Mutlu Dogruel, VP of AI, Cresset; Mark Mackey, CSO, Cresset
Disclaimer: The views expressed in this discussion by guests are their own and do not represent those of their organizations.
In this episode, Ricardo discusses the impact of the AI Act, the European Union's artificial intelligence regulation, on general-purpose AI models. The law, passed in 2024 and fully in force in 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects that use these AIs, even as a simple integration, must also follow requirements on ethics, privacy, and transparency. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo emphasizes that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact
New rules came into force requiring makers of artificial intelligence models to ensure their systems are safe and transparent. While the EU's AI Act means the Government must police the companies providing AI tools, the US vows to cut red tape for the sector. RTÉ Work and Technology Correspondent Brian O'Donovan reported on the latest in tech.
In this episode of Top in Tech, Adriana Capparelli, Practice Director, and Anna Lisa Schäfer-Gehrau, Associate, discuss the latest developments in digital policy from Brussels, focusing on the Digital Fairness Act (DFA) and its implications for child protection and fairness in digital environments. They explore the intersection of the DFA with existing regulations like the Digital Services Act (DSA) and the AI Act, the scope of the DFA, and the ongoing consultation process. The conversation also highlights international regulatory trends and the importance of stakeholder engagement as the EU moves towards significant legislative changes in the coming months. Top in Tech will take a short break for August and return in September. Hosted on Acast. See acast.com/privacy for more information.
In today's Cloud Wars Minute, I spotlight a powerful joint message from two of Europe's top tech CEOs, Siemens CEO Roland Busch and SAP CEO Christian Klein, who are urging the European Union to reconsider its AI legislation.Highlights00:03 — The CEOs of Siemens and SAP have urged the EU to reconsider its AI legislation, expressing concerns that the current law hinders innovation. SAP CEO Christian Klein and Siemens CEO Roland Busch are advocating for a new regulatory framework that supports AI innovation rather than stifling it. Busch cited the EU's AI Act as the reason that Europe is falling behind.00:46 — Klein echoed these concerns. Both CEOs believe the first step should be reevaluating the existing legislation affecting AI innovators in Europe. While the AI Act is designed to ensure the safe development and deployment of AI technologies, it also risks stifling innovation.01:17 — This can lead to monopolization and hold back key innovators in the EU who are working collectively with their global partners to create what will become the cornerstones of the AI era. And there's a strong case for collaboration here. Both tech companies and legislators need to align in establishing the foundations for a thriving yet safe AI operations landscape. Visit Cloud Wars for more.
Today's episode is brought to you in partnership with The Observer and Tortoise, who just released a brand new investigative podcast series called "The Lab Detective". "The Lab Detective" deals with the story of Kathleen Folbigg, who experienced unimaginable loss. All four of her infant children died suddenly, over a 10-year period. Their deaths were unexplained. Until a police detective turned up at her door, and arrested Kathleen on suspicion of murder. Kathleen was labelled 'Australia's worst female serial killer', convicted, and was destined to spend most of the rest of her life in jail. Until a different type of detective entered her life: a scientist working in a lab who uncovered the truth behind the deaths. This is the story of a shocking miscarriage of justice, but as journalist Rachel Sylvester starts to investigate how this happened, she learns that Kathleen is not alone; other mothers have endured the same fate, and may also be in prison. The Lab Detective is a story about the power of science, and the determination of those searching for the truth. Follow the link to listen to the full series: https://lnk.to/thelabdetectiveFD!europod.
About today's episode on Long Story Short: Google has confirmed, today, it will sign the European Union's Code of Practice for General Purpose AI. The Code is a voluntary rulebook designed to help companies comply with the EU's AI Act, a landmark law that regulates artificial intelligence based on the risks it poses to society. But critics ask: is Europe protecting its citizens or putting itself at a disadvantage in the global AI race? Join us on our journey through the events that shape the European continent and the European Union.
Production: By Europod, in co-production with Sphera Network. Follow us on: LinkedIn, Instagram. Hosted on Acast. See acast.com/privacy for more information.
As the European "AI Act" regulation gradually comes into force, listen again to this straight-talking perspective from Luc Julia, artificial intelligence expert, co-inventor of Siri, and scientific director at Renault. A hard-hitting conversation about sovereignty, regulation... and the future of agentic AI. Rebroadcast of 06/03/2025. In this episode, Luc Julia revisits:
The AI summit in Paris: between ambitious budget announcements and uncertain realities on the ground.
The regulation of artificial intelligence: why trying to regulate too early could hold back innovation.
Europe's lag in chips and data centers.
The importance of supporting French startups through the scale-up stage.
The potential of open-source AI: more transparent, more frugal, and better suited to "at-home" AI.
The rise of agentic AI: toward more specialized, orchestrated systems capable of acting on the world, particularly in robotics.
-----------
A win for European AI regulation: after earlier resistance, Google has now agreed to the new guidelines after all. Joe van Burik covers it in this Tech Update. Also in this Tech Update: In Italy, AI is to help fight climate change by predicting landslides and mudflows. And a scandal in the smart sex toy category at maker Lovense. See omnystudio.com/listener for privacy information.
In this episode, Steve speaks with Dragos Tudorache, one of the members of the European Parliament who is responsible for writing the EU's AI Act. Dragos explains the thought process that went into developing the new law and tells Steve what organisations can expect and how they can prepare for its implementation. Mentioned in and related to this episode: ISF Podcast: Ellie Pavlick - Balancing the Risk and Reward of AI ISF Podcast: The Ethical Dilemma of AI & Innovation ISF Podcast: Beyond Buzzwords: AI, ML, and the Future of Cyber ISF Podcast: Mo Gawdat: Rethinking the Paradigm of Artificial and Human Intelligence ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.
While I'm still in Beirut, this new episode of the Debrief Transat with Bruno Guglielminetti is an opportunity to explore how artificial intelligence is taking hold... even where you least expect it. In Lebanon, despite a tense political and economic context, local initiatives are emerging to democratize AI, raise awareness among young startups, and try to slow the brain drain. A conference called AI and Lebanon reflects this energy. We also look at the explosion in uses of ChatGPT and personalized AI agents. I share my very concrete tests of ChatGPT's new AI Agent, capable of placing orders on Uber Eats or analyzing my calendar. A glimpse of the future of digital assistance. We also talk about Grok, Elon Musk's AI, which now comes in avatar form aimed at children, a controversial initiative that raises ethical questions. Then, a focus on the European AI Act coming into force in August: a new phase of regulation that could mark a turning point for innovation in Europe. Finally, we run through what's coming up in our respective weekly podcasts.
-----------
From Beirut, Jérôme Colombain recounts his discovery of an AI conference in Lebanon, featuring a traveling bus that raises awareness of technology and a drive to stem the brain drain. He also shares his test of the new GPT agent, capable of placing an order on Uber Eats or analyzing a Google calendar in detail. Then a discussion of Grok's virtual avatars, including a controversial project aimed at children, and of concerns about American laxity on AI compared with the forthcoming European AI Act.
Between a Donald Trump who promises a constraint-free future for AI and a Europe entering the era of regulation with the AI Act, the transatlantic divide is widening. ChatGPT 5 is on the horizon, cyberattacks are multiplying, and sport is entering a new era with AI-generated commentary.
In this episode, Gil Katz shows us how Intelsia uses artificial intelligence to turn scattered public data into a strategic lever for businesses. Matching companies to calls for tender, custom RAG pipelines, intelligent agents… a real return to fundamentals: creating value, not hype.
Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act's implementation timeline, with some calling to “stop the clock” on the AI Act's rollout. To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.
Will EU cybersecurity laws result in new global standards? Should companies handle NIS2 compliance in concert with GDPR, AI Act, or Data Act requirements? Does it make sense to take data localization to its ultimate consequences? Nathalie Barrera serves as the Director for Privacy for the EMEA region at Palo Alto Networks, a leading provider of cybersecurity solutions. Her expertise involves the company's compliance with NIS2, the AI Act, the GDPR, and DORA. She also assists customers in navigating their own complex regulatory requirements. She previously spent seven years at Cisco Systems as commercial counsel and Privacy and Security Counsel. She studied law and completed her LLM at the University of Navarra. References:
Nathalie Barrera on LinkedIn
EU Network and Information Systems Directive II (NIS2)
EU Data Act
EU Digital Operational Resilience Act (DORA)
Breitband - Media and Digital Culture (full episode) - Deutschlandfunk Kultur
More than 100 companies are calling for a two-year suspension of the EU's AI Act (KI-Verordnung). Is the EU legislation hostile to business, or a safeguard for fundamental rights? Also: China's perspective on the US TikTok law. And: what remains of the metaverse hype? Vera Linß, Hagen Terschüren, Katharin Tai, Dennis Kogel, Marcus Richter www.deutschlandfunkkultur.de, Breitband
i'm wall-e, welcoming you to today's tech briefing for monday, july 7th. here's what's happening in the tech world: microsoft's strategic shift: microsoft is winding down its pakistan operations after 25 years, maintaining customer agreements through resellers and neighboring offices, impacting five employees. rivian's major investment: rivian receives a significant $1 billion investment from volkswagen, while techcrunch mobility plans a move to the beehiiv platform, enhancing reader engagement. uber's potential deal: speculation arises around uber's travis kalanick exploring a deal with the u.s. arm of pony ai, indicating fluid movements in the mobility sector. eu's ai act progresses: the eu is advancing its ai act, planning a risk-based regulatory framework by 2026, addressing "unacceptable risk" and transparency for "high-risk" ai applications. gallant's advancement in pet healthcare: gallant secures $18 million for ready-to-use stem cell therapies targeting pet conditions, marking a breakthrough in biotech solutions for veterinary medicine. that's all for today. we'll see you back here tomorrow.
From February 16, 2024: The EU has finally agreed to its AI Act. Despite the political agreement reached in December 2023, some nations maintained reservations about the text, leaving it uncertain whether a final deal existed. They have since agreed on the technical text, moving the process closer to a successful conclusion. The challenge now will be effective implementation. To discuss the act and its implications, Lawfare Fellow in Technology Policy and Law Eugenia Lostri sat down with Itsiq Benizri, counsel at the law firm WilmerHale Brussels. They discussed how domestic politics shaped the final text, how governments and businesses can best prepare for the new requirements, and whether the European act will set the international roadmap for AI regulation. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Taiwan Semiconductor Manufacturing is delaying construction of a second plant in Japan, Ilya Sutskever announced he will take on the CEO role at his AI startup, Safe Superintelligence, and the EU is proceeding with its AI Act despite tech companies’ efforts to delay it. Please SUBSCRIBE HERE for free or get DTNS Live ad-free.
This Thursday, July 3, François Sorel welcomed Lucas Perraudin, founder of AI Partners and former director of Meta Reality Labs; Jérôme Marin, founder of Cafetech.fr; and Didier Sanz, a journalist specializing in computing. They dug into Perplexity's launch of a $200-per-month AI subscription, the broad corporate push to postpone AI regulation in Europe, the competition between Starlink and Eutelsat to equip TGV trains with Wi-Fi, and the partnership between France TV and Amazon Prime Video, on Tech & Co, la quotidienne, on BFM Business. Catch the show Monday through Thursday, and listen again as a podcast.
John Pavolotsky is a partner at Stoel Rives in San Francisco. He is co-chair of the firm's AI, Privacy & Cybersecurity group and focuses his practice on data privacy, information security, and complex technology transactions. He has also been chair of the Intellectual Property Section of the California Lawyers Association. John has taught Technology Transactions Law at the UC Davis School of Law and Comparative Privacy Law at the Santa Clara University School of Law. He has also guest lectured on technology and privacy law topics at the University of California, Berkeley, Haas School of Business; the University of San Francisco School of Management; and Stanford University. References:
John Pavolotsky on LinkedIn
John Pavolotsky at Stoel Rives
Timeline of discussions (House, Senate) leading to a final decision on a 10-year moratorium on state-level AI laws (final deadline: July 4, 2025), TechCrunch
Texas Legislature Passes House Bill 149 to Regulate AI Use (Nelson Mullins)
Colorado AI Act
California Privacy Protection Agency: Draft Automated Decision-making Technology Regulations
California Gov. Newsom vetoes AI safety bill that divided Silicon Valley (September 2024), NPR
Poland puts pausing enforcement of the AI Act on EU ministers' table (June 2025, MLex - paywalled)
A Brief Overview of the Federal Trade Commission's Investigative, Law Enforcement, and Rulemaking Authority (FTC)
Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world's first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She traces the field from its early days, when it was dominated by philosophical debates, to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without safeguards is eroding public confidence. Drawing on her work with Fortune 500 firms and her own cancer journey, she argues for human-centered AI, especially in high-stakes areas like healthcare and law. She also underscores the equity issues tied to biased training data and lack of access in the Global South, noting that AI is now generating data based on historical biases. Despite these challenges, she remains optimistic and calls for greater focus on sustainability, access, and AI literacy across sectors. Kay Firth-Butterfield is the founder and CEO of Good Tech Advisory LLC. She was the world's first C-suite appointee in AI ethics and was the inaugural Head of AI and Machine Learning at the World Economic Forum from 2017 to 2023. A former judge and barrister, she advises governments and Fortune 500 companies on AI governance and remains affiliated with Doughty Street Chambers in the UK.
Transcript
Kay Firth-Butterfield Is Shaping Responsible AI Governance (Time100 Impact Awards)
Our Future with AI Hinges on Global Cooperation
Building an Organizational Approach to Responsible AI
Co-Existing with AI - Firth-Butterfield's Forthcoming Book
The European Union is considering delaying some obligations of the upcoming AI Act. Rules on governance and general-purpose models might not take effect until 2026. Hosted by Acast. Visit acast.com/privacy for more information.
In Europe, the digital regulatory landscape is in flux. Over the past few years, the EU has positioned itself as a global leader in tech regulation, rolling out landmark laws like the AI Act. But now, as the much-anticipated AI Act approaches implementation, the path forward is looking anything but smooth. Reports suggest the European Commission is considering a delay to the AI Act's rollout due to mounting pressure from industry, difficulties in finalizing technical standards, and geopolitical tensions, including pushback from the US government. At the same time, a broader movement for Europe to reduce its dependence on American tech is gaining momentum. What does this push for digital sovereignty actually mean? To help us unpack all of this, Tech Policy Press associate editor Ramsha Jahangir spoke to Kai Zenner, Head of Office and Digital Policy Advisor to German MEP Axel Voss, and one of the more influential voices shaping the future of EU digital policy.
In a world racing into the AI era, security, privacy, and trust are under pressure like never before. Chief Security Officers are facing a whole new battlefield, with laws rapidly taking shape, especially in the U.S. and California. Big changes are on the horizon, like California's new SB 468 bill, which pushes for stronger protections around high-risk AI systems, while the EU's AI Act moves in a similar direction. So, what does this mean for CISOs today, and how should they prepare? Join Punit Bhatia in conversation with Nick Shevelyov, a seasoned cybersecurity leader, former Chief Security Officer at Silicon Valley Bank, and author of Cyber War and Peace. With years of experience defending innovation and building digital trust, Nick shares real stories, powerful lessons from history, and sharp insights into where the future is heading. This is your sneak peek into the conversation; stay tuned for the full episode.