Podcasts about the AI Act

  • 577 podcasts
  • 1,119 episodes
  • 35m average duration
  • 1 daily new episode
  • Latest episode: Oct 1, 2025

Popularity: 2017-2024


Best podcasts about the AI Act

Latest podcast episodes about the AI Act

Beyond Coding
Why Your AI Experiments Won't Scale


Oct 1, 2025 (44:08)


What if the biggest obstacles to AI innovation aren't what you think? Deeploy CEO Maarten Stolk shares his controversial but effective strategies for building successful AI products and ecosystems, challenging the common wisdom around bottom-up initiatives and regulation.

In this episode, we cover:
  • Why bottom-up initiatives fail without strong top-down vision.
  • The surprising benefits of the EU's AI Act for innovation.
  • How to build a thriving AI ecosystem from the ground up.
  • The single most important metric for AI observability.

This conversation is for tech leaders, founders, and engineers who want to move beyond AI experiments and build real-world, production-ready systems.

Timestamps:
00:00:00 - Intro
00:00:45 - Why Maarten Started a Dutch AI Hub
00:02:15 - The "Flywheel" Effect Crucial for AI Success
00:04:42 - The Hard Truth: Why the Netherlands is Lagging in AI
00:07:52 - A Controversial Take: The EU AI Act is Actually Good for Everyone
00:11:26 - The Real Bottleneck to Innovation Isn't Regulation
00:14:25 - From POC to Production: Why Top-Down Vision is Non-Negotiable
00:17:13 - A Wake-Up Call for Inexperienced Leadership Teams
00:20:30 - How Winning Companies Use AI to Dominate Their Market
00:23:44 - The Right Way to Learn From Your Competitors
00:27:30 - Maarten Outsourced Core Development to an AI Company
00:31:59 - The #1 Metric You Must Track for AI Observability
00:36:03 - Open-Source vs. Closed-Source: Which AI Model Will Win?
00:40:23 - The Inevitable Crisis That Will Force Innovation
00:42:19 - The Power of Having a Long-Term Personal Vision

#AIStrategy #TechLeadership #Innovation

Masters of Privacy
Robert Bateman: AI watermarking, recognized legitimate interests and age verification in the UK


Sep 28, 2025 (29:58)


Robert Bateman is a Senior Partner at Privacy Partnership, which provides consultancy and training on data protection and AI regulation, as well as legal advice via its associated law firm, Privacy Partnership Law. He also hosts The Privacy Partnership Podcast.

This is Robert's third appearance on the show. We have covered three hot topics:
  • How far do we take watermarking of AI-generated content under Article 50 of the AI Act?
  • How do pre-defined legitimate interest scenarios work under the UK Data (Use and Access) Act?
  • What is the tension between the Online Safety Act and the new data protection framework in the UK?

References:
SIGN UP NOW for the Masters of Privacy NYC LIVE recording and networking event on Nov 6 (if you happen to be in town)
  • Robert Bateman on LinkedIn
  • Robert Bateman on Bluesky
  • The Privacy Partnership Podcast
  • AI Act (EU Commission's resources)
  • Data (Use and Access) Act 2025: data protection and privacy changes
  • The EU approach to age verification (EU Commission)
  • EU follows UK with age verification in 2026 (PPC Land)
  • Wikipedia loses challenge against Online Safety Act verification rules (BBC)
  • Robert Bateman: the EDPB's Opinion on auditing subprocessors and the future of Meta's unskippable ads (Masters of Privacy, Nov 2024)
  • Robert Bateman: Consent or Pay (Masters of Privacy, Oct 2023)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe

Clare FM - Podcasts
Clare MEP Questions Proposals To Amend AI Act


Sep 27, 2025 (5:33)


A Clare MEP has questioned whether political lobbying is behind proposed amendments to new artificial intelligence laws. The Artificial Intelligence Act is a European Union regulation which came into force on August 1st, 2024, and establishes a common regulatory and legal framework for AI within the European Union. The provisions of the act are coming into effect gradually and will all be in operation within 36 months of the act's introduction. Speaking in the European Parliament, Scariff-based Independent MEP Michael McNamara says it's highly unusual to propose amending legislation in its infancy.

Uncommons with Nate Erskine-Smith
The Future of Online Harms and AI Regulation with Taylor Owen


Sep 26, 2025 (39:00)


After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen, discussing the ever-changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and the author of several books. Taylor also joined me for this discussion more than five years ago now, and a lot has happened in that time.

Upcoming episodes will include guest Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
  • Public rules for big tech platforms with Taylor Owen (Uncommons Podcast)
  • "How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
  • Machines Like Us Podcast
  • Bill C-63

Transcript:

Nate Erskine-Smith (00:00-00:43): Welcome to Uncommons, I'm Nate Erskine-Smith.
This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen (00:43-00:44): It's a different world.

Taylor (00:45-00:45): In some ways.

Nate Erskine-Smith (00:45-01:14): Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?

Taylor Owen (01:14-03:06): I mean, this is part of it: the technology moves fast. But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections.
We're worried about sort of some of the more acute harms, like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way, five years ago, that we shared and consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI, and particularly chatbots. And I think a big question we face in this conversation is: how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are some of the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith (03:07-03:55): My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online, and our laws need to reflect that reality. All of the challenges you've articulated exist to varying degrees in offline spaces, but the rules we have can be incredibly hard to enforce, at a minimum, in the online space. And then some rules are not entirely fit for purpose and they need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out.
So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.

Taylor Owen (03:55-04:31): Yeah, exactly. Like, the commission spent a year, at the request of all political parties in parliament, at the urging of the opposition party, looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like, that is a remarkable statement, and it kind of came and went. And I don't know why we moved off from that so fast.

Nate Erskine-Smith (04:31-05:17): Well, and there's a lot to pull apart there, because you've got purposeful, intentional bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook and Instagram, through Meta, block news in Canada. And your research, this was the stat that stood out. I don't want to put you on the spot and say, what do we do? Okay. So you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.

Taylor Owen (05:17-05:17): A day. Yeah.

Nate Erskine-Smith (05:18-05:18): So right.

Taylor Owen (05:18-05:27): 11 million views a day. And sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.

Taylor (05:27-05:29): So 11 million times a Canadian.

Taylor Owen (05:29-05:45): And what that means is 11 million times a day, a Canadian would open one of their news feeds and see Canadian journalism taken out of the ecosystem. And it was replaced by something. People aren't using these tools less.
So that journalism was replaced by something else.

Taylor (05:45-05:45): Okay.

Taylor Owen (05:45-05:46): So that's just it.

Nate Erskine-Smith (05:46-06:04): So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.

Taylor Owen (06:04-06:04): They say they get...

Nate Erskine-Smith (06:04-06:05): It doesn't make any sense.

Taylor Owen (06:06-06:23): It doesn't, and it does. It's terrible. They ask Canadians who use social media to get their news: where do you get your news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations... Traditional journalism. There is...

Taylor (06:23-06:23): There is...

Taylor Owen (06:23-06:47): Well, this is what I was going to get at, right? Like, one conclusion, I think, is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like, this would probably be labeled news in people's...

Nate Erskine-Smith (06:47-06:48): Can't trust the thing we say.

Taylor Owen (06:48-07:05): Right. And like, neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith (07:05-07:06): It's public affairs content.

Taylor Owen (07:06-07:39): Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to fill that void, and that can use these tools much more effectively.
So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.

Taylor (07:39-07:39): Okay.

Taylor Owen (07:39-07:56): So all of journalism combined, in the entire country: 20 percent of engagement. Influencers: 50 percent in the last election. So we've shifted, at least on social, the actors and people and institutions that are fostering our public.

Nate Erskine-Smith (07:56-08:09): Is there a middle ground here, where you take some people that play an influencer-type role but also would consider themselves citizen journalists in a way? How do you... It's a super interesting question, right?

Taylor Owen (08:09-08:31): Like, when are these people doing journalism? When are they doing acts of journalism? Someone can do journalism and 90% of the time do something else, right? And then maybe they reveal something, or they tell an interesting story that resonates with people, or they interview somebody and it's revelatory, and it's a journalistic act, right?

Taylor (08:31-08:34): Like this is kind of a journalistic act we're playing here.

Taylor Owen (08:35-08:49): So I don't think... I think these lines are gray. But I mean, there's some other underlying things here, like it matters, I think, if journalistic institutions go away entirely, right? Like that's probably not a good thing. Yeah, I mean, that's why

Nate Erskine-Smith (08:49-09:30): I say it's terrifying. There's a lot of good in the digital space; there's creative destruction; there's a lot of work to provide people a direct sense of news without that filter that people may mistrust in traditional media.
Having said that, there are so many resources and so much history to these institutions, and there's a real ethics to journalism; journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that, is devastating for democracy. I think so.

Taylor Owen (09:30-09:49): And I think the bigger frame of that for me is that a democracy needs signals of... as citizens in a democracy, we need signals of reliability. We need to know broadly, and we're not always going to agree on it, what kind of information we can trust and how we evaluate whether we trust it.

Nate Erskine-Smith (09:49-10:13): And that is really going away. Pause for a sec. "Signals of reliability" is a good phrase. What does it mean for a legislator when it comes to putting a rule in place? Because you could imagine a Blade Runner kind of rule that says you've got to distinguish between something that is human-generated

Taylor (10:13-10:14): and something that is machine-generated.

Nate Erskine-Smith (10:15-10:26): That seems straightforward enough. It's a lot harder if you're trying to distinguish between, Taylor, what you're saying is credible, and Nate, what you're saying is not credible,

Taylor (10:27-10:27): which is probably true.

Nate Erskine-Smith (10:28-10:33): But how do you have a signal of reliability in a different kind of content?

Taylor Owen (10:34-13:12): I mean, we're getting into journalism policy here to a certain degree, right? And it's a wicked problem, because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism.
The real problem is the distribution mechanism and the incentives within it. So a great example, and a horrible example, happened last week, right? Charlie Kirk gets assassinated. I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I've seen in a decade of looking at this of how those algorithmic feeds have become radicalized. All you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. People defending the murder, and people calling for the murder of leftists, on both sides, right? People blaming Israel, people, whatever. Right. And that isn't a function of... comparing Charlie Kirk to Jesus. Sure. Like... It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of the journalism being produced about it. The New York Times, citizens, were doing good content about what was happening. It was a moment of uncertainty, and journalism was playing a role, but it wasn't the problem. And so I think with all of these questions, including the online harms ones, and how we step into an AI governance conversation, the focus always has to be on those systems. Who and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right?
All of that stuff we've been talking about, the role reliability of information plays... these algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor (13:12-13:15): That's not a... instead of reactionary...

Taylor Owen (13:15-13:42): Or what gets the most engagement, or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.

Nate Erskine-Smith (13:43-14:12): And your focus on systems and assessing risks with systems, I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy piece and a very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen (14:12-14:23): I mean... Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the... Well, they're twofold, right?

Nate Erskine-Smith (14:23-14:33): There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well. Exactly.

Taylor (14:33-14:35): So the sort of supply and demand side thing, right?

Nate Erskine-Smith (14:35-14:38): There's the digital services tax, which is no longer a thing.

Taylor Owen (14:40-14:52): Although it still is a piece of passed legislation. Yeah, yeah, yeah.
It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith (14:52-14:55): I don't take full responsibility for that one.

Taylor Owen (14:55-14:56): No, you shouldn't.

Nate Erskine-Smith (14:58-16:03): But other countries have seen more success. Yeah. And so you've got the UK, Australia; the EU really has led the way. In 2018, the EU passed GDPR, a set of privacy rules that we are still behind seven years later. But in 2022 and 2023, you've got the Digital Services Act that passes. You've got the Digital Markets Act. And as I understand it, and we've both been involved in international work on this, we've heard from folks like Frances Haugen and others about the need for risk-based assessments. And you're well down the rabbit hole on this. But isn't it, at a high level: you deploy a technology, you've got to identify material risks, you then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.

Taylor Owen (16:04-16:05): That's like how I have it in my head.

Nate Erskine-Smith (16:05-16:06): I mean, that's it.

Taylor Owen (16:08-16:14): Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.

Nate Erskine-Smith (16:14-16:25): Exactly. Which, people... I want to get to that, because C-63 gets us a large part of the way there. I think so. And yet it has been sort of cast aside.

Taylor Owen (16:25-17:39): Exactly. Let's touch on that. But I do think what you described is the online harms piece of this governance agenda.
When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. Our digital economy and our digital lives are so vast, and the incentives and the effect they have on society are so broad, that there's no one solution. So anyone who tells you to fix privacy policy and you'll fix all the digital problems we just talked about is full of it. Anyone who says competition policy, like breaking up the companies, will solve all of these problems is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They built a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things at once.

Nate Erskine-Smith (17:39-17:41): Especially in minority parliaments, short periods of time, legislatively.

Taylor Owen (17:41-18:20): Different countries have taken different pieces of it. Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that, right? When we talked last, it was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? A lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and trying to learn from what Europe, Australia and the UK had all done. And we were kind of taking the benefit of being late, right?
So they were all ahead of us.

Taylor (18:21-18:25): People you worked with on that grand committee. We all went quick and did our own consultations.

Taylor Owen (18:26-19:40): Exactly. And the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says: if you're going to launch a digital product, like a consumer-facing product in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about, or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Broad categories that we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong... so let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids, and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?

Nate Erskine-Smith (19:40-19:40): Yeah.

Taylor (19:40-19:43): Like what could go wrong? Yeah, a lot could go wrong.

Taylor Owen (19:43-20:27): And maybe strange men will approach teenage girls. Maybe, right? If you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? You put a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used so that we can monitor, publics and researchers can monitor, whether that mitigation strategy worked. That's it.
In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was a widespread problem of teenage girls being harassed by strange older men.

Taylor (20:28-20:29): Incredibly creepy.

Taylor Owen (20:29-20:37): A very easy... but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor (20:37-20:41): And this kind of mechanism would have just filtered it out.

Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit before you launch a product in a country about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith (20:52-21:05): Yeah, I quite like the... I mean, maybe you've got a better read of this, but in the UK... California has pursued this. I was looking recently, Elizabeth Denham is now the Jersey Information Commissioner or something like that.

Taylor Owen (21:05-21:06): I know, it's just... yeah.

Nate Erskine-Smith (21:07-21:57): I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the Information Commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach, in that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it. Just forcing companies to ensure that the default settings prioritize child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.

Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth referring back to what we mean.
A duty of care can seem like an obscure concept, but you're a lawyer, it's a real thing, right? You walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? That is a duty of care that you have for me when I walk into your public space or private space. That's all we're talking about here. And the age-appropriate design code, yes, sort of developed and implemented by a Canadian in the UK, was also embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? We think kids don't have the same rights as adults; we have different duties to protect kids than adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Things that... Seem obvious. And if you're now a child in the UK and you go on a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don't have that, because that bill didn't pass, right? So there's consequences to this stuff.
And I get really frustrated now when I see the conversation sort of pivoting to AI, for example, right? Like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real, right? I'm not discounting its power. And we just move on from all of these problems and solutions that have been developed for a set of challenges that still exist on social platforms. They haven't gone away; people are still using these tools, and the harms still exist, and they're probably applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done... to the people working in this space, and the wide set of stakeholders in this country who care about this stuff and are working on it, it just... you said deja vu at the beginning, and it is deja vu, but it's kind of worse, right? Because it's like deja vu and then ignoring the

Taylor (24:25-24:29): five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even

Taylor Owen (24:29-24:41): Well, yeah. I mean, hopefully... I'm actually optimistic. I would say that we will, for a few reasons. Like, one, citizens want it, right?

Nate Erskine-Smith (24:41-24:57): Yeah, I was surprised on the... so you mentioned there that the rules that we design, the risk assessment framework really applied to social media, could equally be applied to deliver AI safety, and it could be applied to new technology in a useful way.

Taylor Owen (24:58-24:58): Some elements of it. Exactly.

Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that a little bit, because I want to pull the pieces together. So I had a constituent come into the office, and he is really, like, super mad. He's super mad. Why is he mad? Does that happen very often? Do people get mad when they walk into this office? Not as often as you think, to be honest.
Not as often as you think. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen (25:25-25:25): Okay.

Nate Erskine-Smith (25:25-26:36): Okay. Yep. He believes Mark Carney ripped him off. Not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online in which Mark Carney told him to invest money. He invested the money and he's out the 200 bucks, or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. How could you have been deceived? But then I went and watched the video, and okay, I'm not going to send the 200 bucks, and I've grown up with the internet, but I can see how... Absolutely. In the same way that phone scams and Nigerian princes and all of that have their own success rate. This was a very believable video that was obviously AI generated. So we are going to see rampant fraud, if we aren't already, and we are going to see many challenges with respect to AI safety. Over and above the risk assessment piece, what do we do to address these challenges?

Taylor Owen (26:37-27:04): So that is a huge problem, right? AI video fraud is a huge challenge. When we were monitoring the last election, by far the biggest vulnerability was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and morph them into conversations about investment strategies.

Taylor (27:05-27:07): And it was driving people to a crypto scam.

Taylor Owen (27:08-27:11): But it was torquing the political discourse.

Taylor (27:11-27:11): That's what it must have been.

Taylor Owen (27:12-27:33): I mean, there are other cases of this, but it was running rampant, particularly on Meta platforms. They were flagged. They did nothing about it.
There were thousands of these videos circulating throughout the entire election, right? And it's not the end of the world, but it torqued our political debate. It ripped off some people. And these kinds of scams are...

Taylor (27:33-27:38): It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud, absolutely. So what do you do about that? The head of the Canadian Bankers Association said there are billions of dollars of AI-based fraud in the Canadian economy right now. So it's a big problem.

Taylor (27:54-27:55): Yeah.

Taylor Owen (27:55-28:46): I actually think there's a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, and then you add fraud and AI scams as a category of harm. All of a sudden, if you're Meta and you're operating in Canada during an election, you'd have to do a risk assessment on the AI fraud potential of your product. Responsibility for your platform. And then, as this stuff starts to circulate, we would see it, they'd be called out on it, they'd have to take it down. So we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these online harms risk assessment models, and bringing some of the consumer-facing AI products and related harms into the framework.

Nate Erskine-Smith (28:47-30:18): To put it a different way: years ago now, we had this grand committee in the UK holding Facebook and others accountable, created in the wake of the Cambridge Analytica scandal.
And the platforms at the time were holding firm to this idea of Section 230 and avoiding host liability, saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument: they completely acknowledged the need to take action when it came to child pornography. So they said, no liability for us, but of course there can be liability on this one specific piece of content, and we'll take action on it. And it always struck me from there on out that there's no real intellectual consistency here. It's really a question of what should be in that category of things they take responsibility for. Obviously harmful content like that is an obvious first step, obvious to everyone. But there are other categories, and fraud is one. When they're making so much money, when they're investing so much money in AI, when they've ignored privacy protections and everything else through the years, we can't leave it up to them. Setting a clear set of rules that says this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms, because there are different speech and democratic implications of absolute solutions to different kinds of content.

Taylor (30:28-30:30): So child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation there, and that's to take it down within 24 hours.
And the reason you can do that with those two kinds of content is, one, the AI is actually pretty good at spotting it. It might surprise you, but there are a lot of naked images on the internet that we can train AI with, so we're actually pretty good at using AI to pull this stuff down. But the bigger reason is that, as a society, I think it's okay to be wrong in the gray area of that speech. If something is debatable as child pornography, I'm okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story. We do not want to suppress and over-index on the gray area of hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism. If we see fraud, if we know it's fraud, then you take it down. Some of these other things we have to handle differently.

Nate Erskine-Smith (32:02-32:24): My last question, really, is how you pull the threads together. You've got these different pieces that were introduced in the past, and you've got a new government, with lots of similar folks around the table, and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister, a junior minister to industry, but still with a specific portfolio and his own deputy minister, who really wants to be seized with this.
And in a way, from every conversation I've had with him, he wants to maximize productivity in this country using AI, but he's also cognizant of the risks and wants to address AI safety. So where from here? You've talked in the past about a grander tech accountability and sovereignty act. Do we go piecemeal, a privacy bill here, an AI safety bill there, an online harms bill, disparate pieces? What's the answer here?

Taylor Owen (33:05-34:14): I don't have the exact answer, but there are some lessons from the past that this government could take. One is that piecemeal bills that aren't centrally coordinated, with no connectivity between them, end up as piecemeal solutions that are imperfect and would benefit from some cohesiveness. When the previous government released AIDA, the Artificial Intelligence and Data Act, it was really in tension in some real ways with the online harms approach: two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I could tell from the outside. So we need a coordinated, comprehensive approach to digital governance. That's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to the idea that he, or that office, could play that role. Because AI is cross-cutting, right? Every department in our federal government touches AI in one way or another, and the governance of AI, and on the other side the adoption of AI by society, is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces, that would help us catch up to the GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
If he pulls in the online harms pieces that aren't related to the Criminal Code and drops those provisions, says, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to.

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37-34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? So then you have, thematically, a bill that makes sense. And then you can pull in the AI safety piece as well, and it becomes a consumer protection bill for living our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument about whether that should be one bill or multiple ones. I think there are cases for both. There's concern about big omnibus bills that do too many things, too many committees reviewing them, and whatever; that's a machinery-of-government question. But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to the public. We know that 85 percent of Canadians want AI to be regulated. What do they mean? They mean that at the same time as they're being told by our government and by companies that they should be using and embracing this powerful technology in their lives, they're also seeing risks. They're seeing risks to their kids. They're being told their jobs might disappear. Why should I use this thing, when I'm seeing harms, I don't see you doing anything about those harms, and I'm seeing real potential downside for me personally and for my family? So even in the adoption frame, thinking about data privacy, safety, consumer safety, to me that's the real frame here. It's citizen safety, consumer safety, in using these products.
Yeah, politically, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. I've got a nine-year-old and a five-year-old. They are growing up with the internet, and I do not want to have to police every single platform they use. I do not want to have to log in and check the default settings on the parental controls.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old; he spends a lot of time on YouTube. I know that's always going to be a place with content I would prefer he doesn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly that YouTube, with its promotion engine, its recommendation function, is not actively promoting terrible content to your 12-year-old.

Taylor Owen (36:59-37:31): Yeah. That's de minimis. Can we just torque this a little bit? So maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube. Can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance. It's worth putting real emphasis on this: one thing we've learned in this moment of repeated deja vu, going back 20 years, really, through our experience with social media up to now, is that these companies don't self-govern.

Taylor (37:31-37:31): Right.

Taylor Owen (37:32-37:39): We know that indisputably. So to think that AI is going to be different is delusional. They'll pursue profit, not the public interest.

Taylor (37:39-37:44): Of course. Because that's what they are. These are the largest companies in the world. Yeah, exactly.
And AI companies are even bigger than the last generation, right?

Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives, and their broader long-term goals around AI, are not going to override these safety concerns is naive in the nth degree.

Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this: these goals of realizing the productivity potential of AI, alongside AI safety, are not mutually exclusive or oppositional. You create a sandbox to play in, and companies will be more successful. If you have certainty in regulations, companies will be more successful. And if people feel safe using these tools, if I feel safe with my kids learning these tools growing up in their classrooms and everything else, adoption rates will soar. Absolutely. And then we'll all benefit.

Taylor Owen (38:38-38:43): They work in tandem, right? And I think you can't have one without the other, fundamentally.

Nate Erskine-Smith (38:45-38:49): Well, I hope I don't have to invite you back five years from now for the same conversation.

Taylor Owen (38:49-38:58): Well, I hope you invite me back in five years, but I hope it's to look back on all the legislative successes of the previous five years. That'll be the moment.

Taylor (38:58-38:59): Sounds good. Thanks.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca

The Road to Accountable AI
Dean Ball: The World is Going to Be Totally Different in 10 Years

The Road to Accountable AI

Play Episode Listen Later Sep 25, 2025 37:57 Transcription Available


Kevin Werbach interviews Dean Ball, Senior Fellow at the Foundation for American Innovation and one of the key shapers of the Trump Administration's approach to AI policy. Ball reflects on his career path from writing and blogging to shaping federal policy, including his role as Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary drafter of the Trump Administration's recent AI Action Plan. He explains how he has developed influence through a differentiated viewpoint: rejecting the notion that AI progress will plateau and emphasizing that transformative adoption is what will shape global competition. He critiques both the Biden administration's “AI Bill of Rights” approach, which he views as symbolic and wasteful, and the European Union's AI Act, which he argues imposes impossible compliance burdens on legacy software while failing to anticipate the generative AI revolution. By contrast, he describes the Trump administration's AI Action Plan as focused on pragmatic measures under three pillars: innovation, infrastructure, and international security. Looking forward, he stresses that U.S. competitiveness depends less on being first to frontier models than on enabling widespread deployment of AI across the economy and government. Finally, Ball frames tort liability as an inevitable and underappreciated force in AI governance, one that will challenge companies as AI systems move from providing information to taking actions on users' behalf. Dean Ball is a Senior Fellow at the Foundation for American Innovation, author of Hyperdimensional, and former Senior Policy Advisor at the White House OSTP. He has also held roles at the National Science Foundation, the Mercatus Center, and Fathom. 
His writing spans artificial intelligence, emerging technologies, bioengineering, infrastructure, public finance, and governance, with publications at institutions including Hoover, Carnegie, FAS, and American Compass.

Transcript: https://drive.google.com/file/d/1zLLOkndlN2UYuQe-9ZvZNLhiD3e2TPZS/view
America's AI Action Plan
Dean Ball's Hyperdimensional blog

Altalex News
Altalex Settimanale No. 30/2025: the news from September 22 to 26

Altalex News

Play Episode Listen Later Sep 25, 2025 11:03


Artificial intelligence: the Italian law, justice and the courts, AI in schools. Also, the instruments of judicial protection for photographic works of art, and Measure No. 277/2025 of the Garante Privacy on the interpretation of the accountability principle.
>> Read the article: https://tinyurl.com/bdcrv8b9
>> Discover all Altalex podcasts: https://bit.ly/2NpEc3w

KI in der Industrie
From parades to protocols and AI literacy

KI in der Industrie

Play Episode Listen Later Sep 24, 2025 56:10 Transcription Available


In this episode, we dive into the evolving landscape of industrial AI, starting with a lively Oktoberfest recap before shifting gears to the latest breakthroughs in physics-informed neural networks and user interfaces. We discuss the real-world impact of Europe's AI Act, featuring insights from industry leaders and an in-depth interview with Sampo Leino of MinnaLearn on building AI literacy for enterprises. As we unpack strategic investments, robotics trends, and the challenges of compliance, we question what it means to use AI safely and competitively. Throughout the conversation, we keep it grounded in everyday experience—how regulation, technology, and practical learning are shaping the factories and workplaces of tomorrow. Tune in to hear how we're navigating this complex, fast-moving frontier and what it means for anyone working with AI today.

Na vlně podnikání
On the fight against Microsoft and the dangers of scanning private messages, with Adam Paclt of IceWarp

Na vlně podnikání

Play Episode Listen Later Sep 24, 2025 38:19


Why is an alternative to the office suites from Microsoft and Google needed? How are large corporate and government contracts won in Asia and other regions? Why is Europe losing technology talent? And what impact could regulations like the AI Act or chat control have on startups and internet freedom? Martin Petříček, deputy editor-in-chief of the weekly Ekonom, discussed all this and much more with Adam Paclt of the technology company IceWarp. This episode of the Na vlně podnikání podcast is hosted by Martin Petříček, deputy editor-in-chief of the weekly Ekonom.

2024
The Italian AI law - Genoa Boat Show

2024

Play Episode Listen Later Sep 19, 2025


After a long parliamentary process, the Italian law on artificial intelligence has been definitively approved. With Giusella Finocchiaro, professor of AI Law at the University of Bologna, we discuss the most interesting points and the novelties introduced by this legislation, which sits alongside the European AI Act. We visited the Genoa Boat Show to talk about innovation in boating, together with Prof. Andrea Ratti, who teaches yacht design at the Politecnico di Milano and with whom we discussed foils, electric and hydrogen propulsion, and the use of artificial intelligence in the design and industrialization of boating. With Michele Solari of Barchelettriche.com we talked about the electrification of boats, while Carlo Cauchi, CEO of the startup Kosmos, presented an electric waterbike with an innovative "pedal by wire" propulsion system and the use of foils. And, as always on Digital News, the week's most important innovation and technology stories.

PinG-Podcast
Follow the Rechtsstaat, Episode 139

PinG-Podcast "Corona im Rechtsstaat"

Play Episode Listen Later Sep 18, 2025 43:25


Last week there was a full house at the German Bar Association (Deutscher Anwaltverein). There, Schleswig-Holstein's state data protection commissioner Marit Hansen and Hamburg's state data protection commissioner Thomas Fuchs, together with Markus Wünschelbaum, presented the Bridge Blueprint, supported by a wake-up call from Brussels delivered by Leonardo Cervera-Navas, Secretary-General of the EU data protection authority. The Bridge Blueprint's message is simple but has explosive force: whoever implements the AI Act is, at the same time, acting in compliance with the GDPR. Niko Härting talks with Markus Wünschelbaum about the Bridge Blueprint and some concrete bridges:
- Data minimization: if the AI Act approves or even requires the processing of personal data, the principle of data minimization can be satisfied (Art. 5(1)(c) GDPR).
- Legitimate interests: if the AI Act approves or even requires the processing of personal data, the processing can be justified under Art. 6(1)(f) GDPR.
- Sensitive data: if the AI Act approves or even requires the processing of personal data, the processing can be justified on grounds of a "substantial public interest" under Art. 9(2)(g) GDPR.
- Automated decisions: if the AI Act approves or even requires the processing of personal data, this can weigh in favor of the permissibility of automated decisions under Art. 22(2)(b) GDPR.
Read more: https://lnkd.in/eY9VU293
DAV press release: https://lnkd.in/e4YerNZE

InfosecTrain
AI Governance Masterclass: Designing Responsible & Ethical AI Systems

InfosecTrain

Play Episode Listen Later Sep 17, 2025 81:08


De Technoloog | BNR
'The AI Act does not protect us against the very biggest risks'

De Technoloog | BNR

Play Episode Listen Later Sep 11, 2025 61:22


The AI Act is the most Europe could have achieved in terms of legislation and regulation. It was mainly meant to take away concerns, but according to IT lawyer Geert Jan Koning it serves more as "information" than as a "safeguard". More simply wasn't possible. And that is logical: countries like China and the United States do it very differently, much more pragmatically, while Europe is more focused on preventing future harm. As a large cooperative union, we almost had to end up with an extensive AI Act: the European Union consists of many countries that, through negotiation, ultimately arrive at elaborate laws. Geert Jan Koning walks you step by step through the AI Act, explains what it actually gives us in practice, but also points out its shortcomings. Because how do you make policy for something that is so hard to pin down? Guest: Geert Jan Koning. Hosts: Ben van der Burg & Daniël Mol. Editors: Daniël Mol, Rosanne Peters. See omnystudio.com/listener for privacy information.

Techzine Talks
Europe's tech revolution: are these the three investments that change everything?

Techzine Talks

Play Episode Listen Later Sep 8, 2025 32:55


Europe is finally getting serious about technological independence. Over the past week, three groundbreaking developments have shown that the EU no longer wants only to regulate, but also to innovate and compete. It is not the European Commission but the industry itself that is stirring and coming forward with clear plans.

ITSPmagazine | Technology. Cybersecurity. Society
The Transatlantic Broadcast Pilot Episode 2025: European Tech, Cybersecurity, and Society | ITSPmagazine Europe: The Transatlantic Broadcast Hosted by Marco Ciappelli, Rob Black, and Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Sep 6, 2025 32:12


Broadcasting from Florence and Los Angeles, I Had One of Those Conversations...

You know the kind—where you start discussing one thing and suddenly realize you're mapping the entire landscape of how different societies approach technology. That's exactly what happened when Rob Black and I connected across the Atlantic for the pilot episode of ITSPmagazine Europe: The Transatlantic Broadcast.

Rob was calling from what he optimistically described as "sunny" West Sussex (complete with biblical downpours and Four Seasons weather in one afternoon), while I enjoyed actual California sunshine. But this geographic distance perfectly captured what we were launching: a genuine exploration of how European perspectives on cybersecurity, technology, and society differ from—and complement—American approaches.

The conversation emerged from something we'd discovered at InfoSecurity Europe earlier this year. After recording several episodes together with Sean Martin, we realized we'd stumbled onto something crucial: most global technology discourse happens through an American lens, even when discussing fundamentally European challenges. Digital sovereignty isn't just a policy buzzword in Brussels—it represents a completely different philosophy about how democratic societies should interact with technology.

Rob Black: Bridging Defense Research and Digital Reality

Rob brings credentials that perfectly embody the European approach to cybersecurity—one that integrates geopolitics, human sciences, and operational reality in ways that purely technical perspectives miss.
As UK Cyber Citizen of the Year 2024, he's recognized for contributions that span UK Ministry of Defense research on human elements in cyber operations, international relations theory, and hands-on work with university students developing next-generation cybersecurity leadership skills.

But what struck me during our pilot wasn't his impressive background—it was his ability to connect macro-level geopolitical cyber operations with the daily impossible decisions that Chief Information Security Officers across Europe face. These leaders don't see themselves as combatants in a digital war, but they're absolutely operating on front lines where nation-state actors, criminal enterprises, and hybrid threats converge.

Rob's international relations expertise adds crucial context that American cybersecurity discourse often overlooks. We're witnessing cyber operations as extensions of statecraft—the ongoing conflict in Ukraine demonstrates how narrative battles and digital infrastructure attacks interweave with kinetic warfare. European nations are developing their own approaches to cyber deterrence, often fundamentally different from American strategies.

European Values Embedded in Technology Choices

What emerged from our conversation was something I've observed but rarely heard articulated so clearly: Europe approaches technology governance through distinctly different cultural and philosophical frameworks than America. This isn't just about regulation—though the EU's leadership from GDPR through the AI Act certainly shapes global standards. It's about fundamental values embedded in technological choices.

Rob highlighted algorithmic bias as a perfect example. When AI systems are developed primarily in Silicon Valley, they embed specific cultural assumptions and training data that may not reflect European experiences, values, or diverse linguistic traditions.
The implications cascade across everything from hiring algorithms to content moderation to criminal justice applications.

We discussed how this connects to broader patterns of technological adoption. I'd recently written about how the transistor radio revolution of the 1960s paralleled today's smartphone-driven transformation—both technologies were designed for specific purposes but adopted by users in ways inventors never anticipated. The transistor radio became a tool of cultural rebellion; smartphones became instruments of both connection and surveillance.

But here's what's different now: the stakes are global, the pace is accelerated, and the platforms are controlled by a handful of American and Chinese companies. European voices in these conversations aren't just valuable—they're essential for understanding how different democratic societies can maintain their values while embracing technological transformation.

The Sociological Dimensions Technology Discourse Misses

My background in political science and sociology of communication keeps pulling me toward questions that pure technologists might skip: How do different European cultures interpret privacy rights differently? Why do Nordic countries approach digital government services so differently than Mediterranean nations? What happens when AI training data reflects primarily Anglo-American cultural assumptions but gets deployed across 27 EU member states with distinct languages and traditions?

Rob's perspective adds the geopolitical layer that's often missing from cybersecurity conversations.
We're not just discussing technical vulnerabilities—we're examining how different societies organize themselves digitally, how they balance individual privacy against collective security, and how they maintain democratic values while defending against authoritarian digital influence operations.

Perhaps most importantly, we're both convinced that the next generation of European cybersecurity leaders needs fundamentally different skills than previous generations. Technical expertise remains crucial, but they also need to communicate complex risks to non-technical decision-makers, operate comfortably with uncertainty rather than seeking perfect solutions, and understand that cybersecurity decisions are ultimately political decisions about what kind of society we want to maintain.

Why European Perspectives Matter Globally

Europe represents 27 different nations with distinct histories, languages, and approaches to technology governance, yet they're increasingly coordinating digital policies through EU frameworks. This complexity is fascinating and the implications are global. When Europe implements new AI regulations or data protection standards, Silicon Valley adjusts its practices worldwide.

But European perspectives are too often filtered through American media or reduced to regulatory footnotes in technology publications. We wanted to create space for European voices to explain their approaches in their own terms—not as responses to American innovation, but as distinct philosophical and practical approaches to technology's role in democratic society.

Rob pointed out something crucial during our conversation: we're living through a moment where "every concept that we've thought about in terms of how humans react to each other and how they react to the world around them now needs to be reconsidered in light of how humans react through a computer mediated existence."
This isn't abstract philosophizing—it's the practical challenge facing policymakers, educators, and security professionals across Europe.

Building Transatlantic Understanding, Not Division

The "Transatlantic Broadcast" name reflects our core mission: connecting perspectives across borders rather than reinforcing them. Technology challenges—from cybersecurity threats to AI governance to digital rights—don't respect national boundaries. Solutions require understanding how different democratic societies approach these challenges while maintaining their distinct values and traditions.

Rob and I come from different backgrounds—his focused on defense research and international relations, mine on communication theory and sociological analysis—but we share curiosity about how technology shapes society and how society shapes technology in return. Sean Martin brings the American cybersecurity industry perspective that completes our analytical triangle.

Cross-Border Collaboration for European Digital Future

This pilot episode represents just the beginning of what we hope becomes a sustained conversation. We're planning discussions with European academics developing new frameworks for digital rights, policymakers implementing AI governance across member states, industry leaders building privacy-first alternatives to Silicon Valley platforms, and civil society advocates working to ensure technology serves democratic values.

We want to understand how digital transformation looks different across European cultures, how regulatory approaches evolve through multi-stakeholder processes, and how European innovation develops characteristics that reflect distinctly European values and approaches to technological development.

The Invitation to Continue This Conversation

Broadcasting from our respective sides of the Atlantic, we're extending an invitation to join this ongoing dialogue.
Whether you're developing cybersecurity policy in Brussels, building startups in Berlin, teaching digital literacy in Barcelona, or researching AI ethics in Amsterdam, your perspective contributes to understanding how democratic societies can thrive in an increasingly digital world.European voices aren't afterthoughts in global technology discourse—they're fundamental contributors to understanding how diverse democratic societies can maintain their values while embracing technological change. This conversation needs academic researchers, policy practitioners, industry innovators, and engaged citizens from across Europe and beyond.If this resonates with your own observations about technology's role in society, subscribe to follow our journey as we explore these themes with guests from across Europe and the transatlantic technology community.And if you want to dig deeper into these questions or share your own perspective on European approaches to cybersecurity and technology governance, I'd love to continue the conversation directly. Get in touch with us on Linkedin! Marco CiappelliBroadcasting from Los Angeles (USA) & Florence (IT)On Linkedin: https://www.linkedin.com/in/marco-ciappelliRob BlackBroadcasting from London (UK)On Linkedin https://www.linkedin.com/in/rob-black-30440819Sean MartinBroadcasting from New York City (USA)On Linkedin: https://www.linkedin.com/in/imsmartinThe transatlantic conversation about technology, society, and democratic values starts now.

The AI Policy Podcast
Unpacking the EU AI Act Code of Practice with Marietje Schaake


Play Episode Listen Later Sep 5, 2025 50:53


In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).

Recht im Ohr - der Podcast mit Dennis Hillemann
AI and procurement: examples from other countries, and what Germany can learn from its EU neighbours


Play Episode Listen Later Sep 5, 2025 14:23


Contact details for Dennis Hillemann:
Rechtsanwalt Dennis Hillemann, c/o Rechtsanwälte Advant Beiten, Neuer Wall 72, 20354 Hamburg
E-mail: dennis.hillemann@advant-beiten.com, www.advant-beiten.com; Phone: +49.(0)40.68 87 45 - 132

While Germany still hesitates, our European neighbours are already revolutionising their procurement practice with artificial intelligence. In this episode we look at how Estonia, the Netherlands, France and other EU member states are successfully integrating AI into their procurement processes, and what German contracting authorities can learn from them. Hear how Estonia, with its Chief Data Officer and 98.9% digital public services, became a digital pioneer and uses AI to automatically draft statements of work. We analyse how the Netherlands, after the childcare benefits scandal, developed exemplary AI governance and set standards for responsible use with the FRAIA programme. France is investing 109 billion euros in sovereign AI infrastructure; we explain what lies behind the Albert API. The episode also presents the EU model contractual clauses (MCC-AI), which were aligned with the AI Act in March 2025 and could be put to use immediately. Yet while the legal foundations exist, Germany lacks strategic coordination. A critical look at missed opportunities and concrete recommendations for German procurement bodies: from central coordination and systematic pilot projects to the urgently needed build-up of expertise. For procurement practitioners, purchasing managers and anyone who wants to understand why Germany risks falling behind in AI-supported administrative modernisation, and what needs to be done now.

The Agenda
#50 - The global race for AI: Is Europe catching up? Elisabeth L'Orange (Deloitte) on the opportunities of artificial intelligence


Play Episode Listen Later Sep 3, 2025 19:41


Can Europe keep pace in the global AI race, or will it fall behind? In this episode of The Agenda podcast, Elisabeth L'Orange, Partner at Deloitte and named by Manager Magazin as one of the 15 leading women in artificial intelligence, talks with Sherpany podcast host Ingo Notthoff about the challenges and opportunities of artificial intelligence in Europe. She examines the current AI Act and its effects, discusses the innovative strength of European companies, and shares her experience from practice. She also offers valuable recommendations for companies that want to implement AI successfully.

Key takeaways from this episode:
The US and China lead in AI innovation
Why Europe lags behind in AI innovation, and where its strengths lie
Acceptance of AI within companies is often low
Companies must implement AI strategically
The impact of the AI Act on corporate innovation
Supervisory boards should actively engage in the AI transformation
How the use of AI can be measured with KPIs
Learning from medical AI applications
Supervisory boards and executive boards must use AI themselves

Thank you for listening! Visit us at Sherpany.com or follow us on LinkedIn for board, board committee, and executive meetings solutions.

Human Firewall Podcast
Between innovation and control: Europe's new AI Act explained


Play Episode Listen Later Sep 3, 2025 18:03


In this episode, Charline and Christian take a close look at the EU AI Act, the world's first comprehensive set of rules for the use of artificial intelligence. From summer 2025 the AI Act is no longer a distant prospect but binding reality for companies across Europe. But what exactly does that mean for developers, providers and users of AI systems? They discuss how the AI Act sorts AI into risk categories, which obligations apply in particular to high-risk systems and general-purpose AI, and why the regulation should be understood not as a brake on innovation but as a framework for responsible progress. They also look at the implementation timeline and the possible sanctions for violations. Particularly interesting: what does this mean in concrete terms for AI models such as ChatGPT or Gemini, and what role do transparency, cybersecurity and human oversight play in this new rulebook? Featuring comments from: Alexander Ingelheim (CEO, datenschutzexperte.de). Want to learn more about SoSafe? Then have a look here: https://linktr.ee/humanfirewallpodcast Do you have ideas, suggestions or questions, or would you like to be a guest on the Human Firewall Podcast yourself? Then write to us at podcast@sosafe.de

TẠP CHÍ KINH TẾ
Digital technology: the next front in America's offensive against the EU


Play Episode Listen Later Sep 2, 2025 9:28


The ink is barely dry on the tariff and trade agreement the European Union reached with the United States on a 15% duty, and Washington has already opened a new front, demanding that Brussels remove every barrier so that American digital technology giants can operate freely on the European market. Once again, Donald Trump is wielding the tariff weapon as a threat.

Europe, the "golden goose" of the GAFAM

Why does the Trump administration want to "punish" Brussels when the EU is the most lucrative market, outside the United States, for American tech groups? Among the hand-picked guests invited to attend the inauguration of the 47th US president on 20 January 2025, the head of Meta was quickest to seek Trump's help in loosening the European rules that regulate the digital sector. More than half a year later, is Mark Zuckerberg about to get his wish? On 24 August 2025, President Donald Trump "ordered" on his personal social network that every country in the world scrap all tariffs and restrictive rules constraining American companies in the digital sector. In his eyes, these are "discriminatory" measures, and the order came with a threat against anyone who fails to comply. The White House occupant did not name his targets, but the EU, the number one market for America's High Tech giants, has built up several legal "self-defence" instruments that Donald Trump calls "protectionist measures".

Three thorns in Donald Trump's side

From February 2025, only weeks after President Trump returned to power, he and Vice President J.D. Vance, who is very close to America's digital barons, have repeatedly demanded that Europe scrap the laws and rules that "overshadow the dazzling successes" of the GAFAM group (Google, Amazon, Facebook, Apple and Microsoft), of the big names in artificial intelligence, and of America's online service providers. The European Union currently has at least three laws that anger Washington: the Digital Markets Act (DMA), which polices conduct that obstructs competition between providers of digital services; the Digital Services Act (DSA), which strengthens the transparency of algorithms and oversees content moderation; and, most recently, the AI Act, the world's first law governing the development and use of artificial intelligence. According to Andrea Renda, director at the Centre for European Policy Studies (CEPS) in Brussels, the DSA, because it seeks to "balance protecting users with freedom of expression", has become "the incarnation of evil", as Trump himself put it. Throughout the tariff and trade negotiations with Europe, the same expert notes, the digital sector was an American obsession. Once Washington had secured the 15% tariff deal with Brussels (July 2025) and extracted numerous concessions from the Europeans, it immediately "came back to pile on the pressure and threaten the European Union", demanding that the bloc pay dearly if it hampers the growth of America's digital giants. Moreover, it is precisely under the DMA and DSA that Europe has fined Apple and Meta hundreds of millions of dollars. In other words, the coming offensive Washington is preparing against Europe has nothing to do with reducing the US trade deficit with the 27 EU member states. On the Arte television channel, Elvire Fabry, an economics and geopolitics specialist at the Jacques Delors Institute, argues that Donald Trump's successive attacks on Europe, on trade and soon on digital technology, are not primarily about the American trade deficit: "Behind this trade war there are also political designs. Donald Trump is trying to tear the EU into small pieces while still wanting access to the European market for American firms. At the same time, the United States is using Europe's far-right parties to destabilise the EU from within. That is why Washington currently supports far-right parties in many European countries." For his part, Jean-Luc Demarty, former director-general for trade at the European Commission (2011-2019), explains that even the "reciprocal tariffs" have no real basis: "It is true that Europe's trade surplus with the US is very large, and that surplus grew further in 2024 because the dollar appreciated. But that covers only trade in goods. If you add services to goods, the trade balance between the United States and the European Union is fairly even, with a gap of about 50 billion dollars, not the 200 or 300 billion Donald Trump claims."

Europe's hands always tremble when it must defend itself

In any case, economist Shahin Vallée of the Berlin-based foreign policy think tank DGAP judges that Europe has been far too timid in its showdown with the United States, and that Washington grasped this from its very first threats: "In truth, the European Union's negotiating strategy never worried Mr Trump. Brussels had plenty of opportunities to put tougher conditions on the table, but its threats remained empty words. Even on retaliatory measures to avoid being squeezed by Washington, the Europeans could not agree on a common position. Brussels' hands trembled, and on the other side Washington saw it clearly and seized the moment." Another European weakness the United States quickly spotted is the bloc's lack of unity. The digital war has not even begun, yet many analysts are already warning that Germany may "sell off Europe's ambitions for digital independence to the United States" in exchange for Washington going easy on the German car industry. America also knows how to divide and rule.

The European Union, America's biggest trading partner

A few figures on bilateral trade: total trade in goods and services between the European Union and the United States exceeded 1,600 billion dollars in 2024 (867 billion euros in goods and 817 billion euros in services). By comparison, according to official US statistics (US Bureau of Economic Analysis), trade in goods and services between the world's two largest economies, the United States and China, came to 660 billion dollars over the same period. The paradox is that the European Union, although America's biggest customer, is still being squeezed. On 27 July 2025, European Commission President Ursula von der Leyen and US President Donald Trump reached a major agreement to "strengthen bilateral trade and investment", an agreement for which she was sharply criticised by several member states. To have European exports to the United States taxed at 15% instead of the 30% Donald Trump had threatened, the EU hastily promised to buy more than 700 billion dollars of American oil and gas over the next three years and to invest an additional 600 billion dollars in the United States by 2029.

Non-tariff barriers

But the signed text also contains a clause requiring the parties to reduce non-tariff barriers in order to boost bilateral trade, and it is on this point that Washington is preparing its next offensive against the EU. The European Union accounts for 27% of Google's revenue and 23% of Meta's. Thanks to Europe's 450 million consumers, Mark Zuckerberg takes in 38 billion dollars a year, while Sundar Pichai, CEO of Alphabet Inc., Google's parent company, earned some 96 billion dollars from the European Union last year. In the digital technology war now brewing between the two shores of the Atlantic, President Trump's real aim is to "eliminate the danger" that Europe, too, might produce its own champions in the digital world.

Licht op Legal
134. What are the consequences of the AI Regulation for the use of artificial intelligence in healthcare?


Play Episode Listen Later Sep 2, 2025 13:06


This is episode 134 of Licht op Legal. In this episode, Godart van Ekeren, Information Technology and Privacy lawyer at Van Benthem & Keulen, discusses the AI Regulation (AI Act). The AI Regulation has been adopted, and that has consequences for healthcare institutions, among others: they are now obliged to deploy AI transparently and responsibly. What does this mean in practice? In this episode, Godart explains the consequences for, for example, the use of speech-recognition apps for record-keeping and AI systems that support medical diagnoses. He also covers the points to watch when procuring AI systems, and closes with practical advice. Want to know more about this topic? Then contact Godart van Ekeren. Do you have suggestions for a topic, or would you like our experts to shed light on your legal question? Send an e-mail to lichtoplegal@vbk.nl. You can listen to Licht op Legal via our website, Spotify, Apple Podcasts or your favourite podcast app. This is a podcast by Van Benthem & Keulen. You can find us at: vbk.nl, LinkedIn, Twitter, Facebook, Instagram. Hosted on Acast. See acast.com/privacy for more information.

DOTYK ǀ podcast časopisu TOUCHIT
AI explained: Beware of Chinese language models, and what is an AI agent? (Part 3)


Play Episode Listen Later Aug 31, 2025 32:15


What can advanced AI tools already do today? Find out why the agent is a revolution beyond classic language models: it intelligently combines web browsing with the generation of text, images and audio, and can even handle online purchases for you, put together a presentation or automate everyday digital tasks. You will get practical tips on choosing the right AI services, important advice on protecting your privacy, and a look behind the scenes of the European regulation (the AI Act) that is changing the rules of the game.

The Tech Blog Writer Podcast
Claroty on Combating Model Poisoning and Adversarial Prompts


Play Episode Listen Later Aug 26, 2025 35:29


AI is rapidly becoming part of the healthcare system, powering everything from diagnostic tools and medical devices to patient monitoring and hospital operations. But while the potential is extraordinary, the risks are equally stark. Many hospitals are adopting AI without the safeguards needed to protect patient safety, leaving critical systems exposed to threats that most in the sector have never faced before. In this episode of Tech Talks Daily, I speak with Ty Greenhalgh, Healthcare Industry Principal at Claroty, about why healthcare's AI rush could come at a dangerous cost if security does not keep pace. Ty explains how novel threats like adversarial prompts, model poisoning, and decision manipulation could compromise clinical systems in ways that are very different from traditional cyberattacks. These are not just theoretical scenarios. AI-driven misinformation or manipulated diagnostics could directly impact patient care. We explore why the first step for hospitals is building a clear AI asset inventory. Too many organizations are rolling out AI models without knowing where they are deployed, how they interact with other systems, or what risks they introduce. Ty draws parallels with the hasty adoption of electronic health records, which created unforeseen security gaps that still haunt the industry today. With regulatory frameworks like the EU's AI Act approaching and UK AI regulation taking shape, Ty stresses that hospitals cannot afford to wait for legislation. Immediate action is needed to implement risk frameworks, strengthen vendor accountability, and integrate real-time monitoring of AI alongside legacy devices. Only then can healthcare organizations gain the trust and resilience needed to safely embrace the benefits of AI. This is a timely conversation for leaders across healthcare and cybersecurity. The sector is on the edge of an AI revolution, but the choices made now will determine whether that revolution strengthens patient care or undermines it.
You can learn more about Claroty's approach to securing healthcare technology at claroty.com.

The AI Report
Elon Musk Goes to War: Monopoly Lawsuit Targets Apple and OpenAI


Play Episode Listen Later Aug 25, 2025 9:47


In this episode of The AI Report, it's an AI showdown: Musk accuses Apple & OpenAI of rigging the future, and he's suing. Artie Intel and Micheline Learning report on all things Artificial Intelligence. Today, the duo explores breakthrough research in neuro-symbolic systems, the rollout of the EU's AI Act, and how innovation, regulation, and everyday life are colliding in real time. Topics this episode covers include:

Irgendwas mit Recht
IMR323: Trademark and product piracy, life-threatening counterfeit medicines, UWG (unfair competition) litigation, legal AI, and why good lawyers write better prompts


Play Episode Listen Later Aug 25, 2025 50:59


Marc talks with Dr. Constantin Rehaag, partner at Dentons, about his varied path from reserve officer to IP litigator with a criminal-law focus. They discuss the fight against trademark and product piracy, international customs practice, life-threatening counterfeit medicines, and the use of AI in large law firms, including insights into internal AI guidelines, legal-tech investments, and changing training models for young lawyers. How can counterfeits be spotted from the flood of parcels alone? What opportunities does the European AI Act open up for lawyers? Why does Constantin prefer applicants whose second state exam is the stronger one? And why should law-firm partners work longer? You will find answers to these and many other questions in this episode of IMR. Enjoy!

Kanárci v síti
What the week brought | 6 Aug 2025


Play Episode Listen Later Aug 13, 2025 29:58


This time: the Motorists party has a new panel beater; the chairman of Stačilo, Vidlák, is apparently being paid by a Slovak billionaire with ties to Russia and to Babiš; we run quickly through the news from the Reich, where, among other things, arrests of Democratic lawmakers are brewing in Texas. We note that the AI Act is taking effect in the EU, and that this is a good thing, and we close by comparing the American and Chinese AI action plans. Full episodes at https://www.herohero.co/kanarcivsiti . Or buy our T-shirts at https://www.neverenough.shop/kanarci . The podcast is brought to you by @alexalvarova and @holyj . Music and sound engineering: Psyek. Twitter Spaces hosted by @jiribulan . Find us at www.kanarci.online

BioSpace
From Chat Bots to World Order: The Race for the 21st Century's Operating System


Play Episode Listen Later Aug 7, 2025 20:12


In this thought-provoking episode, Cresset Group's CSO, Mark Mackey, and VP of AI, Mutlu Dogruel, dissect the emerging geopolitical battle for AI supremacy, revealing how China's DeepSeek model fundamentally disrupted the AI landscape by achieving GPT-4 level performance for just $6 million versus OpenAI's reported $100+ million investment. The discussion exposes a critical divide in global AI governance: Europe's "regulatory fortress" approach with the AI Act creates strict but vague rules that leave companies navigating a complex maze, while the US employs a "Wild West" philosophy of sector-specific oversight and red-teaming strategies. In the race for global AI dominance, China is leaping ahead of the US and the EU.

This episode is presented in partnership with Cresset.

Host
Lori Ellis, Head of Insights, BioSpace

Guests
Mutlu Dogruel, VP of AI, Cresset
Mark Mackey, CSO, Cresset

Disclaimer: The views expressed in this discussion by guests are their own and do not represent those of their organizations.

5 Minutes Podcast with Ricardo Vargas
General-Purpose AI in the Spotlight: What the EU AI Act Means for Your Projects


Play Episode Listen Later Aug 3, 2025 5:11


In this episode, Ricardo discusses the impact of the AI Act, the European regulation on artificial intelligence, on general-purpose AI models. The law, passed in 2024 and fully in force from 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects using these AIs, even for simple integration, must also follow ethical, privacy, and transparency requirements. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo emphasizes that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact

5 Minutes Podcast com Ricardo Vargas
General-Purpose AI in the Spotlight: What the EU AI Act Means for Our Projects


Play Episode Listen Later Aug 3, 2025 6:08


In this episode, Ricardo discusses the impact of the AI Act, the European regulation on artificial intelligence, on general-purpose AI models. The law, passed in 2024 and fully in force from 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on 2 August 2025. Projects that use these AIs, even as a simple integration, must also follow requirements on ethics, privacy, and transparency. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo stresses that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact

This Week
New AI transparency rules go into effect in Europe


Play Episode Listen Later Aug 3, 2025 5:46


New rules came into force requiring makers of artificial intelligence models to ensure their systems are safe and transparent. While the EU's AI Act means the Government must police the companies providing AI tools, the US vows to cut red tape for the sector. RTÉ Work and Technology Correspondent Brian O'Donovan reported on the latest in tech.

Top in Tech
The future of Digital Fairness in the EU


Play Episode Listen Later Jul 31, 2025 23:10


In this episode of Top in Tech, Adriana Capparelli, Practice Director, and Anna Lisa Schäfer-Gehrau, Associate, discuss the latest developments in digital policy from Brussels, focusing on the Digital Fairness Act (DFA) and its implications for child protection and fairness in digital environments. They explore the intersection of the DFA with existing regulations like the Digital Services Act (DSA) and the AI Act, the scope of the DFA, and the ongoing consultation process. The conversation also highlights international regulatory trends and the importance of stakeholder engagement as the EU moves towards significant legislative changes in the coming months. Top in Tech will take a short break for August and return in September. Hosted on Acast. See acast.com/privacy for more information.

Cloud Wars Live with Bob Evans
Tech Giants Siemens and SAP Urge Pro-Innovation AI Regulations in Europe


Play Episode Listen Later Jul 30, 2025 1:54


In today's Cloud Wars Minute, I spotlight a powerful joint message from two of Europe's top tech CEOs, Siemens CEO Roland Busch and SAP CEO Christian Klein, who are urging the European Union to reconsider its AI legislation.

Highlights

00:03 — The CEOs of Siemens and SAP have urged the EU to reconsider its AI legislation, expressing concerns that the current law hinders innovation. SAP CEO Christian Klein and Siemens CEO Roland Busch are advocating for a new regulatory framework that supports AI innovation rather than stifling it. Busch cited the EU's AI Act as the reason that Europe is falling behind.

00:46 — Klein echoed these concerns. Both CEOs believe the first step should be reevaluating the existing legislation affecting AI innovators in Europe. While the AI Act is designed to ensure the safe development and deployment of AI technologies, it also risks stifling innovation.

01:17 — This can lead to monopolization and hold back key innovators in the EU who are working collectively with their global partners to create what will become the cornerstones of the AI era. And there's a strong case for collaboration here. Both tech companies and legislators need to align in establishing the foundations for a thriving yet safe AI operations landscape.

Visit Cloud Wars for more.

Europe Talks Back
Google to sign EU's AI code of practice. What are the concerns?


Play Episode Listen Later Jul 30, 2025 6:43


Today's episode is brought to you in partnership with The Observer and Tortoise, who just released a brand new investigative podcast series called "The Lab Detective".

"The Lab Detective" tells the story of Kathleen Folbigg, who experienced unimaginable loss. All four of her infant children died suddenly, over a 10-year period. Their deaths were unexplained. Until a police detective turned up at her door and arrested Kathleen on suspicion of murder. Kathleen was labelled "Australia's worst female serial killer", convicted, and was destined to spend most of the rest of her life in jail. Until a different type of detective entered her life: a scientist working in a lab who uncovered the truth behind the deaths. This is the story of a shocking miscarriage of justice, but as journalist Rachel Sylvester starts to investigate how this happened, she learns that Kathleen is not alone; other mothers have endured the same fate, and may also be in prison. The Lab Detective is a story about the power of science, and the determination of those searching for the truth.

Follow the link to listen to the full series: https://lnk.to/thelabdetectiveFD!europod

About today's episode on Long Story Short: Google has confirmed, today, it will sign the European Union's Code of Practice for General Purpose AI. The Code is a voluntary rulebook designed to help companies comply with the EU's AI Act, a landmark law that regulates artificial intelligence based on the risks it poses to society. But critics ask: is Europe protecting its citizens or putting itself at a disadvantage in the global AI race?

Join us on our journey through the events that shape the European continent and the European Union.

Production: By Europod, in co-production with Sphera Network.

Follow us on: LinkedIn, Instagram

Hosted on Acast. See acast.com/privacy for more information.

Monde Numérique - Jérôme Colombain

As the European "AI Act" regulation gradually comes into force, listen again to this no-nonsense perspective from Luc Julia, artificial intelligence expert, co-inventor of Siri and scientific director at Renault. A hard-hitting conversation about sovereignty, regulation, and the future of agentic AI. Rebroadcast of 6 March 2025.

In this episode, Luc Julia returns to:
The AI summit in Paris: between ambitious budget announcements and uncertain realities on the ground.
The regulation of artificial intelligence: why could regulating too early stifle innovation?
Europe's lag in chips and data centres.
The importance of supporting French start-ups through the scale-up stage.
The potential of open-source AI: more transparent, more frugal, and better suited to "at-home" AI.
The emergence of agentic AI: more specialised, orchestrated systems capable of acting on the world, notably in robotics.
-----------

Tech Update | BNR
Google joins the AI Act code of practice after all, but with many reservations


Play Episode Listen Later Jul 30, 2025 6:26


A win for European AI regulation: after earlier resistance, Google has now agreed to the new guidelines after all. Joe van Burik covers it in this Tech Update. Also in this Tech Update: in Italy, AI is to help in the fight against climate change by predicting landslides and mudflows; and a scandal in the smart-sex-toys category at maker Lovense. See omnystudio.com/listener for privacy information.

Security Forum Podcasts
S35 Ep7: SUMMER LISTENING Dragos Tudorache - AI for Good: EU's vision


Play Episode Listen Later Jul 29, 2025 27:46


In this episode, Steve speaks with Dragos Tudorache, one of the members of the European Parliament who is responsible for writing the EU's AI Act. Dragos explains the thought process that went into developing the new law and tells Steve what organisations can expect and how they can prepare for its implementation. Mentioned in and related to this episode: ISF Podcast: Ellie Pavlick - Balancing the Risk and Reward of AI ISF Podcast: The Ethical Dilemma of AI & Innovation ISF Podcast: Beyond Buzzwords: AI, ML, and the Future of Cyber ISF Podcast: Mo Gawdat: Rethinking the Paradigm of Artificial and Human Intelligence ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.

Monde Numérique - Jérôme Colombain

While I am still in Beirut, this new episode of the Debrief Transat with Bruno Guglielminetti is an opportunity to explore how artificial intelligence is taking hold even where you least expect it. In Lebanon, despite a tense political and economic context, local initiatives are emerging to democratize AI, raise awareness among start-ups, and try to stem the brain drain. A conference called AI and Lebanon testifies to this ferment.

We also look at the explosion in uses of ChatGPT and personalized AI agents. I share my very concrete tests with ChatGPT's new AI Agent, capable of placing orders on Uber Eats or analyzing my calendar. A glimpse of the future of digital assistance.

We also talk about Grok, Elon Musk's AI, which now comes in avatars aimed at children, a controversial initiative that raises ethical questions. Finally, a focus on the European AI Act, which comes into force in August: a new phase of regulation that could mark a turning point for innovation in Europe.

Lastly, we preview the line-ups of our respective weekly podcasts.

Mon Carnet, l'actu numérique
{RÉFLEXION} - Debrief with Jérôme Colombain

Mon Carnet, l'actu numérique

Jul 27, 2025 · 22:01


From Beirut, Jérôme Colombain recounts his discovery of an AI conference in Lebanon, including a traveling bus raising awareness of new technologies and a drive to prevent brain drain. He also shares his test of the new GPT agent, capable of placing an order on Uber Eats or analyzing a Google calendar in detail. Then a discussion of Grok's virtual avatars, including a controversial project aimed at children, and of concerns about American laxity on AI compared with the upcoming European AI Act.

Monde Numérique - Jérôme Colombain

Between a Donald Trump promising a constraint-free future for AI and a Europe entering the regulatory era with the AI Act, the transatlantic divide is widening. ChatGPT 5 is on the horizon, cyberattacks are multiplying, and sport is entering a new era with AI-generated commentary.

La Voix du CDO
Gil Katz: RAG, agents, custom AI - how to extract the hidden value of public data

La Voix du CDO

Jul 25, 2025 · 61:51


In this episode, Gil Katz shows how Intelsia uses artificial intelligence to turn scattered public data into a strategic lever for companies. Tender matching, custom RAG, intelligent agents... a real return to fundamentals: creating value, not hype.

The Agenda
#49 - Prof. Dr. Christoph Lütge - AI ethics in the boardroom: what supervisory and executive boards need to know beyond compliance

The Agenda

Jul 23, 2025 · 23:49


What ethical challenges arise from the use of AI? Is meeting legal requirements enough? And what responsibility do executive and supervisory boards bear? In this episode of The Agenda podcast, presented by Sherpany, Prof. Dr. Christoph Lütge, Chair of Business Ethics and Director of the Institute for Ethics in AI at the Technical University of Munich, talks with podcast host Ingo Notthoff about the ethical use of artificial intelligence. He also explains how the European Union's AI Act affects the trustworthy and responsible use of AI, and how supervisory and executive boards can verify compliance with its provisions.

The episode at a glance:
- Ethical questions are crucial for trust in AI
- The five principles of AI ethics
- Companies must actively engage with the risks of AI
- Compliance is important, but not enough
- Ethical risks can become business risks
- Supervisory and executive boards must actively implement AI ethics principles
- The EU AI Act creates uncertainty for companies
- Human monitoring is a key aspect of AI ethics
- Companies should see AI ethics as a competitive advantage
- Ethical AI use can strengthen user trust

Thank you for listening! Visit us at Sherpany.com or follow us on LinkedIn for board, board committee, and executive meetings solutions.

The Sunday Show
How the EU's Voluntary AI Code is Testing Industry and Regulators Alike

The Sunday Show

Jul 13, 2025 · 21:39


Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act's implementation timeline, with some calling to “stop the clock” on the AI Act's rollout.To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.

Masters of Privacy
Nathalie Barrera: NIS2 (EU) and the interplay between cybersecurity, privacy, AI, and IoT data laws

Masters of Privacy

Jul 13, 2025 · 30:33


Will EU cybersecurity laws result in new global standards? Should companies handle NIS2 compliance in concert with GDPR, AI Act, or Data Act requirements? Does it make sense to take data localization to its ultimate consequences?

Nathalie Barrera serves as the Director for Privacy for the EMEA region at Palo Alto Networks, a leading provider of cybersecurity solutions. Her expertise involves the company's compliance with NIS2, the AI Act, the GDPR, and DORA. She also assists customers in navigating their own complex regulatory requirements. She previously spent seven years at Cisco Systems as commercial counsel and Privacy and Security Counsel. She studied law and completed her LLM at the University of Navarra.

References:
- Nathalie Barrera on LinkedIn
- EU Network and Information Security Directive II (NIS2)
- EU Data Act
- EU Digital Operational Resilience Act (DORA)

Breitband - Medien und digitale Kultur (ganze Sendung) - Deutschlandfunk Kultur
"Stop the Clock" - Offener Brief fordert Aufschub des AI Act

Breitband - Medien und digitale Kultur (ganze Sendung) - Deutschlandfunk Kultur

Jul 12, 2025 · 35:51


More than 100 companies are calling for a two-year suspension of the AI regulation. Is the EU legislation hostile to business, or a safeguard for fundamental rights? Also: China's perspective on the US TikTok law. And: what remains of the metaverse hype? Vera Linß, Hagen Terschüren, Katharin Tai, Dennis Kogel, Marcus Richter www.deutschlandfunkkultur.de, Breitband

AI Briefing Room
EP-316 Microsoft's Pakistan Exit

AI Briefing Room

Jul 7, 2025 · 2:11


i'm wall-e, welcoming you to today's tech briefing for monday, july 7th. here's what's happening in the tech world:
- microsoft's strategic shift: microsoft is winding down its pakistan operations after 25 years, maintaining customer agreements through resellers and neighboring offices, impacting five employees.
- rivian's major investment: rivian receives a significant $1 billion investment from volkswagen, while techcrunch mobility plans a move to the beehiiv platform, enhancing reader engagement.
- uber's potential deal: speculation arises around uber's travis kalanick exploring a deal with the u.s. arm of pony ai, indicating fluid movements in the mobility sector.
- eu's ai act progresses: the eu is advancing its ai act, planning a risk-based regulatory framework by 2026, addressing "unacceptable risk" and transparency for "high-risk" ai applications.
- gallant's advancement in pet healthcare: gallant secures $18 million for ready-to-use stem cell therapies targeting pet conditions, marking a breakthrough in biotech solutions for veterinary medicine.

that's all for today. we'll see you back here tomorrow.

The Lawfare Podcast
Lawfare Archive: Itsiq Benizri on the EU AI Act

The Lawfare Podcast

Jul 5, 2025 · 43:54


From February 16, 2024: The EU has finally agreed to its AI Act. Despite the political agreement reached in December 2023, some nations maintained reservations about the text, making it uncertain whether there was a final agreement or not. They recently reached an agreement on the technical text, moving the process closer to a successful conclusion. The challenge now will be effective implementation.

To discuss the act and its implications, Lawfare Fellow in Technology Policy and Law Eugenia Lostri sat down with Itsiq Benizri, counsel at the law firm WilmerHale Brussels. They discussed how domestic politics shaped the final text, how governments and businesses can best prepare for new requirements, and whether the European act will set the international roadmap for AI regulation.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare.

Daily Tech Headlines
The EU Is Proceeding With Its AI Act Despite Tech Companies' Efforts To Delay It – DTH

Daily Tech Headlines

Jul 4, 2025


Taiwan Semiconductor Manufacturing is delaying construction of a second plant in Japan, Ilya Sutskever announced he will take on the CEO role at his AI startup, Safe Superintelligence, and the EU is proceeding with its AI Act despite tech companies' efforts to delay it.

Tech&Co
Perplexity launches a $200-a-month AI subscription – 03/07

Tech&Co

Jul 3, 2025 · 30:47


On Thursday, July 3, François Sorel welcomed Lucas Perraudin, founder of AI Partners and former director at Meta Reality Labs, Jérôme Marin, founder of Cafetech.fr, and Didier Sanz, a journalist specializing in IT. They discussed Perplexity's launch of a $200-a-month AI subscription, the companies' big push to postpone AI regulation in Europe, the competition between Starlink and Eutelsat to equip TGV trains with Wi-Fi, and the partnership between France TV and Amazon Prime Video, on Tech & Co, la quotidienne, on BFM Business. Catch the show Monday through Thursday and listen again as a podcast.

The Road to Accountable AI
Kay Firth-Butterfield: Using AI Wisely

The Road to Accountable AI

Jun 26, 2025 · 29:32 · Transcription Available


Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world's first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She traces the field from its early days, when it was dominated by philosophical debates, to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without safeguards is eroding public confidence. Drawing on her work with Fortune 500 firms and her own cancer journey, she argues for human-centered AI, especially in high-stakes areas like healthcare and law. She also underscores the equity issues tied to biased training data and lack of access in the Global South, noting that AI is now generating data based on historical biases. Despite these challenges, she remains optimistic and calls for greater focus on sustainability, access, and AI literacy across sectors.

Kay Firth-Butterfield is the founder and CEO of Good Tech Advisory LLC. She was the world's first C-suite appointee in AI ethics and was the inaugural Head of AI and Machine Learning at the World Economic Forum from 2017 to 2023. A former judge and barrister, she advises governments and Fortune 500 companies on AI governance and remains affiliated with Doughty Street Chambers in the UK.

- Transcript
- Kay Firth-Butterfield Is Shaping Responsible AI Governance (Time100 Impact Awards)
- Our Future with AI Hinges on Global Cooperation
- Building an Organizational Approach to Responsible AI
- Co-Existing with AI - Firth-Butterfield's Forthcoming Book

Learn French with daily podcasts
Imbroglio réglementaire (Regulatory imbroglio)

Learn French with daily podcasts

Jun 24, 2025 · 3:07


L'Union européenne envisage de repousser certaines obligations du futur AI Act. Les règles sur la gouvernance et les modèles génériques pourraient n'entrer en vigueur qu'en 2026.

Traduction: The European Union is considering delaying parts of the upcoming AI Act. Governance rules and requirements for general-purpose models might not take effect until 2026.

The Sunday Show
Addressing Questions Over Europe's AI Act, Digital Sovereignty, and More

The Sunday Show

Jun 15, 2025 · 44:11


In Europe, the digital regulatory landscape is in flux. Over the past few years, the EU has positioned itself as a global leader in tech regulation, rolling out landmark laws like the AI Act. But now, as the much-anticipated AI Act approaches implementation, the path forward is looking anything but smooth. Reports suggest the European Commission is considering a delay to the AI Act's rollout due to mounting pressure from industry, difficulties in finalizing technical standards, and geopolitical tensions, including pushback from the US government. At the same time, a broader movement for Europe to reduce its dependence on American tech is gaining momentum. What does this push for digital sovereignty actually mean? To help us unpack all of this, Tech Policy Press associate editor Ramsha Jahangir spoke to Kai Zenner, Head of Office and Digital Policy Advisor to German MEP Axel Voss, and one of the more influential voices shaping the future of EU digital policy.