We are delighted to share with you the recording of our recent webinar, Professional Services in the Age of AI: Regulation, Strategy, and Real-World Impact. In this discussion, Anthony Davis (FisherBroyles), Devika Kewalramani (FisherBroyles), Dr Robert Millard (Cambridge Strategy Group), David Shufflebotham (PepUp Consulting) and Nick Leale (CM Murray LLP) join chair Corinne Staves (CM Murray LLP) to explore the impact of technology on professional services firms, with a particular focus on regulation. Topics covered include:

- How is technology reshaping professional services firms? From AI-powered research and disclosure tools to virtual client engagement, technology is now embedded in every aspect of firm strategy and operations.
- What are the implications for structure, pricing and regulation? AI is driving a shift away from traditional leverage models, elevating the role of technologists and challenging the billable hour. Value-based pricing is gaining ground, and firms must rethink how they assess and compensate their partners.
- What regulatory and ethical risks must firms manage? Competence, confidentiality and supervision remain central. Firms must develop clear AI policies, invest in training and maintain strong oversight, particularly as regulatory scrutiny increases.
- What practical steps should firms take now? Define your AI business case, benchmark tools, train your teams and document compliance. The next generation of lawyers is already embracing AI, and leaders must act now to stay ahead.

This summary was drafted using Microsoft Copilot, as part of our ongoing exploration of how AI can support and enhance professional services. If you would like to discuss any aspect of AI in relation to your firm, or if you have any specific questions arising from this discussion, please contact Partner and non-contentious partnership specialist Corinne Staves, or Partner and leading regulatory law specialist Nick Leale. For example, CM Murray LLP can assist firms with reviewing their HR, risk management and other policies and/or their terms of business, with changes to governance and compensation systems for partners as a result of the increased adoption of technology, and with the regulatory implications of the use of AI and technology in client service.
In a special Future of Everything podcast episode recorded live before a studio audience in New York, host Russ Altman talks to three authorities on the innovation economy. His guests – Fei-Fei Li, professor of computer science and co-director of the Stanford Institute for Human-Centered AI (HAI); Susan Athey, professor and authority on the economics of technology; and Neale Mahoney, Trione Director of the Stanford Institute for Economic Policy Research – bring their distinct-but-complementary perspectives to a discussion on how artificial intelligence is reshaping our economy.

Athey emphasizes that both AI broadly and AI-based coding tools specifically are general-purpose technologies, like electricity or the personal computer, whose impact may be felt quickly in certain sectors but much more slowly in aggregate. She tells how solving one bottleneck to implementation often reveals others – whether in digitization, adoption costs, or the need to restructure work and organizations. Mahoney draws on economic history to say we are in a "veil of ignorance" moment with regard to societal impacts. We cannot know whose jobs will be disrupted, he says, but we can invest in safety nets now to ease the transition. Li cautions against assuming AI will replace people. Instead, she speaks of AI as a "horizontal technology" that could supercharge human creativity – but only if it is properly rooted in science, not science fiction.

Collectively, the panel calls on policymakers, educators, researchers, and entrepreneurs to steer AI toward what they call "human-centered goals" – protecting workers, growing opportunities, and supercharging education and medicine – to deliver broad and shared prosperity. It's the future of the innovation economy on this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Fei-Fei Li
Stanford Profile: Susan Athey
Stanford Profile: Neale Mahoney

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction
Russ Altman introduces live guests Fei-Fei Li, Susan Athey, and Neale Mahoney, professors from Stanford University.
(00:02:37) Lessons from Past Technology
Comparing AI with past technologies and the bottlenecks to their adoption.
(00:06:29) Jobs & Safety Nets
The uncertainty of AI's labor impact and investing in social protections.
(00:08:29) Augmentation vs. Replacement
Using AI as a tool to enhance, not replace, human work and creativity.
(00:11:41) Human-Centered AI & Policy
Shaping AI through universities, government, and global collaboration.
(00:15:58) Education Revolution
The potential for AI to revolutionize education by focusing on human capital.
(00:18:58) Balancing Regulation & Innovation
Balancing pragmatic, evidence-based AI policy with entrepreneurship.
(00:22:22) Competition & Market Power
The risks of monopolies and the role of open models in fair pricing.
(00:25:22) America's Economic Funk
How social media and innovation are shaping America's declining optimism.
(00:27:05) Future in a Minute
The panel shares what gives them hope and what they'd study today.
(00:30:49) Conclusion

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and author of several books. Taylor also joined me for a discussion more than 5 years ago now. And a lot has happened in that time.

Upcoming episodes will include guests Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
"How the Next Government can Protect Canada's Information Ecosystem." Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith (00:00-00:43): Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen (00:43-00:44): It's a different world.

Taylor (00:45-00:45): In some ways.

Nate Erskine-Smith (00:45-01:14): Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's talk about set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?

Taylor Owen (01:14-03:06): I mean, this is part of the technology moves fast.
But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about sort of some of the more acute harms of like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way five years ago that we shared, consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI and particularly chatbots. And I think a big question we face in this conversation is, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith (03:07-03:55): My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online and our laws need to reflect that reality. All of the challenges you've articulated to varying degrees exist in offline spaces, but can be incredibly hard. The rules we have can be incredibly hard to enforce at a minimum in the online space. And then some rules are not entirely fit for purpose and they need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.

Taylor Owen (03:55-04:31): Yeah, exactly. Like the commission that spent a year at the request of all political parties in parliament, at the urging of the opposition party, so it spent a year looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like that is a remarkable statement and it kind of came and went. And I don't know why we moved off from that so fast.

Nate Erskine-Smith (04:31-05:17): Well, and there's a lot to pull apart there because you've got purposeful, intentional, bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook, Instagram through Meta block news in Canada. And your research, this was the stat that stood out.
Don't want to put you in and say like, what do we do? Okay. So there's, you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.

Taylor Owen (05:17-05:17): A day. Yeah.

Nate Erskine-Smith (05:18-05:18): So right.

Taylor Owen (05:18-05:27): 11 million views a day. And we should sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.

Taylor (05:27-05:29): So 11 million times a Canadian.

Taylor Owen (05:29-05:45): And what that means is 11 million times a Canadian would open one of their news feeds and see Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.

Taylor (05:45-05:45): Okay.

Taylor Owen (05:45-05:46): So that's just it.

Nate Erskine-Smith (05:46-06:04): So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, Canadians, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.

Taylor Owen (06:04-06:04): They say they get.

Nate Erskine-Smith (06:04-06:05): It doesn't make any sense.

Taylor Owen (06:06-06:23): It doesn't and it does. It's terrible. They ask Canadians, like, where do you get people who use social media to get their news? Where do they get their news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations— Traditional journalism. There is—

Taylor (06:23-06:23): There is—

Taylor Owen (06:23-06:47): Well, this is what I was going to get at, right? Like, there is—one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would be labeled probably news in people's.

Nate Erskine-Smith (06:47-06:48): Can't trust the thing we say.

Taylor Owen (06:48-07:05): Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, like that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith (07:05-07:06): It's public affairs content.

Taylor Owen (07:06-07:39): Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.

Taylor (07:39-07:39): Okay.

Taylor Owen (07:39-07:56): So all of journalism combined in the entire country, 20 percent of engagement, influencers, 50 percent in the last election.
So like we've shifted, at least on social, the actors and people and institutions that are fostering our public.

Nate Erskine-Smith (07:56-08:09): Is there a middle ground here where you take some people that play an influencer type role but also would consider themselves citizen journalists in a way? How do you – It's a super interesting question, right?

Taylor Owen (08:09-08:31): Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can be – do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it's revelatory and it's a journalistic act, right?

Taylor (08:31-08:34): Like this is kind of a journalistic act we're playing here.

Taylor Owen (08:35-08:49): So I don't think – I think these lines are gray. But I mean there's some other underlying things here, which, like, it matters, I think, if journalistic institutions go away entirely, right? Like that's probably not a good thing. Yeah, I mean, that's why

Nate Erskine-Smith (08:49-09:30): I say it's terrifying is there's a there's a lot of good in the in the digital space that is trying to be there's creative destruction, there's a lot of work to provide people a direct sense of news that isn't that filter that people may mistrust in traditional media. Having said that, so many resources and there's so much history to these institutions and there's a real ethics to journalism and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that is devastating for democracy. I think so.

Taylor Owen (09:30-09:49): And I think the bigger frame of that for me is a democracy needs signals of – we need – as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we're not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.

Nate Erskine-Smith (09:49-10:13): And that's what – that is really going away. Pause for a sec. So you could imagine signals of reliability is a good phrase. What does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generated

Taylor (10:13-10:14): and something that is machine generated.

Nate Erskine-Smith (10:15-10:26): That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,

Taylor (10:27-10:27): which is probably true.

Nate Erskine-Smith (10:28-10:33): But how do you have a signal of reliability in a different kind of content?

Taylor Owen (10:34-13:12): I mean, we're getting into like a journalistic journalism policy here to a certain degree, right? And it's a wicked problem because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real like third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example and a horrible example happened last week, right? So Charlie Kirk gets assassinated.
I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I've ever seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists and like on both sides. Right. People blaming Israel, people, whatever. Right. And that isn't a function of like- Aaron Charlie Kirk to Jesus. Sure. Like- It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of like there was journalism being produced about that. Like New York Times, citizens were doing good content about what was happening. It was like a moment of uncertainty and journalism was doing or playing a role, but it wasn't. And so I think with all of these questions, including the online harms ones, and I think how we step into an AI governance conversation, the focus always has to be on those systems. I'm like, what is who and what and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? Like all of that stuff we've been talking about, the role reliability of information plays, like these algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor (13:12-13:15): That's not a – instead of reactionary –

Taylor Owen (13:15-13:42): Or like what's most – it gets most engagement or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.

Nate Erskine-Smith (13:43-14:12): And your focus on systems and assessing risks with systems, I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen (14:12-14:23): I mean – Yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the – Well, they're twofold, right?

Nate Erskine-Smith (14:23-14:33): There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well.
Exactly.

Taylor (14:33-14:35): So the sort of supply and demand side thing, right?

Nate Erskine-Smith (14:35-14:38): There's the digital service tax, which is no longer a thing.

Taylor Owen (14:40-14:52): Although it still is a piece of past legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith (14:52-14:55): I don't take full responsibility for that one.

Taylor Owen (14:55-14:56): No, you shouldn't.

Nate Erskine-Smith (14:58-16:03): But other countries have seen more success. Yeah. And so you've got in the UK, in Australia, the EU really has led the way. 2018, the EU passes GDPR, which is a privacy set of rules, which we are still behind seven years later. But you've got in 2022, 2023, you've got the Digital Services Act that passes. You've got the Digital Markets Act. And as I understand it, and we've had, you know, we've both been involved in international work on this. And we've heard from folks like Frances Haugen and others about the need for risk-based assessments. And you're well down the rabbit hole on this. But isn't it at a high level? You deploy a technology. You've got to identify material risks. You then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office, that has the ability to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.

Taylor Owen (16:04-16:05): That's like how I have it in my head.

Nate Erskine-Smith (16:05-16:06): I mean, that's it.

Taylor Owen (16:08-16:14): Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.

Nate Erskine-Smith (16:14-16:25): Exactly. Which people, I want to get to that because C-63 gets us a large part of the way there. I think so. And yet has been sort of like cast aside.

Taylor Owen (16:25-17:39): Exactly. Let's touch on that. But I do think what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. There's no – digital economy and our digital lives are so vast and the incentives and the effect they have on society is so broad that there's no one solution. So anyone who tells you fix privacy policy and you'll fix all the digital problems we just talked about are full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've begun to build a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So like you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all over.

Nate Erskine-Smith (17:39-17:41): Especially in minority parliaments, short periods of time, legislatively.

Taylor Owen (17:41-18:20): Different countries have taken different pieces of it.
Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that, right, that when we talked last was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? Like a lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and trying to learn from what Europe, Australia and the UK had all done. And we were kind of taking the benefit of being late, right? So they were all ahead of us.

Taylor (18:21-18:25): People you work with on that grand committee. We're all quick and do our own consultations.

Taylor Owen (18:26-19:40): Exactly. And like the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says if you're going to launch a digital product, right, like a consumer-facing product in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Like things that are like broad categories that we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong, so let's say, for example, let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?

Nate Erskine-Smith (19:40-19:40): Yeah.

Taylor (19:40-19:43): Like what could go wrong? Yeah, a lot could go wrong.

Taylor Owen (19:43-20:27): And maybe strange men will approach teenage girls. Maybe, right? Like if you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? Like you put in a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used so that we can monitor, publics and researchers can monitor, whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was like a widespread problem of teenage girls being harassed by strange older men.

Taylor (20:28-20:29): Incredibly creepy.

Taylor Owen (20:29-20:37): A very easy, but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor (20:37-20:41): And this kind of mechanism would have just filtered out.

Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit before you launch a product in a country about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith (20:52-21:05): Yeah, I quite like the – we – I mean, maybe you've got a better read of this, but in the UK, California has pursued this.
I was looking at recently, Elizabeth Denham is now the Jersey Information Commissioner or something like that.

Taylor Owen (21:05-21:06): I know it's just yeah.

Nate Erskine-Smith (21:07-21:57): I don't random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the information commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach. In that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it. And just forcing companies to ensure that the default settings are prioritizing child safety so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.

Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth like referring back to what we mean. Like a duty of care can seem like an obscure concept. But in law it's a real thing, right? Like you walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? Like that is a duty of care that you have for me when I walk into your public space or private space. Like that's all we're talking about here. And the age-appropriate design code, yes, like sort of developed, implemented by a Canadian in the UK. And what it says, it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? But we think like kids don't have the same rights as adults. We have different duties to protect kids as adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Like kind of things that like – Seem obvious. And if you're now a child in the UK and you open – you go on a digital product, you are safer because you have an age-appropriate design code governing your experience online. Canadian kids don't have that because that bill didn't pass, right? So like there's consequences to this stuff. And I get really frustrated now when I see the conversation sort of pivoting to AI, for example, right? Like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real, right? Like not discounting its power. And just move on from all of these both problems and solutions that have been developed to a set of challenges that both still exist on social platforms, like they haven't gone away, people are still using these tools and the harms still exist, and probably are applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done is just... to the people working in this space and like the wide stakeholders in this country who care about this stuff and working on it. It just, it feels like you say deja vu at the beginning and it is deja vu, but it's kind of worse, right?
Cause it's like deja vu and then ignoring the

Taylor (24:25-24:29): five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even

Taylor Owen (24:29-24:41): Well, yeah. I mean, hopefully I actually am not, I'm actually optimistic, I would say that we will, because I actually think of if for a few reasons, like one, citizens want it, right? Like.

Nate Erskine-Smith (24:41-24:57): Yeah, I was surprised on the, so you mentioned there that the rules that we design, the risk assessment framework really applied to social media could equally be applied to deliver AI safety and it could be applied to new technology in a useful way.

Taylor Owen (24:58-24:58): Some elements of it. Exactly.

Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that a little bit because I want to pull the pieces together. So I had a constituent come in the office and he is really like super mad. He's super mad. Why is he mad? Does that happen very often? Do people be mad when they walk into this office? Not as often as you think, to be honest. Not as often as you think. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen (25:25-25:25): Okay.

Nate Erskine-Smith (25:25-26:36): Okay. Yep. He believes Mark Carney ripped him off, not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online, Mark Carney told him to invest money. He invested money and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. Like what, how could you have been deceived? But then I go and I watched the video. And it is, okay, I'm not gonna send the 200 bucks, and I've grown up with the internet, but I can see how- Absolutely. In the same way, phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud. If we aren't already, we are going to see many challenges with respect to AI safety. What over and above the risk assessment piece, what do we do to address these challenges?

Taylor Owen (26:37-27:04): So that is a huge problem, right? Like the AI fraud, AI video fraud is a huge challenge. In the election, when we were monitoring the last election, by far the biggest problem or vulnerability of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and generate, like morph them into conversations about investment strategies.

Taylor (27:05-27:07): And it was driving people to a crypto scam.

Taylor Owen (27:08-27:11): But it was torquing the political discourse.

Taylor (27:11-27:11): That's what it must have been.

Taylor Owen (27:12-27:33): I mean, there's other cases of this, but that's probably, and it was running rampant on particularly Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? Like nobody – but it torqued our political debate. It ripped off some people. And these kinds of scams are –

Taylor (27:33-27:38): It's clearly illegal. It's clearly illegal. It probably breaks the election law too, misrepresenting a political figure, right?

Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely.
So what do you do about that, right? And the head of the Canadian Banking Association said there's like billions of dollars in AI-based fraud in the Canadian economy right now. Right? So it's a big problem.

Taylor (27:54-27:55): Yeah.

Taylor Owen (27:55-28:46): I actually think there's like a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on like AI fraud potential of your product. Responsibility for your platform. And then it starts to circulate. We would see it. They'd be called out on it. They'd have to take it down. And like that's that, right? Like so that we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these like online harms risk assessment models, and bringing in some of the consumer-facing AI, both products and related harms, into the framework.

Nate Erskine-Smith (28:47-30:18): To put it a different way, I mean, so this is years ago now that we had this, you know, grand committee in the UK holding Facebook and others accountable. This really was created in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability and saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is they completely acknowledged the need for them to take action when it came to child pornography. And so they said, yeah, well, you know, no liability for us. But of course, there can be liability on this one specific piece of content and we'll take action on this one specific piece of content. And it always struck me from there on out. I mean, there's no real intellectual consistency here. It's more just what should be in that category of things that they should take responsibility for. And obviously harmful content like that should be – that's an obvious first step but obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they're ignoring privacy protections and everything else throughout the years, I mean, we can't leave it up to them. And setting a clear set of rules to say this is what you're responsible for and expanding that responsibility seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms. Because there are different speech implications and democratic implications of sort of absolute solutions to different kinds of content.

Taylor (30:28-30:30): So like child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, so including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation, and that's to take it down within 24 hours. And the reason you can do it with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there's a lot of naked images on the internet that we can train AI with.
So we're actually pretty good at using AI to pull this stuff down. But the bigger one is that we are, I think, as a society, it's okay to be wrong in the gray area of that speech, right? Like if something is like debatable, whether it's child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story, right? Like we do not want to suppress and over-index for that gray area on hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category where it's, you know, very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is like a takedown mechanism, right? Like if we see fraud, if we know it's fraud, then you take it down, right? Some of these other things we have to go with.

Nate Erskine-Smith (32:02-32:24): I mean, my last question really is you pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government that lots of similar folks around the table, but a new government and a new prime minister certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister, a junior minister to industry, but still a specific title portfolio and with his own deputy minister, and really wants to be seized with this. And in a way, I think that from every conversation I've had with him, that wants to maximize productivity in this country using AI, but is also cognizant of the risks and wants to address AI safety. So where from here? You know, you've talked in the past about sort of a grander sort of tech accountability and sovereignty act. Do we do piecemeal, you know, a privacy bill here and an AI safety bill and an online harms bill and we have disparate pieces? What's the answer here?

Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But I think there's some like, there's some lessons from the past that we can, this government could take. And one is piecemeal bills that aren't centrally coordinated or have no sort of connectivity between them end up with piecemeal solutions that are imperfect and like would benefit from some cohesiveness between them, right? So when the previous government released AIDA, the AI act, it was like really in tension in some real ways with the online harms approach. So two different departments issuing two similar bills on two separate technologies, not really talking to each other as far as I can tell from the outside, right? So like we need a coordinated, comprehensive effort on digital governance. Like that's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to that he or that office could be that role. Like you could – because AI is – it's cross-cutting, right? Like every department in our federal government touches AI in one way or another. And the governance of AI and the adoption on the other side of AI by society is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces that would help us catch up to GDPR. Which it sounds like they will, right? Some version of C-27 will probably come back.
If he pulls in the online harms pieces that aren't related to the criminal code and drops those provisions, says, you know, Sean Fraser, you can deal with this if you like. But these are the pieces I'm holding on to.

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37-34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? Like these are all... So then you have thematically a bill that makes sense. And then you can pull in as well the AI safety piece. And then it becomes a consumer protection bill when it comes to living our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument whether that should be one bill or whether it's multiple ones. I actually don't think it... I think there's cases for both, right? There's concern about big omnibus bills that do too many things and too many committees reviewing them and whatever. That's sort of a machinery of government question, right? But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics, right? We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is, at the same time as they're being told by our government, by companies, that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear and might take their... Why should I use this thing? When I'm seeing some harms, I don't see you guys doing anything about these harms. And I'm seeing some potential real downside for me personally and my family. So even in the adoption frame, I think thinking about data privacy, safety, consumer safety, I think to me, that's the real frame here. It's like citizen safety, consumer safety using these products. Yeah, politically, I just, I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because like I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just like make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's going to always be a place with sort of content that I would prefer he doesn't see. But I would just like some basic safety standards on that thing. So he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly at YouTube with its promotion engine, the recommendation function is not actively promoting terrible content to your 12-year-old.

Taylor Owen (36:59-37:31): Yeah. That's like de minimis. Can we just torque this a little bit, right? So like maybe he's not seeing content about horrible content about Charlie Kirk when he's a 12-year-old on YouTube, right? Like, can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance.
That will not – and that's – it's worth putting a real emphasis on that. One thing we've learned in this moment of repeated deja vus, going back 20 years really, since our experience with social media for sure through to now, is that these companies don't self-govern.

Taylor (37:31-37:31): Right.

Taylor Owen (37:32-37:39): Like we just – we know that indisputably. So to think that AI is going to be different is delusional. No, it'll be pursuing profit, not the public interest.

Taylor (37:39-37:44): Of course. Because that's what we are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?

Taylor Owen (37:44-38:00): We're creating something new with the scale of these companies. And to think that their commercial incentives and their broader long-term goals around AI are not going to override these safety concerns is just naive in the nth degree.

Nate Erskine-Smith (38:00-38:38): But I think you make the right point, and it's useful to close on this, that these goals of realizing the productivity possibilities and potentials of AI alongside AI safety, these are not mutually exclusive or oppositional goals. That if you create a sandbox to play in, companies will be more successful. And if you have certainty in regulations, companies will be more successful. And if people feel safe using these tools, you know, if I feel safe with my kids learning these tools growing up in their classrooms and everything else, adoption rates will soar. Absolutely. And then we'll benefit.

Taylor Owen (38:38-38:43): They work in tandem, right? And I think you can't have one without the other fundamentally.

Nate Erskine-Smith (38:45-38:49): Well, I hope I don't invite you back five years from now when we have the same conversation.

Taylor Owen (38:49-38:58): Well, I hope you invite me back in five years, but I hope it's like thinking back on all the legislative successes of the previous five years. I mean, that'll be the moment.

Taylor (38:58-38:59): Sounds good. Thanks, David. Thanks.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca
Preview: Kevin Frazier of University of Texas Law School/Civitas Institute discusses congressional concerns over AI regulation, balancing state interests versus federal goals of preventing cross-state policy projection and prioritizing national AI innovation and growth.
Amid reports of multiple people dying or taking their own lives after interacting with artificial intelligence chatbots, the families of victims are demanding better safeguards from developers.
Plus - Google Photos users on Android can now edit their photos by talking to or texting the AI; Disney is raising the price of Disney+.
Meta has raised the stakes in Big Tech's fight against AI regulation. The Facebook-maker is investing "tens of millions" of dollars into a new super PAC to fight state-level tech policy proposals that could stifle AI advancement, reports Axios. Also, Sydney, Australia-based Alloy thinks it can help with that issue: the startup is building data infrastructure for robotics companies to help them process and organize all the data their robots collect from various sources, including sensors and cameras.
Calum Chase, co-founder of Conscium, discusses why fears of an AI winter have been greatly exaggerated, why the industry has moved past the hype phase and will continue to grow, and the transformational impacts already happening as well as those yet to come. Conscium is a research organisation focused on applied AI consciousness. The group was founded to explore questions about the nature of consciousness and how they can apply to AI technology.
Episode 78 with Oonagh van den Berg, Prof. Peter Reuter, and Dr. Mirko Nazzari
Industry visionary Graham Wilkinson joins the podcast to talk about the industry's adoption of AI, where it's working and where it's not. The team examines the role of AI across generative advertising, data fragmentation, breaking down silos and the genesis of creativity.

Thanks for listening! Follow us on Twitter and Instagram or find us on Facebook.
Welcome to a fascinating exploration of the hidden legal battles shaping tomorrow's technology. Predictive algorithms have become the crystal balls of modern business, forecasting everything from home prices to healthcare costs, but they're also becoming the center of high-stakes courtroom dramas worth hundreds of millions of dollars.

Across the globe, from Texas courtrooms to China's Supreme People's Court, judges and juries are answering a profound question: who owns the right to predict the future? The HouseCanary v. Amrock case resulted in a staggering $600 million verdict over real estate valuation algorithms, while Alibaba secured a 30 million RMB judgment against a company that allegedly scraped its predictive marketing tools. Even industrial applications aren't immune, with companies like Shen Group successfully protecting predictive design software for machinery components.

What makes these cases particularly compelling is how they're redefining intellectual property law. Courts are now recognizing that AI model weights, the mathematical parameters tuned during training, qualify as protectable trade secrets. Data pipelines, prediction engines, and algorithmic structures have all received similar protection. The real drama often unfolds when employees change companies, raising thorny questions about what constitutes general expertise versus proprietary knowledge that belongs to the former employer.

Healthcare prediction presents especially valuable territory, with ongoing battles between companies like Qruis and Epic Systems, or Milliman and Gradient AI, demonstrating how patient data forecasting creates immensely valuable intellectual property. Whether it's forecasting home values on Zillow or optimizing Medicare billing, these predictive tools aren't just convenient features, they're corporate crown jewels worth protecting at almost any cost.

Ready to dive deeper into the invisible rules governing innovation? Subscribe now and join us as we continue to decode the legal frameworks shaping our technological future. The algorithms may predict tomorrow, but who gets to own those predictions? That's what we're exploring on Intangiblia.

Get the book!
Welcome to episode #1001 of Thinking With Mitch Joel (formerly Six Pixels of Separation). Toby Stuart is a Distinguished Professor of Business Administration at the Haas School of Business, UC Berkeley, where he directs the Berkeley-Haas Entrepreneurship Program and the Institute for Business Innovation. Over his career, he has also taught at Harvard, Columbia, Chicago Booth and MIT Sloan, and he is recognized globally as one of the leading thinkers on entrepreneurship, networks and organizational strategy. Beyond academia, Toby sits on the boards of multiple technology companies, cofounded the Black Venture Institute, and serves as the founding Chairman of Workday's AI Advisory Board. His latest book, Anointed - The Extraordinary Effects Of Social Status In A Winner-Take-Most World, examines the invisible hierarchies that govern so much of human life and why small advantages so often compound into massive outcomes. From why blurbs on books sway readers, to how neighborhoods or technologies become "the next big thing," to the inequalities embedded in who gets credit for innovation, Anointed reveals how status shapes trust, opportunity and even our sense of self (I loved this book). Toby argues that status is both necessary - helping us navigate infinite choices in the modern world - and corrosive, creating inequality that is often disconnected from true merit. In our discussion, Toby unpacks the mechanics of anointment, the ways status rubs off through association and how technology, especially AI, might both entrench and disrupt these hierarchies. The conversation explores the paradox of meritocracy, the illusions of self-anointment in today's digital culture and the future of work as AI accelerates change. If you've ever wondered why some ideas, people, or companies get chosen while others languish (or even how you got to where you are), this conversation will challenge you to see the hidden operating system behind everyday decisions. Enjoy the conversation... Running time: 55:24. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Thinking With Mitch Joel. Feel free to connect to me directly on LinkedIn. Check out ThinkersOne. Here is my conversation with Toby Stuart. Anointed - The Extraordinary Effects Of Social Status In A Winner-Take-Most World. Haas School of Business. Follow Toby on LinkedIn.

Chapters:
(00:00) - Introduction to Toby Stuart.
(01:50) - Understanding Anointed and Social Status.
(04:40) - The Necessity and Corrosiveness of Status.
(08:54) - Blurbs, Status, and the Publishing Industry.
(12:40) - The Role of Association in Anointment.
(15:29) - Breaking into New Fields and Status Transfer.
(19:44) - Meritocracy and the Role of AI.
(27:12) - AI's Impact on Status and Society.
(31:38) - The Impact of AI on Status and Credentials.
(34:46) - Evaluating Human Contribution in the Age of AI.
(39:17) - The Future of AI Regulation and Power Dynamics.
(45:29) - Self-Anointed Status in a Digital World.
(51:25) - Reflections on Status and Personal Growth.
- Financial Crisis and Geopolitical Instability (0:00) - Historical Financial Predictions and Current Market Conditions (2:23) - US Financial Policies and Global Repercussions (9:59) - Gold Revaluation and Economic Collapse (27:39) - AI and Job Replacement (39:15) - Simulation Theory and AI Safety (49:33) - AI and Human Extinction (1:19:57) - Decentralization and Survival Strategies (1:21:35) - Perpetual Motion and Safety Machines (1:21:50) - Resource Competition and AI Extermination (1:24:24) - Simulation Theory and AI Simulations (1:25:58) - Religious Parallels and Near-Death Experiences (1:27:54) - AI Development and Human Self-Preservation (1:32:02) - AI Regulation and Government Inaction (1:37:55) - AI Deployment and Economic Pressure (1:39:57) - AI Extermination Methods and Human Survival (1:42:32) - Simulation Theory and Personal Beliefs (1:43:55) - AI and Health Nutrition (1:55:41) - AI and Government Trust (1:58:50) - AI and Financial Planning (2:19:36) - Cosmic Simulation Discussion (2:21:46) - Enoch's Spiritual Connection Insights (2:39:06) - Humility and Material Possessions (2:40:13) - AI and Spiritual Connection (2:40:53) - Roman's Directness and Humor (2:41:35) - After-Party Segment (2:43:40) - Health Ranger Store Product Introduction (2:44:15) - Importance of Clean Chicken Broth (2:45:25) - Conclusion and Call to Action (2:47:42) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
More than 20 AI experts have signed an open letter urging the government to better regulate the use of artificial intelligence in New Zealand. Victoria University of Wellington senior lecturer in AI Dr Andrew Lensen spoke to Ingrid Hipkiss.
On this episode of The Insuring Cyber Podcast (EP. 106: Navigating State, Federal and Global AI Regulation), Claire Davey, senior vice president and head of product innovation and emerging risk at Relm Insurance, and Peter Dugas, executive … Read more at Insurance Journal TV.
Marc Andreessen, cofounder of Andreessen Horowitz, joins the Hermitix podcast for a conversation on AI, accelerationism, energy, and the future. From the thermodynamic roots of effective accelerationism (E/acc) to the cultural cycles of optimism and fear around new technologies, Marc shares why AI is best understood as code, how nuclear debates mirror today's AI concerns, and what these shifts mean for society and progress.
Timecodes:
0:00 Introduction
0:51 Podcast Overview & Guest Introduction
1:45 Marc Andreessen's Background
3:30 Technology's Role in Society
4:44 The Hermitix Question: Influential Thinkers
8:19 AI: Past, Present, and Future
10:57 Superconductors and Technological Breakthroughs
15:53 Optimism, Pessimism, and Stagnation in Technology
22:54 Fear of Technology and Social Order
29:49 Nuclear Power: Promise and Controversy
34:53 AI Regulation and Societal Impact
41:16 Effective Accelerationism Explained
47:19 Thermodynamics, Life, and Human Progress
53:07 Learned Helplessness and the Role of Elites
1:01:08 The Future: 10–50 Years and Beyond
Resources:
Marc on X: https://x.com/pmarca
Marc's Substack: https://pmarca.substack.com/
Become part of the Hermitix community:
On X: https://x.com/Hermitixpodcast
Support: http://patreon.com/hermitix
Find James on X: https://x.com/meta_nomad
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Welcome aboard Kyber Squadron! This week, Colorado is in the midst of a Special Session dealing with the fallout from the Federal Budget and AI regulation, and Andrés got to speak with legislators and hear from them at a Press Conference hours before the special session started. If you're in Colorado, and AI matters to you, there's never been a better time to reach out to your state legislator! Follow us: Twitch: @Sithty_Minutes BlueSky: @sithtyminutes.bsky.social Discord: Sithty Minutes Show Notes: Democrat AI Option 1 AI Sunshine Act Rep. Weinberg's AI Bill Find your (CO) Legislator!
Professor Chris Berg from RMIT University discusses the transformative potential of AI, likening it to the next industrial revolution. He argues that AI, particularly tools like ChatGPT, can boost white-collar productivity by up to 40%. Governments, he argues, should hold off on regulating AI to maximise these gains. Berg emphasises AI's bottom-up adoption and its exponential advancements. He also touches on the social implications, including the formation of parasocial relationships with AI and the potential for AI to displace certain jobs. The conversation, with ATA Chief Economist John Humphreys and Economics Explored host Gene Tunny, highlights AI's dual promise of productivity gains and societal challenges. Please email Gene your thoughts on this episode via contact@economicsexplored.com.
Timestamps
Introduction to AI and Productivity (0:00)
Government's Approach to AI Regulation (3:01)
Bottom-Up Technology and Industrial Applications (9:40)
Comparisons to Previous Industrial Revolutions (12:04)
Social and Economic Implications of AI (16:46)
AI and Human Labour (17:15)
AI's Role in Public Policy Debates (35:41)
Conclusion and Future Prospects (38:35)
Takeaways
AI as “Infinite Intelligence” – Berg views AI, particularly LLMs like ChatGPT, as offering effectively infinite intelligence, capable of dramatically transforming productivity.
White-Collar Disruption – AI is not just about automating manual labour. It poses a serious threat (and opportunity) for traditionally “smart” jobs like writing, research, and art.
Regulatory Caution – While Europe has rushed to regulate AI, Australia hasn't, allowing innovation and productivity gains to take root.
Parasocial Relationships with AI – AI companions (e.g., AI-infused toys or assistants) may change human interaction, especially among children and vulnerable populations.
Rethinking Migration via Robots – Remote-controlled humanoid robots may one day allow low-skilled workers in developing countries to "work" in advanced economies without leaving home.
Links relevant to the conversation
ATA livestream on 21 August with Chris Berg: https://austaxpayers.substack.com/p/productivity-ideas-with-chris-berg
Lumo Coffee promotion
10% off Lumo Coffee's Seriously Healthy Organic Coffee.
Website: https://www.lumocoffee.com/10EXPLORED
Promo code: 10EXPLORED
On Thursday's show: A federal judge on Wednesday temporarily blocked 11 public school districts in Texas' largest metropolitan areas from displaying the Ten Commandments in classrooms as required by a new state law set to take effect Sept. 1. A legal expert joins us to discuss the case and where it goes from here. Also this hour: Sen. Ted Cruz is largely opposed to regulation of AI, while the state is set to be one of a few to implement the first legislation related to the industry. We discuss that dichotomy. Then, Houston native filmmaker Nell Teare discusses some of the lessons she's learned about overcoming the barriers and naysayers to building a creative career. It's a topic she'll discuss Sunday afternoon during the Houston Media Conference. And we learn how school publications like newsletters, zines, and podcasts are giving students ways to take control of their media and better understand how media works.
Phishing Training Effectiveness: A study of over 19,000 employees showed traditional phishing training has limited impact, improving scam detection by just 1.7% over eight months. Despite varied training methods, over 50% of participants fell for at least one phishing email, highlighting persistent user susceptibility and the need for more effective cybersecurity education strategies.
Cybersecurity Risks in Modern Cars: Modern connected vehicles are highly vulnerable to cyberattacks. A researcher exploited flaws in a major carmaker's web portal, gaining “national admin” access to dealership data and demonstrating the ability to remotely unlock cars and track their locations using just a name or VIN. This underscores the urgent need for regular vehicle software updates and stronger manufacturer security measures to prevent data breaches and potential vehicle control by malicious actors.
Nation-State Cyberattacks on Infrastructure: Nation-state cyberattacks targeting critical infrastructure are escalating. Russian hackers reportedly took control of a Norwegian hydropower dam, releasing water undetected for hours. While no physical damage occurred, such incidents reveal the potential for widespread disruption and chaos, signaling a more aggressive stance by state-sponsored cyber actors and the need for robust infrastructure defenses.
AI Regulation in Mental Health Therapy: States like Illinois, Nevada, and Utah are regulating or banning AI in mental health therapy due to safety and privacy concerns. Unregulated AI chatbots risk harmful interactions with vulnerable users and unintended data exposure. New laws require licensed professional oversight and prohibit marketing AI chatbots as standalone therapy tools to protect users.
Impact of Surveillance Laws on Privacy Tech: Proposed surveillance laws, like Switzerland's data retention mandates, are pushing privacy-focused tech firms like Proton to relocate infrastructure. Proton is moving its AI chatbot, Lumo, to Germany and considering Norway for other services to uphold its no-logs policy. This reflects the tension between national security and privacy, driving companies to seek jurisdictions with stronger data protection laws.
Data Brokers and Privacy Challenges: Data brokers undermine consumer privacy despite laws like California's Consumer Privacy Act. Over 30 brokers were found hiding data deletion instructions from Google search results using specific code, creating barriers for consumers trying to opt out of data collection. This intentional obfuscation frustrates privacy rights and weakens legislative protections.
Android pKVM Security Certification: Android's protected Kernel-based Virtual Machine (pKVM) earned SESIP Level 5 certification, the first software security solution for consumer electronics to achieve this standard. Designed to resist sophisticated attackers, pKVM enables secure handling of sensitive tasks like on-device AI processing, setting a new benchmark for consistent, verifiable security across Android devices.
VPN Open-Source Code Significance: VP.NET's decision to open-source its Intel SGX enclave code on GitHub enhances transparency in privacy technology. By allowing public verification, users can confirm the code running on servers matches the open-source version, fostering trust and accountability. This move could set a new standard for the VPN and privacy tech industry, encouraging others to prioritize verifiable privacy claims.
The industry is experiencing the ‘Wild West’ in terms of AI implementation and associated legislation. Leading patent attorney Gene Quinn of IP Watchdog joins the podcast to discuss the complexity and swirl of issues and potential resolutions in both the US and globally, smart modularization approaches for marketers and ultimately adding value for consumers. Thanks for listening! Follow us on Twitter and Instagram or find us on Facebook.
Illinois just sent a shockwave through the digital wellness and AI ethics space with the WOPR Act—a new law that stops AI from replacing human therapists or making independent clinical decisions. At a time when therapy chatbots are becoming more advanced, more available, and more human-like than ever, this move forces a question we can't ignore: should your mental health care ever be left entirely to an algorithm? In this episode of The Healthier Tech Podcast, we dig into the heart of the WOPR Act and explore why Illinois is saying “humans only” when it comes to therapy. It's a conversation about patient safety, ethics, and the very definition of care in the age of artificial intelligence. Here's what we cover: How AI therapy tools work, and why they've grown so popular so fast. The risks AI can't see—like micro-expressions, body language, and the unspoken clues only humans can catch. Real-world stories of AI therapy helping people, and the scenarios where it could dangerously miss the mark. Why “better than nothing” isn't always better when it comes to mental health care. The difference between real empathy and simulated empathy, and how it shapes trust. The ethical stakes: when corporate motives meet vulnerable minds. Why Illinois decided to act now—and what it signals for the future of AI in healthcare. Whether you're fascinated by technology, concerned about its limits, or passionate about mental health, this episode offers a thought-provoking look at the intersection of AI, ethics, and human connection. Subscribe to The Healthier Tech Podcast for more conversations on building a healthy, intentional relationship with technology—one choice at a time. This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it. If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.
AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The Theo Sim founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.
Today Dean W. Ball, former White House AI policy advisor, joins The Cognitive Revolution to discuss his role in crafting the Trump administration's AI Action Plan, his reasons for leaving government, and his perspectives on AI policy, US-China competition, and the future of AI regulation and adoption. Check out our sponsors: Fin, Labelbox, Oracle Cloud Infrastructure, Shopify. Shownotes below brought to you by Notion AI Meeting Notes - try one month for free at https://notion.com/lp/nathan White House Experience & Government Role: Dean Ball served as senior policy advisor for AI and emerging technology at the White House Office of Science and Technology Policy (OSTP) for four months. AI Regulation & Government Approach: Information asymmetry exists between government and AI labs, "Having worked at the White House, I don't know tremendously more about what goes on inside the Frontier Labs than you do." Private Sector Innovation: Dean emphasizes the importance of private sector-led initiatives in AI safety and standards. Future AI Developments: Dean believes agentic commerce is "right around the corner" but sees little discussion about it from regulatory or conceptual perspectives. AI Action Plan Development: It emphasized concrete actions for AI implementation across government agencies rather than just theoretical frameworks. Personal Updates: Dean is reviving his weekly Hyperdimensional Substack, joining the Foundation for American Innovation as a senior fellow, and plans to share his long-held insights on recent AI developments. Sponsors: Fin: Fin is the #1 AI Agent for customer service, trusted by over 5000 customer service leaders and top AI companies including Anthropic and Synthesia. Fin is the highest performing agent on the market and resolves even the most complex customer queries. Try Fin today with our 90-day money-back guarantee - if you're not 100% satisfied, get up to $1 million back. Learn more at https://fin.ai/cognitive Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
Your face unlocks your phone, animates your emoji, and verifies your identity, but who actually owns the digital rights to your unique features? In this deep dive into biometric data law, we explore the high-stakes legal battles reshaping how technology interacts with our most personal physical characteristics. When Facebook paid $650 million to settle a class action lawsuit over facial recognition, it signaled a seismic shift in how companies must approach biometric data collection. We break down the landmark cases—from White Castle's potential $17 billion fingerprint scanning liability to Clearview AI's global legal troubles for scraping billions of public photos without consent. These aren't just American concerns; we journey from China, where a professor successfully sued a wildlife park over mandatory facial scans, to India's Supreme Court ruling on the world's largest biometric ID system. Beyond privacy concerns, fierce patent wars are erupting over who owns the methods for collecting and using biometric data. Companies battle over facial authentication patents worth billions while "liveness detection" technology becomes crucial in a world of deepfakes and digital impersonation. The stakes couldn't be higher as these technologies become embedded in everything from banking to border control. We untangle the global patchwork of regulations emerging to govern facial recognition, from Illinois' pioneering BIPA law to Europe's strict GDPR protections and China's surprising new limits on private biometric collection. Throughout it all, a clear trend emerges: your face isn't just data, it's your identity, and increasingly, the law recognizes that distinction. Whether you're concerned about your rights, curious about the future of facial recognition, or simply want to understand why your social media filters might be collecting more than just likes, this episode offers essential insights into the legal frameworks shaping our biometric future. Listen now to discover how to protect your digital identity in a world that increasingly wants to scan it.
While Silicon Valley builds advanced AI models and Beijing integrates them into state power, Washington faces an uncomfortable reality: America's innovation machine might not be enough to win the AI race on its own. The problem isn't our technology—it's our government's ability to deploy it.The White House recently released “America's AI Action Plan,” which aims to change this dynamic, calling for everything from "Manhattan Project-style" coordination to federal AI sandboxes. But with the Trump Administration now moving to implement these initiatives, the question becomes: can American democracy move fast enough to compete with authoritarian efficiency? And should it?Charles Clancy, Chief Technology Officer of MITRE, knows the challenges well. His organization serves as a bridge between government needs and technical solutions, and he's seen firsthand how regulatory fragmentation, procurement bottlenecks, and institutional silos turn America's AI advantages into operational disadvantages. His team also finds that Chinese open-weight models outperform American ones on key benchmarks—a potential warning sign as the U.S. and China compete to proliferate their technology across the globe.Clancy argues the solution is not for the U.S. to become China, but rather to take a uniquely American approach—establish federal frontier labs, moonshot challenges, and market incentives that harness private innovation for public missions. He and FAI's Josh Levine join Evan to explore whether democratic institutions can compete with authoritarian efficiency without sacrificing democratic values. View MITRE's proposals for the White House's plan here, and more of Charles's research here.
Matt Perault (a16z) joins Corbin Barthold (TechFreedom) for a wide-ranging discussion of AI bills, AI laws, and AI vibes. Part of the WLF-TechFreedom Tech in the Courts webinar series.
Topics include:
Why did the AI moratorium die?
Activity in the states
Regulate outcomes, not models?
Next steps in Congress
“Transparency”: so hot right now
The AI panic
Lawsuits
Links:
Recorded Tech in the Courts Webinar—The State of AI Regulation
In 1993, leaders resisted calls to control the internet and it changed the world. Now, the U.S. is taking the same approach with AI.
When AI systems hallucinate, run amok, or fail catastrophically, the consequences for enterprises can be devastating. In this must-watch CXOTalk episode, discover how to anticipate and prevent AI failures before they escalate into crises.
Join host Michael Krigsman as he explores critical AI risk management strategies with two leading experts:
• Lord Tim Clement-Jones - Member of the House of Lords, Co-Chair of UK Parliament's AI Group
• Dr. David A. Bray - Chair of the Accelerator at Stimson Center, Former FCC CIO
What you'll learn:
✓ Why AI behaves unpredictably despite explicit programming
✓ How to implement "pattern of life" monitoring for AI systems
✓ The hidden dangers of anthropomorphizing AI
✓ Essential board-level governance structures for AI deployment
✓ Real-world AI failure examples and their business impact
✓ Strategies for building appropriate skepticism while leveraging AI benefits
Key ideas include treating AI as "alien interactions" rather than human-like intelligence, the convergence of AI risk with cybersecurity, and why smaller companies have unique opportunities in the AI landscape.
This discussion is essential viewing for CEOs, board members, CIOs, CISOs, and anyone responsible for AI strategy and risk management in their organization.
Subscribe to CXOTalk for more expert insights on technology leadership and AI.
The rapid development of artificial intelligence has the potential to spur amazing advances in human society that are well worth pursuing. That said, AI also has the potential to do dreadful harm that we must guard against. As one of the nation's tech leaders, OpenAI's Sam Altman, explained recently, for all of AI's amazing […]
On this episode of Gov Tech Today, hosts Russell Lowery and Jennifer Saha explore the fast-moving advancements in AI technology and the concurrent legislative and regulatory efforts attempting to keep pace. They discuss a mysterious federal provision to preclude state regulation of AI that was swiftly eliminated by an overwhelming 99 to 1 vote. The episode also delves into California's proactive stance on AI regulations, including recent amendments and ongoing projects, providing insights on how other states may follow suit. They highlight key concerns around risk, transparency, and the practical integration of AI in government operations.
00:00 Introduction and Hosts
00:14 Federal AI Legislation Controversy
01:17 State vs. Federal Regulation Debate
03:23 California's AI Regulatory Landscape
05:10 AI in State Government Operations
07:52 AI Risk Management and Implementation
10:16 AI Proof of Concepts and Budgeting
14:41 Adoption and Skepticism of AI
16:39 Conclusion: The Future of AI
OpenAI prepares to launch GPT-5 in August
Trump's AI Action Plan Is a Crusade Against 'Bias'—and Regulation
UN tech chief pleads for global AI regulatory cooperation
Trump, who promised to save TikTok, threatens to shut down TikTok
Google AI Mode has 100M users, 2.5 Pro & Deep Search rolls out
FDA's New Drug Approval AI Is Generating Fake Studies: Report
Tesla is set to face off with the California DMV over claims it exaggerated Autopilot's and FSD's capabilities and misled consumers, in a five-day Oakland trial
Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day
A look at Tea, a woman-only safety app with 4M users that lets users anonymously assign red or green flags to local men, as it goes viral with 900K new signups
People in the UK now have to take an age verification selfie to watch porn online
Intel is laying off tens of thousands and cancelling factories
AMD CEO Sees Chips From TSMC's US Plant Costing 5%-20% More
Spotify Publishes AI-Generated Songs From Dead Artists Without Permission
DJI couldn't confirm or deny it disguised this drone to evade a US ban
FCC approves Skydance-Paramount merger
Gwyneth Paltrow is the new face of a kiss-cam tech scandal
Julian LeFay, 'Father of The Elder Scrolls,' Has Died Aged 59
Tom Lehrer, Musical Satirist With a Dark Streak, Dies at 97
Host: Leo Laporte
Guests: Molly White, Janko Roettgers, and Jacob Ward
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: smarty.com/twit zscaler.com/security expressvpn.com/twit uscloud.com spaceship.com/twit
July heats up with a look at the impact of some regulatory moves in the US and a deep dive into a new AI trend for developers that may be more useful than you think.
Generative AI continues to drive conversation and concern, and not surprisingly, debate over the promise of AI and how best to regulate it has produced some controversial positions. The EU has been one of the leaders in addressing regulation of AI, primarily through the EU AI Act. On today's episode, we will learn more from David about the EU AI Act, as well as a US perspective from Derek on the status of AI regulation, and how US companies may be impacted by the EU AI Act. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.
Host: Tara Stingley (email) (Cline Williams Wright Johnson & Oldfather, LLP)
Guest Speakers: David van Boven (email) (Plesner / Denmark) & Derek Ishikawa (email) (Hirschfeld Kraemer LLP / California)
Register on the ELA website here to receive email invitations to future programs.
She's the legal powerhouse behind IBM's AI ethics strategy — and she makes law fun. In this encore episode, we revisit a fan favorite: Christina Montgomery, formerly IBM's Chief Privacy and Trust Officer, now Chief Privacy and Trust Officer at GM. From guarding the gates of generative AI risk to advising on global regulation, Christina gives us a front-row seat to what's now, what's next, and what needs rethinking when it comes to trust, synthetic data, and the future of AI law.
Dr. Gillian Hadfield from Johns Hopkins University and Andrew Freedman from Fathom discuss their innovative proposal to govern AI through private regulatory markets, which has been introduced as California's SB 813. Their system would separate democratic goal-setting from technical rule-making by having government bodies articulate safety outcomes while competitive private certifiers develop and enforce detailed standards, with companies receiving liability protection for compliance. The conversation explores how this market-based approach could create a "race to the top" in AI safety standards while remaining agile enough to keep pace with rapid technological development. Key challenges discussed include preventing a race to the bottom among certifiers, liability law interactions, and identifying qualified organizations to serve as effective private regulators. Sponsors: Fin: Fin is the #1 AI Agent for customer service, trusted by over 5000 customer service leaders and top AI companies including Anthropic and Synthesia. Fin is the highest performing agent on the market and resolves even the most complex customer queries. Try Fin today with our 90-day money-back guarantee - if you're not 100% satisfied, get up to $1 million back. Learn more at fin.ai/cognitive Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (04:48) Introduction and Problem Overview (07:48) Regulatory Markets Concept Origins (17:14) Current Governance System Failures (Part 1) (19:28) Sponsors: Fin | Labelbox (22:42) Current Governance System Failures (Part 2) (25:30) Private Governance Mechanism Explained (Part 1) (35:06) Sponsors: Oracle Cloud Infrastructure | NetSuite by Oracle (37:38) Private Governance Mechanism Explained (Part 2) (44:17) Liability Protection Framework (56:39) Race to Top Dynamics (01:07:24) Red Teaming Implementation Challenges (01:28:47) Insurance Alternative Approaches (01:53:51) Moving Forward Conclusions (01:55:11) Outro
Meta just announced the legit Dream Team of AI.
On this week's “Marketplace Tech Bytes: Week in Review,” Marketplace's Nova Safo and Paresh Dave, senior writer at WIRED, discuss Amazon releasing its 1 millionth robot at one of its warehouses. Plus, lawmakers contended with provisions dealing with artificial intelligence and renewable energy in that big tax and spending bill, recently passed by Congress, that consumed Washington this week.
Plus: The Elon Musk-Donald Trump feud reignites over Republicans' tax-and-spending bill. And robots are about to outnumber humans in Amazon warehouses. Katie Deighton hosts.