In this episode, Carmen Bellebna shares the journey of deepeye Medical in implementing the EU AI Act requirements in parallel with EU MDR certification. We first review what the AI Act is, for those not yet familiar with it, and then all the challenges deepeye went through to obtain EU MDR certification. Who is Carmen Bellebna? Carmen Bellebna is a Regulatory Affairs and Quality Management expert at deepeye Medical, a medtech company pioneering AI-driven solutions for ophthalmology. With a strong background in implementing EU regulatory frameworks, Carmen has been closely following the evolution of the Artificial Intelligence Act (AI Act) and its intersection with the Medical Device Regulation (MDR). She has played a key role in integrating AI-specific compliance strategies into deepeye's QMS, ensuring alignment with both MDR and upcoming AI requirements. Carmen recently delivered a well-received presentation at the Outsourcing in Clinical Trials (OCT) conference in Munich, where she shared hands-on insights and practical tips for operationalizing AI Act obligations within a regulated medtech environment. Who is Monir El Azzouzi? Monir El Azzouzi is the founder and CEO of Easy Medical Device, a consulting firm that supports medical device manufacturers with quality and regulatory affairs activities all over the world. Monir can help you create your Quality Management System and Technical Documentation, and he can also take care of your Clinical Evaluation and Clinical Investigation through his team or partners. Easy Medical Device can also become your Authorized Representative and Independent Importer Service provider for the EU, UK, and Switzerland. Monir has around 16 years of experience within the medical device industry, working for small businesses as well as big corporate companies. He has now supported around 100 clients in remaining compliant on the market. His passion for the medical device field pushed him to create educational content such as blogs, podcasts, YouTube videos, and LinkedIn Lives, where he invites guests to share educational information with his audience. Visit easymedicaldevice.com to know more. Link Carmen Bellebna LinkedIn: https://www.linkedin.com/in/men-be-a1828a81/ Social Media to follow Monir El Azzouzi Linkedin: https://linkedin.com/in/melazzouzi Twitter: https://twitter.com/elazzouzim Pinterest: https://www.pinterest.com/easymedicaldevice Instagram: https://www.instagram.com/easymedicaldevice Authorized Representative and Importer services: https://easymedicaldevice.com/authorised-representative-and-importer/
In this episode we chat to Niklas Silfverström, CEO of Klang.ai, about Europe's need for AI independence. We discuss data privacy risks, the Cloud Act, and AI bias, emphasizing the need for European infrastructure and language models. Niklas highlights how relying on American AI companies threatens sovereignty, and why investing in GPUs, data centers, and energy is crucial for Europe's competitive future. He also warns that without these efforts, Europe risks becoming a mere consumer of AI rather than a leader in the field. We should note that we use Klang.ai's wonderful platform in the backend processes of AI-Podden - they make our jobs much easier.
Years after first identifying the potential risks of AI systems, world leaders are having to balance concerns with an acknowledgment of the gains achievable through certain AI systems, and nowhere is this more true than in the EU. The Artificial Intelligence Action Summit in Paris has seen a number of high-profile announcements made on EU AI investments, on both a continental and regional basis. But it's also highlighted the distance the EU has yet to go for true international AI competition – up against the likes of the US and China, can it continue to stand out? In this episode, Jane and Rory welcome Nader Henein, Gartner VP Analyst, Data Protection and AI Governance, to discuss the finer details of EU AI and how public-private partnerships balance with its strong legal requirements for the technology.
Read more:
UK and US reject Paris AI summit agreement as “Atlantic rift” on regulation grows
Unraveling the EU AI Act
The EU just shelved its AI liability directive
A big enforcement deadline for the EU AI Act just passed – here's what you need to know
Looking to use DeepSeek R1 in the EU? This new study shows it's missing key criteria to comply with the EU AI Act
How the EU AI Act compares to other international regulatory approaches
UK regions invited to apply for ‘AI Growth Zone' status
Another packed episode, and even so, not all the news we've gathered over the past two weeks made it in. The big players have delivered updates once again. From Google there's the new "Ask for Me" feature, Imagen 3 is out and available via the API, and the Gemini 2 family is now available with pricing info. There's one small step backwards, as Google is loosening its AI guidelines. OpenAI also brings new features to the API and the chat: the o3-mini model is now available there. For Pro users, there's now the next agent: OpenAI DeepResearch. Mistral is also launching its own mobile app, "le Chat", which delivers answers especially fast at over 1,000 tokens per second. In the Anthropic Economic Index you can read, among other things, which groups use Claude particularly heavily or lightly in their work. The EU is trying to mobilize money for AI in various ways. A free AI Agents course starts today on Hugging Face. There's a new text-to-video model from Alibaba: Tongyi Wanxiang 2.1. And another open-source model comes close to DeepSeek v3: Tülu 3 405B. Write to us! Send us your topic requests and your feedback: podcast@programmier.bar Follow us! Stay up to date on future episodes and virtual meetups and join the community discussions. Bluesky Instagram LinkedIn Meetup YouTube
"'AI Summit' Joint Statement: US and UK Do Not Sign; EU to Invest About ¥31 Trillion in AI." At the summit on AI (artificial intelligence) held in Paris, the United States and the United Kingdom did not sign the joint statement, highlighting a difference in stances. On the 11th, the closing day of the AI Action Summit, a leaders-level session was held, and the EU announced that it would seek investment in the AI sector with a target of 200 billion euros, about ¥31 trillion. Meanwhile, US Vice President Vance stated that "American AI technology will continue to be the global standard" and argued against excessive regulation. The joint statement included commitments such as "addressing the risks of AI and continuing work on transparency," and 60 countries and regions, including Japan, China, and France, signed it, but the United States and the United Kingdom did not, highlighting the differing stances on AI promotion.
Another week, another whirlwind of AI chaos, hype, and industry shifts. If you thought things were settling down, well, think again, because this week I'm tackling everything from AI regulations shaking up the industry to OpenAI's latest leap that isn't quite the leap it seems to be. Buckle up, because there's a lot to unpack. With that, here's the rundown. EU AI Crackdown – The European Commission just laid down a massive framework for AI governance, setting rules around transparency, accountability, and compliance. While the U.S. and China are racing ahead with an unregulated “Wild West” approach, the EU is playing referee. However, will this guidance be enough, or even accepted? And why are some companies panicking if they have nothing to hide? Musk's “Inexperienced” Task Force – A Wired exposé is making waves, claiming Elon Musk's team of young engineers is influencing major government AI policies. Some are calling it a threat to democracy; others say it's a necessary disruption. The reality? It may be a bit too early to tell, but it still has lessons for all of us. So, instead of losing our minds, let's see what we can learn. OpenAI o3 Reality Check – OpenAI just dropped its most advanced model yet, and the hype is through the roof. With it comes Operator, a tool for building AI agents, and Deep Research, an AI-powered research assistant. But while some say AI agents are about to replace jobs overnight, the reality is a lot messier, with hallucinations, errors, and human oversight still very much required. So is this the AI breakthrough we've been waiting for, or just another overpromise? Physical AI Shift – The next step in AI requires it to step out of the digital world and into the real one. From humanoid robots learning physical tasks to AI agents making real-world decisions, this is where things get interesting. But here's the real twist: the reason behind it isn't about automation; it's about AI gaining real-world experience. And once AI starts gaining the context people have, the pace of change won't just accelerate, it'll explode. Show Notes: In this Weekly Update, Christopher explores the EU's new AI guidelines aimed at enhancing transparency and accountability. He also dives into the controversy surrounding Elon Musk's use of inexperienced engineers in government-related AI projects. He unpacks OpenAI's major advancements, including the release of their o3 advanced reasoning model, Operator, and Deep Research, and what these innovations mean for the future of AI. Lastly, he discusses the rise of contextual AI and its implications for the tech landscape. Join us as we navigate these pivotal developments in business technology and human experience. 00:00 - Introduction and Welcome 01:48 - EU's New AI Guidelines 19:51 - Elon Musk and Government Takeover Controversy 30:52 - OpenAI's Major Releases: o3 and Advanced Reasoning 40:57 - The Rise of Physical and Contextual AI 48:26 - Conclusion and Future Topics #AI #Technology #ElonMusk #OpenAI #ArtificialIntelligence #TechNews
In this episode, I interview Risto Uuk, the EU Research Lead at the Future of Life Institute in Brussels. He researches European Union Artificial Intelligence policy, including the EU Artificial Intelligence Act. Risto is a PhD researcher at KU Leuven studying the assessment and risk-mitigation of general purpose AI models. We have an expansive conversation covering the most groundbreaking parts of the EU AI Act, the negotiations behind it, and the broader applications and risks of artificial intelligence throughout the world. I hope you enjoy this episode!
▼IBM Japan: Stories of "〇〇" you'll want to share with someone https://sbwl.to/IBM-linkfire This is the 6:00 p.m. DAILY BRIEF flash news. This episode is read aloud by an AI voice. [Today's topics] Large-scale attack on Kyiv; heating cut off. UK retail sales recover less than expected in November. Trump demands the EU purchase US energy. Flash news uses AI for everything from drafting the script to reading it aloud. If you notice any misreadings, please report them via the form below. https://survey.sonicbowl.cloud/form/4f4a4c70-9c98-45e6-bc1a-5a8efaa2bbed/ ■DAILY BRIEF+ on Apple Podcasts https://apple.co/46E6pY7 ■DAILY BRIEF official X https://twitter.com/DailyBrief_DB ▼IBM Japan: Stories of "〇〇" you'll want to share with someone https://sbwl.to/IBM-linkfire
This episode focuses on the future of regulatory approvals for AI software medical devices in the UK (and EU), featuring guest James Dewar, co-founder of Scarlet, an EU notified body and UK approved body that specialises in certifying software medical devices. Key discussion topics: the current regulatory position of the UK post-Brexit and the opportunities that this could present; the practical impact of the EU AI Act for medical device manufacturers within the EU; what makes regulatory submissions uniquely challenging for AI devices; and getting novel technologies, such as large language models and other foundation models, through regulatory approval.
Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional. In today's edition of Daily Compliance News: Former Tyson Foods CFO pleads guilty to drunk driving. (WSJ) Kenya impeaches deputy president. (Al Jazeera) Meta fires staff who abused $25 meal credits. (FT) An LLM which benchmarks Big AI's compliance under the EU AI law. (TechCrunch) For more information on the Ethico Toolkit for Middle Managers, available at no charge by clicking here. Check out the full 3-book series, The Compliance Kids, on Amazon.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
This week on the Tech ONTAP podcast, Adam Gale joins us to discuss the new EU AI regulations and how they may impact you and your business.
Reed Smith emerging tech lawyers Andy Splittgerber in Munich and Cynthia O'Donoghue in London join entertainment & media lawyer Monique Bhargava in Chicago to delve into the complexities of AI governance. From the EU AI Act to US approaches, we explore common themes, potential pitfalls and strategies for responsible AI deployment. Discover how companies can navigate emerging regulations, protect user data and ensure ethical AI practices. Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Andy: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape globally. Today, we'll focus on AI and governance, with a main emphasis on generative AI and a regional perspective looking at Europe and the US. My name is Andy Splittgerber. I'm a partner in the Emerging Technologies Group of Reed Smith in Munich, and I'm also very actively advising clients and companies on artificial intelligence. Here with me, I've got Cynthia O'Donoghue from our London office and Nikki Bhargava from our Chicago office. Thanks for joining. Cynthia: Thanks for having me. Yeah, I'm Cynthia O'Donoghue. I'm an emerging technology partner in our London office, also currently advising clients on AI matters. Monique: Hi, everyone. I'm Nikki Bhargava. I'm a partner in our Chicago office and our entertainment and media group, and really excited to jump into the topic of AI governance. So let's start with a little bit of a basic question for you, Cynthia and Andy. What is shaping how clients are approaching AI governance within the EU right now? Cynthia: Thanks, Nikki. The EU has, let's say, just received a big piece of legislation, which went into effect on the 2nd of October, that regulates general-purpose AI and high-risk general-purpose AI and bans certain aspects of AI. But that's only part of the European ecosystem. The EU AI Act essentially will interplay with the General Data Protection Regulation, the EU's Supply Chain Act, and the latest cybersecurity law in the EU, which is the Network and Information Security Directive No. 2. So essentially there's a lot for organizations to get their hands around in the EU. The EU AI Act has phased dates of effectiveness, but the biggest aspect of the EU AI Act in terms of governance lays out quite a lot, and so it's a perfect time for organizations to start thinking about that and getting ready for various aspects of the AI Act as they in turn come into effect. How does that compare, Nikki, with what's going on in the U.S.? Monique: So, you know, the U.S. is still evaluating from a regulatory standpoint where they're going to land on AI regulation. Not to say that we don't have legislation that has been put into place. We have Colorado with the first comprehensive AI legislation that went in. And earlier in the year, we also had guidelines from the Office of Management and Budget to federal agencies about how to procure and implement AI, which has really informed the governance process.
And I think a lot of companies, in the absence of regulatory guidance, have been looking to the OMB memo to help inform what their process may look like. And I think the one thing I would highlight, because we're sort of operating in this area of unknown and yet-to-come guidance, is that a lot of companies are looking to their existing governance frameworks right now and evaluating, from a company culture perspective, a mission perspective, and their relationship with consumers, how they want to develop and implement AI, whether it's internally or externally. And a lot of the governance process and program pulls guidance from some of those internal ethics as well. Cynthia: Interesting. So I'd say somewhat similar in the EU, but I think, Andy, the US puts more emphasis on consumer protection, whereas the EU AI Act is more all-encompassing in terms of governance. Wouldn't you agree? Andy: Yeah, that was also the question I wanted to ask Nikki, where she sees the parallels and whether organizations, in her view, can follow a global approach for AI governance. And yes, on the question you asked: the European AI Act is more encompassing. It is putting a lot of obligations on developers and deployers, like companies that use AI. Of course, it also has the consumer or the user protection in mind, but the rules directly relating to consumers or users are, I would say, limited. So, Nikki, you always know US law and you have a good overview of European laws, while we are always struggling with the many US laws. What's your thought: can companies, in terms of AI governance, follow a global approach? Monique: In my opinion? Yeah, I do think that there will be a global approach. You know, the way the US legislates, what we've seen is a number of laws that are governing certain uses and outputs first, perhaps because they were easier to pass than such a comprehensive law. So we see laws that govern the output in terms of use of likenesses, right, of publicity violations. We're also seeing laws come up that are regulating the use of personal information and AI as a separate category. Outside of the corporate consumer base, we're also seeing a lot of laws around elections. And then finally, we're seeing laws pop up around disclosure for consumers that are interacting with AI systems, for example, AI-powered chatbots. But as I mentioned, the US is taking a number of cues from the EU AI Act. So for example, Colorado did pass a comprehensive AI law, which speaks to both obligations for developers and obligations for deployers, similar to the way the EU AI Act is structured, and focusing on what Colorado calls high-risk AI systems, as well as algorithmic discrimination, which I think doesn't exactly follow the EU AI Act, but draws similar parallels and pulls a lot of principles. That's the kind of law which I really see informing companies on how to structure their AI governance programs, probably because the simple answer is it requires deployers at least to establish a risk management policy and procedure and an impact assessment for high-risk systems. And impliedly, it really requires developers to do the same, because developers are required to provide a lot of information to deployers so that deployers can take the legally required steps in order to deploy the AI system.
And so inherently, to me, that means that developers have to have a risk management process themselves if they're going to be able to comply with their obligations under Colorado law. So, you know, because I know that there are a lot of parallels between what Colorado has done, what we see in the OMB memo to federal agencies, and the EU AI Act, maybe I can ask you, Cynthia and Andy, to talk a little bit about how companies approach setting up the structure of their governance program. What are some of the buckets that they look at, or what are some of the first steps that they take? Cynthia: Yeah, thanks, Nikki. I mean, it's interesting because you mentioned the company-specific uses, internal and external. I think one thing, before we get into the governance structure, or maybe as part of thinking about the governance structure, is that the EU AI Act also applies to employee data and use of AI systems for vocational training, for instance. So in terms of governance structure, certainly from a European perspective, it's not necessarily about use cases, but really about whether you're using high-risk or general-purpose AI and, you know, some of the documentation and certification requirements that might apply to high-risk versus general-purpose. But the governance structure needs to take all those kinds of things into account. So, you know, obviously guidelines and principles about how people use external AI suppliers, how it's going to be used internally, and what the appropriate uses are. Obviously, if it's going to be put into a chatbot, which is the other example you used, what are the rules around acceptable use by people who interact with that chatbot, as well as how that chatbot is set up in terms of what would be appropriate to use it for. So what are the appropriate use cases? So guidelines and policies are definitely foremost for that. And within those guidelines and policies, there are also the other documents that will come along: terms of use, I mentioned acceptable use, and then guardrails for the chatbot. I mean, one of the big things for EU AI is human intervention, to make sure that if there are any anomalies, or somebody tries to game it, there can be intervention. So, Andy, I think that dovetails into the risk management process, if you want to talk a bit more about that. Andy: Yeah, definitely. I mean, the risk management process in the wider sense. How do organizations start this at the moment? First, by setting up teams or, you know, responsible persons within the organization that take care of this (we're going to discuss a bit later on what that structure can look like), and then of course the policies you mentioned, not only regarding the use but also which process to follow when AI is being used, or even the question of what is AI and how do we find out where we're using AI in our organization at all, and what is an AI system as defined under the various laws, also making sure we have a global interpretation of that term. And then a step many of our clients are taking at the moment is setting up an AI inventory. That's already a very difficult and tough step. And then the next one is, per AI system that comes up in this register, to define the risk management process.
And of course, that's the point where in Europe we look into the AI Act and look at what kind of AI system we have: high-risk or any other sort of defined system. Or today, we're talking about the generative AI systems a bit more. For example, there we have strong obligations in the European AI Act on the providers of such generative AI. So less on companies that use generative AI, but more on those that develop and provide the generative AI, because they have the deeper knowledge of what kind of training data is being used. They need to document how the AI is working, and they need to register this information with the centralized database in the European Union. They also need to give some information on copyright-protected material that is contained in the training data. So there are quite some documentation requirements, and then of course logging requirements, to make sure the AI is used responsibly and does not trigger higher risks. There are also two categories of generative AI that can be qualified. So that's the risk management process under the European AI Act. And then, of course, organizations also look into risks in other areas: copyright, data protection, and also IT security. Cynthia, I know IT security is one of the topics you love. Can you add some more on IT security here, and then we'll see what Nikki says for the US? Cynthia: Well, obviously NIS 2 is coming into force. It will cover providers of certain digital services, so it's likely to cover providers of AI systems in some way or other. And funny enough, NIS 2 has its own risk management process involved. So there's supply chain due diligence involved, which would have to be baked into a risk management process for that. And then the EU's ENISA, the Cybersecurity Agency for the EU, has put together a framework for cybersecurity for AI systems, which is not binding, but it's certainly a framework that companies can look to in terms of getting ideas for how best to ensure that their use of AI is secure. And then, of course, under NIS 2, the various CSIRTs will be putting together various codes and have a network meeting in late September. So we may see more come out of the EU on cybersecurity in relation to AI. But obviously, just like any kind of user of AI, they're going to have to ensure that the provider of the AI has ensured that the system itself is secure, including if they're going to be putting training data into it, which of course is highly probable. I just want to say something about the training data. You mentioned copyright, and there's a difference between the EU and the UK. So in the UK, you cannot, you know, mine data for commercial purposes. So at one point the UK was looking at an exception to copyright for that, but it doesn't look like that's going to happen. So there is a divergence there, but that stems from historic UK law rather than as a result of the change from Brexit. Nikki, turning back to you again. I mean, we've talked a little bit about risk management. How do you think that might differ in the US, and what kind of documentation might be required there? Or is it a bit looser? Monique: I think there are actually quite a few similarities that I would pull from what we have in the EU. And Andy, I think this goes back to your question about whether companies can establish a global process, right? In fact, I think it's going to be really important for companies to see this as a global process as well.
Because AI development is going to happen, you know, throughout the world. And it's really going to depend on where it's developed, but also where it's deployed, and where the outputs are deployed. So I think taking a broader view of risk management will be really important in the context of AI, particularly given that the nature of AI is to process large swaths of information, really on a global scale, in order to make these analytics and creative development and content generation processes faster. So, just a quick aside: I actually think what we're going to see in the US is a lot of pulling from what we've seen in the EU, and a lot more cooperation on that end. I agree that really starting to frame the risk governance process means looking at who are the key players that need to inform that risk measurement and tolerance analysis, the decision-making in terms of how do you evaluate, how do you inventory, evaluate, and then determine how to proceed with AI tools. And so one of the things that I think makes it hopefully a little bit easier is to be able to leverage, from a U.S. perspective, existing compliance procedures that we have, for example for SEC compliance, privacy compliance, or other ethics compliance programs, and make AI governance a piece of that, as well as expand on it. Because I do think that AI governance sort of brings in all of those compliance pieces. We're looking at harms that may exist to a company, not just from personal information, not just from security, not just from consumer unfair and deceptive trade practices, not just from environmental standpoints, but sort of the very holistic view of, not to make this a bigger thing than it is, kind of everything, right? Kind of every aspect that comes in. And you can see that in some of the questions that developers or deployers are supposed to be able to answer in risk management programs. For example, in Colorado, the information that you need to be able to address in a risk management program and an impact assessment really has to demonstrate an understanding of the AI system: how it works, how it was built, how it was trained, what data went into it. And then what is the full range of harms? So for example, the privacy harms, the environmental harms, the impact on employees, the impact on internal functions, the impact on consumers if you're using it externally, and really be able to explain that. Whether you have to put out a public statement or not will depend on the jurisdiction. But even internally, to be able to explain it to your C-suite and make them accountable for the tools that are being brought in, or make it explainable to a regulator if they were to come in and say, well, what did you do to assess this tool and mitigate known risks? So, kind of with that in mind, I'm curious: what steps do you think need to go into a governance program? What are some of the first initial steps? And I always feel that we can start in so many different places, depending on how a company is structured or what initial compliance pieces are in place. But I'm curious to know from you: what would be one of the first steps in beginning the risk management program? Cynthia: Well, as you said, Nikki, I mean, one of the best things to do is leverage existing governance structures.
You know, if we look, for instance, into how the EU is even setting up its public authorities to look at governance, you've got, as I mentioned at the outset, almost a multifaceted team approach. And I think it would be the same here. I mean, the EU anticipates that there will be an AI officer, but obviously there have got to be team members around that person. There are going to be people with subject matter expertise in data, subject matter expertise in cyber. And then there will be people who have subject matter expertise in relation to the AI system itself: the training data that's been used, how it's been developed, how the algorithm works, whether or not there can be human intervention, what happens if there are anomalies or hallucinations in the data, and how that can be fixed. So I would have thought that ultimately part of that implementation is looking at the governance structure and then starting from there. And then obviously, I mean, we've talked about some of the things that go into the governance. But, you know, we have clients who are looking first at the use case and then asking: okay, what are the risks in relation to that use case? How do we document it? How do we log it? How do we ensure that we can meet our transparency and accountability requirements? What other due diligence and other risks are out there, blue-sky thinking, that we haven't necessarily thought about? Andy, any thoughts? Andy: Yeah, that's, I would say, one of the first steps. I mean, even though not many organizations now allocate the core AI topic to the data protection department, but rather perhaps to the compliance or IT area, still, in terms of the governance process and starting up that structure, we see a lot of similarities to the data protection GDPR governance structure. And so, yeah, I think back five years to implementation, or getting ready for GDPR: planning and checking what other rules we need to comply with, who we need to involve, getting the plan ready and then working along that plan. That's the phase where we see many of our clients at the moment. Nikki, more thoughts from your end? Monique: Yeah, I think those are excellent points. And what I have been talking to clients about is first establishing the basis of measurement that we're going to evaluate AI development or procurement on. What are the company's internal principles and risk tolerances, and how do we define those? And then, based off of those principles and those metrics, putting together an impact assessment, which borrows a lot, as you both said, from the concept of impact assessments under privacy compliance, right? To implement the right questions and put together the right analytics in order to measure whether an AI tool that's in development is living up to those metrics, or something that we are procuring is meeting those metrics, and then analyzing the risks that come out of that. I think the impact assessment is going to be really important in helping make those initial determinations. But also, and this is not just my feeling, it's something that is also required by the Colorado law: setting up an impact assessment and then repeating it annually, which I think is particularly important in the context of AI, especially generative AI, because generative AI is a learning system.
So it is going to continue to change. There may be additional modifications made in the course of use that are going to require reassessing: is the tool working the way it is intended to work? What has our monitoring of the tool shown? And what processes do we need to put into place in order to mitigate the tool going a little bit off path (AI drift, more or less), or, if we start to identify issues within the AI, what processes do we have internally to redirect the ship in the right direction? So I think impact assessments are going to be a critical tool in helping form the rest of the risk management process that needs to be in place. Andy: All right. Thank you very much. I think these were a couple of really good practical tips and, especially, first next steps for our listeners. We hope you enjoyed the session today, and we look forward to your feedback, either here in the comment boxes or directly to us. And we hope to welcome you soon in one of our next episodes on AI and the law. Thank you very much. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.
■Listener survey here: https://survey.sonicbowl.cloud/form/fe7e73e9-7dea-49eb-ab96-4411f44b1f22/ ■DAILY BRIEF BILINGUAL here: https://sbwl.to/dbb-smry September 20: What is happening in the world right now [News sources] 1. Nike CEO John Donahoe to step down https://x.gd/VUhjy https://x.gd/kAWln 2. Mercedes-Benz lowers its full-year earnings outlook https://x.gd/QSTX3 https://x.gd/FfX27 3. Corporate ads mistakenly placed alongside discriminatory videos https://x.gd/lAL3r 4. Companies sound the alarm in unison over concerns about EU AI regulation https://x.gd/4Ke1f 5. Disney to stop using Slack after hacking incident https://x.gd/m2eSe
In this episode, we talk to Daniel Leufer and Caterina Rodelli from Access Now, a global advocacy organization that focuses on the impact of digital technology on human rights. As leaders in this field, they've been working hard to ensure that the European Union's AI Act doesn't undermine human rights or, indeed, fundamental democratic values. They share with us how the EU AI Act was put together, the Act's particular shortcomings, and where the opportunities are for us as citizens or as digital rights activists to get involved and make sure that it's upheld by companies across the world. Note: this episode was recorded back in February 2024.
welcome to wall-e's tech briefing for monday, august 26th! dive into today's top tech stories: doj vs. realpage: the u.s. justice department sues property management software company realpage for alleged algorithmic collusion to raise rents, marking a significant case against such practices amid surging rent prices. halliburton cyberattack: energy giant halliburton shuts down internal systems following unauthorized access, while u.s. department of energy confirms no disruptions in energy services. meta & spotify criticize eu ai regulations: ceos of meta and spotify argue that existing eu ai rules hinder innovation and the use of public data for ai model training, leading meta to withhold its latest ai model from the european market. andrew ng steps down: ai luminary andrew ng announces his transition from ceo of landing ai to executive chairman, as dan maloney steps up as the new ceo and ng focuses on investments through his ai fund. apple's upcoming event: rumors suggest apple will unveil the iphone 16, apple watch series 10, new airpods, and macbook pro laptops with m4 chips at an event on september 10th, with potentially significant updates in ios 18.1. stay tuned for tomorrow's tech updates!
AI is the main topic of conversation for this week's episode. Between continued advancements in the technology and governments trying to put safeguards in place to prevent a Terminator-style future, there's plenty going on. OpenAI has introduced a new feature of its API called “structured outputs,” which essentially lets developers pass in a valid JSON schema that guarantees the model will always generate responses that adhere to it. No omission of required keys, no extra values you weren't expecting, no need for strongly worded prompts to achieve consistent formatting. On the flip side, the European Union has introduced the first legislation to develop safe and trustworthy AI within its borders. This legislation includes a 4-tier risk classification system for all AI products, ranging from minimal risk to unacceptable risk, and a 3+ year timeline for companies developing AI products to comply with these new regulations. The React core team announces that changes to Suspense will delay the release of React 19 a bit longer than originally planned, but should ultimately lead to a better end user experience for devs and library authors alike. And the news rounds out with a game of “guess the CSS usage statistics,” compiled from Chrome's anonymous usage statistics. Ever wondered what percentage of websites are styling scrollbars, or how many set height? Not to mention the number of CSS properties we've never heard of before: font-synthesis-small-caps, anyone?
News:
Paige - EU rolls out first-ever legal framework for AI
Jack - OpenAI Structured Outputs
TJ - Chrome CSS usage statistics
Bonus News:
React 19 release delayed
What Makes Us Happy this Week:
Paige - Deadpool & Wolverine movie
Jack - Facebook Marketplace
TJ - The Lord of the Rings film series
Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or Tweet us on X @front_end_fire.
Front-end Fire website
Blue Collar Coder on YouTube
Blue Collar Coder on Discord
Reach out via email
Tweet at us on X @front_end_fire
Is the EU AI Act about product safety or fundamental rights? Join us in this enlightening episode of The FIT4Privacy Podcast, in which host Punit Bhatia sits down with Caro Robson, a leading expert in AI regulation. Together, they explore the AI Act as more of a safety-oriented framework than just a rights-protection safeguard. Caro also dives into the vital roles of international standards from bodies like the OECD, UNESCO, ISO, and NIST, and discusses conformity assessment, compliance procedures, and the concept of regulatory sandboxes. Uncover how these developments align with GDPR and what they mean for the future of AI systems, especially in high-risk applications. This conversation is a must for anyone keen on understanding the intricate balance between regulation and innovation in the AI landscape. Tune in to Episode 119, Season 5, and subscribe to The FIT4Privacy Podcast for more insightful dialogues. If you find our content valuable, please leave a review and share it with others interested in the evolving world of AI regulation. KEY CONVERSATION POINTS Introduction How Caro Robson got into the privacy space Understanding the need for the EU AI Act Why did the EU push for the EU AI Act Will there be similarities in regulation? How the EU AI Act can help protect established product standards Will the EU AI Act apply to products which aren't on the market yet? Can companies categorize systems from high risk to low risk? Final message ABOUT THE GUEST Caro is a renowned expert and leader in digital regulation. She is a passionate advocate for ethical AI and data governance, with over 15 years' global experience across regions and sectors, designing and embedding practical solutions to these challenges. Caro has worked with governments, international organisations and multinational businesses on data and technology regulation, including as strategy executive for a regulator and leader of a growing practice area for a prominent public policy consultancy in Brussels. Caro was recently appointed UK Ambassador for the Global AI Association and is an expert observer to the UNECE Working Party on Regulatory Cooperation and Standardization Policies (WP.6), Group of Experts on Risk in Regulatory Systems. Caro holds an Executive MBA with distinction from Oxford, an LLM with distinction in Computer & Communications Law from Queen Mary, University of London, and is a Fellow of Information Privacy with the International Association of Privacy Professionals. She has contributed to legal textbooks, publications, and research on privacy and data governance, including for the EU, ITU and IEEE. ABOUT THE HOST Punit Bhatia is one of the leading privacy experts, who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching privacy professionals. Punit is the author of the books “Be Ready for GDPR”, which was rated as the best GDPR book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life.
He has developed the philosophy named ‘ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe. RESOURCES Websites www.fit4privacy.com, www.punitbhatia.com Podcast https://www.fit4privacy.com/podcast Blog https://www.fit4privacy.com/blog YouTube http://youtube.com/fit4privacy
The conversation revolved around the regulatory challenges surrounding AI adoption, particularly the recently passed EU AI regulation. Elena Gurevich emphasized the importance of transparency and responsibility in AI development and use, and discussed the challenges of regulating AI while promoting innovation. Punit Bhatia and Elena Gurevich explored the ethical implications of relying solely on AI tools to detect plagiarism or AI use in academic decision-making, and the importance of critical thinking and ethical considerations in integrating AI in business. Watch and listen to this podcast with Elena Gurevich and Punit Bhatia, highlighting the need for a risk-based framework and responsible innovation in the EU to ensure compliance with AI regulations and maintain customer trust. KEY CONVERSATION POINTS 02:05 - What fascinates Elena about the IT, IP, and blockchain world 04:33 - The evolution of AI, especially in terms of regulation 08:10 - The EU AI Act 11:16 - EU AI Act regulation and its potential impact on companies 16:33 - Copyright in AI-generated content 22:20 - Risks and harms of AI 24:39 - How these technologies can help create a culture of transparency 26:59 - How blockchain works for transparency 30:24 - Plagiarism caused by AI 36:20 - Elena's advice on using AI 41:15 - Contact Elena and outro ABOUT THE GUEST Elena Gurevich is an Intellectual Property attorney with a background in both Blockchain and Artificial Intelligence. With a degree from the Benjamin N. Cardozo School of Law, Elena Gurevich combines a strong legal background with a deep understanding of the technical aspects of Blockchain and AI. This allows her to provide expert counsel to clients on a wide range of legal issues related to these cutting-edge technologies. Elena is known for her ability to evaluate complex technology and provide practical, business-minded advice. Elena's expertise includes advising on copyrights and trademarks, with a focus on new emerging technologies such as Blockchain and AI. Elena Gurevich also has experience working on NFT licensing and other commercial transactions involving cutting-edge technology, and has provided counsel to a wide range of clients, including start-ups and tech companies as well as individuals. Elena is a member of the New York State Bar Association, where she stays up-to-date on the latest developments in these rapidly-evolving fields. Elena's unique combination of technical knowledge and legal expertise makes her a valuable asset to any organization working with Blockchain and AI, and she is well-positioned to continue providing valuable counsel to clients in the future. ABOUT THE HOST Punit Bhatia is one of the leading privacy experts, who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching privacy professionals. Punit is the author of the books “Be Ready for GDPR”, which was rated as the best GDPR book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life.
He has developed the philosophy named ‘ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe. RESOURCES Websites: www.fit4privacy.com, www.punitbhatia.com Podcast: https://www.fit4privacy.com/podcast Blog: https://www.fit4privacy.com/blog YouTube: http://youtube.com/fit4privacy
Aired in the Ö1 Mittagsjournal on August 1, 2024.
Sara Hooker is VP of Research at Cohere and leader of Cohere for AI. We discuss her recent paper critiquing the use of compute thresholds, measured in FLOPs (floating point operations), as an AI governance strategy. We explore why this approach, recently adopted in both US and EU AI policies, may be problematic and oversimplified. Sara explains the limitations of using raw computational power as a measure of AI capability or risk, and discusses the complex relationship between compute, data, and model architecture. Equally important, we go into Sara's work on "The AI Language Gap." This research highlights the challenges and inequalities in developing AI systems that work across multiple languages. Sara discusses how current AI models, predominantly trained on English and a handful of high-resource languages, fail to serve the linguistic diversity of our global population. We explore the technical, ethical, and societal implications of this gap, and discuss potential solutions for creating more inclusive and representative AI systems. We broadly discuss the relationship between language, culture, and AI capabilities, as well as the ethical considerations in AI development and deployment. YT Version: https://youtu.be/dBZp47999Ko TOC: [00:00:00] Intro [00:02:12] FLOPS paper [00:26:42] Hardware lottery [00:30:22] The Language gap [00:33:25] Safety [00:38:31] Emergent [00:41:23] Creativity [00:43:40] Long tail [00:44:26] LLMs and society [00:45:36] Model bias [00:48:51] Language and capabilities [00:52:27] Ethical frameworks and RLHF Sara Hooker https://www.sarahooker.me/ https://www.linkedin.com/in/sararosehooker/ https://scholar.google.com/citations?user=2xy6h3sAAAAJ&hl=en https://x.com/sarahookr Interviewer: Tim Scarfe Refs The AI Language gap https://cohere.com/research/papers/the-AI-language-gap.pdf On the Limitations of Compute Thresholds as a Governance Strategy. https://arxiv.org/pdf/2407.05694v1 The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm https://arxiv.org/pdf/2406.18682 Cohere Aya https://cohere.com/research/aya RLHF Can Speak Many Languages: Unlocking Multilingual Preference Optimization for LLMs https://arxiv.org/pdf/2407.02552 Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs https://arxiv.org/pdf/2402.14740 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ EU AI Act https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf The bitter lesson http://www.incompleteideas.net/IncIdeas/BitterLesson.html Neel Nanda interview https://www.youtube.com/watch?v=_Ygf0GnlwmY Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet https://transformer-circuits.pub/2024/scaling-monosemanticity/ Chollet's ARC challenge https://github.com/fchollet/ARC-AGI Ryan Greenblatt on ARC https://www.youtube.com/watch?v=z9j3wB1RRGA Disclaimer: This is the third video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview.
■DAILY BRIEF BILINGUAL here: https://sbwl.to/dbb-smry July 18: What is happening in the world right now [News sources] 1. Dark clouds over the EU leader's re-election https://x.gd/JFqCl https://x.gd/aDZOF 2. China is testing AI https://x.gd/1wGHL https://x.gd/Vq4gO 3. Mysterious deaths at a luxury Bangkok hotel https://x.gd/nRTmp https://x.gd/R11iw 4. This year's most talked-about dramas are all out https://x.gd/go3TX https://x.gd/cIIRZ 5. Cultured meat can now be used in pet food in the UK https://x.gd/Hqou3 https://x.gd/Whpsb SPINEAR and nosh are collaborating! A 3,000-yen-off campaign for first-time purchases (through August 31, 2024) is underway. For details, see the URL below! https://sbwl.to/spinear_nosh_ov
(0:00) Intro
(1:20) About the podcast sponsor: The American College of Governance Counsel.
(2:06) Start of interview.
(2:37) Natasha's "origin story."
(6:25) On the risks and opportunities of AI.
(8:39) On the regulatory landscape of AI in the US. Reference to President Biden's Executive Order.
(11:40) On California's regulation of AI (SB 1047).
(15:24) On the international AI regulatory landscape, including the EU AI legislation.
(20:35) On the state of startups and venture capital in Silicon Valley.
(25:34) On the 'stay private or go public' debate.
(28:50) On the increased antitrust scrutiny by the FTC and DOJ, particularly in the tech industry.
(30:08) On the increased national security scrutiny via CFIUS reviews. The new geopolitics of dealmaking.
(35:46) On the increased politicization of the boardroom, including ESG and DEI.
(38:32) On boardroom diversity and challenges to SB-826 and AB-979 (California), and Nasdaq's Diversity Rule.
(42:20) Books that have greatly influenced her life: To Kill a Mockingbird, by Harper Lee (1960); The Handmaid's Tale, by Margaret Atwood (1985); Animal Farm, by George Orwell (1945)
(42:57) Her mentors.
(43:49) Quotes that she thinks of often or lives her life by: "Don't Self-Select."
(44:17) The living person that she most admires. One of them is Michelle Obama.
(51:17) An unusual habit or absurd thing that she loves.
Natasha Allen is a partner at Foley & Lardner in Silicon Valley, serving as Co-Chair for Artificial Intelligence, Co-Chair of the Venture Capital Committee, and a member of the Venture Capital, M&A, and Transactions Practices. You can follow Evan on social media at:
Twitter: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/
You can join as a Patron of the Boardroom Governance Podcast at:
Patreon: patreon.com/BoardroomGovernancePod
Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License
Angela Shen-Hsieh is an entrepreneur, technologist, architect, and designer who has worked in both the corporate and startup worlds. She loves to think about the future and build things. She has led teams to create new innovations with AI, big data, and collaboration. She's currently the founder and CEO of Alcōv, a home styling and shopping app.
Where to find Angela
* LinkedIn: https://www.linkedin.com/in/angelashenhsieh/
* X: https://twitter.com/angelashenhsieh
* Alcōv: https://alcov.co/
Maria
* X: https://go.ggutt.com/x
* Linkedin: https://go.ggutt.com/maria-linkedin
* TikTok: go.ggutt.com/tiktok
* YouTube: http://go.ggutt.com/videos
Timestamps
* 00:00 Introduction
* 02:18 Being intuition led and GGUTT feeling
* 03:14 Failure in collaboration
* 07:40 How designers lead collaborative processes
* 13:16 Bridging design and business
* 16:00 Design as a problem-solving discipline
* 20:50 Rethinking innovation in design
* 24:08 The UX of AI
* 30:56 How AI and human creativity intersect
* 36:41 Design insights from EU AI policies
* 43:31 The role of design in policy development
* 46:49 Angela on designing Alcōv
* 58:55 Advice for designing your life
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit ggutt.substack.com
Pre-IPO stock valuations = www.x.com/aarongdillon (see pinned post)
Pre-IPO stock index fact sheet = www.agdillon.com/index

00:07 | Mistral's new $6.2b valuation
- French AI large language model company
- Raised $645m at $6.2b valuation
- General Catalyst led; Nvidia, Salesforce, IBM participated
- Capital to be used to expand globally
- Tech is compliant with new EU AI regulation

00:48 | OpenAI $3.4b ARR
- AI large language model provider; partnerships with Apple and Microsoft
- $3.4b ARR as of May 2024, $2.0b ARR in Dec 2023, $1.0b in Jun 2023
- Anthropic $100m ARR, Cohere $22m ARR
- OpenAI has a 25x revenue multiple
- $200m ARR from Microsoft partnership
- New products: search engine, video-generating models

01:30 | OpenAI + Apple deal terms unveiled
- OpenAI's ChatGPT to be integrated into iPhones, iPads, Macs
- No Apple-to-OpenAI payment
- Apple believes brand and product exposure to Apple's millions of customers is more valuable
- Users will need an OpenAI subscription for some Apple interface use cases; both OpenAI and Apple will make incremental revenue
- Apple deal is not exclusive; in talks with Google Gemini and Anthropic

02:21 | OpenAI new CFO, CPO
- Sarah Friar new CFO; Nextdoor Holdings CEO, Square CFO, current Walmart board member
- Kevin Weil new CPO; Instagram, Twitter, Planet Labs PBC
- Focus on driving international revenue growth and enterprise customer revenue growth
- $103.8b secondary market valuation, +21% vs its Apr 2024 round

03:15 | Brex revenue +35% in 2023
- Online banking company for businesses
- Restructured leadership; Franceschi from co-CEO to sole CEO, Dubugras as chairman
- Raised over $1.5b in capital
- 30,000 business customers; DoorDash, Roblox
- 20% staff reduction in 2024
- Reduced cash burn by 50%, extending runway 4 yrs
- 2023 revenue +35%, gross profit +75%
- $3.8b secondary market valuation, -69% vs its Jan 2022 round

04:18 | Investment case for electricity?!
- PG&E, California electric utility, reports 3.5 gigawatts of incremental electricity demand from 24 new data centers
- Equal to the output of 3 nuclear power plants
- New data centers coming in next 5 yrs
- Electricity demand to grow 2% to 4% annually through 2040
- Electricity stands to be an incredible investment opportunity

05:19 | Databricks' new chart/graph tool
- Data management, analytics, and AI company
- Launching AI/BI, a new visualization tool
- Competes with Salesforce Tableau, Microsoft Power BI
- AI/BI uses AI to create charts/graphs via typed queries
- AI/BI is free to Databricks users, vs Salesforce/Microsoft, which charge fees

06:12 | Databricks revenue +60% vs 2023
- $2.4b ARR forecast as of Jun 2024, +60% vs 2023
- $1.6b in 2023 full-year revenue, +50% vs 2022
- 221 sales deals over $1.0m
- Net revenue retention +140%
- R&D is 33% of revenue over last 3 yrs
- 80% subscription gross margin over last 3 yrs
- $400m ARR from data warehouse product
- $42.8b secondary market valuation, -1% vs its Nov 2023 round

07:17 | Pre-IPO +2.56% for week, +71.53% for last 1 yr
- Up week: Chime +22.3%, CoreWeave +19.3%, Wiz +13.1%, Cohere +11.3%, Scale AI +8.8%
- Down week: OpenAI -5.5%, Epic Games -4.0%, Deel -2.9%, ByteDance -2.1%, Notion -1.9%
- Top valuations: ByteDance $292b, SpaceX $191b, OpenAI $104b, Stripe $76b, Databricks $43b

07:54 | 2024 Pre-IPO Stock Vintage Index week performance
- www.agdillon.com/index for fact sheet pdf
- 2024 Vintage Index top contributors since inception: Rippling +106%, Revolut +52%, Epic Games +44%, Klarna +43%, Anduril +27%
- Key metric averages for all Vintage Indexes 5 years old or older:
  - 3.31 distributed to paid-in capital (DPI)
  - 2.05 residual value to paid-in capital (RVPI)
  - 5.36 total value to paid-in capital (TVPI)
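For anyone new to these fund metrics: by the standard private-equity definitions (an assumption on our part, nothing specific to this index), total value to paid-in capital is simply the sum of the distributed and residual components, which is why the three averages above are mutually consistent:

$$\text{TVPI} = \text{DPI} + \text{RVPI} = 3.31 + 2.05 = 5.36$$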
Senior Research Fellows Dr. Eleanor Drage and Dr. Kerry McInerney share their insights on how artificial intelligence will impact society, using a feminist lens to rethink innovation and the importance of language in shaping our understanding of ‘good' technology. Dr Eleanor Drage is a Senior Research Fellow at the University of Cambridge Centre for the Future of Intelligence. She teaches AI professionals about AI ethics at Cambridge and presents widely on the topic. She specialises in using feminist ideas to make AI better and safer for everyone. She is currently building the world's first free and open access tool that helps companies meet the EU AI Act's obligations. Eleanor is also an expert on women writers of speculative and science fiction from 1666 to the present; see her book An Experience of the Impossible: The Planetary Humanism of European Women's Science Fiction. Dr Kerry McInerney (née Mackereth) is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she co-leads a project on how AI is impacting international relations. Aside from The Good Robot, Kerry is the co-editor of the collection Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines (2023, Oxford University Press) and the co-author of the forthcoming book Reprogram: Why Big Tech is Broken and How Feminism Can Fix It (2026, Princeton University Press). This episode was recorded in front of a live audience for an event in partnership with SPACE4. ABOUT THE HOST Luke Robert Mason is a British-born futures theorist who is passionate about engaging the public with emerging scientific theories and technological developments. He hosts documentaries for Futurism, and has contributed to BBC Radio, BBC One, The Guardian, Discovery Channel, VICE Motherboard and Wired Magazine. CREDITS Producer & Host: Luke Robert Mason Join the conversation on Facebook, Instagram, and Twitter at @FUTURESPodcast Follow Luke Robert Mason on Twitter at @LukeRobertMason Subscribe & Support the Podcast at http://futurespodcast.net
In this episode of Good Morning Hospitality, Leo, Sarah, and Raphi bring the latest industry news, trends, and thought-provoking discussions. We dive deep into the forthcoming Euro 2024 and its impacts on travel bookings and highlight Airbnb's decision to remove experiences from their platform. Raphi introduces us to HomeToGo's fresh AI innovations and how Google is revolutionizing search with AI, shifting the future of SEO. Join us for valuable insights into the evolving landscape of travel and hospitality. — Good Morning Hospitality is part of the Hospitality.FM podcast network and a Hospitality.FM Original. If you like this podcast, then you'll also love Behind The Stays with Zach Busekrus, which comes out every Tuesday & Friday, wherever you get your podcasts! This show is structured to cover industry news in travel and hospitality and is recorded live every Monday morning at 7 a.m. PST/10 a.m. EST. So make sure you tune in during our live show on our social media channels or YouTube and join the conversation live! Thank you to all of the Hospitality.FM Partners that help make this show possible, and if you have any press you want covered during the show, fill out this form! Learn more about your ad choices. Visit megaphone.fm/adchoices
Facts & Spins for May 22, 2024 Top Stories: The US and Saudi Arabia near a historic defense pact, Trump's defense rests its case in the hush-money trial, the EU's landmark AI rules get the green light, Israel reportedly prepares to scale back its Rafah offensive, 150 Yale graduates stage a walkout in support of the Palestinian people, Russia masses troops near another border region of Ukraine, Russia jails a hypersonic missile scientist, phase five of India's elections sees a drop in voter turnout, turbulence on a Singapore-bound flight leaves one passenger dead, and Trump Media reports a $327M Q4 net loss. Sources: https://www.verity.news/
In this podcast, we cover:
1. The role of serendipity in building meaningful careers
2. Ethical principles toward shaping more inclusive technologies
3. A feminist and anti-racist approach to AI

Eleanor started her career in financial technology before co-founding an e-commerce company. Now a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, she maintains her strong interest in commercial concerns and opportunities in AI by working to bridge the gap between industry and academia in AI ethics. She runs a team that is building the world's first free online auditing tool that allows companies to meet the EU AI Act's obligations, which have been enriched with feminist and antiracist principles. She previously explored what AI ethics currently means to AI engineers at a major tech multinational the size of Meta. Her advisory work in the AI ethics space also includes the UN Data Science & Ethics Group's 'Applied Ethics Toolkit'. You can learn more about her past and present projects, media appearances, and publications on her website. She has an international dual degree PhD from the University of Bologna and the University of Granada, where she was an Early Stage Researcher for the EU Horizon 2020 ETN-ITN-Marie Curie Project “GRACE” (Gender and Cultures of Equality in Europe). She has made two short films about science fiction utopias and dystopias, and co-created a feminist quotation-generating app called 'Quotidian'.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #56: Blackwell That Ends Well, published by Zvi on March 23, 2024 on LessWrong. Hopefully, anyway. Nvidia has a new chip. Also Altman has a new interview. And most of Inflection has new offices inside Microsoft.

Table of Contents
* Introduction.
* Table of Contents.
* Language Models Offer Mundane Utility. Open the book.
* Clauding Along. Claude continues to impress.
* Language Models Don't Offer Mundane Utility. What are you looking for?
* Fun With Image Generation. Stable Diffusion 3 paper.
* Deepfaketown and Botpocalypse Soon. Jesus Christ.
* They Took Our Jobs. Noah Smith has his worst take and commits to the bit.
* Generative AI in Games. What are the important dangers?
* Get Involved. EU AI office, IFP, Anthropic.
* Introducing. WorldSim. The rabbit hole goes deep, if you want that.
* Grok the Grok. Weights are out. Doesn't seem like it matters much.
* New Nvidia Chip. Who dis?
* Inflection Becomes Microsoft AI. Why buy companies when you don't have to?
* In Other AI News. Lots of other stuff as well.
* Wait Till Next Year. OpenAI employees talk great expectations a year after GPT-4.
* Quiet Speculations. Driving cars is hard. Is it this hard?
* The Quest for Sane Regulation. Take back control.
* The Week in Audio. Sam Altman on Lex Fridman. Will share notes in other post.
* Rhetorical Innovation. If you want to warn of danger, also say what is safe.
* Read the Roon. What does it all add up to?
* Pick Up the Phone. More good international dialogue on AI safety.
* Aligning a Smarter Than Human Intelligence is Difficult. Where does safety lie?
* Polls Show People Are Worried About AI. This week's is from AIPI.
* Other People Are Not As Worried About AI Killing Everyone. Then there's why.
* The Lighter Side. Everyone, reaping.

Language Models Offer Mundane Utility
Ethan Mollick on how he uses AI to aid his writing. The central theme is 'ask for suggestions in particular places where you are stuck' and that seems right for most purposes. Sully is predictably impressed by Claude Haiku, says it offers great value and speed, and is really good with images and long context, suggests using it over GPT-3.5. He claims Cohere Command-R is the new RAG king, crushing it with citations and hasn't hallucinated once, while writing really well if it has context. And he thinks Hermes 2 Pro is 'cracked for agentic function calling,' better for recursive calling than GPT-4, but 4k token limit is an issue. I believe his reports but also he always looks for the bright side. Claude does acausal coordination. This was of course Easy Mode. Claude also successfully solves counterfactual mugging when told it is a probability theorist, but not if it is not told this. Prompting is key. Of course, this also presumes that the user is telling the truth sufficiently often. One must always watch out for that other failure mode, and Claude does not consider the probability the user is lying. Amr Awadallah notices self-evaluated reports that Cohere Command-R has a very low hallucination rate of 3.7%, below that of Claude Sonnet (6%) and Gemini Pro (4.8%), although GPT-3.5-Turbo is 3.5%. From Claude 3, describe things at various levels of sophistication (here described as IQ levels, but domain knowledge seems more relevant to which one you will want in such spots).
In this case they are describing SuperFocus.ai, which provides custom conversational AIs that claim to avoid hallucinations by drawing on a memory bank you maintain. However, when looking at it, it seems like the 'IQ 115' and 'IQ 130' descriptions tell you everything you need to know, and the only advantage of the harder to parse 'IQ 145' is that it has a bunch of buzzwords and hype attached. The 'IQ 100' does simplify and drop information in order to be easier to understand, but if you know a lot about AI you can figure out what it is dropping very easily. Figure out whether a resume ...
Hosts: Hutch, on ITSPmagazine
The European Union's AI Act is an initiative aimed at regulating the field of artificial intelligence. On Wednesday, March 13, the Parliament approved the regulation. It seeks to establish a legal framework for the use of AI, but also to position the EU at the forefront of global digital governance, at least in this regard. The approach is comprehensive, touching on a wide spectrum of applications, from low-risk to high-risk categories, and tailoring regulatory requirements accordingly. Katrin Nyman-Metcalf, Adjunct Professor at TalTech and Associated Expert for e-Governance Academy, guides us through the propositions and principles of the AI Act, and how the EU plans to move toward ensuring a thoughtful and ethical use of artificial intelligence. This episode was recorded shortly before the approval of the EU AI Act.

The EU AI Act – principles, features, mission
“This is one of the first legal attempts by the EU to harmonise AI regulation across member states and protect against negative effects. The EU AI Act introduces a risk categorisation for AI, dividing it into categories based on the level of risk each poses. This approach dictates the level of regulation needed, focusing on what the technology does rather than prescribing specific uses. It's a general but effective method to ensure that AI development aligns with European values and standards. But it's also a measure to protect consumers, users, people,” Nyman-Metcalf begins.

The categorisation of AI systems into risk profiles, in a range from ‘minimal' to ‘unacceptable' risk, is crucial here. This risk-based approach allows for a fairly nuanced regulatory framework that can adapt to the diverse applications of AI, from consumer products to critical infrastructure. At the heart of the EU AI Act, after all, lies the ambition to safeguard European values and consumer rights while fostering a good environment for innovation. Thus the Act's dual focus: preventing fragmentation of AI regulations among member states and ensuring user and consumer protection, with an eye on the EU's internal market dynamics and another on its global competitiveness.

Moreover, the establishment of an EU AI office is expected to guide member states on the matter. “The EU AI office is set to play a coordinating role, not just overseeing regulation at the member state level but also facilitating dialogue with the industry and civil society. This approach, more proactive than previous initiatives like GDPR, aims to involve all relevant stakeholders from the outset, ensuring that the AI Act is shaped by a wide range of insights and concerns,” Nyman-Metcalf explains.
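Purely as an illustration of the risk-based logic described above, here is a toy Python sketch. The four tiers are the commonly cited ones (the episode mentions the range from 'minimal' to 'unacceptable'); the obligations listed are simplified paraphrases for intuition, not the regulation's actual text, and every name in the code is hypothetical.

```python
# Toy illustration of the AI Act's risk-based approach described above.
# Tiers and obligations are simplified examples, not legal text.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Higher tiers carry heavier duties; the Act regulates what a system
# does rather than prescribing specific uses.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["no new obligations (voluntary codes of conduct)"],
    RiskTier.LIMITED: ["transparency, e.g. disclose that the user is interacting with an AI"],
    RiskTier.HIGH: ["risk management system", "conformity assessment", "registration"],
    RiskTier.UNACCEPTABLE: ["prohibited outright"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: " + "; ".join(obligations_for(tier)))
```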
Many Haitians are troubled by an international plan to impose a transitional government. European Union lawmakers have approved the world's first comprehensive regulations on artificial intelligence. And as we barrel toward a presidential election with two unpopular candidates, third-party bids are scrambling to get on the ballot.

Want more comprehensive analysis of the most important news of the day, plus a little fun? Subscribe to the Up First newsletter.

Today's episode of Up First was edited by Tara Neill, Dana Farrington, Nick Spicer, Jan Johnson and Ben Adler. It was produced by Ziad Buchh, Ben Abrams and Lindsay Totty. We get engineering support from Stacey Abbott, and our technical director is Zac Coleman.

Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #55: Keep Clauding Along, published by Zvi on March 14, 2024 on LessWrong. Things were busy once again, partly from the Claude release but also from many other sides. So even after cutting out both the AI coding agent Devin and the Gladstone Report, along with previously covering OpenAI's board expansion and investigative report, this is still one of the longest weekly posts. In addition to Claude and Devin, we got, among other things, Command-R, Inflection 2.5, OpenAI's humanoid robot partnership reporting back after only 13 days, and Google DeepMind with an embodied cross-domain video game agent. You can definitely feel the acceleration. The backlog expands. Once again, I say to myself, I will have to up my reporting thresholds and make some cuts. Wish me luck.

Table of Contents
* Introduction.
* Table of Contents.
* Language Models Offer Mundane Utility. Write your new legal code. Wait, what?
* Claude 3 Offers Mundane Utility. A free prompt library and more.
* Prompt Attention. If you dislike your prompt you can change your prompt.
* Clauding Along. Haiku available, Arena leaderboard, many impressive examples.
* Language Models Don't Offer Mundane Utility. Don't be left behind.
* Copyright Confrontation. Some changes need to be made, so far no luck.
* Fun With Image Generation. Please provide a character reference.
* They Took Our Jobs. Some versus all.
* Get Involved. EU AI office, great idea if you don't really need to be paid.
* Introducing. Command-R, Oracle OpenSearch 2.11, various embodied agents.
* Inflection 2.5. They say it is new and improved. They seemingly remain invisible.
* Paul Christiano Joins NIST. Great addition. Some try to stir up trouble.
* In Other AI News. And that's not all.
* Quiet Speculations. Seems like no one has a clue.
* The Quest for Sane Regulation. EU AI Act passes, WH asks for funding.
* The Week in Audio. Andreessen talks to Cowen.
* Rhetorical Innovation. All of this has happened before, and will happen again.
* A Failed Attempt at Adversarial Collaboration. Minds did not change.
* Spy Versus Spy. Things are not going great on the cybersecurity front.
* Shouting Into the Void. A rich man's blog post, like his Coke, is identical to yours.
* Open Model Weights are Unsafe and Nothing Can Fix This. Mistral closes shop.
* Aligning a Smarter Than Human Intelligence is Difficult. Stealing part of a model.
* People Are Worried About AI Killing Everyone. They are hard to fully oversee.
* Other People Are Not As Worried About AI Killing Everyone. We get letters.
* The Lighter Side. Say the line.

There will be a future post on The Gladstone Report, but the whole thing is 285 pages and this week has been crazy, so I am pushing that until I can give it proper attention. I am also holding off on covering Devin, a new AI coding agent. Reports are that it is extremely promising, and I hope to have a post out on that soon.

Language Models Offer Mundane Utility
Here is a seemingly useful script to dump a github repo into a file (a rough sketch of what such a script does appears after these notes), so you can paste it into Claude or Gemini-1.5, which can now likely fit it all into their context window, so you can then do whatever you like. Ask for a well-reasoned response to an article, from an opposing point of view. Write your Amazon listing, 100k selling partners have done this. Niche product, but a hell of a niche. Tell you how urgent you actually think something is, from 1 to 10. This is highly valuable.
Remember: You'd pay to know what you really think. Translate thousands of pages of European Union law into Albanian (shqip) and integrate them into existing legal structures. Wait, what? Sophia: In the OpenAI blog post they mentioned "Albania using OpenAI tools to speed up its EU accession" but I didn't realize how insane this was - they are apparently going to rewrite old laws wholesale with GPT-4 to align with EU rules. Look, I am very pro-LLM, but for the love ...
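The post above links the actual repo-dump script rather than inlining it. As a rough illustration of what such a script typically does, here is a minimal, hypothetical Python sketch; the skip list, header format, and function names are assumptions, not the script the post refers to:

```python
# Minimal sketch: walk a local git checkout and concatenate every
# readable text file into one blob you can paste into a long-context
# model. File paths are kept as headers so the model can cite them.
import os
import sys

SKIP_DIRS = {".git", "node_modules", "__pycache__"}  # illustrative

def dump_repo(root: str) -> str:
    chunks = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune directories we never want to include.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binaries and unreadable files
            rel = os.path.relpath(path, root)
            chunks.append(f"===== {rel} =====\n{text}")
    return "\n\n".join(chunks)

if __name__ == "__main__":
    print(dump_repo(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Run as something like `python dump_repo.py path/to/checkout > repo.txt`, then paste the output into the model's context window.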
This Day in Legal History: Dred Scott Decided

On this day in legal history, March 6th, 1857, the U.S. Supreme Court issued its infamous decision in Dred Scott v. Sandford, a landmark case that deepened the nation's sectional divisions and paved the way for the Civil War.

The Court, led by Chief Justice Roger B. Taney, ruled that Dred Scott, an enslaved man who resided in free territories, was not a U.S. citizen and therefore had no right to sue in federal court. This decision effectively stripped Scott of any legal right to freedom.

Furthermore, the Court declared the Missouri Compromise, which prohibited slavery in certain U.S. territories, unconstitutional. This decision, later overturned by the 14th Amendment, inflamed tensions over slavery and propelled the nation closer to civil war.

The Dred Scott decision stands as a stark reminder of the dark chapters in American history and the ongoing struggle for equality. While the 14th Amendment later overturned this decision, its legacy continues to resonate, serving as a powerful symbol of the fight for justice and the enduring pursuit of a more perfect union.

The EU is close to passing sweeping AI regulations impacting businesses globally. The law categorizes AI uses by risk, banning high-risk practices like subliminal manipulation. Companies developing or using high-risk AI, like job-screening software, will face stricter controls and registration requirements. Though the EU market is the target, the law's reach extends due to its size. Experts urge companies to prepare now, as the consequences for non-compliance mirror those of the EU's GDPR, which has resulted in hefty fines for tech giants. Companies need to assess their AI use to determine if they fall under the act, as some applications, like biometric characterization, navigate complex classifications. The EU AI Act marks a significant step in regulating AI, and businesses worldwide should be aware of its potential implications.
EU Poised to Enact Sweeping AI Rules With US, Global Impact

The SEC is finalizing climate reporting rules for public companies. While the initial proposal faced criticism for its low threshold for reporting weather and transition costs, the final version sets a higher bar based on materiality. Companies will still disclose financial impacts of climate events, but in a footnote rather than directly in financial statements. They also need to report greenhouse gas emissions with outside verification in phases, with larger companies starting sooner. This phased approach aims to balance providing investors with valuable information while avoiding excessive burdens on businesses. The final rules represent a significant step in requiring companies to address climate risks and their financial implications, aligning with similar initiatives in California and Europe.
SEC Unveils Higher Threshold for Reporting on Climate Costs

The United Auto Workers (UAW) is making progress in organizing workers at foreign-owned auto plants in the South. Nearly a third of workers at a Toyota plant in Missouri have signed cards in support of the union. This is a key milestone as federal law requires at least 30% support to call for a union election. The UAW aims to add as many as 150,000 members through its campaign, which follows successful contract negotiations with the Big Three automakers.
UAW Hits 30% Support at Toyota Missouri Plant in Key Milestone

A federal appeals court upheld the conviction of Michael Avenatti, the former lawyer for adult film actress Stormy Daniels, for defrauding her.
Avenatti was accused of stealing nearly $300,000 in book deal proceeds from Daniels and forging her signature. He was sentenced to four years in prison for this conviction, which partially overlaps his sentence for extorting Nike. Avenatti is currently serving a total of 19 years in prison for various fraud and theft charges, and he is appealing this latest conviction.
Court upholds Michael Avenatti conviction for defrauding Stormy Daniels | Reuters

Elon Musk sued OpenAI, the AI startup he co-founded, claiming it strayed from its original mission and focused on profits. OpenAI denies the accusation and says Musk proposed a merger with Tesla and demanded significant control over the company, which they refused. They also claim Musk pushed them to raise more funds initially and later criticized their progress when he wasn't involved. OpenAI intends to dismiss the lawsuit and views it as the culmination of Musk's long-standing disagreements with the company.
OpenAI seeks to dismiss Musk claims, says billionaire pushed for merger with Tesla | Reuters

Get full access to Minimum Competence - Daily Legal News Podcast at www.minimumcomp.com/subscribe
Exploring Synthesia's colossal $90 million funding round for AI deepfake technology, AI's projected $4.4 trillion contribution to global GDP, and an analysis of the EU's AI legislation and its impact. Invest in AI Box: https://Republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community
Meta goes all in on AGI, Microsoft releases Copilot Pro, Runway's new multibrush tool, and more! Here's this week's AI news that matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Timestamps:
03:15 Meta shifts focus to AGI
10:15 Microsoft releases Copilot Pro
16:02 Google lags behind in AI product accessibility
19:45 Runway multitool brush
24:25 Samsung's AI phones
27:36 Samsung Galaxy S24: AI enhances photo search
32:10 Eleven Labs valued at over $1 billion

Topics Covered in This Episode:
1. Generative AI Advancements and Applications
2. Advancements in On-Device AI Computing
3. Evolution of the AI Software Market
4. AGI Race and Societal Impact
5. AI Governance and Privacy Regulations

Keywords: Qualcomm Snapdragon chip, generative AI capabilities, edge AI, on-device AI, first-person data, Samsung Galaxy S24, circle search, live translation, AI call assistants, Apple, environmental impact, 11 Labs, unicorn status, AI software market, Everyday AI, AI enthusiasts, LinkedIn thread, Mark Zuckerberg, artificial general intelligence, Meta, NVIDIA GPUs, AGI race, Microsoft Copilot Pro, DALL-E, Microsoft 365, Google Docs, Wellsaid Labs, EU AI governance, generative AI tools, misinformation, disinformation, text-to-speech
[You Too Can Be Mentally Strong! An Evening with Ito, Okuyama, and Kanda] On Friday, February 2, 2024, at 7 p.m., the three hosts will hold a public recording event. The in-person venue is the Asahi Shimbun Readers' Hall in Tsukiji, Tokyo, and the event will also be streamed online. Participation is limited to paid Asahi Shimbun Digital members. Apply here: https://que.digital.asahi.com/question/11012566
[Become a paid Asahi Digital member for 100 yen a month!] If you're going to start a paid membership, now is the time. The Asahi Shimbun Digital "Hatsu-Toku" first-timer campaign (until 2024/1/25): try the unlimited-reading Standard course (normally 1,980 yen/month) for 100 yen/month for two months. https://digital.asahi.com/pr/cp/2024/wtr/?ref=cp2024wtr_podcast
[About this episode] The Davos conference brings world leaders and business figures, from Bill Gates to Ukrainian President Zelensky, to a small town in Switzerland. This year's big themes are AI and the Middle East. What is being discussed there? We report live, connected to the podcast booth on site. * Recorded on January 18, 2024.
[Related links] The Globe+ Davos Report 2024 is here: https://globe.asahi.com/series/11035757
[Related articles] President Zelensky calls for investment in Ukraine; the EU and China spar over AI: https://globe.asahi.com/article/15120029
"Disinformation" is the biggest risk, says a World Economic Forum report: https://www.asahi.com/articles/ASS1C7FJHS1CUHBI00G.html?iref=omny
World leaders head for Davos: the appeal of debating in a small town: https://www.asahi.com/articles/ASR2466YSR24UHBI00H.html?iref=omny
[Cast and staff] Yu Miyaji (deputy editor, Globe editorial team); MC and audio editing: Wataru Kishigami
[Asapoki info] Feedback form → https://bit.ly/asapoki_otayori / Program calendar → https://bit.ly/asapki_calendar / Cast search tool → https://bit.ly/asapoki_cast / Latest news on X (formerly Twitter) → https://bit.ly/asapoki_twitter / Community → https://bit.ly/asapoki_community / Captioned episodes on YouTube → https://bit.ly/asapoki_youtube_ / Extra stories in our newsletter → https://bit.ly/asapoki_newsletter / All episodes on the official site → https://bit.ly/asapoki_lp / Advertising inquiries → http://t.asahi.com/asapokiguide / Email → podcast@asahi.com
See omnystudio.com/listener for privacy information.
In our in-person episode looking back at 2023 with Karin Rudolph, we chat about the Future of Life Institute letter, the existential risk of AI, TESCREAL, Geoffrey Hinton's resignation from Google, the AI Safety Summit, the EU AI Act and legislating AI, neural rights, and more...
Do This, NOT That: Marketing Tips with Jay Schwedelson | Presented By Marigold
In this short Tuesday episode, Jay and Kristin Nagel discuss some of the interesting things going on this week, including a new EU AI law, their thoughts on the hit Netflix movie "Leave the World Behind", and more.

Main Discussion Points:
- The EU has reached a deal on a sweeping AI law called the AI Act (00:01:03). It will require companies to label AI-generated content and design systems so AI content can be detected (00:01:24). This could have major implications, as EU laws often make their way to the US (00:01:49).
- The current #1 Netflix movie in the US is "Leave the World Behind" starring Julia Roberts, Mahershala Ali, and Ethan Hawke (00:03:31). Jay did not like the movie, but Kristin gave it an 8/10 (00:04:00). There seem to be two camps: people who liked it and people who think it absolutely sucked (00:04:40).
Timestamps:
0:00 SEND HELP
0:09 Apple blocks Beeper Mini...temporarily
1:38 EU reaches deal on AI Act
2:53 Grok AI plagiarizes ChatGPT
4:08 Vessi
4:49 QUICK BITS INTRO
4:57 Apple plans to launch Vision Pro
5:32 MSI apologizes for coolers
6:13 Apple to incentivize spatial audio
6:53 Google's "Project Ellmann"
7:34 VR for mice

News Sources: https://lmg.gg/7XdIX
Automation is a key asset for speeding up your workflow and becoming more efficient. But adding AI to your automation can seem like a daunting task. Luckily, Mark Savant, Founder of Mark Savant Media, is joining us to discuss simple ways you can automate your media marketing with AI to grow your company or career.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Mark and Jordan questions about AI automation
Related Episode: ChatGPT and Zapier: A Game-Changing Duo
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Timestamps:
[00:01:45] Daily AI news
[00:04:40] About Mark and Mark Savant Media
[00:07:30] Starting with AI automation
[00:12:50] Balancing human interaction and AI
[00:15:15] Automating media marketing
[00:19:45] AI tools for media marketing
[00:22:50] Audience questions
[00:28:15] More ways to use AI automations
[00:32:30] Mark's final takeaway

Topics Covered in This Episode:
1. Importance of leveraging AI for media marketing
2. Impact of AI on human-to-human interactions
3. Efficiency and cost-saving potential of AI
4. Tools and Software for Media Marketing

Keywords: AI automation, podcast industry, generative AI, workforce displacement, technological advancements, embracing technological change, mid-thirties to mid-forties, business efficiency, human-to-human interactions, marketing automation, digital media content, remote video calls, transcription, interpretation, remote work, hybrid teams, Cast Magic, ChatGPT, Zapier, Avoma, strategic selection, obsolescence prevention, software integration, competitive edge, EU AI bill, Mistral AI valuation, Spotify AI focus, small businesses, personalized messages, chatbots, sales calls

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
[Top news] ▼ LDP Abe faction: more than 10 lawmakers' offices suspected of receiving kickbacks exceeding 10 million yen ▼ UN Security Council: Gaza ceasefire resolution vetoed by the United States ▼ EU reaches broad agreement on a bill regulating AI use and more; the Commission President calls it "a world first"
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring a CEO & EU Tech Policy Lead to launch an AI policy career org in Europe, published by Cillian on December 6, 2023 on The Effective Altruism Forum.

Summary
We are hiring for an Executive Director and an EU Tech Policy Lead to launch Talos Institute[1], a new organisation focused on EU AI policy careers. Talos is spinning out of Training for Good and will launch in 2024 with the EU Tech Policy Fellowship as its flagship programme. We envision Talos expanding its activities and quickly growing into a key organisation in the AI governance landscape. Apply here by December 27th.

Key Details
- Closing: 27 December, 11:59PM GMT
- Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later. Ability to attend our upcoming Brussels Summit (February 26th - March 1st) would also be beneficial, though not required.
- Hours: 40/week (flexible)
- Location: Brussels (preferred) / Remote
- Compensation: Executive Director: 70,000 - 90,000. We are committed to attracting top talent and are willing to offer a higher salary for the right candidate. EU Tech Policy Lead: 55,000 - 75,000. We are committed to attracting top talent and are willing to offer a higher salary for the right candidate.
- How to apply: Please fill in this short application form
- Contact: cillian@trainingforgood.com

About Talos Institute

EU Tech Policy Fellowship
The EU Tech Policy Fellowship is Talos Institute's flagship programme. It is a 7-month programme enabling ambitious graduates to launch European policy careers reducing risks from artificial intelligence. From 2024, it will run twice per year. It includes:
- 8-week training that explores the intricacies of AI governance in Europe
- A week-long policymaking summit in Brussels to connect with others working in the space
- 6-month placement at a prominent think tank working on AI policy (e.g. The Centre for European Policy Studies, The Future Society)

Success to date
The EU Tech Policy Fellowship appears to have had a significant impact to date. Since 2021, we've supported ~30 EU Tech Policy Fellows and successfully transitioned a significant number to work on AI governance in Europe. For example:
- Several work at key think tanks (e.g. The Future Society, the International Center for Future Generations, and the Centre for European Policy Studies)
- One has co-founded an AI think tank working directly with the UN and co-authored a piece for The Economist with Gary Marcus
- Others are advising MEPs and key institutions on the EU AI Act and related legislation
We're conducting an external evaluation and expect to publish the results in early 2024. Initial indicators suggest that the programme has been highly effective to date. As a result, we have decided to double the programme's size by running two cohorts per year. We now expect to support 30+ fellows in 2024 alone.

Future directions
We can imagine Talos Institute growing in a number of ways. Future activities could include things like:
- Creating career advice resources tailored to careers in European policy (especially for those interested in AI & biosecurity careers). Similar to what Horizon has done in the US.
- Community-building activities for those working in AI Governance in Europe (e.g. retreats to facilitate connections, help create shared priorities, identify needs in the space, and incubate new projects).
- Hosting events in Brussels educating established policymakers on risks from advanced AI
- Activities that help grow the number of people interested in considering policy careers focused on risks from advanced AI, e.g. workshops like this
- Expanding beyond AI governance to run similar placement programmes for other problems in Europe (e.g. biosecurity). Establishing the organisation as a credible think tank in Eu...
The text of the European Union AI act was passed with a hefty majority in June 2023. It's one of the most stringent and wide-reaching pieces of legislation governing artificial intelligence to date, but what does that mean for global organizations inside (and outside) the European Union? In this episode, we'll be asking what the act means, why it matters, and where the uncertainties, controversies and challenges lie, with HPE Chief Technologist Matt Armstrong-Barnes.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.

Do you have a question for the expert? Ask it here using this Google form: https://forms.gle/8vzFNnPa94awARHMA
About the expert: https://uk.linkedin.com/in/mattarmstrongbarnes

Sources and statistics cited in this episode:
The G7 11-point code: https://www.reuters.com/technology/g7-agree-ai-code-conduct-companies-g7-document-2023-10-29/
The EU AI act opening statement: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Text of the EU AI act: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html
Center for Data Innovation report into the costs of the act: https://datainnovation.org/2021/07/how-much-will-the-artificial-intelligence-act-cost-europe/
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this content-rich episode, we uncover the remarkable journey of Synthesia, a company that has raised an impressive $90 million for its AI deepfake technology. Dive into the financial potential of AI as we discuss its anticipated contribution of $4.4 trillion to the global GDP. We also explore the latest developments in EU AI legislation, shedding light on the evolving regulatory landscape for artificial intelligence.
Get on the AI Box Waitlist: https://AIBox.ai/
Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/
Follow me on Twitter: https://twitter.com/jaeden_ai
Like it or not, AI is here, and it will only get better. Where does that leave voice artists, podcasters, and content creators who currently have no protections in terms of owning their voice? Tim Friedlander is an award-winning voice actor, studio owner, advocate, and educator. Tim is also the Founder and President of NAVA, the National Association of Voice Actors, as well as co-owner and editor of The Voice Over Resource Guide. His work with NAVA puts him at the coal face of negotiations with the likes of Voices.com and the AI seeding debate. We have him on the show next week to give us an insight into where we might be headed in terms of a compromise, what protections we might be able to put in place, and, most troublingly, the short amount of time we have to get it done before it may effectively be too late. A big shout out to our sponsors, Austrian Audio and Tri Booth. Both these companies are providers of QUALITY audio gear (we wouldn't partner with them unless they were), so please, if you're in the market for some new kit, do us a solid and check out their products, and be sure to tell 'em "Robbo, George, Robert, and AP sent you"... As a part of their generous support of our show, Tri Booth is offering $200 off a brand-new booth when you use the code TRIPAP200. So get onto their website now and secure your new booth... https://tribooth.com/ And if you're in the market for a new mic or a killer pair of headphones, check out Austrian Audio. They've got a great range of top-shelf gear.. https://austrian.audio/ We have launched a Patreon page in the hopes of being able to pay someone to help us get the show to more people and in turn help them with the same info we're sharing with you. If you aren't familiar with Patreon, it's an easy way for those interested in our show to get exclusive content and updates before anyone else, along with a whole bunch of other "perks" just by contributing as little as $1 per month. Find out more here.. https://www.patreon.com/proaudiosuite George has created a page strictly for Pro Audio Suite listeners, so check it out for the latest discounts and offers for TPAS listeners. https://georgethe.tech/tpas If you haven't filled out our survey on what you'd like to hear on the show, you can do it here: https://www.surveymonkey.com/r/ZWT5BTD Join our Facebook page here: https://www.facebook.com/proaudiopodcast And the FB Group here: https://www.facebook.com/groups/357898255543203 For everything else (including joining our mailing list for exclusive previews and other goodies), check out our website https://www.theproaudiosuite.com/ "When the going gets weird, the weird turn professional." Hunter S Thompson

Summary
In this episode of Pro Audio Suite, we explore the controversial topic of AI voices with special guest Tim Friedlander. Voices.com has reportedly promised not to use people's voices from their database without permission, but the potential misuse of audition files by clients remains a concern. We discuss the fairness of voice synthesis, highlighting NAVA's call for consent and compensation for voice actors. Listeners will gain insight into the problematic quality of AI voice samples and the potential threat to new voice actors as AI begins to replace human voices in certain sectors. We also delve into the future role of agents as potential AI voice libraries, and the necessity for clear licensing fee structures and strong protections before the end of the year to prevent misuse.
#VoiceAIControversy #FairVoicesCampaign #FutureOfVoiceActing

Timestamps
(00:00:00) Introduction
(00:00:43) Voices.com's Promise
(00:03:31) Copyright Laws and AI Voices
(00:11:50) Review of AI Voice Samples
(00:12:59) Risks of Recorded Audio
(00:14:25) Dangers of AI
(00:19:57) AI Replacing Human Voices
(00:23:26) AI's Impact on Visual Artists

Transcript
Speaker A: Y'all ready be history.
Speaker B: Get started.
Speaker A: Welcome.
Speaker B: Hi. Hi. Hello, everyone, to the Pro Audio Suite.
Speaker A: These guys are professional and motivated with tech. To the VO stars: George Wittam, founder of Source Elements; Robert Marshall, international audio engineer; Darren "Robbo" Robertson; and global voice Andrew Peters. Thanks to Tribooth, Austrian Audio making passion heard, Source Elements, George the Tech Wittam, and Robbo and AP's international demos. To find out more about us, check theproaudiosuite.com.
Speaker B: Learn up, learner. Here we go.
Speaker C: And welcome. And don't forget, if you want to get a discount of $200 off your Tribooth, TRIPAP200 is the code you need. Now, this week, very topical. Of course, this AI thing will just not go away. And I know that there was a conversation about that place. I don't even like saying it. Anyway, I will say it. Voices.com supposedly have promised not to farm out people's voices from their database. Tim Friedlander has been involved in this and has written an article, which is what I saw. And Tim is joining us. G'day Tim.
: Hello. Hello. I'm here.
Speaker C: So what's the backstory to this and how did you get involved?
: The backstory to the AI Voices.com thing goes back to about May, when David Ciccarelli and Voices.com announced that they were releasing Voices AI, and for the voice acting community, that was a huge concern, basically for the main part being that many people have been uploading audio to their website, through their website, for 20 years. So theoretically, Voices.com or either of these sites has 20 years of very high quality data and audio that they could use to synthesize our voices. So through NAVA, which is an association that I run along with Carin Gilfry and a board of directors, we reached out to David and Stephanie and had a week of conversations with them to get the assurance that they had never been uploading or using or doing anything with auditions or files that have been uploaded through their website. And out of that came our Fair Voices campaign, or the Fair Voices pledge, that we launched. And we reached out to the other online casting sites, six other sites, to get the same assurances from them and also to make sure that they had changed their terms of service. So Voices.com at the time changed their terms of service to very explicitly say they would not be using any audio files uploaded through their site for machine learning or synthesizing voices.
Speaker C: Was that backdated or is that from that point onward?
: The terms of service were from that point onward, but they publicly at the time, and in various blog posts and other written areas, have said that they have never used audio files for that. The caveat being that once the audio files are uploaded and sent to a client, it's possible that the client then could take those audition files and use them. We don't know, and haven't seen, any companies per se who we know are doing that, but over the last ten or so years, a lot of these companies have been working in the AI TTS sphere and very potentially could have been using that audio for training.
We haven't seen it yet explicitly that we know of, but the inability to track our audio files and to know where the audio goes once we've emailed it out or uploaded it through a website makes that a real possibility.
: So to give this some perspective, is there any sort of copyright law or anything in place at the moment that protects someone from having their voice turned into an AI voice without their permission?
: That's a great question. Short answer is no. We've been working with the Copyright Office. I gave a presentation to the FTC last week at a roundtable. I've spoken with multiple lawyers and people across the country and across the world. We're working with a group in Europe to help with the EU AI Act. Most actors, voice actors, we give away our files as a work for hire, and the understanding is that that audio will be used for this very specific project. Unfortunately, that also basically gives the person we've given the audio file to the copyright and the ability to do whatever they want to with that. We're currently looking at the possibility that, since most voice actors record from home, from a music perspective we could theoretically be the owners of the master files, because a lot of times there are no contracts that are signed. But we're in the early stages of exploring that. Copyright law does not currently protect the voice actor. It protects the copyright holder, which 99% of the time is the company who hired us.
: Wow.
: The only other thing we could fall back on is right of publicity. But those laws really only exist in California and New York, which have the strongest laws, and then there's possibly biometric and privacy laws, but those really are only strongest in Illinois and, of all places, Texas, for privacy rights.
Speaker C: So is there a way of, you know... We've talked about this before: having some kind of fingerprint, so if anybody uses your voice, it's quite obvious it's yours because it shows some kind of a fingerprint in the waveform, potentially. I don't know how that would work, but there must be someone who's got...
: Something? Nobody does currently, that we know of. I've spoken with people at DARPA and at NASA. We are currently working... we've gone very deep in this conversation to try and figure out a way to do this, what we can do. And actually, I'm working on this with another company that I started about three years ago to create voice prints that we can then use to match a human voice to a synthetic voice, and also to match a human voice to a human voice to say that they're the same person. You could theoretically, if we can get that software in place, lock down a voice. So if somebody tries to upload it to a synthetic voice site, it would be locked and would be flagged; essentially, DRM for voice is what we're trying to do. The only other thing that you could do that might stay is some kind of spread spectrum watermarking that you could do within that. But it'd have to be embedded so deeply in there that you could rip this into Pro Tools, or rip it into something else, right, and transfer it between audio files or different DAWs, and not strip it out. If it's frequency-based, then it's very easy to pull out frequencies. Most of the watermarking that's out there is pretty easy to bypass currently.
Speaker C: Well, you just have to get Clarity or something and it's gone.
: Yeah, exactly. Yeah.
: So what's the compromise future from your perspective then?
Would it be a point where Darren Robertson is selling his voice sample disc to AI people? Or would you rather not see AI at all?
: I'm a musician primarily. I was in Seattle and was on the cusp of playing live and really exploring music when Napster and everything hit. And from a consumer perspective, that was one of the most eye-opening things that I'd ever seen: the ability to now have access to a massive amount of audio that I'd never heard before. I'm not anti-technology by any means, and definitely not anti-AI. I've worked with a synthetic voice company. I know people who are working with synthetic voice companies. The issue right now is that a lot of the foundational models, a lot of the foundations of these AI generative engines, these synthetic voice engines, are built on somebody's data, and more than likely they are being built on the literal voices of voice actors. So we become the foundation of a lot of these models. What NAVA has been asking for is consent, control and compensation. And it's the same thing that all artists are asking for, musicians are asking for, models are asking for: if you're going to take my data and what makes the essence of me, my voice or my image, or the way I walk or the way that I speak, the cadence that I have, the way that I stand... all of those things are very personal to all of us individually. And that is basically being turned into data, right? What makes us is being turned into data and put into these synthetic voice engines, these generative AI engines, to produce images and videos and photos and voices that are based on real humans and sound like and look like real humans. So we try to find consent, control and compensation for those, and really, consent to say yes or no: you can make a synthesized version of my voice.
Speaker C: So if we're talking about AI voices, we're not going to stop. It's already out. I mean, the thing's going to happen.
: They're out there. Yes, correct.
Speaker C: How do you perceive we control it?
: The only thing that we can currently do right now, and this is part of what the discussion at the FTC came up with last week, is really, I think, from a consumer perspective, a consumer safety perspective... I think that there is so much danger in disinformation and false information and just absolute lies that are out there, that can now be easily replicated and put into a video or an audio or something that is not very easily detectable. It's almost impossible to tell a synthetic voice from a human voice that's done well. It's hard to tell a synthetic image from a factual image. Laws and legislation, I think, are currently the only thing that we can really do on a broad scale to help stem the tide of the damage that's been done already. And going forward, we have to have very clear contracts and agreements in place that either do or do not allow for the use of somebody's voice in a synthetic voice or generative AI. That's partially what the WGA and SAG-AFTRA strikes are about. AI is at the top of that list of concerns, and it's a top concern for anybody who is in the arts right now, anybody that creates anything, that any of it could be put into a synthetic engine of some kind and have a new creation made out of it. We just came out of a pandemic where we relied on artists, on musicians and filmmakers and actors and voice artists. And the first thing we do out of that pandemic is try and replace those people.
That's really essentially what's happening. There is some accessibility. There are places where there is an argument to be made for doing things that a human couldn't generate. But when it's done to replace somebody, when it's done just to save money, that's where the concern comes in. And we know that money, those savings, are not going to be passed along to the consumer. A video game is not going to be cheaper for somebody to buy because it has synthetic voices. A movie is not going to be cheaper at the movie theater because it's synthetically generated. So they cut out the people. They cut out the people who actually make this work, and then that money just goes to the company that gets to save that money, at the expense of everybody.
: Why would Voices.com say the quiet part out loud? They're a bit like Uber, basically going like, hi, please work for us, make us money, and then we're going to put all of our money into figuring out how to make driverless cars so we don't need you. See, bitches.
Speaker C: Yeah, exactly.
: They did. I don't know if anybody saw the news last week, but David Ciccarelli is out, and Morgan Stanley (is it Morgan Stanley who was the venture capital firm? Whoever gave them the money) replaced him at the top. My guess is that they either went all in on AI and it's not paying off, or they weren't seeing... this is all purely speculation, this is just what we can have for conjecture in this place, so I know nothing for fact. But they invested a massive amount of money in them, what, $18 million? $15 to $18 million, seven years ago. And if they went all in on AI, I don't know if anybody's heard...
: They lost all of it.
: Yeah, they lost all of it. Has anybody actually... have you guys heard their AI? The Voices AI samples? They're terrible.
: Never heard it.
: They're terrible. They are terrible. But they were done with consent, control and compensation.
: Is it better or worse than voicealo?
: I haven't heard that one. But most of what I deal with, I deal with ElevenLabs and Play.ht; those are the two that I use most often, for example, for samples. And both of those are phenomenal. They are really good. And Voices AI is nowhere. It sounds about ten years old, the technology, from what I heard, and some of the voice actors who had their voices synthesized, who participated in this, are not happy with how that voice sounds.
Speaker C: Yeah, I was going to say, just to lighten up a bit, there's an old gag that could actually be modernized, and you can ask the question: how many voiceover artists does it take to change a light bulb? And the answer is none. You get an AI to do it.
: That was a drummer joke.
Speaker C: I know, we can update it.
: It just hasn't happened quite yet.
: I was going to say. Yeah, exactly. I've heard that one before somewhere. So the thing that occurs to me though, Tim, is it's great that we're protecting voice actors and all that sort of stuff, but obviously there's a crapload more voice samples out there. I mean, how many podcasts are there out there? And YouTube content creators and all the rest of it? All these places they could go mining for voices.
: How do we protect... you know, currently we can't; currently there is no protection. This goes into, you know, we talk about this being more... anybody who has recorded audio is at risk. Voice actors just happen to be the ones who make a living off of our recorded voices most of the time, but that doesn't mean that others aren't making a living off of what they have on podcasts and YouTube.
And even those who are just hobbyists at this, who just have a little bit of recorded audio, some Twitch stream. I can currently record all the audio off this and make a synthetic voice of anybody on this conversation right now, as can anybody who's listening to it.
Speaker B: Right.
: And it's easy.
: What work does it really kill, truly kill? Like, in the short term? I can see it taking out a crapload of e-learning and other things like that.
: It takes that out. That's any of the stuff that is purely factual; a lot of times we talk about factual stuff where I just need information read. A lot of that stuff gets taken out right away, which, if you can license your voice to that, then you can still have a career as a voice actor. One of the things that I think is the dangerous part of this, and this goes for any of the arts, is that a lot of these places that are going to be replaced first are where a lot of voice actors, a lot of artists, learn. This is how you cut your teeth and you come up through the industry. You do the free jobs, you do the cheap jobs, you do the entry-level jobs. Those entry-level jobs go away right away because it's cheaper. But a lot of the time it's better. Unfortunately, it is better. The audio quality of a voice actor who's just starting out, who is using a USB mic in their living room with hardwood floors and the refrigerator running and the AC on, is going to be at risk for sure, and I think rightfully so.
: I'll give you another one: the company that doesn't hire anybody, right, and they just see the AI voices as... it's better than having Mary Jo read it, because it's going to take her a long time or whatever. And so just type it into the system and there's our video. It's our instruction video on how to use our garden hose, absolutely, or something. And yeah, it's going to take out... I don't see it initially taking out real voice acting, but I agree; just conveying voice, there are plenty of AI voices I'd rather hear.
: Instead of the president of the auto workers union, for example.
: One of the things that we've seen, I think, that's been most hopeful in this is that those who work with voice actors already don't want to replace voice actors. Those people who are already working in the creative sphere, who are the producers, who are the directors, they're the people who say, I would never replace a voice actor. But it's all of those people who don't, who just need a voice actor for this one time, need a voice actor for this one training video, this one thing here, that they would go to a friend or a referral or wherever it might be, to the online casting site, and cast somebody who's new. They're not going to do that anymore. And we're not going to see... it's very hard to tangibly find the damage of this, because we're not seeing auditions going out where they're saying we're going to audition a human versus an AI and the AI gets the job. They're just not even going to bother to do the auditions in the first place. And we're never even going to know if it was a synthetic voice. So this is partially why, again, laws and legislation: there's a Senate bill out that NAVA is endorsing, Senate Bill 2691, the AI Labeling Act of 2023, which is going to require anything AI-generated to be labeled and marked, the same thing as you would with food. I think consumers have a right to know if what they're taking in is synthetic or human, whether it's emotional, spiritual, food. We have a right to know what we're interacting with.
: I want to know when I'm in the Matrix, personally.
: Right, exactly. Yeah, you want to know you're in the Matrix.
: I'm sure it puts to bed a lot of political issues too. I mean, imagine sitting there listening to a radio broadcast of Joe Biden declaring war on Russia when it's actually not really him. There are all sorts of issues this raises.
: Well, that as well, but it also raises the possibility of doubt. Take the Donald Trump tape from years ago: he could say, well, I never said that, that's a synthetic voice. Prove I actually said that. So you're running into proving both sides of that, and we're coming into an election.
: All sorts of possibilities raised, considering some of the possible candidates, right?
: Yes, absolutely.
Speaker C: Is there a way for a voice actor to say, okay, I'm going to upload my voice someplace where you can license it? You give them all the information of your voice, and then there's a license fee. If people want to grab it and use it for something, they pay you a license fee, the same as you would with library music.
: Absolutely. I've been pushing that example for a while. I think one of the ways that both Europe with the GDPR and the US with the FTC are approaching this is that we don't need new laws or new regulations; we just need to enforce the ones that exist. The precedent of music licensing can carry directly over to voice. You have a licensing fee, you have a usage fee, you have a generation fee: if you generate new content from this, I get paid a certain amount for the generation. There are companies out there that do that. VocaliD and Veritone were among the earlier ones, and they have a licensing fee in place, and the actors involved have consented and know where their voices go. We're working with a TTS company who reached out to us, and we're helping them with exactly this: licensing their deployments so that the voice actor knows where their voice is being used, gets paid for the original creation of the model, and knows where the voice goes from there. There are lots of possibilities. Unfortunately, none of those things really exist broadly right now. What actually happens is that people can upload your voice anywhere they want, create a synthetic voice, and use it, and there's nothing really stopping anybody. Even on the AI sites, all you have to do is click a button that says, yes, I have the right to upload this voice.
: And at what point do you stop?
: I mean, at what point do you stop anybody?
: If you blend two people's voices, or three, at a certain point you're like...
: It becomes, you know... I mean, that's what Siri, Alexa and Google voice are: blended voices, multiple people put together to create a new voice. So now you're talking about songwriting splits, right? Splits and points on a song. I've got three voices; do we all get an equal split of the usage of that voice, or does it not become an issue because it doesn't sound like anybody, so there's no conflict? Voice actors are also going to run into conflicts. What if my human voice is doing Pepsi? My synthetic voice can't do Coca-Cola.
And if it does, who's going to be held responsible? Or what about a voice that just sounds like me? How do you draw the line there? How do you even know? This voice sounds a lot like me: is it my voice or is it not? Do I get into a conflict because of the similarity?
: It's just like when actors are impersonated. It has to be like, all voices are synthesized, right?
: Yeah, exactly.
: From a synthetic voice saying that all voices are synthesized, including this voice.
: Yeah. Right.
Speaker C: But can you see, if you look into the future, the role of the agent changing? Will the agent all of a sudden become a library of voices that can potentially be used for AI? Would that be the shift?
: I honestly have no idea. I think we're already starting to see a split between human-only, no-AI, and those who are willing to have a conversation with it and explore it. I'm not by any means advocating replacing humans with AI voices, but we also know this technology has been around for years, right? It's been being built for the last 20 years, and solidly for the last ten for synthetic voices. It's here, and we can pretend it's not going to have an impact and hope that it doesn't, or we can go directly to these companies, which is what we've been doing. I've been speaking with the CEOs of these companies about why voice actors, why artists in general, are concerned. And we know they have a lot of money. Eleven Labs, they're worth $100 million, or they got an investment of $100 million a month or so ago. They have the money to pay voice actors fairly for the foundation, and if they can license that, the better the audio they have, the better the foundational model they can create. Those voice actors who want to do that should have the right to say yes. It's the right to say yes as much as it is the right to say no. You should have the right to say yes if you want to, I think.
Speaker C: I reckon there's going to be a scramble, with voice actors all trying to get themselves uploaded onto one of these business sites so they can be licensed out.
: Yeah, some of them have. Right now there's really no clear understanding of what that licensing fee should be. We've seen similar jobs on the casting sites where one is paying $500 and the next is paying $20,000, and they don't appear to be any different. A lot of the people who are casting don't have enough information about where those files are going to be used. Voice actors don't really know enough about how their voices will be used to know what to ask, and agents don't know what to ask either. There are so many unknowns, and so many potential uses out there that we can't even comprehend right now, that it's really hard to come up with what fair usage would be. A generation fee is what we're really interested in.
Speaker C: Well, it's going to be interesting to watch how this all unfolds.
: It's a massive can of worms, isn't it?
Speaker C: It is incredible.
: It is a massive can of worms. Yeah. Visual artists are being hit massively right now, obviously.
They're some of the hardest hit, because those images are so distinctive and the styles so distinct that when the outputs come out, it's obvious what they were trained on. And there are multiple lawsuits against AI companies right now from authors who have had their books ingested and used to train foundational models. And the thing is, once it's trained, you can't untrain it.
: Well, AP, was it you saying that there's a film in the can starring James Dean?
Speaker C: Yeah, that's what I'm told, sitting there waiting to go. So James Dean is going to be a co-star of a new film. They've used motion capture: they've got an actor who can actually walk and move like James Dean, they've done a motion capture, and then they built James Dean over the top of his skeleton, so to speak. And if that thing becomes a hit, you can see they're going to drag them all out.
: Right.
: And then Elvis really isn't dead.
: Yeah, right, exactly. We talk about that for VO too, with speech-to-speech.
: Well, that's the thing. How would you license that, Tim?
: It's "performed by", you know: the James Dean, performed by so-and-so. You want to give the motion capture person the credit for it. Speech-to-speech could work the same way. Carin Gilfry, our vice president, uses this example a lot: she could narrate The Audacity of Hope and then put Barack Obama's voice over it. So it would be the voice of Barack Obama, performed by Carin Gilfry.
: Right.
: So, as read by Barack Obama, performed by Carin Gilfry. Yeah. It's puppetry.
Speaker B: Yeah.
Speaker C: If I was the ad agency for 7-Eleven, I would actually get an AI of Elvis and have him in a 7-Eleven. And finally, it's true.
: Slurpee in one hand, donut in the other. Is that what you're saying?
: When does Elvis become public domain?
: A long time. A long time. It's a space to watch, isn't it? It really is.
Speaker C: And the space will be filled by AI.
: Yeah, it's interesting. And I think we've got about three months before something changes dramatically.
: So you think there is a time frame on this? Because I was actually sitting here thinking, God, how long will this take to sort out? But you're saying you think there might be a time frame on it?
: I think any legitimate and strong protections need to be in place before the end of the year. By the end of the year, it's going to be too late for us to have any kind of protection. The technology is moving too quickly; it's exponential. And it's going to be beyond our control, or potentially beyond the control of those who actually run the systems. Short of fully taking an entire system offline and destroying the models, it could get to the point where there is no control, no ability to consent, no ability to even know whose voice is being used. There would just be a multitude of generic voices, where one company gets paid when you use their voice but nobody has any idea who the human behind it is or where the content came from anymore.
: Watch this space, people.
Speaker C: Yes, indeed. Indeed. Exactly. By the way, this is actually really not me. I'm on holiday.
: This is my... not hard to do.
Speaker B: Well, that was fun.
Is it over?
Speaker A: The Pro Audio Suite, with thanks to Tri Booth and Austrian Audio. Recorded using Source Connect, edited by Andrew Peters and mixed by Voodoo Radio Imaging, with tech support from George "the Tech" Whittam. Don't forget to subscribe to the show and join in the conversation on our Facebook group. To leave a comment, suggest a topic or just say g'day, drop us a note at our website, theproaudiosuite.com.
Like it or not, AI is here, and it will only get better. Where does that leave voice artists, podcasters and content creators, who currently have no protections in terms of owning their voice? Tim Friedlander is an award-winning voice actor, studio owner, advocate, and educator. Tim is also the Founder and President of NAVA, the National Association of Voice Actors, as well as co-owner and editor of The Voice Over Resource Guide. His work with NAVA puts him at the coal face of negotiations with the likes of voices.com and the AI seeding debate. We have him on the show next week to give us an insight into where we might be headed in terms of a compromise, what protections we might be able to put in place, and, most troublingly, the short amount of time we have to get it done before it may effectively be too late.

A big shout out to our sponsors, Austrian Audio and Tri Booth. Both these companies are providers of QUALITY Audio Gear (we wouldn't partner with them unless they were), so please, if you're in the market for some new kit, do us a solid and check out their products, and be sure to tell em "Robbo, George, Robert, and AP sent you"... As a part of their generous support of our show, Tri Booth is offering $200 off a brand-new booth when you use the code TRIPAP200. So get onto their website now and secure your new booth... https://tribooth.com/ And if you're in the market for a new mic or killer pair of headphones, check out Austrian Audio. They've got a great range of top-shelf gear... https://austrian.audio/

We have launched a Patreon page in the hopes of being able to pay someone to help us get the show to more people and in turn help them with the same info we're sharing with you. If you aren't familiar with Patreon, it's an easy way for those interested in our show to get exclusive content and updates before anyone else, along with a whole bunch of other "perks" just by contributing as little as $1 per month. Find out more here... https://www.patreon.com/proaudiosuite

George has created a page strictly for Pro Audio Suite listeners, so check it out for the latest discounts and offers for TPAS listeners: https://georgethe.tech/tpas

If you haven't filled out our survey on what you'd like to hear on the show, you can do it here: https://www.surveymonkey.com/r/ZWT5BTD

Join our Facebook page here: https://www.facebook.com/proaudiopodcast And the FB Group here: https://www.facebook.com/groups/357898255543203

For everything else (including joining our mailing list for exclusive previews and other goodies), check out our website: https://www.theproaudiosuite.com/

"When the going gets weird, the weird turn professional." Hunter S Thompson

Summary
In this episode of the Pro Audio Suite, we explore the intriguing topic of copyright law around AI voice impersonation. We discuss the current legal state, revealing that there are no protections yet in place against turning someone's voice into an AI replica without their consent. We highlight our collaborative efforts with a European group aimed at contributing to the EU AI Act. This episode brings to light the challenges faced by voice actors who, despite providing their work for specific projects, unknowingly hand over the copyright, and the potential for misuse, to the receiver. Tune in to the Pro Audio Suite, brought to you by Tri Booth and Austrian Audio, on your preferred podcast platform to delve deeper into this complex and evolving issue.
#VoiceAI #CopyrightLaw #ProAudioSuite

Timestamps
(00:00:00) Copyright Law & AI
(00:01:01) Pro Audio Suite

Transcript
Speaker A: Coming up. Coming up.
Speaker B: Next, the Pro Audio Suite.
: Sneak peek.
Speaker A: So to give this some perspective, is there any sort of copyright law or anything in place at the moment that protects someone from having their voice turned into an AI voice without their permission?
Speaker B: That's a great question. The short answer is no. We've been working with the Copyright Office. I gave a presentation to the FTC last week at a roundtable. I've spoken with multiple lawyers and people across the country and across the world. We're working with a group in Europe to help with the EU AI Act. Most voice actors give away our files as a work for hire, and the understanding is that the audio will be used for one very specific project. Unfortunately, that also basically gives the person we've handed the audio file to the copyright and the ability to do whatever they want with it.
: The Pro Audio...
Speaker B: ...Suite. Thanks to Tri Booth...
: ...and Austrian Audio. Listen now on your favorite...
Speaker B: ...podcast provider.
Paris Marx is joined by Edward Ongweso Jr. to discuss how the venture capital industry works, why the technologies it funds don't deliver on their marketing promises, and how that's once again being shown in the hype around AI.

Edward Ongweso Jr. is a freelance journalist, co-host of This Machine Kills, and guest columnist at The Nation. You can follow Ed on Twitter at @bigblackjacobin.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon.

The podcast is produced by Eric Wickham and part of the Harbinger Media Network.

Also mentioned in this episode:
Edward wrote about the problems with venture capital and what the AI hype shows us about the industry for The Nation. Earlier this year, he wrote about the tantrum VCs threw after the Silicon Valley Bank collapse.
Paris wrote about where Elon Musk's vision for the X superapp comes from, why his Twitter rebrand isn't going so well, and why ChatGPT isn't a revolution.
In 2020, Sam Harnett wrote about the problem with tech media's coverage of the gig economy.
Uber used to want to be the "Amazon for transportation" and the "operating system for everyday life."
TIME reported on how OpenAI lobbying watered down EU AI rules.
Marc Andreessen wrote his pitch for "Why AI Will Save the World."

Support the show
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Alex Lebrun is the Co-Founder and CEO of Nabla, an AI assistant for doctors. Prior to Nabla, he led engineering at Facebook AI Research. Alex founded Wit.ai, an AI platform that makes it easy to build apps that understand natural human language; Wit.ai was acquired by Facebook in 2015. Prior to Wit, Alex was the Founder and CEO of VirtuOz, the world pioneer in customer service chatbots, acquired by Nuance Communications in 2013.

In Today's Episode with Alex Lebrun We Discuss:

1. Third Time Lucky and Lessons from Zuckerberg: How did Alex make his way into the world of startups with the founding of his first company? What worked with Alex's prior companies that he has taken with him to Nabla? What did not work that he has left behind? What were the single biggest takeaways for Alex from working with Mark Zuckerberg? How does Mark prepare for meetings? How does Mark negotiate so well?

2. Open vs Closed: Why does Alex believe the winning AI models will always be open? Why are open models not as transparent as people think they are? What are the biggest downsides of both open and closed models? Does Alex agree with Emad @ Stability that we will have "national data sets"?

3. Incumbent vs Startup: Who wins the AI race: startups or incumbents? How important is access to proprietary data in winning in AI today? How does Alex respond to the many VCs who suggest so many AI startups are merely "a thin layer on top of a foundational model"? Is that a fair critique? Which startups are best placed to challenge incumbents? Which incumbents have been most impressive in adopting AI into existing product suites?

4. Models 101: Size, Quality, Switching Costs: Why will the best companies switch the models they use often? Will any models in action today still be used in a year? How important is the size of the model? How will this change with time? In what way is new EU regulation around models going to harm European AI companies?

5. Location Matters: Who Wins: When looking at China, the US and Europe, who is best placed to win the AI war? What are the biggest challenges Europe and China face? Why is the US best placed to win the AI race, and what does it have to overcome first? If Alex were a politician, what would he do to ensure his country was best positioned?