In this episode, we use AI to take a deep dive into the EU's landmark Artificial Intelligence Act — the world's first comprehensive legislation regulating AI.
Recording of an on-site event that took place at the University of Vienna on February 28.
Programme:
14:00: Welcome/opening remarks by Univ.-Prof. Dr. Nikolaus Forgó, Head of the Department of Innovation and Digitalisation in Law of the University of Vienna.
14:15-15:30: Part I - AI Governance. Keynote by Dr. Klaus Steinmaurer, Executive Director for Telecommunications and Postal Services at Rundfunk und Telekom Regulierungs-GmbH, Vienna, followed by a panel moderated by Clara Saillant, LL.M., Research Associate at the Department of Innovation and Digitalisation in Law of the University of Vienna, with contributions from MMag. Elisabeth Wagner, Deputy Commissioner of the Austrian Data Protection Authority, and Prof. Dr. Peggy Valcke, Co-Director of the Centre for IT & IP Law at KU Leuven and Executive Board Member at the Belgian Institute for Postal Services and Telecommunications, followed by discussion and Q&A.
15:45-17:15: Part II - AI Literacy. Keynote by Dr. Elora Fernandes (remote), Postdoctoral Researcher at the Centre for IT & IP Law at KU Leuven, followed by a panel moderated by Univ.-Ass. Mag. Adriana Winkelmeier, Department of Innovation and Digitalisation in Law of the University of Vienna, with contributions from Ceyhun Necati Pehlivan, LL.M., Counsel and Head of the Technology, Media, Telecommunications and Intellectual Property Group at Linklaters, Madrid, and Editor-in-Chief of the Global Privacy Law Review at Wolters Kluwer; Dr. Jeanette Gorzala, Vice Chair of the AI Board of the Austrian Government and Founder of ACT AI NOW; Dr. Lukas Feiler, Partner at Baker McKenzie, Vienna; and Dr. Sonja Dürager, LL.M., Partner at bpv Hügel, Salzburg, followed by discussion and Q&A.
The increasing integration of artificial intelligence (AI) systems into various aspects of our daily lives continues to raise concerns about their potential adverse effects on society, democracy, fundamental rights, and the rule of law. In response, various governance frameworks have emerged across different jurisdictions to address the challenges posed by AI. Notably, the European Union's AI Act, which entered into force on 1 August 2024, stands out as the first legislation of its kind to introduce comprehensive rules governing the development and deployment of AI systems in the EU. However, understanding the complexities and nuances of this legislation remains a significant challenge. The Department of Innovation and Digitalisation in Law of the University of Vienna is therefore delighted to invite you to the first event in a series of three dedicated to exploring the implications, opportunities, and challenges presented by the AI Act. The event series is organised on the occasion of the launch of the books "EU Artificial Intelligence (AI) Act: A Commentary" and "AI Governance and Liability in Europe: A Primer" by Wolters Kluwer, edited by Ceyhun Necati Pehlivan, Nikolaus Forgó and Peggy Valcke. The first event of the series, in Vienna, will bring together leading experts, academics and policymakers in the field to discuss AI literacy measures and the governance structure under the AI Act. The next two events, in Brussels and Madrid, will be dedicated to further aspects of the AI Act, providing a comprehensive overview of, and encouraging diverse dialogue on, the new regulatory framework for AI in the EU. Link: https://id.univie.ac.at/news-und-events/detailansicht-news-und-events/news/untangling-the-eu-artificial-intelligence-act-ai-literacy-and-governance-1/
In this webinar Ms Lucilla Sioli, Director of the EU AI Office, addresses the Institute on the enforcement of the EU's new AI Act. Ms Sioli explores the enforcement framework for the AI Act and how this legislation will be implemented in practice. She focuses particular attention on the role of the EU AI Office and how it can help to ensure coherent cooperation between regulators across different sectors and Member States. About the speaker: Lucilla Sioli is the Director of the AI Office in the Directorate-General for Communications Networks, Content and Technology (DG CONNECT) at the European Commission. She was previously the Director for Artificial Intelligence and Digital Industry within DG CONNECT, where she was responsible for the development of AI policy, including the AI Act, and for the digitisation of industrial strategy. Ms Sioli holds a PhD in economics from the University of Southampton (UK) and one from the Catholic University of Milan (Italy) and has been a civil servant with the European Commission since 1997.
In this new episode, the President of the MR presents the ministers who will join Bart de Wever's government, with one watchword: employment reform. David Clarinval will take over the Ministry of Employment and the Economy and fully intends to take matters in hand. After Canada, Mexico and China, Donald Trump has announced that the European Union is next on his list. The customs tariffs imposed by the American president risk plunging the world into a trade war. The European Union, for its part, has brought the Artificial Intelligence Act into force, a law intended to regulate the use of artificial intelligence, notably in companies and SMEs. On the tech side, Christophe Charlot looks back at the AI DeepSeek, which shook up the markets last week, analysing its risks and its features. On the markets side, Erik Joly explains the rise in the price of gold, linked to events in the United States, Ukraine and the Middle East.
Crefovi's live webinar will begin on Friday 29 November 2024 at 15:00 London time (UK), and will provide an analysis of the Artificial Intelligence act and its impact on AI innovation and creativity. You haven't yet secured your free place for our upcoming webinar on the AI act? Here is your chance to join Annabelle Gauberti on Friday 29 November 2024, 15:00 London time (UK), as she explores the structure, content and impact of the AI act and how it is going to have repercussions in Europe and beyond. In this webinar, our expert speaker will discuss:
1. What is the Artificial Intelligence Act?
2. When does the AI act enter into force?
3. Who is affected and/or impacted by the AI act? AI systems
4. Penalties and sanctions (chapter XII AI act)
5. AI act: "promoting" innovation with a lot of caution
Check the written version of our thought leadership content on https://crefovi.com/articles/ai-act/ and https://crefovi.fr/articles/loi-sur-lia/ #AIact @artificialintelligenceact
The European Union is intensifying efforts to improve consumer protection in the rapidly evolving digital landscape. At the same time, industries are exploring how new technological solutions can be used to safeguard consumers in innovative new ways. The forthcoming "Digital Fairness Act" will impact a range of industries, from e-commerce to entertainment, which will need to adhere to new standards, including transparent marketing practices and measures to prevent addictive behaviours, all aimed at creating a safer and more equitable digital environment for consumers. The Commission has also launched a fitness check of EU consumer law on digital fairness to assess whether the current legal framework is sufficient to guarantee a high level of consumer protection in the evolving digital landscape. Whereas the upcoming Artificial Intelligence Act specifically aims to regulate AI systems and their application in industry, and the Digital Services Act (DSA) regulates online content, these new measures seek to level the digital playing field, address unfair practices, and ensure consumers are thoroughly protected both online and offline. Listen to this Euractiv Hybrid Conference to discuss the protection of consumers in the digital environment. Questions to be discussed include:
- What role does the Digital Services Act play in holding online platforms accountable for ensuring a high level of safety and privacy for consumers?
- What role should public consultation and stakeholder engagement play in shaping future digital fairness legislation to ensure it addresses real consumer concerns?
- What lessons can be learned from the lottery industry in their ongoing efforts to safeguard consumers and prevent addictive gambling behaviours?
A Clare MEP is set to head up an EU committee tasked with keeping tabs on artificial intelligence. Scariff's Michael McNamara has been appointed Co-Chair of the European Parliament's AI Monitoring Group, which has responsibility for overseeing the implementation of the Artificial Intelligence Act, which became law in August. The group will aim to ensure systems used by businesses, schools, states, and the general public are fairly regulated based on the risk they pose to society. An Independent member of the Renew Europe Group, McNamara says it's vital the technology isn't abused.
In this week's episode, Siobhain Ivers, Senior Director & Deputy Chief Compliance Officer at Etsy Inc. and Chair of the Institute's Fintech and Payments Working Group, speaks with John O'Connor, Partner, Technology & Data and co-Head of FinTech at William Fry LLP; Claire O'Connor, Senior Associate, Technology & Data Protection, Fintech at William Fry LLP, Member of the Institute's Fintech and Payments Working Group and Chair of the Institute's Southwest Regional Chapter Committee; and Greg James, Senior Manager at fscom and Member of the Institute's Fintech and Payments Working Group, on the topic of AI in Financial Services. The guests on today's episode have specific experience of navigating the challenges faced by financial services firms that have implemented, or are in the process of implementing, Artificial Intelligence solutions to empower their business. In this episode they explore some of those use cases, the provisions of the European Union's Artificial Intelligence Act, and the practical legal considerations when regulated financial institutions are procuring AI solutions.
Arsen Kourinian is a Partner in Mayer Brown's AI Governance and Cybersecurity & Data Privacy practices. He advises clients on data privacy and AI laws and frameworks. Arsen has published numerous articles regarding nuanced issues in these fields, including a forthcoming book entitled Implementing a Global Artificial Intelligence Governance Program. In this episode… The growing number of global and state privacy laws and AI regulations is prompting companies to integrate fundamental frameworks into their AI governance programs. While the US lacks a comprehensive federal AI law, states like Colorado have begun implementing AI regulations that could serve as a model for future state-level standards. With seemingly fragmented regulations, how can companies effectively develop an AI governance program? A multi-regulatory approach to AI governance can be challenging for companies to navigate with regulations like the EU AI Act, Colorado's Artificial Intelligence Act, and international standards like ISO and NIST. While the regulatory landscape is patchy, harmonizing across various regulations and frameworks can help companies meet compliance obligations and reduce risk. This includes forming an AI governance committee, implementing a data governance plan, conducting risk assessments, documenting accountability with policies and procedures, and continuous monitoring and oversight of AI vendors. In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Arsen Kourinian, Partner at Mayer Brown, about developing an AI governance program amid emerging global and state regulations. Arsen emphasizes incorporating key components and frameworks from various laws to develop AI governance programs. He also delves into the departments that assume responsibility for these programs and offers guidance on completing AI impact assessments, highlighting the importance of risk mitigation and understanding practical harms.
In today's podcast I talk about the EU Artificial Intelligence Act. If you are an EU-based business, or do business in the European Union and are working with AI in any aspect, there are some important compliance dates for implementation coming in the next couple of years that you definitely need to pay attention to. Curious? I'll see you in the episode! Website: www.futurecreatorspodcast.com Show Notes: EU Artificial Intelligence Act Article 4: AI Literacy Today's Sponsor The AI Expedition: A comprehensive introduction to generative AI tools to help get you up to speed, quickly, in order to start strategically positioning your business for the future. https://robotspaceshipacademy.com/aiexpedition Subscribe and Follow! Be sure to subscribe to our podcast so you don't miss the weekly episodes. We are currently on: Spotify, Apple Podcasts, YouTube, and Amazon Music. Affiliate Disclaimer: In full disclosure, some links in this podcast may contain affiliate codes, which means that if you click on them and buy something there, the owner of this podcast can potentially receive a commission for items purchased there.
In this episode, hosts Payal Nanavati and Savanna Williams talk to Roma Sharma and Wietse Vanpoucke about the European Union's Artificial Intelligence Act, which establishes a common regulatory and legal framework for AI within the European Union. This podcast episode features the following speakers: Roma Sharma is a counsel in Crowell's Health Care Group, where she advises a variety of health care clients on navigating the use of AI in the industry and complying with federal and state laws and regulations. Wietse Vanpoucke is an associate in Crowell's Brussels office, where his practice focuses on the life sciences and digital health sectors, relying on his deep experience with European and Belgian regulatory affairs and legal procedures. Payers, Providers, and Patients – Oh My! is Crowell & Moring's health care podcast, discussing legal and regulatory issues that affect health care entities' in-house counsel, executives, and investors.
Nosipho Radebe speaks to AI expert Johann Steyn.
Ever wondered how a simple gadget like the Xbox 360 could leave such a lasting impact? Tune into this episode of Tech Time Radio with Nathan Mumm as we bid adieu to the iconic Xbox 360 store, reminiscing about the memories and milestones it brought into our lives. We'll also dive into the tech evolution with Siri's upcoming integration with OpenAI's ChatGPT. James Riddle joins us to discuss the groundbreaking Artificial Intelligence Act and its implications for high-risk systems like medical devices. We'll also share practical tips on generating safe, free QR codes and the resurgence of these digital shortcuts.But that's not all! Discover the hidden dangers lurking in your living room with our spotlight on the increasing number of injuries linked to VR headsets. A friend's mishap during a virtual boxing match serves as a cautionary tale, leading us into a fascinating conversation with experts Melissa Kovacs and Daniel Kutcher on how VR impacts our proprioception. From the psychological reasons behind ignoring safety warnings to the differences in awareness between adults and children, we've got all the insights you need to navigate the virtual world safely. Plus, our regular segments like Mike's mesmerizing moment and the technology fail of the week add an extra layer of intrigue and entertainment.And then there's the controversy that's got everyone talking—Elon Musk's deepfake video involving Vice President Harris. We'll break down the backlash and discuss Musk's contentious response to Governor Gavin Newsom. Don't miss the "Nathan Nugget" of the week, offering a handy tip on generating free QR codes, and our high marks for Wild Turkey Kentucky Spirit whiskey, a perfect pairing with a cigar. Stay tuned to hear about Apple's strategic delay in releasing its new AI feature to comply with European Union tech rules and wrap up with essential tips on securing VR headsets. This episode is a tech lover's dream, packed with the latest news, expert opinions, and a bit of fun for everyone.
AI offers new tools to help competition enforcers detect market-distorting behavior that was impossible to see until now. Paris Managing Partner Natasha Tardif explains how AI tools are beginning to help prevent anticompetitive behaviors, such as collusion among competitors and abuse of dominance, as well as in merger control. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Natasha: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, our focus is going to be on AI and antitrust. AI is at the core of antitrust authorities' efforts and strategic thinking currently. It brings a number of very interesting challenges and great opportunities, really. In what ways? Well, let's have a look at how AI is envisaged from the perspective of each type of competition concept, i.e., anti-competitive agreements, abuse of dominant position, and merger control. Well, first of all, in relation to anti-competitive agreements. Several types of anti-competitive practices, such as collusion amongst competitors to align on market behavior or on prices, have been assessed by competition authorities. And when you look at those in relation to algorithms and AI, it's a very interesting area to focus on because a number of questions have been raised and some of them have answers, some others still don't have answers. The French competition authority and the German Bundeskartellamt issued a report sharing their thoughts in this regard in 2019. The French competition authority went as far as creating a specific department focusing on digital economy questions. At least three different behaviors have been envisaged from an algorithm perspective. First of all, algorithms being used as a supporting tool of an anti-competitive agreement between market players. So the market players would use that technology to coordinate their behavior. This one is pretty easy to apprehend from a competition perspective because it is clearly a way of implementing an anti-competitive and illegal agreement. Another way of looking at algorithms and AI in the antitrust sector, and specifically in relation to anti-competitive agreements, is when one and the same algorithm is being sold to several market players by the same supplier, creating therefore involuntary parallel behaviors or enhanced transparency on the market. We all know how much the competition authorities hate enhanced transparency on the market, right? And a third way of looking at it would be several competing algorithms talking, quote-unquote, to each other and creating involuntary common decision-making on the market. Well, the latter two categories are more difficult to assess from a competition perspective because, obviously, we lack one essential element of an anti-competitive agreement, which is, well, the agreement. We lack the voluntary element of the qualification of an anti-competitive agreement. In a way, this could be said to be the perfect crime, really, as collusion is made without a formal agreement having been made between the competitors. 
Now, let's look at the way AI impacts competition law from an abuse of dominance perspective. In March 2024, the French Competition Authority issued its full decision against Google in the Publishers' Related Rights case, whereby it again fined Google €250 million for failing to comply with some of its commitments that had been made binding by its decision of 21 June 2022. The FCA considered that Bard, the artificial intelligence service launched by Google in July 2023, raises several issues. One, it says that Google should have informed editors and press agencies of the use of their content by its service Bard, in application of the obligation of transparency which it had committed to in the previous French Competition Authority decision. The FCA also considers that Google breached another commitment by linking the use of press agencies' and publishers' content by its artificial intelligence service to the display of protected content within services such as Search, Discover, and News. Now, what is this telling us about how the competition authorities look at abuse of dominance from an AI perspective? Well, interestingly, what it's telling us is something it's been telling us for a while when it comes to abuse of dominance, and particularly in the digital world. These behaviors have even been so much at the core of the competition authorities' concerns that they've become part of the new Digital Markets Act. And this DMA now imposes obligations regarding the use of data collected by gatekeepers with their different services, as well as interoperability obligations. So in the future, we probably won't have these Google decisions in application of abuse of dominance rules, but most probably in application of DMA rules, because really now this is the tool that's been given to competition authorities to regulate the digital market, and particularly AI tools that are used in relation to the implementation of the various services offered by what we now call gatekeepers, the big platforms on the internet. Now, thirdly, the last concept of competition law that I wanted to touch upon today is merger control. What impact does AI have on merger control? And how is merger control used by competition authorities to regulate, review, and make sure that the AI world and the digital world function properly from a competition perspective? Well, in this regard, the generative AI sector is attracting increasing interest from investors and from competition authorities, obviously, as evidenced by the discussions around the investments made by Microsoft in OpenAI and by Amazon and Google in Anthropic, which is a startup rival to OpenAI. So the European Commission considered that there were no grounds for investigating the $13 billion investment of Microsoft in OpenAI because it did not fall under the classic conception of merger control. But equally, the Commission is willing to ensure that this does not become a way for gatekeepers to bypass merger control. So interestingly, there is a concern that the new way of investing in these tools would not be considered as a merger under the strict definition of what a merger is in the merger control conception of things. But somehow, once a company has been investing so much money in another one, it is difficult to think that it won't have any form of control over its behavior in the future. Therefore, the authorities are thinking of different ways of apprehending those kinds of investments. 
The French Competition Authority, for instance, announced that it will examine these types of investments in its advisory role, and if necessary, it will make recommendations to better address the potential harmful effects of those operations. A number of acquisitions of minority stakes in the digital sector are also under close scrutiny by several competition authorities. So, again, we're thinking of situations which would not give control in the sense of merger control rules currently, but that still will be considered as having an effect on the behavior and the structure of those companies on the market in the future. Interestingly, the DMA, the Digital Markets Act, also has a part to play in the regulation of AI-related transactions on the market. For instance, merger control of acquisitions by gatekeepers of tech companies is reinforced. There is a mandatory information requirement for these operations, no matter the value or size of the acquired company. And we know that normally, for information to be given to the competition authorities, notification is only required where certain thresholds are met. So we are seeing increasing attempts by competition authorities to look at the digital sector, particularly AI, through different kinds of lenses, being innovative in the way they approach it because the companies themselves and the market are being innovative about this. And competition authorities want to make sure that they remain consistent with their conceptions and concepts of competition law while not missing out on what's really happening on the market. So what does the future hold for us now? Well, the European Union is issuing its Artificial Intelligence Act, which is the first-ever comprehensive risk-based legislative framework on AI worldwide, applicable to the development, deployment and use of AI. It aims to address the risks to health, safety and fundamental rights posed by AI systems while promoting innovation and the uptake of trustworthy AI systems, including generative AI. The general idea on the market and from a regulatory perspective is that, whether you're looking at it through competition law or more generally as a society, when you're scrutinizing AI, even though there may be abusive behavior through AI, the reality of it is that AI is a wonderful source of innovation, competition, excellence on the market, and added value for consumers. So authorities and legislators should try to find the best way to encourage, develop and nurture it for the benefit of each and every one of us, for the benefit of the market and for the benefit of everybody's rights, really. Therefore, any piece of legislation or case law or regulation that will be implemented in the AI sector must really focus on the positive impacts of what AI brings to the market. Thank you very much. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. 
Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.
So much news, so little time: John and Glen unravel Colorado's new Artificial Intelligence Act, Apple's hookup with OpenAI to fuel its own systems and the apparent unraveling of Visa/Mastercard's merchant settlement. Plus, watching Siri age in warp speed. Links related to this episode: Law Firm Mayer Brown's synopsis of Colorado's AI Act: https://www.mayerbrown.com/en/insights/publications/2024/06/colorado-governor-signs-comprehensive-ai-bill Colorado Public Radio's take on the state law: https://www.cpr.org/2024/06/17/colorado-artificial-intelligence-law-implementation-ramifications/ Javelin Research on the imperiled Visa/Mastercard settlement: https://www.paymentsjournal.com/visa-mastercard-settlement-unlikely-to-be-approved/ Marketing Dive summarizes Apple Intelligence and the OpenAI link: https://www.marketingdive.com/news/apple-intelligence-ai-openai-chatgpt-partnership-wwdc/718539/ The BBC's view from overseas (including Elon's disses): https://www.bbc.com/news/articles/c4nn5mejl89o Tim Cook interviewed by Marques Brownlee: https://www.youtube.com/watch?v=pMX2cQdPubk Check out BIG's AI Development offerings, enabling credit unions to streamline operations, amplify member experiences and capture new opportunities in the digital financial landscape. https://www.big-fintech.com/Products-Services/AI-Development Find us on X and BlueSky at @bigfintech, @jbfintech and @154Advisors (same handles for both) You can also follow us on LinkedIn: https://www.linkedin.com/company/best-innovation-group/ https://www.linkedin.com/in/jbfintech/ https://www.linkedin.com/in/glensarvady/
The European Union formed in 1992 with the signing of the Maastricht Treaty in the city located at the southern tip of the Netherlands. Twelve countries initially joined the EU, and this has since grown to 27 member states. The European Union was once described as the "grand experiment." Experiments are not without challenges… and setbacks. The exit of the United Kingdom—or Brexit—in 2020 was a major disappointment for the EU, but it has otherwise proven successful, albeit fragile, and in many respects continues to strengthen as a unified, citizen-led democracy. The last few years have been tough on the EU. Economic uncertainty, rising inflation, and high energy prices, largely linked to Russia's invasion of Ukraine, have left Europeans with a deepening sense of pessimism. In a survey of Europeans in the 2023 Edelman Trust Barometer, only 20 percent agreed that they or their family will be better off in the next five years. Trust in government is low and there is a deepening divide on critical issues. At the same time, the EU is the largest single market globally today. With a population approaching 450 million people and a GDP of €16 trillion, if it were a country, it would be the world's third largest (by both these metrics). The EU is ultimately a political and economic partnership, but it faces similar challenges to other economies. And these challenges are frequently compounded by the need to find consensus—and often compromise—among the 27 member states on very complex issues. No doubt, that is essentially how democracy works—it's difficult by design—but the EU government and member states do just that: They find consensus and compromises, and they legislate. A recent example is the Artificial Intelligence Act, the first-ever legal framework on AI, which was unanimously endorsed by all 27 member states. Our guest today is Karen Melchior. In 2019, Ms. Melchior became a Member of the European Parliament (MEP). Frustrated with the state of politics in both Denmark and the EU, she first ran for office in 2014 and was elected the following year to the Copenhagen City Council, where she served on the Social Committee and the Health and Care Committee. Ms. Melchior has worked as a diplomat for the Danish Ministry of Foreign Affairs and in data protection law and IT security at the Danish Agency for Labor Market and Recruitment. She holds an MA in Law and a Master of Public Administration. As an MEP, Ms. Melchior serves on three committees: Legal Affairs; Women's Rights and Gender Equality; and Internal Market and Consumer Protection. She is also a member of Renew Europe, the third-largest political group in the European Parliament. In an online biography, Ms. Melchior said the following: "Political systems are created by people. They can also be changed by people. We cannot afford to let our frustrations grow to the point where they overshadow our capacity for action. Hate can be triggered as easily as hope. The society we have built, based on cooperation and freedom, is fragile. We need to fight every day to sustain it. We can achieve a lot if we dare to try! Let's roll up our sleeves, lift our gaze, and work together to create the kind of world we want."
Resources:
About MEP Karen Melchior (European Commission)
About MEP Karen Melchior
The EU Artificial Intelligence Act
Corporate Sustainability Due Diligence (European Commission)
When ChatGPT reproduces gender stereotypes and image-generating AI tools depict female scientists (half-)naked, that is sexism. But can software be sexist, and if so, how? In the sixth background episode of Informatik für die moderne Hausfrau, we look at various examples of sexist discrimination against women by AI-based applications. Since every AI model is only as good as the data it was trained on, we examine two very problematic phenomena: the gender data gap and gender data bias. You can read the thread by software developer David Heinemeier Hansson on Twitter/X mentioned in the episode here: https://x.com/dhh/status/1192540900393705474 More on the EU's Artificial Intelligence Act can be found here: https://artificialintelligenceact.eu/de/ All information about the podcast can be found on its website, https://www.informatik-hausfrau.de. To get in touch, send me an email at mail@informatik-hausfrau.de or reach out via social media. On Twitter, Instagram and Bluesky, the podcast can be found under the handle @informatikfrau (or @informatikfrau.bsky.social). If you like this podcast, please subscribe and leave a positive rating to help it gain more visibility. If you would like to support the production of the podcast financially, you can do so via the platform Steady. More information can be found here: https://steadyhq.com/de/informatikfrau
Welcome to today's episode of the 'AI Lawyer Talking Tech' podcast. We have an exciting lineup for you, packed with the latest advancements and insights at the crossroads of law and technology. In this episode, we will delve into the revolutionary impact of AI on legal practices, including a new AI-powered contract review platform that's streamlining business operations and the EU's groundbreaking Artificial Intelligence Act setting global standards. We'll also discuss recent developments in data privacy legislation in Nebraska and Maryland, explore the implications of AI in intellectual property and M&A transactions, and look at how legal tech is transforming the efficiency and efficacy of law firms. Whether you're a tech-savvy attorney, a legal professional keen on staying ahead of the curve, or simply curious about the future of legal tech, this episode is sure to offer valuable insights. Stay tuned as we explore these cutting-edge topics and more.
Justia Legal Resources: Bankruptcy Law Center (21 May 2024, Legal Marketing & Technology Blog)
AI contract review platform Superlegal raises $5M to help companies close deals quicker (22 May 2024, SiliconANGLE)
Artificial Intelligence Act: Landmark EU Law Sets Global Standard For Safe And Ethical AI System (22 May 2024, International Business Times UK)
TLT launches AI maturity assessment tool – TLT AI Navigator (22 May 2024, Glasgow Chamber of Commerce)
Digitizing Justice: The Case for Dedicated Online Dispute Resolution Legislation in India (22 May 2024, Mediate.com)
What ScarJo v. ChatGPT Could Look Like in Court (22 May 2024, Wired News)
Lunch Hour Legal Marketing 10 ESSENTIAL Digital Marketing Tools for Lawyers (MUST HAVE!) (22 May 2024, Legal Talk Network)
Mental Wellbeing and Fulfillment for Litigators: Sara Lord Interviews Gary Miles (22 May 2024, HB Litigation Conferences)
Sponsored Content: The Problem with Law Firm Billing (22 May 2024, Texas Bar Blog)
After Anil Kapoor, Jackie Shroff Follows Suit! Taking a Look at the Recent DHC Order From the Perspective of Personality Rights & Right to Livelihood (22 May 2024, SpicyIP)
Majority of small-medium firms plan growth and invest in tech – report (22 May 2024, Today's Conveyancer)
Nebraska Enacts Data Privacy Law (22 May 2024, Vensure.com)
Translating Legal Documents: Best Practices (21 May 2024, JD Supra)
TikTok ban vs. First Amendment: Legal experts explain (21 May 2024, Fast Company)
Generative AI in USPTO Practice: Key Considerations Under the USPTO's New Guidance (21 May 2024, Gibbons Law Alert)
Actionable Advice When Sharing Client Data with Vendors (21 May 2024, JD Supra)
Ground Level: The Emerging Ecosystem Of AI-Driven Products (21 May 2024, Forbes.com)
The Legal Issues to Consider When Adopting AI (21 May 2024, IEEE Spectrum)
Sarah Turbo AI: Legal Opinions with Unprecedented Speed and Precision (21 May 2024, Binghamton Homepage)
The Next Frontier: AI Easy For Lawyers To Really Build (21 May 2024, Above The Law)
Harnessing generative AI to create a new breed of supercharged lawyers and law firms (21 May 2024, Beta News)
Your Strategic Pricing Calculator (21 May 2024, LawVision)
Colorado Enacts Artificial Intelligence Legislation Affecting AI Systems Developers, Deployers (21 May 2024, JD Supra)
Legal AI Tools And Assistants Essential For Legal Teams [Sponsored] (21 May 2024, Above The Law)
Parcel Data Lookup for Legal Professionals: A Vital Resource (21 May 2024, Market Business News.com)
Solo and Small Firms Plan to Adopt AI More Quickly than Larger Firms, But Not Fast Enough for Clients, Clio Survey Finds (21 May 2024, LawSites)
Gender Perspectives on AI and the Rule of Law in Africa (21 May 2024, WebWire)
Army personnel feel 'let down' after MoD cyber attack (21 May 2024, MSN United States)
DOL Issues Artificial Intelligence Principles (21 May 2024, Littler)
Marketing X Generative AI: The protectability of marketing campaigns designed With GenAI (22 May 2024, Hogan Lovells)
Maryland and Nebraska Adopt Comprehensive Privacy Laws (21 May 2024, WilmerHale)
In this episode, we discuss the AI Act, which will significantly reshape how businesses and organizations in Europe use AI. HR professionals must comply with the Act's provisions to avoid penalties and ensure that employees are treated fairly. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide. Host: Kato Aerts (email) (Lydian / Belgium). Guest Speakers: David van Boven (email) & Michael Hopp (email) (Plesner / Denmark). Register on the ELA website here to receive email invitations to future programs.
Welcome to today's episode of 'AI Lawyer Talking Tech,' where we delve into the evolving intersection of law and technology. In this episode, we will explore pressing issues at the forefront of legal tech discussions, from the ethical challenges posed by generative AI in the legal profession to the constitutional debates over digital privacy and AI-generated content. Join us as we dissect landmark cases, legislative updates, and expert opinions that shape how law firms, corporations, and individual rights intersect with rapidly advancing technological capabilities.
Freedom to Read Foundation, American Association of School Librarians, and the Iowa Library Association Join Amicus Brief Supporting Students, Publishers, and Authors Challenging Iowa Book Ban Legislation (26 Apr 2024, Stephen's Lighthouse)
The Legal Ethics of Generative AI (26 Apr 2024, beSpacific)
First Amendment Law Firm Recruiting TikTok Creators To Challenge Possible Ban: Report (26 Apr 2024, HuffPost)
Drake's 'Taylor Made Freestyle' Disappeared From Social Media After Tupac's Estate Threatened Legal Action Over AI Usage (26 Apr 2024, UPROXX)
Custom-Crafted Omnichannel Strategies for Law Firms (25 Apr 2024, JD Supra)
How TikTok's Chinese parent company will rely on an American right to keep the app alive (25 Apr 2024, NewsWatch 12)
Workflow of the Week: Elevating Efficiency With Automated Matter Updates (25 Apr 2024, JD Supra)
Nebraska Fourth State to Enact Privacy Law in 2024 (25 Apr 2024, National Law Review)
TikTok and the U.S. government dig in for legal war (25 Apr 2024, ABA Journal)
The Fortress of Confidentiality: Grenada's Banking Privacy Laws for International Clients (25 Apr 2024, Market Business News.com)
Unleashing the Power of GenAI in Contracts Management: 3 Easy Ways to Start and Benefit (25 Apr 2024, JD Supra)
CMU Convenes Experts in Evaluating Generative AI (25 Apr 2024, Carnegie Mellon)
Utah's Artificial Intelligence Act swings into full-force May 1st (25 Apr 2024, JD Supra)
Rocket Matter Soars Higher By Automatically Tracking Time Everywhere And Adding Access To ChatGPT (25 Apr 2024, Above The Law)
Joint Guidelines for Secure AI Deployment [Alert] (26 Apr 2024, Cozen O'Connor)
Half of corporate giants do not check if their tech is discriminatory, exposing them to lawsuits and regulatory investigations (26 Apr 2024, Hogan Lovells)
HIPAA Privacy Protections for PHI related to Reproductive Health Care: The Final Rule and what Covered Entities and Business Associates need to Know (26 Apr 2024, Mintz Levin)
Nebraska Enacts Comprehensive Data Privacy Law (25 Apr 2024, White & Case)
SB 2979 – Illinois Biometric Privacy Act Legislation Passes The Illinois Senate (25 Apr 2024, Duane Morris)
The 4 Ways AI Can Help Boost Your Workplace Retention Efforts (25 Apr 2024, Fisher & Phillips LLP)
Generative Artificial Intelligence Representations and Warranties Emerge in Venture Financing Transactions (25 Apr 2024, Day Pitney)
Cozen Cities – April 24, 2024 (25 Apr 2024, Cozen O'Connor)
-------------------------------
15Mins Today VIP content and online courses
-------------------------------
15Mins Today VIP subscription plans: https://open.firstory.me/join/15minstoday
Core English for Professionals audiobook course: https://15minsengcafe.pse.is/554esm
-------------------------------
15Mins.Today related links
-------------------------------
Share your thoughts on this episode: comment link
Topic submissions/feedback: ask15mins@gmail.com
Official website: www.15mins.today
Join the Clubhouse live room: https://15minsengcafe.pse.is/46hm8k
Subscribe to the YouTube channel: https://15minsengcafe.pse.is/3rhuuy
Business cooperation/sponsorship enquiries: 15minstoday@gmail.com
-------------------------------
Below is the transcript for this episode (players have character limits; the full transcript is available on the official website)
-------------------------------
International News Read-Along Ep.K767: Europe's Groundbreaking AI Regulations: A Comprehensive Overview
Highlights:
- Europe's AI regulations pioneer risk-based oversight, distinguishing between low- and high-risk AI applications to ensure safety and accountability.
- The inclusion of provisions for generative AI models addresses concerns of misuse and promotes transparency in AI development.
- Stricter scrutiny of powerful AI systems mitigates risks of accidents and cyberattacks, emphasizing responsible innovation and compliance.
In a monumental move, European Union lawmakers have greenlit the world's foremost set of comprehensive regulations governing artificial intelligence (AI), paving the way for the implementation of the Artificial Intelligence Act later this year. This landmark legislation not only signifies a significant step forward in Europe's technological governance but also sets a precedent for global AI regulation.
The foundation of the AI Act lies in its risk-based approach to regulating AI applications. While low-risk AI systems, such as content recommendation algorithms, are subject to voluntary guidelines and codes of conduct, high-risk uses—like medical devices and critical infrastructure—are met with stricter requirements. Moreover, the legislation outright prohibits certain AI applications, such as social scoring systems and specific forms of predictive policing, due to their deemed unacceptable risks.
One of the key highlights of the AI Act is its inclusion of provisions addressing generative AI models, which have emerged as a prominent force in recent years. These models, capable of producing lifelike responses and content, are now subject to stringent regulations. Developers are mandated to disclose the data used for training these models and adhere to copyright laws. Additionally, any AI-generated content must be prominently labeled as artificially manipulated, mitigating the potential for misuse.
Of particular concern within the legislation are the most powerful AI systems, dubbed "systemic risks." These systems, such as OpenAI's GPT-4 and Google's Gemini, are under heightened scrutiny due to their potential for catastrophic accidents or cyberattacks. Developers of such systems are tasked with assessing and mitigating risks, reporting incidents, implementing cybersecurity measures, and disclosing energy usage. These measures aim to address apprehensions surrounding the proliferation of harmful biases and the misuse of AI technology.
Europe's role as a trailblazer in AI regulation is further solidified with the impending implementation of the AI Act. Following the final formalities, the Act is poised to become law, signaling Europe's commitment to fostering responsible innovation while mitigating potential risks. Violations of the AI Act carry significant penalties, with fines reaching up to 35 million euros or 7 percent of a company's global revenue, underscoring the imperative of compliance with these pioneering regulations.
In conclusion, Europe's adoption of the AI Act heralds a new era of AI governance, setting a precedent for other jurisdictions worldwide. With its risk-based approach, emphasis on transparency, and stringent oversight of powerful AI systems, the AI Act represents a significant milestone in the quest to harness the benefits of AI while safeguarding against its potential pitfalls.
Keyword Drills:
- Infrastructure (In-fra-struc-ture): One of the key highlights of the AI Act is its inclusion of provisions addressing critical infrastructure.
- Predictive (Pre-dic-tive): Moreover, the legislation outright prohibits certain AI applications, such as social scoring systems and specific forms of predictive policing.
- Manipulated (Ma-nip-u-lat-ed): Additionally, any AI-generated content must be prominently labeled as artificially manipulated, mitigating the potential for misuse.
- Trailblazer (Trail-blaz-er): Europe's role as a trailblazer in AI regulation is further solidified with the impending implementation of the AI Act.
- Milestone (Mile-stone): The AI Act represents a significant milestone in the quest to harness the benefits of AI while safeguarding against its potential pitfalls.
Reference articles:
1. https://www.taipeitimes.com/News/lang/archives/2024/03/19/2003815116
2. https://edition.cnn.com/2024/03/13/tech/ai-european-union/index.html
The AI Act, or Artificial Intelligence Act, is now a reality. But what does it mean for the Medical Device industry, and what should you do within your Quality or Regulatory Affairs activities? Erik Vollebregt, from Axon Lawyers, will tell us what we should understand about this new legislation and what the consequences are for the Medical Device community. Who is Erik Vollebregt? Erik specializes in EU and national legal and regulatory issues relating to medical devices, including eHealth, mHealth, software, and protection of personal data. He is an expert in life sciences regulation at the EU and Dutch level, with a focus on contracts, regulatory litigation against competent authorities, and M&A. Erik was initially trained as an intellectual property and competition lawyer, starting his career at the Directorate-General for Competition of the European Commission. He subsequently gained experience in contentious matters, commercial contracts, and transactional work at three large international law firms. He actively contributes to law and policy development at the national and EU levels via membership in specialized committees at branch associations and the European Commission. Erik also works as an arbitrator in medical device-related disputes and is regularly retained as an expert witness in foreign litigation. Erik worked and lived in Brussels and Stockholm for several years, and is fluent in Dutch, English, French, German, and Swedish. Chambers Europe 2017: Erik is known for his specialism in regulatory work, which covers medical technology, devices, and products as well as biotechnology. Clients confirm his strong capabilities, with one saying "he stands out to me. Whenever I work with lawyers he has been the best, with a solid scientific background. He has the perfect combination of skills and experience." Who is Monir El Azzouzi? Monir El Azzouzi is the founder and CEO of Easy Medical Device, a consulting firm that supports Medical Device manufacturers with Quality and Regulatory Affairs activities all over the world. Monir can help you create your Quality Management System or Technical Documentation, and he can also take care of your Clinical Evaluation or Clinical Investigation through his team or partners. Easy Medical Device can also become your Authorized Representative and Independent Importer Service provider for the EU, UK and Switzerland. Monir has around 16 years of experience within the Medical Device industry, working for small businesses as well as big corporate companies. He has supported around 100 clients to remain compliant on the market. His passion for the Medical Device field pushed him to create educational content such as a blog, a podcast, YouTube videos and LinkedIn Lives, where he invites guests who share educational information with his audience. Visit easymedicaldevice.com to know more. Links: Erik Vollebregt LinkedIn profile: https://www.linkedin.com/in/erikvollebregt/ Axon Lawyers website: https://www.axonlawyers.com/ Erik's blog: https://medicaldeviceslegal.com/ Social media to follow Monir El Azzouzi: LinkedIn: https://linkedin.com/in/melazzouzi Twitter: https://twitter.com/elazzouzim Pinterest: https://www.pinterest.com/easymedicaldevice Instagram: https://www.instagram.com/easymedicaldevice
In this episode of the Cyber Uncut podcast, hosts Phil Tarrant and Major General (Ret'd) Dr Marcus Thompson unpack building a nexus between academia, business and our national security organisations to build a stronger domestic cyber security industry. MAJGEN (Ret'd) Dr Marcus Thompson begins the podcast by discussing his experience setting up the Australian Defence Force's initial cyber and information warfare capabilities and discusses creating a pathway into the cyber security industry to foster public-private innovation. The pair then unpack the challenges in creating a nexus between academia, industry and national security with the risk of foreign interference on university campuses. Tarrant and Thompson then discuss the Change Healthcare hack and the operating model of politically and financially motivated criminal gangs. The podcast wraps up by analysing the benefits of the European Union's Artificial Intelligence Act and the Australian Digital ID legislation and how governments can improve legislation yet further. Enjoy the podcast, The Cyber Uncut team
Analysis and Implications of the EU AI Act (PrivacyCafé, Episode 3) On this episode of PrivacyCafé, hosts Richard Sheinis and Jade Davis, leaders of the Hall Booth Smith Data Privacy, Security, and Technology Group, discuss the European Union’s Artificial Intelligence Act and its implications for businesses globally, especially in the USA. They elaborate on how the […]
AI regulation faces complex ethical challenges, seeking to balance technological innovation with the protection of human rights. Recent legislative approvals, such as the European Union's Artificial Intelligence Act, highlight the need for clear guidelines to ensure the ethical and responsible use of artificial intelligence for the benefit of society. This episode is a reflection on, and an analysis of, the evolutionary process of AI. Are we facing the Robô Sapiens?
______
Cast: Pablo Magalhães, Felipe Bonsanto, Cleber Roberto and Eliezer Fernandes
Editing: Reverbere Estúdio
Cover: image generated by AI (DALL-E 2)
______
Read the article "Alambamento: o casamento tradicional Angolano" (Alambamento: the traditional Angolan wedding) by Isaac Jorge, columnist at Portal Águia, our content partner!
______
LISTEN TO O HISTORIANTE ON ORELO! We get paid for every play, and it costs you nothing! https://orelo.cc/ohistoriante
______
SUPPORT O HISTORIANTE! At apoia.se/historiante or in the Orelo app, contribute R$4 per month. Besides helping us, you get access to our rewards group! You can also contribute any amount via our PIX: ohistoriante@gmail.com
______
LISTEN TO OUR PLAYLIST
______
TAKE PART IN OUR OPINION SURVEY!
______
THANK YOU, SUPPORTERS! Alex Andrade; Aldemir Anderson; Andreia Araujo de Sousa; Aciomara Coutinho; Arley Barros; Bruno Gouvea; Carolina Yeh; Charles Guilherme Rodrigues; Eduardo dos Santos Silva; Eliezer Gomes Fernandes; Frederico Jannuzzi; Flavya Almeida; Flávio José dos Santos; Helena de Freitas Rocha e Silva; Hélio de Oliveira Santos Junior; Jarvis Clay; João Victor Dias; João Vitor Milward; Jorge Caldas Filho; Juliana Duarte; Juliana Fick; Katiane Bispo; Marcelo Raulino Silva; Marco Paulo Figueiredo Tamm; Maria Mylena Farias Martins; Márcia Aparecida Masciano Matos; Núbia Cristina dos Santos; Poliana Siqueira; Raquel; Ronie Von Barros Da Cunha Junior; Sae Dutra.
In this episode of the Self-Publishing News Podcast, literary groups celebrate a pivotal moment for the publishing industry: the near passage of Europe's Artificial Intelligence Act, landmark legislation requiring transparency in the training of generative AI platforms, including the disclosure of works used in their development. This act, widely welcomed by publishers, marks a significant step in regulating AI's role in the creative industries. Dan also revisits the London Book Fair, highlighting the Selfies Awards and the achievements of ALLi authors. He explores various AI-related topics, including the implications of AI in legal practices and literary translation, and addresses the underrepresentation of indie authors in a report on the UK publishing industry's economic impact. Find more author advice, tips, and tools at our Self-publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally. About the Host: Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines and competed in the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
Ridwaan Boda, a Privacy Tech Expert at ENS Law Firm, joins John to explore whether South Africa should follow the European Parliament's lead in passing the ground-breaking Artificial Intelligence Act.
After last week's predictions about the future of AI, we are starting to see glimpses of these innovations in AI agents, robotics, and more. In Episode 88 of the Artificial Intelligence Show, hosts Paul Roetzer and Mike Kaput highlight Cognition's AI software engineer 'Devin', the significance behind Figure's humanoid robots, and OpenAI CTO Mira Murati's questionable responses to questions about Sora's training data in a WSJ interview. 00:03:27 — Cognition releases Devin, the first "AI software engineer" 00:14:52 — The Significance Behind Figure's Humanoid Robots 00:22:08 — OpenAI CTO Questioned on Sora AI Model's Data Sources in WSJ Interview 00:29:13 — European Union's Artificial Intelligence Act approved by the European Parliament 00:33:46 — Suno AI, music generation based on text prompts 00:39:04 — Grok is now open source 00:43:26 — The Top 100 Consumer GenAI Apps from venture capital firm Andreessen Horowitz 00:46:58 — Apple Inc. in talks to build Google's Gemini AI into the iPhone 00:50:06 — Midjourney introduces feature to maintain consistency in image creation This week's episode is brought to you by our Marketing AI Conference (MAICON). From September 10-12 this year, we're excited to host our 5th annual MAICON at this pivotal point for our industry. MAICON was created for marketing leaders and practitioners seeking to drive the next frontier of digital marketing transformation within their organizations. At MAICON, you'll learn from top AI and marketing experts, while connecting with a passionate, motivated community of forward-thinking professionals. Now is the best time to get your MAICON ticket. Ticket prices go up after Friday, March 22. Visit www.maicon.ai to learn more. Listen to the full episode of the podcast: https://www.marketingaiinstitute.com/podcast-showcase Want to receive our videos faster? SUBSCRIBE to our channel! Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources? Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute
①The European Parliament has approved the Artificial Intelligence Act with an overwhelming majority in Strasbourg, France. What are the details? (00:47) ②The U.S. government has announced it will send 300 million U.S. dollars of military aid to Ukraine. (13:15) ③Five American labor unions have filed a petition with the Office of the U.S. Trade Representative requesting a probe into China's shipbuilding industry. Are their accusations fair? (25:12) ④The Chinese-made C919 aircraft and ARJ21 jetliner were on show in Malaysia earlier this week. (34:58) ⑤South Korean medical students have asked their universities to justify their absences caused by joining protests against the government's plan to dramatically raise the enrollment quota of medical schools. (45:03)
In today's episode of 'AI Lawyer Talking Tech,' we delve into the recent approval of the Artificial Intelligence Act by the European Parliament. This landmark legislation aims to regulate the use and development of AI technologies, marking a significant step in global AI governance practices. The act has prompted diverse reactions from various stakeholders, with industry players focusing on compliance and regulatory options, while members of civil society have expressed concerns about the protection of human rights and intellectual property. Join us as we explore the implications of this groundbreaking legal framework and its potential impact on the future of AI regulation worldwide. Stories covered in this episode:
How stakeholders are welcoming EU AI Act (IAPP.org, 14 Mar 2024)
Two Ways You Can Contribute Product Reviews and Ratings to the LawNext Legal Tech Directory (LawSites, 14 Mar 2024)
How AI Is The Catalyst For Reshaping Every Aspect Of Legal Work [Sponsored] (Above The Law, 14 Mar 2024)
Innovative Marketing Solutions for Small Law Firms with Limited Resources (Legal Reader, 14 Mar 2024)
Apple Vision Pro Trademark: Pending Legal Battles in China? (Gizchina.com, 14 Mar 2024)
Revised deadlines set in the Ripple-SEC legal battle (Cryptopolitan, 14 Mar 2024)
How Generative AI Will Change The Jobs Of Lawyers (Forbes.com, 14 Mar 2024)
AI Governance in China: Strategies, Initiatives, and Key Considerations (Bird & Bird, 14 Mar 2024)
Longtime professor had profound impact on McGeorge School of Law community (University of the Pacific, 13 Mar 2024)
New Chicago Bar Foundation, IAALS Network Aims to Help Middle-Class Find Affordable Legal Help (2Civility, 13 Mar 2024)
European Union Enacts Major AI Regulation Law (Redmond Magazine, 13 Mar 2024)
The EU AI Act is Almost Here! (Stephen's Lighthouse, 14 Mar 2024)
Clash of the AI Titans: Elon Musk Files Lawsuit Against OpenAI and its Founders (Goodmans Technology Blog, 13 Mar 2024)
DOJ Deputy Chief Announces “Stiffer Sentences” for AI-Related White-Collar Crime — AI: The Washington Report (Mintz Levin, 14 Mar 2024)
Key Considerations Regarding the Recently Passed EU Artificial Intelligence Act (Kramer Levin, 13 Mar 2024)
Q&A: New Hampshire Becomes the Fourteenth State to Pass a Data Privacy Law—With More States Waiting in the Wings (Hinshaw & Culbertson LLP, 13 Mar 2024)
The Next Wave of Privacy Litigation: The Illinois Genetic Information Privacy Act (Perkins Coie, 13 Mar 2024)
European Union Artificial Intelligence Act: An Overview (Benesch, 13 Mar 2024)
New Hampshire and New Jersey Pass Comprehensive Consumer Privacy Laws (Cooley, 13 Mar 2024)
Data Centre Finance and Investment Insights (Norton Rose Fulbright, 13 Mar 2024)
Cozen Cities – March 13, 2024 (Cozen O'Connor, 13 Mar 2024)
The European Parliament is set to pass its landmark Artificial Intelligence Act today. It will be the most comprehensive legal framework on AI worldwide. Boston Consulting Group's Kirsten Rulf discusses the significance of this regulation with Bloomberg's Stephen Carroll and Lizzy Burden on Bloomberg Daybreak: Europe. See omnystudio.com/listener for privacy information.
This morning lawmakers in Strasbourg overwhelmingly approved the Artificial Intelligence Act, a world first, aimed at regulating AI according to a risk-based approach. But what does this act actually set out, and is it really possible to put manners on the ever-changing world of AI? Sean was joined by Fine Gael MEP, Deirdre Clune to discuss...
On the latest episode of Euractiv's Today in the EU, we're looking into the AI Act vote in Strasbourg, and why the lack of reference to how AI will be used in the defence sector is causing some concern. It's plenary week in Strasbourg and MEPs are gathering to vote on the last files left for this mandate before EU citizens head to the polls in June. Still to be finalised is the Artificial Intelligence Act. Although the EU's legislative text was adopted in mid-February, it still needs to be ratified by the Parliament. But is this truly the endgame, and what are the concerns expressed regarding the use of AI in military defence? To dive into this topic, I'm joined by Elisa Gkritsi, Euractiv's technology editor, and Aurélie Pugnet, Euractiv's Global Europe and Defence reporter.
The current episode of the Wolf Theiss Soundshot podcast introduces a new series on digital law. Throughout this series, our tech and data law experts will provide practical insights for navigating the complex landscape of the EU's legislative initiatives in the field of digital law. In this first episode, Roland Marko and Phillip Wrabetz give an overview of what to expect from our new series and discuss key elements of the EU's Digital Strategy, including the Digital Services Act, the Artificial Intelligence Act, the Data Act and the EU's new cybersecurity framework. Subsequent episodes of the series will focus on individual aspects of the Digital Strategy, featuring insights from our colleagues across Wolf Theiss offices in the CEE/SEE region. Find out more in our new Soundshot episode, available in English. Should you have any questions, please do not hesitate to contact us at soundshot@wolftheiss.com, roland.marko@wolftheiss.com or phillip.wrabetz@wolftheiss.com.
With the Artificial Intelligence Act, a Europe-wide legal framework for AI applications is being created for the first time. The European Union wants to ensure that artificial intelligence is used responsibly and safely in the EU, in order to minimise potential harm. --- Send in a voice message: https://podcasters.spotify.com/pod/show/lindeverlag/message
Gabriele Mazzini, Team Leader - AI Act at the European Commission, discusses the risk-based approach they took when crafting specific rules for the Artificial Intelligence Act (versus simply opposing the technology as a whole). He also discusses the complexities involved in regulating emerging technologies that evolve at a much faster pace than the legislation itself. Key Takeaways: Recommendations put forward for regulating emerging technologies within the AI Act What the process has been like for the development of the AI Act, including the key players Where regulation in this space can be most helpful despite the complexities involved Guest Bio: Gabriele Mazzini is the architect and lead author of the proposal on the Artificial Intelligence Act (AI Act) by the European Commission, where he has focused on the legal and policy questions raised by new technologies since August 2017. Before joining the European Commission, Gabriele held several positions in the private sector in New York and served in the European Parliament and the EU Court of Justice. He holds an LL.M. from Harvard Law School, a PhD in Italian and Comparative Criminal Law from the University of Pavia, and a Law Degree from the Catholic University in Milan. He is qualified to practice law in Italy and New York. --------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
By Adam Turteltaub When it comes to AI, there is little agreement. Some see great potential, while others see great nightmares. Some see opportunities, and many see nothing but risks. In the EU, though, there is agreement on one thing: a new EU AI law. In December 2023, the EU Parliament and Council agreed to a bill “…to ensure AI in Europe is safe, respects fundamental rights and democracy, while businesses can thrive and expand.” Longtime compliance professional Letitia Adu-Ampoma (LinkedIn) explains that while the law won't fully come into force for two years or more, it's time for compliance teams to start paying attention and preparing. The act is a part of the EU digital strategy, which is very focused on human-centric legislation. Its goal is to keep the impact of AI on people and society positive. The approach it takes is risk-based, categorizing AI systems based on the level of risk: unacceptable (and prohibited), high risk, minimal risk and no risk. The act is very specific in how it defines which AI systems fall into each category. The unacceptable risk category, for example, includes social credit scoring, emotional recognition and behavioral manipulation. Creators and users of high-risk AI will be required to register the system in a public record. They will also need to conduct an impact assessment and be transparent. Transparency will also be critical for generative AI. Providers will need to disclose that content is AI-generated and ensure that the models are not designed to create illegal content. There will also need to be governance in place to protect against copyright violations. So what should compliance teams do now? Letitia recommends reading the guidance and starting to prepare the business for what is to come. Listening to the podcast would be good, too. NOTE: This podcast was recorded in January 2024. The final version of the EU AI Act is yet to be released - a final EU parliament debate on the text will take place before its release. In the meantime, some 'unofficial' pre-final versions of the text have been leaked online in advance of this debate. The final EU definition of AI and key timescales for enforcement mentioned in the podcast are based on proposals made public. Listeners should look out for the final position which will be detailed in the EU AI Act when it is officially published in the next few weeks.
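To make the tiered logic described above concrete, here is a minimal Python sketch of the risk categories and the obligations attached to them. The tier names, the obligation lists, and the helper function are illustrative simplifications drawn from the episode summary, not the Act's legal text.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social credit scoring)
    HIGH = "high"
    MINIMAL = "minimal"
    NONE = "none"

# Illustrative mapping from tier to the compliance duties mentioned in the episode.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
    RiskTier.HIGH: [
        "register the system in a public record",
        "conduct an impact assessment",
        "meet transparency requirements",
    ],
    RiskTier.MINIMAL: ["light-touch transparency"],
    RiskTier.NONE: [],
}

def compliance_steps(tier):
    """Return the illustrative duties for a given risk tier."""
    return OBLIGATIONS[tier]

for step in compliance_steps(RiskTier.HIGH):
    print(step)

The point of the sketch is simply that the Act attaches duties to categories rather than to technologies: classify the system first, then read off the obligations.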
Episode 155 considers three important developments as 2024 opens:
- How the European Union's pending AI Act blazes a new trail
- How umbrella insurance may or may not apply to claims involving biometrics
- How Quebec's 2023 data privacy act will reshape privacy notices throughout North America.
Yugo Nagashima and Brion St. Amour, attorneys with the coast-to-coast U.S. law firm Frost Brown Todd LLP, team with the Data Privacy Detective to cover these three essential matters. On December 9, the European Union published a preliminary agreement on the Artificial Intelligence Act, a pioneering law that provides a framework for the sale and use of AI in the EU. We consider what the AI Act covers and the four-levels-of-risk approach the EU will take to regulating AI. We then jump into discussion of a class action lawsuit against Krispy Kreme Doughnut Corp. The suit claims a violation of the Illinois Biometric Information Privacy Act (BIPA). Does Krispy Kreme's insurance coverage apply? We consider the distinction between the lawsuit's claims and the company's umbrella policy. The insurer declared that Krispy Kreme is not entitled to an insurance-paid defense, based on a policy exclusion. The Quebec Act for protection of personal information in the private sector became law in September 2023. December 18, 2023 Guidance from Quebec's Commission covers what must be in privacy notices, including that they be in clear, simple language (in French and English). https://www.cai.gouv.qc.ca/politiques-de-confidentialite/ What is “clear and simple”? The Guidance offers a checklist of what organizations should say in their website privacy postings, and is certain to force changes in the websites of digital businesses that cover U.S. and Canadian markets. Time stamps:
01:16 — EU's pending AI Act
10:11 — Umbrella insurance and biometrics
17:08 — Quebec's 2023 data privacy act
The 28 Minutes programme of 02/01/2024. On the agenda of this special edition: this Tuesday, 2 January, Renaud Dély welcomes Aurélie Jean, doctor in computational sciences and entrepreneur. 2023 was a pivotal year for artificial intelligence, the digital revolution that is upending our knowledge and our certainties. On 9 December, after three days of negotiations, the member states of the European Union managed to reach an agreement on the rules governing AI systems. With this "Artificial Intelligence Act", Europe becomes a pioneer in a world at the dawn of this new technology. So, should we be afraid of the future? Founder of the company In Silico Veritas (specialised, notably, in algorithmics), author of the book "De l'autre côté de la machine" and of the essay "Data et sport, la révolution", Aurélie Jean will shed light on this question for us. Should we really fear that artificial intelligence will replace us, for example at work? How, tomorrow, will we distinguish true from false? We continue these discussions with the essayist and magistrate Raphaël Doan. In his historical essays (the most recent, "Si Rome n'avait pas chuté", published by Passés composés), he uses artificial intelligence to alter the events of history. Finally, don't miss the columns of Alix Van Pée and Paola Puerari, as well as our musical interlude, "À la loop" by Matthieu Conquet. 28 Minutes is ARTE's news magazine, presented by Elisabeth Quin from Monday to Thursday at 20:05. Renaud Dély hosts the programme on Fridays and Saturdays. This podcast is co-produced by KM and ARTE Radio. Recorded: 2 January 2024 - Presenter: Renaud Dély - Production: KM, ARTE Radio
The December 2023 edition of "5 Great Reads on Cyber, Data, and Legal Discovery" covers the transformative world of AI in legal contexts. Leading with a concise analysis of the EU's new Artificial Intelligence Act, this edition provides a comprehensive overview of AI's evolving role in law and ethics. Highlights include insights into new California State Bar AI guidelines, the UK judiciary's approach to AI tools, and a thorough examination of the Winter 2024 eDiscovery Pricing Report. This issue offers a crucial blend of industry research, updates, and expert commentary, illuminating the challenges and opportunities at the intersection of cybersecurity, data governance, and legal discovery.
Pushpendra Mehta meets with Ben Poole, Writer at CTMfile, to review the latest treasury news and developments. Topics of discussion include the following: 1:34 Big Five economies back global nuclear supply chain 5:06 Europe reaches deal on Artificial Intelligence Act 8:16 BoE and ECB keep interest rates unchanged 11:13 2024 outlook is neutral for global money market funds
If you ask what the most important event in the global tech world was in 2023, the AI craze rapidly sparked by ChatGPT certainly left a deep impression. But the fast development of generative AI has also alerted everyone that the day artificial intelligence turns against humanity seems to be drawing near. So what should we do? In response, the EU recently formally passed the world's first comprehensive law regulating AI, the Artificial Intelligence Act (AIA), whose regulatory aim is precisely to ensure that artificial intelligence is safe, transparent, free of discrimination and environmentally friendly. Why is regulating AI so important? What impact will AI regulation have on the tech industry and on related industries along the AI value chain? Also, why was it the EU that took the lead in proposing this law? How do views differ across Europe and the United States? And why did the trilogue between the Council of the EU, the Commission and the Parliament earlier come close to collapse? Host: 陳良榕, chief editorial writer at CommonWealth Magazine. Guest: 萬幼筠, General Manager of EY Management Consulting. Production team: 李洛梅, 劉駿逸. *Tech newsletter: https://bit.ly/42A6BWj *Exclusive book offer for listeners: https://bit.ly/3ZUW72e *Subscribe to CommonWealth all-access: https://bit.ly/3STpEpV *Feedback: bill@cw.com.tw
Also in this episode: the much-discussed ruling of the Court of Florence on suspension of pay for workers not vaccinated against Covid, and the Constitutional Court's decision striking down the ban on giving prevalence to partial mental incapacity for the crime of robbery in a dwelling. >> Read the article: http://tinyurl.com/s4bcu54e >> Discover all Altalex podcasts: https://bit.ly/2NpEc3w
A First Look At The Artificial Intelligence Act by Nick Espinosa, Chief Security Fanatic
Where do negotiations stand on the approval of the European Artificial Intelligence Act? We take stock. France has banned its ministers from using messaging services such as WhatsApp, Telegram, Signal and the like. We talk about digital sovereignty and app security with Andrea Zapparoli Manzoni, cyber security expert and member of the Clusit scientific committee. We return to digitalisation in the agri-food supply chain with Mauro Germani, CEO and co-founder of Soplaya, the startup connecting chefs and restaurateurs directly to producers, which has announced the closing of a new investment round worth 12.5 million euros. We discuss digital technologies for checking boats into ports with Mattia Tartaglia, CEO of the startup UlissesAPP, a real-time mapping and registration platform developed to simplify boaters' administrative procedures. And, as always on Digital News, the week's most important innovation and technology stories.
The European Union's Artificial Intelligence Act, the world's first comprehensive AI regulation, is facing pushback on the final details of the governance systems that support AI services. Authorities are navigating between big tech companies advocating against stifling innovation and the necessity of safeguards for advanced AI systems. EU lawmakers' effort to extend the regulation to foundation models has sparked resistance from the EU's three biggest economies, which advocate for self-regulation. Amidst this, global powers such as the U.S., the U.K. and China are rushing to set regulatory boundaries as the technology advances rapidly. --- Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message Support this podcast: https://podcasters.spotify.com/pod/show/tonyphoang/support
On today's podcast we interview a leading European Union lawmaker about the EU's proposed Artificial Intelligence Act. Eva Maydell, a member of the European Parliament involved in the final talks on the AI Act, discusses how the EU's pioneering bill shouldn't be made “so burdensome or so uninviting” that AI investors avoid or leave Europe. She describes how the bill could become a "global standard" and how the parliament's approach balances centralized enforcement with innovation. She also hints at the prospects of finalizing negotiations on the law this year.
Turtlezone Tiny Talks - 20 minutes of zeitgeist debate with Gebert and Schwartz
"Remote biometric identification", meaning certain AI-based forms of facial and identity recognition, its use for social scoring, or the use of artificial intelligence and biometric data for profiling by interested parties, are focal points in the votes on the European Artificial Intelligence Act and, under the current plan, will be impermissible as part of the "unacceptable risk" class, and will thus be subject to a ban within the European Union. With its AI Act, the EU is taking a pioneering role worldwide, and the risk-based approach with its several risk classes allows finely graduated regulation. The challenge, however, lies in the details and in drawing the boundaries. The EU AI Act will apply to the European market and to the use of AI applications and AI-generated content within the EU. The law therefore affects companies worldwide that operate in Europe; neither the location of the server nor the company's registered office matters. Nevertheless, European citizens can of course use AI services over the borderless internet from foreign providers that do not themselves address the European market. And it will be possible for European developers to offer AI systems that are not permitted or are banned in Europe for applications in third markets. The aim of the legislation is rather to regulate the deployment of risky AI solutions. The rules therefore apply both to providers and to users, including private individuals located in the European Union. And "providers" in the legal sense are not only software or IT companies, but above all also operators, up to and including public authorities.
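As a rough illustration of the territorial-scope rules just described, here is a minimal Python sketch. The predicate and its inputs are simplifications made up for illustration; they are not criteria quoted from the Act's text.

from dataclasses import dataclass

@dataclass
class AIService:
    targets_eu_market: bool  # provider offers the system on the EU market
    users_in_eu: bool        # users/deployers are located in the EU
    server_in_eu: bool       # kept only to show that it does not matter

def in_scope(service: AIService) -> bool:
    # Market targeting or EU-located users trigger the rules;
    # server location and company seat are irrelevant.
    return service.targets_eu_market or service.users_in_eu

# A non-EU provider with EU users is in scope even with servers abroad.
print(in_scope(AIService(targets_eu_market=False, users_in_eu=True, server_in_eu=False)))  # True

The design point the episode makes is exactly what the predicate encodes: scope turns on market and users, not on infrastructure.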
What are the 17 United Nations Sustainable Development Goals? What are the biggest challenges in pursuing and achieving those goals? How does technology play a role? And what's the best way for government, academia, and industry to cooperate and collaborate in support of fundamental research? We will learn those answers and more in this episode with Declan Kirrane, the Chairman of the Science Summit at the United Nations General Assembly, and founder and managing director of ISC Intelligence in Science. Declan has more than 25 years of experience as a global senior advisor to governments and industry on science research, science policy and related regulation. He has been actively promoting a more significant role for science within the context of the United Nations General Assembly since 2010. This has culminated in the annual Science Summit within the context of the UN's General Assembly. The focus of the Summit is on the role and contribution of science to attain the United Nations Sustainable Development Goals – or SDGs. The current edition – UNGA78 – takes place from September 12-29, and will bring together thought leaders, scientists, technologists, policymakers, philanthropists, journalists, and community leaders to increase health science and citizen collaborations to promote the importance of supporting science. And we are thrilled that Oracle will be part of the Science Summit with a few of our executives speaking and attending, including Alison Derbenwick Miller, global head and VP of Oracle for Research. -------------------------------------------------------- Episode Transcript: http://traffic.libsyn.com/researchinaction/Research_in_Action_S01_E19.mp3 00;00;00;00 - 00;00;22;29 What are the United Nations Sustainable Development Goals? What are the biggest challenges in pursuing and achieving those goals? And what's the best way for government, academia and industry to cooperate and collaborate in support of basic research? We'll get the answers to all this and more on Research in Action. 00;00;23;02 - 00;00;49;08 Hi, and welcome back to Research in Action, brought to you by Oracle for Research. I'm Mike Stiles and today's distinguished guest is Declan Kirrane, who is the chairman of the Science Summit at the United Nations General Assembly and the founder and managing director of ISC Intelligence in Science. And we're talking to a guy with more than 25 years of experience as a global senior advisor to governments and industry on science research, science policy and regulation around science. 00;00;49;10 - 00;01;17;07 Declan has been promoting a bigger role for science in the context of the U.N. General Assembly since 2010, and that's led to an annual science summit that focuses on the role and contribution of science to reach the United Nations Sustainable Development Goals or SDGs. The current edition UNGA 78 is happening September 12th through 29th and will bring together thought leaders, scientists, technologists, policymakers, philanthropists, journalists and community leaders. 00;01;17;09 - 00;01;37;02 We'll talk about increasing health science and citizen collaborations and why it's important to support science overall. Now, Oracle's actually going to be part of that science summit; a few of its executives will be there speaking, including Alison Derbenwick Miller, who's global head and VP of Oracle for Research. Declan, thank you so much for being with us today. 00;01;37;08 - 00;01;58;13 Thanks, Michael. Great to be here. Thank you for the opportunity.
Delighted to be here. Well, we want to hear all about the science summit at the U.N. General Assembly. But before we go there, tell me what got you not just into science, but science policy, and your role in creating this summit? Well, first, I suppose the simple answer to that is happenstance. 00;01;58;13 - 00;02;21;10 I have to tell you, it was not planned. My primary degree is the history of art. And then I did law, and probably needed a job after all of that. And then, as a lot of people did in the late eighties, emigrated to the U.S. of A, on the basis that there was nothing going on in Ireland. 00;02;21;10 - 00;02;51;23 So opportunity beckoned, and from that I worked on Wall Street at a boutique mutual fund company. And then, between one thing and another, I ended up in a similar boutique company in Paris, and from that to Greece, and from that I got into the more consulting side of things, and from that started working for global multilateral bodies such as the World Bank and the IMF on a contract basis. 00;02;51;23 - 00;03;23;25 And then from that got more into telecoms, and from that into science, coming at it, I suppose, from the area of telecoms, infrastructure and data rather than, if you like, as a bench scientist. And I suppose my history of art background gave me a wonderful perspective on policy, at least that's what I argue. And from that I got very interested, partly because the European Commission invited me and a couple of others to set up a dissemination service. It's called Cordis. 00;03;23;25 - 00;03;57;19 And the Cordis information service was designed by the European Commission to provide information on ongoing collaborative research and on publicly funded research opportunities. And of course, the reason the European Union did that was to ensure that the information resulting from the funding they provide reached a very, very wide audience. So my job was to do that, and we built that out, and that brought me into the area of science policy. 00;03;57;22 - 00;04;27;19 And I gradually began to understand the huge importance of science policy. And of course, 20 years ago science policy was not a thing; it didn't really exist in terms of policymaking headlines, but it gradually came to be, and as you know, it's part of the lexicon now. A lot of governments around the world have science policy priorities, and it's recognized as a driver for economic development and global competitiveness and for driving solutions to global challenges. 00;04;27;19 - 00;04;51;05 So science is a thing, but 20 years ago it wasn't. So it's relatively recent, and I began quickly to appreciate the policy dimension of that, and that led me to work on policy, and that led me to understand policy mechanisms. And, you know, from my standpoint, there's no point in looking at global challenges, or many global challenges, from a national perspective. 00;04;51;12 - 00;05;21;24 Really, it has to be global, it has to be international. That led me to engage with the United Nations. And from that, we just started to build, as you say, from 2010, to start to engage with nations. And I really want to stress these were designed to be very, very simple: to present not to a scientific forum, but to the U.N., to the mothership, to the General Assembly, to diplomats, to policy and political leaders, and show them what science is.
00;05;21;24 - 00;05;43;04 And to give you a practical example, our first meeting was on biobanking. And you know, the main reaction was: what's biobanking? You see, that's exactly the question we wanted them to ask. And at that first meeting, I think there were about 18 people in the room, and we had about four or five diplomats. 00;05;43;06 - 00;06;07;02 Last year at the science summit we had approximately 60,000 participants. We had just under 400 sessions and we had 1,600 speakers. So we've come a long way, and that really now is established. But we want to keep promoting. We want to keep science in the eye of the U.N. and we want to ensure that the future recognizes the contribution of science. 00;06;07;05 - 00;06;27;29 That's quite a journey. I think you did just about everything except science. Are you sure you weren't in the circus as well? Yeah, well, it's, you know, it's all true. So, yeah, a lot of it, the last 20 years, has been primarily on science. Well, in the intro I mentioned the United Nations Sustainable Development Goals or SDGs. 00;06;27;29 - 00;06;54;00 And our listeners are pretty savvy. They probably know about those, but I'm not savvy. So what are SDGs and how do they speak to global health and humanity? In the mid-nineties, the United Nations, and when I say the United Nations, I mean many of the United Nations' constituent entities and agencies, obviously were very concerned about what we generally call global challenges 00;06;54;00 - 00;07;18;29 in the areas of health and other forms of well-being, the environment, climate, food security and safety, and so on and so forth. And that led to a consensus that there needed to be, quote unquote, and how's this for a cliche, 'we have to do something'. That 'we have to do something' resulted in the Millennium Development Goals, which were, as you can imagine, launched in the year 2000. 00;07;19;02 - 00;07;44;01 And they set forward these goals to address challenges. And those 15 years went by pretty quickly. And that then led on to a similar mechanism where you identify a challenge, you define a response to it, and then you allocate specific targets within that and get everyone to sign up, and off you go. 00;07;44;03 - 00;08;12;18 So then that broad approach was repeated for the United Nations SDGs, the Sustainable Development Goals, of which there are 17. And they cover the headlines you'd imagine: poverty reduction, hunger reduction, improved health, life below water, life on land, addressing biodiversity, climate and many other areas. And we're in the middle of these now. 00;08;12;21 - 00;08;45;10 But already the world is turning its attention to the post-SDG agenda. And this is probably where we are now. The United Nations is organizing the Summit of the Future in September 2024, and I suppose you could characterize that meeting, or rather I do, as a banging of heads together, because there is a sense of crisis, a sense that the SDGs are not being achieved, that progress towards the attainment of the SDGs is insufficient. 00;08;45;12 - 00;09;07;19 It is exclusive. It excludes many constituencies, many countries, and again, I won't enumerate them here, but I just present that as the scenario. So there's now a lot of momentum behind the question: what do we do next? In my own humble view,
I don't think it's going to be, if you like, a goals-oriented process. I think that's too simplistic. 00;09;07;19 - 00;09;41;01 The world, I think, as we found out, is much, much more complex. And I think inclusion and equity are issues that are present in a way that they were not when the Millennium Development Goals and the Sustainable Development Goals were designed. And I think this equity dimension is going to give a far stronger voice to less developed nations. 00;09;41;01 - 00;10;07;05 And just on a back-of-the-envelope calculation, I think if you take the OECD countries and change, you've probably got 30 nations that we could call developed. And then I suppose the big question is: what about everybody else? And that is becoming a very stark consideration, which was not there before. And this needs to be addressed in terms of inclusion and equity to a much, much greater extent than is currently the case. 00;10;07;05 - 00;10;37;01 And arguably that will then lead to a more successful approach to whatever succeeds the SDGs. I'm interested in the mechanics behind that, because I'm just kind of reading between the lines of what you're saying, and it's like, for this thing to have true accountability and for these goals to have any teeth at all, there does need to be someone accountable, a very good grasp of who the participants are going to be, and some form of deadline. 00;10;37;04 - 00;11;01;19 Absolutely correct, Mike. And that was the plan. The problem with that, in a word, is it doesn't really work. You've so many moving parts, you've so many constituencies, that having this set table of goals and table of targets and allocating milestones simply doesn't work. Now, why doesn't it work? 00;11;01;21 - 00;11;29;07 My view is that many less developed nations don't have the wherewithal to achieve these SDGs. One needs investment, one needs skills, one needs training, one needs cooperation, one needs finance. I mean, these are all requirements to make change, particularly in every area. But if you look at health, if you look at energy transformation, if you look at digital transformation, they don't happen without moolah, without money. 00;11;29;14 - 00;11;48;22 So the question is, well, where's that coming from? The answer, I'm afraid, is it's not. And that leaves a lot of them out. Again, when I say lesser developed nations, I mean that is the majority; that's 150 nations on a rough calculation. And they don't feel involved. They don't feel they're taken seriously in terms of support for the investment. 00;11;48;24 - 00;12;13;12 And I think they're looking at the developed world and they're saying, well, okay, you benefited from carbonized development, and now we're supposed to do decarbonized development, and how is that going to work for us? And there's no answer to that. So I think it's extremely complex. And as you say, trying to build consensus around this is extremely difficult, because any move forward does require political consensus, and that's very, very hard to get these days. 00;12;13;12 - 00;12;30;16 I mean, you can look at Ukraine, you can look at the Sahel, you can look at many parts of the world where consensus at a political level is very difficult, if not impossible. And then you factor into that, well, how do you then adopt action plans? How do you adopt roadmaps?
Again, extremely difficult. 00;12;30;16 - 00;12;54;14 So in my view, the SDGs have come a bit unstuck because of the inability of developed nations to provide the necessary wherewithal, including funding. And the other side of that coin, of course, is the inability of many, many nations to advance those objectives, to achieve the goals that have been set out, to reach those targets. 00;12;54;14 - 00;13;32;09 And that simply is not happening. At the High-Level Political Forum in July of this year, the process of reporting on SDG 8 was abandoned, for reasons which I think are quite obvious: no one had anything to report. So I point to that specifically. And also, I was at dinner with a number of African nation ambassadors in Brussels two weeks ago, and they pointed out that they've stopped wearing their SDG lapel pins, you see. 00;13;32;11 - 00;13;56;13 And there's two reasons for that. One is in protest at the slow progress towards the SDGs, and secondly, because of, as they see it, their exclusion from the decision-making process associated with the SDGs, which, as you can imagine, has an annual review mechanism and all that sort of stuff. They feel excluded from that. 00;13;56;13 - 00;14;27;04 And my own view is they are, for the reasons I think I've mentioned or alluded to, and this promotes exclusion and inequity. And again, to repeat, this wasn't in fashion 50 years ago to the extent that it is today. Now, it is a very, very strong policy and political force. And the multilateral institutions that take leadership on these issues now have to find ways to address that and to build inclusion in a very, very significant and meaningful way. 00;14;27;04 - 00;14;50;08 It's not just the family photo opportunities. It's making sure that these communities, that the stakeholders, feel they're involved and are involved. They're seeing the benefits. And I suppose to that extent, it's, you know, politics as usual. Boy, those challenges are just huge. It's quite an undertaking to pursue those. But I guess that's what also makes it exciting as well. 00;14;50;10 - 00;15;11;10 Since this show is called Research in Action, we do talk a lot about the need to knock down barriers and support research, but research has several stages, from basic all the way through clinical. What is especially important about supporting basic research and getting that right? What are the benefits? Simply put, you know, that's where it all starts. 00;15;11;10 - 00;15;45;05 And when we talk about basic research, I would also call it pre-competitive research. So that's a stage where, you know, everybody's friends and everybody is collaborating, before they apply for a patent or before they discover something they can monetize or exploit or turn into innovation in whichever way. And I think a very important aspect of this is the fact that it's by and large government funded, and this gives it a very important dimension, not to mention seeding the potential for innovation. 00;15;45;07 - 00;16;08;28 And I often reflect that the government plays a huge role in science and technology. Now, I don't have the details in front of me, but, you know, as far as I understand it, the Tesla enterprise wouldn't be where it is today without a small business loan from the US government. And of course, Mr.
Gates was a beneficiary of government contracts at a very early stage in the development of Microsoft. 00;16;08;28 - 00;16;30;01 So that just points to the importance of government funding across the board. With respect to government investment in science and technology in the pre-competitive space, there's a clear recognition that without the government investing in synchrotrons or large-scale science facilities, we're not going to have stakeholders who can build those. 00;16;30;03 - 00;16;52;12 So it simply won't happen. Many, many outcomes, I think, are evident from investment in science and technology. You know, basically we have an advance in knowledge. Basic research seeks to understand the fundamental principles underlying various phenomena. And I think the curiosity-driven research around this then leads to much innovation. But of course you don't know that at the beginning. 00;16;52;12 - 00;17;10;28 So I think there has to be a very strong political commitment to blue-skies research. And again, I stress the word political commitment, because it is a policy decision for a government, any government, to invest in pre-competitive research, in science capacity building, which is predominantly pre-competitive, and in basic science. So I think that's hugely important. 00;17;10;28 - 00;17;34;11 Just to point to the policy dimension: basic research then leads to various innovations, and that in turn gets applied. So you see a very clear thread running from basic research through innovation to applied research. Many groundbreaking innovations and technological advancements have emerged from discoveries made in basic research. And I think this needs to be spelt out very often; when a policymaker gets up in the morning, 00;17;34;18 - 00;17;56;18 that can be a complicated narrative. You know, what am I going to be getting from this? Why spend vast sums of money on basic research, blah, blah, blah? But when you look at the evidence, I think the case is compelling. But of course, that needs to be understood continuously, primarily by policymakers. And it does bring long-term benefits. The outcomes of basic research might not lead to immediate benefits or applications. 00;17;56;18 - 00;18;25;27 However, these insights often lay the groundwork for future breakthroughs, which could, and very often do, have significant societal, economic or technological impacts over time. Problem solving is another reason to fund and do basic research. Then there's educational value: basic research plays a critical role in educating the next generation, or generations, indeed, of scientists, researchers and thinkers. It provides a training ground for students to learn research methodologies, critical thinking and analytical skills. 00;18;26;00 - 00;18;52;06 And these have multiple applications. Then we have cross-disciplinary insights; I think this is self-evident. Basic research often leads to unexpected connections between different fields of study. These interdisciplinary insights can spark collaborations and innovations that otherwise wouldn't come to the fore. Intellectual curiosity, I think, also needs to be highlighted. Then we have the benefits coming from scientific advancement. 00;18;52;10 - 00;19;26;18 So I think, Mike, there are many, many benefits in that. And I'd just like to point to really one example of basic research.
You may or may not be a follower of radio astronomy, but South Africa won a global competition to build the Square Kilometre Array telescope, the SKA. That was a global competition in 2011, against the UK, against Chile, China, Brazil and Canada; 00;19;26;18 - 00;19;50;25 I believe there may be one or two other countries there as well. South Africa won the right to host and to build the SKA, and it is now doing that. It's probably a 30-year project. But here you have an example of an African nation competing to build a hugely complex scientific instrument in the middle of the Karoo desert. 00;19;50;25 - 00;20;30;21 Now, why do that? Many reasons to do it. But one of the compelling reasons that I learned from exposure to the project is the enormous commitment that the South African government, and now, of course, its partner countries, including Australia, have made to education and training the next generation through the SKA. And you will see that many US multinationals, the Dell Corporation, IBM, Microsoft, have very strong project association and collaboration with the SKA in South Africa. 00;20;30;24 - 00;21;00;04 When the Economist wrote about the SKA, in 2016 I believe it was, they said this is the world's largest science project. And I think, you know, it's worth reflecting on that. And this has enormous, enormous future potential. It has existing benefits to the scientific community, and of course it is a huge flagship idea that provides a lightning rod for scientific collaboration across Africa and across the world. 00;21;00;11 - 00;21;26;13 At a very practical level, it brings many scientists to visit the facility to work with African and South African collaborators. So this is an ongoing benefit. I think it's a wonderful example of what a research infrastructure is, what basic science is, and why it should be funded. Yeah, what you just described is an enormous success story. But, you know, candidly, my optimism is challenged because so much of this does rely on government participation. 00;21;26;19 - 00;21;54;08 Yet it feels like as long as money and politics are in the picture, those are the anchors that can weigh things down. And against that backdrop is the science summit. So how did the science summit become a reality, and was there any resistance to it? Did anybody think this wasn't a good idea or not worth doing? As far as I've learned, the response has been universally very, very positive, extremely positive.
00;22;54;16 - 00;23;15;28 But at the end of the day, policymakers that I have engaged with at many levels in Africa, Europe and the United States, they want to make the world a better place. I don't think there's any any doubt about that at very often in that quest, they are very remote from the outputs of science for the evidence that is there that shows that science delivers. 00;23;15;28 - 00;23;38;28 Of course, it's in the system. But very often the political system of political decision making is very human. It's a very natural process. It's not always empirical. And I think as you know, and possibly in in the Western world, we see that policy making is becoming more political with a small P. So it's into that environment that we are going and showing how science makes a difference. 00;23;39;05 - 00;24;08;26 Practically. We're showing how science delivers on the SDGs, we're showing how science delivers on the future challenges. And with reference to a very important aspect, we're also highlighting the the importance of enabling access to data now, and this is you'll probably be familiar with the European Union's General Data Protection Regulation, and there are other regulatory regimes in in the United States and Canada, Japan and Brazil and and elsewhere. 00;24;08;28 - 00;24;33;19 And now we are looking at the evolution of regulation concerning artificial intelligence. Now, these regulatory processes as one outcome have impacts on access to data and the use of data for scientific purposes. There is no global regulator, there's no global policymaker. How do we address a global coordination on these issues? And that's something we want to raise within the context of Science Summit to ensure that science is data enabled. 00;24;33;21 - 00;25;00;25 When we talk about science capacity building, essentially we are talking about improving the flow of data, access to data, use of data from machine learning and AI and other purposes, and extending that capability globally. And when that can happen, then you will see dramatically improved outcomes in terms of health research at the environment, biodiversity, energy and many, many other areas. 00;25;00;29 - 00;25;44;06 But we're not there yet. That very much is in the future. So we're trying to align the debate around the objective of creating these new innovations with the need for aligning energy policy, energy technology and other information technology around alignment on regulations. That's huge, huge importance. So we see that. We see the opportunity after the United Nations General Assembly to talk to governments, to talk to political leaders, to talk to Balsillie was to talk to diplomats, to talk to regulators, to talk to bureaucrats and show them what this is, how this matters, and very importantly, how they can include optimized policies to support science in future policies at the bloc level, at nation level. 00;25;44;06 - 00;26;13;20 And we have many, many meetings bringing forward scientists to show what they do, what's necessary in terms of government regulation and support to enable. So we're talking about creating the enabling policy and regular Tory environment for more and better science. And funnily enough, we don't say that's more that's about more money. We don't feel that. We don't think that what there is, is more opportunity and a great need for alignment at government and policy level. 
00;26;13;23 - 00;26;39;06 And if every country in the world goes it alone in terms of creating regulation and creating policies, then we're looking at extreme fragmentation. There is much, much untapped potential for governments to work together, and that's one reason we're very happy to be working with Oracle, because, you know, from there, you know, as a company and, you know, forgive me if this is too simplistic, but they, they they create these machines that can communicate data. 00;26;39;06 - 00;27;07;29 And this is a this is a vital and vital a vital need globally. And how they do that and future, I think, will point to many, many future opportunities, which is a very important consideration, because with the science summit and at the level of the U.N., there's there's a huge recognition of the need to work with industry players and the importance of working with industry to deliver innovations, because it's not going to be a university center in it. 00;27;07;29 - 00;27;33;27 With the greatest respect to Cork University in Ireland, they're not going to be making the mess that's going to come through a company. So and industry. So this collaboration opportunity between academia, between governments and industry, I think is ripe for transformation, I think has enormous potential to address global challenges. So can you give us kind of a feel for what kind of speakers and sessions can be expected at the summit? 00;27;34;04 - 00;28;02;24 Yes, Michael, we've got a very inclusive approach to the summit, so we're covering a lot of things, but I suppose I would accept that we have a bias towards health on the health research. On the 13th of September, we have an all day plenary on on One Health, which is a perspective that brings together planet people and animal health into a, if you like, a one world view. 00;28;02;27 - 00;28;26;10 We have a lot of amazing speakers from the five continents who will be coming to that meeting. And what we want to do then is this is relatively rare. It's a relatively new area. By that I mean it's a relatively new or a policymaking. So where want to advance policymaking in this area? We want to also promote interdisciplinary research and show how research matters across these three areas because they cannot be addressed in isolation. 00;28;26;12 - 00;28;56;06 And we'd argue at the moment, by and large, that they are. If you look at national funding systems and national priorities and all the rest of it, they look at animal health or they look at human health or they look at biodiversity. But looking at all three I think is vital. That's our that's our flagship session on Wednesday the 13th on the 14th, Thursday the 14th, we're going to focus on on pandemic preparedness and we're going to bring together the leadership from the National Research Foundation in South Africa, from the African Union Commission, from the European Union. 00;28;56;06 - 00;29;33;16 Delighted to have Irene North steps. The director for the People Directorate in Brussels is coming to join us. For three days. We have Professor Cortes at Lucca from the Medical University of Graz, who leads many European Union research initiatives. But he was the main instigator of the European Union's biobanking research infrastructure, of biobanking, of molecular resources. We should infrastructure, which does pretty much as it says on the can, and we're looking to create a UN version of that, if you like, And look at how this capacity for biobanking is going to contribute. 
00;29;33;16 - 00;29;57;01 So and pandemic burden, it's very, very important that we also have President Biden's science adviser, Dr. Francis Collins, former director of the and I and the in the United States, Then we will also have representatives from Dr. Sao Victor. So from the U.S. Academy for Medicine, National Academy for Medicine. He'll be presenting the US approach to pandemic preparedness, which is called 100 days Mission. 00;29;57;06 - 00;30;22;17 What you Need to Do in the first hundred Days. We're very excited about that and very, very much looking forward to using that as a template for a global approach. And while there's been a lot of focus on global strategies, which we obviously very much support, we want to take that global strategy approach to the level of action in terms of what capacity is needed, where's that capacity needed, How can the capacity be delivered? 00;30;22;19 - 00;31;09;02 So very much looking forward to pandemic preparedness as a highlight of the summit. Then on Friday, Friday the 15th of September would have a one day plenary on genomics capacity building with a focus on Africa. But the approach will be global, But bring it forward. Will How does the capacity work for pandemic? Sorry for genomics and has been led by global industry in terms of Illumina and it's been led again by data experts, and that really looks at a future for genomics capacity building in Africa, without which we are going to be or Africa is going to be extremely hampered in the development of medicine and related therapies. 00;31;09;04 - 00;31;37;12 So there are three of the sessions. We also have the Obama Foundation having a meeting on the on the 17th of September. We're going to bring philanthropic organizations together, are for lunch on the 15th. We are going to have a number of sessions around the Amazon with the Brazilian Fapesp, the Rio National Research Agency, and they'll be looking at the future of Amazon from the perspective of collaborative research and development and science. 00;31;37;15 - 00;32;06;00 We will be working with a number of legal experts with the law firm Ropes and Gray, who will bring together experts to identify scenarios for an enabling regulatory environment for genomics that's going to take place on the afternoon of the 16th. We are going to have a number of focus days. The government of of government of Ethiopia will be joining us and they'll be presenting how the Ethiopian government presents or approaches the SDGs. 00;32;06;00 - 00;32;27;18 From the point of view of enabling science. We have a similar approach from the government of Ghana. We will have the nice people from Mongolia, the government of Mongolia. They will be presenting a regional approach from the roof of the world, and we would have the same from Nepal, from India, from Japan, from Brazil and many other nations. 00;32;27;23 - 00;32;58;22 And that national approach is very, very important because again, we want to highlight the need for synergies, highlight the similarity between national approaches and then how they can be brought together and benefit from one another. We will also have a presentation from the editor of Nature, Magdalena Skipper at They'll be presenting a what they call a storytelling evening, and that's that's designed to inform and show how science careers evolve. 00;32;58;28 - 00;33;27;05 So so the community can get an understanding of of how that has worked in a number of individuals so very much at look at looking forward to that. 
I think that personal aspect is is very, very important. And we will be having a number of sessions with with investors how they are approaching investing in science and technology, how that investment can be better aligned between governments, industry, not for profits, philanthropy. 00;33;27;05 - 00;33;50;18 And we're feeling we're seeing that a lot of these organizations have similar objectives. So there's enormous potential to see how they can be more aligned, work together for common objectives and thereby increase possible benefits and outputs. So very much look forward to dose those discussions. In terms of our principal outputs, what we want to do really is three levels. 00;33;50;18 - 00;34;12;01 First is we want to increase participation and collaboration. So we want to bring people together. And one of the main outputs of the science summit last year, researchers discovered each other. They went away and they started collaborating. That wouldn't have happened if they hadn't met at the science. So that's one level. Second level is what our agenda is. 00;34;12;04 - 00;34;44;27 So the United Nations will convene the summit of the future in 2024. So the question we're asking everybody is what should the science agenda for that meeting look like? And we want to compile it. And with the 400 odd sessions we're running, we want to work with them and see how can they contribute to that, What priorities can they put forward and how do they look in terms of a specific objective which the United Nations can support in terms of energy attainment or the post SDG agenda? 00;34;44;29 - 00;35;22;06 And the third element we want to advance is better policy making, make better policies. We will have tennis knocked and Dennis is the chair of the Inter-Parliamentary Union Science Committee. The Inter-Parliamentary Union is a global organization and represents 138 parliaments around the world. This dialog is hugely, hugely important. So we're going to be working with Denis to see how his members so those legislators in those 140 odd countries can incorporate better global ideas into policymaking at a local level. 00;35;22;06 - 00;35;52;29 And I'm talking about I'm talking about Nepal, I'm talking about Ghana, I'm talking about Kenya, I'm talking about many, many countries. And then what we what we hope that that will achieve is real sustained change. And as we move towards the end of this decade, that's going to be hugely, hugely demanding. But I think if we build this global momentum and we drive this cooperation and instill a sense of cooperation among scientists globally, and also we say that, you know, scientists in fact, are policy policymakers. 00;35;52;29 - 00;36;10;12 I don't see this divide between policymakers and scientists. I think scientists have a huge amount to contribute to policymaking. So, in fact, they're the policymakers. They know a lot about health, They know a lot about what policies are needed to deliver better health. And we want to give them a voice. Well, as I mentioned, Oracle will be speaking and participating at the summit. 00;36;10;12 - 00;36;37;01 And you touched on it a little bit. But when you think about the role for industry players, especially technology giants like Oracle and what's needed to pursue the SDGs, we've talked on the show a good bit about the concept of open science and increasing access to scientific data. It feels like big advances in global health can't happen if those developing or lower middle income countries are kept at arm's length from data. 
00;36;37;04 - 00;37;00;02 Absolutely, Mike. Absolutely. Very, very well said. And as I've outlined, one of the main impediments potentially to this is regulation by advanced nations, which impacts on less developed nations. So I think industry has a huge role to play in that because, you know, industry is providing the wherewithal to advance this data exchange. So we very much look to industry leadership. 00;37;00;02 - 00;37;16;20 And I think Oracle is going to be very instrumental there in showing and leading the way in terms of how data is enabled and how data systems can allow access to data, use of data, and of course the use of data for machine learning. And I think that's something we need to learn a lot about, particularly in developing nations. 00;37;16;23 - 00;37;35;25 I also think that the United Nations Global Sustainability Report, the latest version of which is available in draft, and I think the final version will be published at the end of this month, points to a huge role for industry. My own view is that industry needs to be much more at the table, at this U.N. table. 00;37;35;25 - 00;37;56;24 I'm delighted to see that Oracle is joining us in this quest, because I think we need to build a narrative, and I think industry is going to be a very credible partner in terms of telling governments what is necessary, what's needed in terms of creating the space for data to do what data needs to do. And again, in particular in the countries that are going to be challenged in their quest for access to data. 00;37;56;27 - 00;38;33;03 And that presumes that they have the capacity to have the infrastructure. Many don't, but they're going to need to have that, and industry is going to be critical in delivering it. And I think that's terribly, terribly clear. So that role for industry in delivering, I think, spans the optimization of policy, the optimization of regulation, the deployment of technology, the maintenance and sustainability of that technology, and of course the advancement of that technology into different areas in its application, particularly ICT application in the areas of health, energy, the environment, biodiversity, climate, and so forth. 00;38;33;06 - 00;38;55;25 And I think this is something that gives me a lot of optimism for the future. And I think we're almost looking at, if you like, arguably a post-regulatory model, where technology will allow us to define the remit of data access. I don't think we're there yet, but I think this is possible in future. 00;38;55;27 - 00;39;16;01 And again, Oracle and the colleagues from Oracle will be engaging in a number of discussions on the regulatory side, on the technical side, and on the access-to-data side that are going to help the communities understand not necessarily the solution, but at least define the questions. If we define the questions, then we have a much greater opportunity of obtaining the answers. 00;39;16;03 - 00;39;39;17 Well, also in my intro, I mentioned that you are founder and managing director of ISC Intelligence in Science. Tell us about that endeavor. What does it do? Well, it mainly is devoted towards building capacity and advising governments on science capacity building, and much of that is based around scientific infrastructures. And of course they come in many, many flavors.
00;39;39;22 - 00;39;59;29 But ours really is around the design of research infrastructures, and that tends to be quite a long, competitive, drawn-out, complicated process. Of course, for any funding there is a competitive process. This very often takes a number of years for an award, then a subsequent number of years for a design phase to be completed, 00;40;00;05 - 00;40;21;02 before you move into construction and operation. Our primary focus is on the design phase, and we've done that in Africa. We do it in India, in North America, in Latin America. And one of our main reasons for focusing on this area is because it means the capacity is there to allow science to do what it does. 00;40;21;02 - 00;40;46;01 I've mentioned the case of the SKA in Africa; there are many others. But I would say hitherto there's been a lot of differentiation between science capacities. And of course this is quite understandable. But I think increasingly in future that capacity will be effectively one big data machine. It won't matter what flavor of science you're doing; you're going to be dipping into a common data reserve. 00;40;46;01 - 00;41;23;05 Now, there are some caveats around that, such as a synchrotron, for example, or a light source. These are, as you can imagine, specific, unique instruments. But we're looking forward very much to having the director of the Office of Science in the United States, Dr. Asmeret Asefaw Berhe, talk to us about how this can work on a global level, what the challenges are, and how the US experience in building these science infrastructures and capacities can then help many, many other countries to advance: not necessarily to do the same, but at least to be on a path to access such capacity. 00;41;23;05 - 00;41;52;08 So ISC has been very, very involved in that, and also involved in the regulatory aspects; the impact of updated regulation on science is something we're very exercised about. We feel that the scientific community historically, by which I mean maybe over the last 15 years, has been very slow to understand the implications of regulation for science, but equally the regulatory bodies at national level have been very slow to understand the impacts on science, because their primary concerns are not science. 00;41;52;13 - 00;42;23;27 Their primary concerns, as they see them, are the protection of individual data, etc., and that's very worthy and noble. But then once you pull the thread, you see that that has aspects and implications for scientific endeavor. So we're working at that interface, trying to ensure, or trying to increase, respective awareness and visibility. And now this has a very sharp focus with the advent of the AI Act, the Artificial Intelligence Act in the European Union, which will be defining for reasons we mentioned earlier. 00;42;23;27 - 00;42;43;12 Also, we are very active in that space, and we're particularly active in how this impacts on less developed nations. Well, Declan, again, we appreciate you being on the show today. If people wanted to learn more about the Science Summit or ISC Intelligence in Science, how can they do that? The main way is the website. The website for the Science Summit 00;42;43;15 - 00;45;13;24 is sciencesummitunga.com; the company website is iscintelligence.com, and then you'll find the usual links to Twitter and all the rest there. Very good. We've got it.
And if you, the listener, are interested in how Oracle can simplify and accelerate your own scientific research, just take a look at oracle.com/research and see what you think. And of course, join us again next time for Research in Action.
TobinSmith.io AI Investing Primer: https://truemarketinsiders.io/learning/articles/an-artificial-intelligence-primer
Join Badlands Media for The Great Reset with host Sean Morgan. Each week Sean discusses all the important financial news you need to know. What happens when the AI mania meets the transformative force of ChangeWave investing? Prepare for an enlightening journey with Tobin Smith, the renowned author of ChangeWave Investing and ChangeWave Investing 2.0 and a former Fox News commentator. We traverse the landscape of the tech stack, spotting winners and losers in every 15-year transformative cycle, and delve into the heart of the current AI explosion. Hear Tobin's take on Moore's Law, the NVIDIA GPU revolution, and the dramatic impact of the Fed's trillion-dollar injection on the digital image market. Every revolution promises casualties and opportunities. We probe the world of Artificial General Intelligence (AGI), its potential to overturn businesses, and its power to unlock a staggering global opportunity worth trillions. Discussions meander through the treacherous terrain of the EU's upcoming Artificial Intelligence Act, its implications for companies, and the potential tremors it could cause in the world of work through job automation. The future of labor and productivity is poised on the edge of a radical shift. We explore how machine learning, GPUs, and AGI are transforming the workplace and making real-time answers a reality. We also examine the rise of revolutionary software like Wayfinder, altering the very fabric of logistics and reducing the need for manual labor. To wrap up our journey, Tobin offers an insightful look into the mechanics of the stock market and shares pearls of wisdom from his investment newsletter. Strap in for a deep dive into the tumultuous, thrilling wave of AI with Tobin Smith.
How has our knowledge of AI and our awareness of its potential progressed in the past five years? This week, we're going back to the vault to re-air one of our first AI-related episodes, featuring Michael Page, former Policy and Ethics Advisor at OpenAI: https://cset.georgetown.edu/staff/michael-page/ Stewart Baker is Of Counsel at Steptoe & Johnson: https://www.steptoe.com/en/lawyers/stewart-baker.html References: "AI outperforms human lawyers in reviewing legal documents": https://futurism.com/ai-contracts-lawyers-lawgeex "An algorithm that grants freedom, or takes it away": https://www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.html "Can AI be taught to explain itself?": https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html H.R.4625, FUTURE of Artificial Intelligence Act of 2017: https://www.congress.gov/bill/115th-congress/house-bill/4625/text Find out more about the Steptoe Cyberlaw Podcast: https://www.steptoe.com/en/services/practices/litigation/privacy-cybersecurity.html?tab=the_cyberlaw_podcast Join us for the 33rd Annual Review of the Field of National Security Law CLE Conference this November 16-17, held at the Renaissance Washington DC Downtown Hotel: https://web.cvent.com/event/7eb6b360-9f77-4555-844f-4fa28099f64a/summary
The June roundup of the AI/ML world brings us a new template for AI-based solutions, a review of OpenAI's roadmap (including plans for GPT-4), and another batch of observations on the regulations being prepared by officials in Brussels. Materials discussed in the podcast: 1. The Getting Started with AI Stack for JavaScript (https://github.com/a16z-infra/ai-getting-started) 2. Lessons from Europe's Artificial Intelligence Act: the perils of regulating AI like exploding kid's toys 3. Better GitHub Copilot Prompts 4. OpenAI's ambitious plans 5. What will GPT-2030 look like? Join Opanuj.AI and always stay up to date!
Pablo Molina, associate vice president of information technology and chief information security officer at Drexel University and adjunct professor at Georgetown University, leads the conversation on the implications of artificial intelligence in higher education. FASKIANOS: Welcome to CFR's Higher Education Webinar. I'm Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for joining us. Today's discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Pablo Molina with us to discuss the implications of artificial intelligence in higher education. Dr. Molina is chief information security officer and associate vice president at Drexel University. He is also an adjunct professor at Georgetown University. Dr. Molina is the founder and executive director of the International Applied Ethics in Technology Association, which aims to raise awareness of ethical issues in technology. He regularly comments on stories about privacy, the ethics of tech companies, and laws related to technology and information management. He's received numerous awards relating to technology and serves on the boards of the Electronic Privacy Information Center and the Center for AI and Digital Policy. So, Dr. P, welcome. Thank you very much for being with us today. Obviously, AI is at the top of everyone's mind, with ChatGPT coming out and being in the news, and so many other stories about what AI is going to—how it's going to change the world. So I thought you could focus specifically on how artificial intelligence is changing and influencing higher education, and what you're seeing, the trends in your community. MOLINA: Irina, thank you very much for the opportunity, and to the Council on Foreign Relations, to be here and express my views. Thank you, everybody, for taking time out of your busy schedules to listen to this. And hopefully, I'll have the opportunity to learn much from your questions and answer some of them to the best of my ability. Well, since I'm a professor too, I like to start by giving you homework. And the homework is this: I do not know how much people know about artificial intelligence. In my opinion, anybody who has ever used ChatGPT considers herself or himself an expert. To some extent, you are, because you have used one of the first publicly available artificial intelligence tools out there and you know more than those who haven't. So if you have used ChatGPT, or Google Bard, or other services, you already have a leg up in understanding at least one aspect of artificial intelligence, known as generative artificial intelligence. Now, if you want to learn more about this, there's a big textbook, about this big. I'm not endorsing it. All I'm saying is, for those people who are very curious, there are two great academics, Russell and Norvig. They're in their fourth edition of a wonderful book that covers every technical aspect of artificial intelligence, called Artificial Intelligence: A Modern Approach. And if you're really interested in how artificial intelligence can impact higher education, I recommend a report by the U.S. Department of Education that was released earlier this year in Washington, DC by the Office of Educational Technology. It's called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
So if you do all these things and you read all these things, you will hopefully transition from being whatever expert you were before—a pandemic and Ukrainian war expert—to an artificial intelligence expert. So how do I think all these wonderful things are going to affect higher education? Well, as human beings, we tend to overestimate the impact of technology in the short run and really underestimate the impact of technology in the long run. And I believe this is also the case with artificial intelligence. We're in a moment where there's a lot of hype about artificial intelligence. It will solve every problem under the sky. But it will also create the most catastrophic future and dystopia that we can imagine. And possibly neither one of these two is true, particularly if we regulate and use these technologies and develop them following some standard guidelines that we have followed in the past, for better or worse. So how is artificial intelligence affecting higher education? Well, number one, there is a great lack of regulation and legislation. Take, for example, what happened around this: OpenAI released ChatGPT. People started trying it. And all of a sudden there were reactions in places like here, where I'm speaking to you from, in Italy. I'm in Rome on vacation right now. The Italian data protection agency said: Listen, we're concerned about the privacy of this tool for citizens of Italy. So the company agreed to establish some rules, some guidelines and guardrails on the tool. And then it reopened to the Italian public, after being closed for a while. The same thing happened with the Canadian data protection authorities. In the United States, well, not much has happened, except that one of the organizations on whose board I serve, the Center for Artificial Intelligence and Digital Policy, earlier this year, in March of 2023, filed a sixty-four-page complaint with the Federal Trade Commission, in which we're basically asking the Federal Trade Commission: You do have the authority to investigate how these tools can affect U.S. consumers. Please do so, because this is your purview, and this is your responsibility. And we're still waiting on the agency to declare what the next steps are going to be. If you look for other bodies of legislation or regulation on artificial intelligence that can help guide us, well, you can certainly pay attention to the U.S. Congress. And what is the U.S. Congress doing? Yeah, pretty much that: not much, to be honest. They listened to Sam Altman, the CEO of OpenAI, the company behind ChatGPT, who recently testified before Congress, urging Congress to regulate artificial intelligence. Which is quite clever on his part. So it was on May 17 that he testified that we could be facing catastrophic damage ahead if artificial intelligence technology is not regulated in time. He also sounded the alarm about counterfeit humans, meaning that these machines could replace what we think a person is, at least virtually. And he warned about the end of factual evidence, because with artificial intelligence anything can be fabricated. Not only that, but he pointed out that artificial intelligence could start wars and destroy democracy. Certainly very, very grim predictions. And before this, many of the companies were self-regulating on artificial intelligence. If you look at Google, Microsoft, or Facebook, now Meta, all of them have their own artificial intelligence self-guiding principles. Most of them were very aspirational.
Those could help us in higher education because, at the very least, they can help us create our own policies and guidelines for our community members—faculty, staff, students, researchers, administrators, partners, vendors, alumni—anybody who happens to interact with our institutions of higher learning. Now, what else is happening out there? Well, we have tons and tons of laws that have to do with technology and regulations. Things like the Gramm-Leach-Bliley Act, the Securities and Exchange Commission rules, Sarbanes-Oxley; federal regulations like FISMA, the Cybersecurity Maturity Model Certification, the Payment Card Industry standards; there is the Computer Fraud and Abuse Act; there is the Budapest Convention; and cybersecurity insurance providers will tell us what to do and what not to do about technology. We have state laws and many privacy laws. But, to be honest, very few artificial intelligence laws. And it's groundbreaking in Europe that the European parliamentarians have agreed to discuss the Artificial Intelligence Act, which could really be the first of its kind to be passed at this level in the world, after some efforts by China and other countries, and which, if adopted, could be a landmark change in the adoption of artificial intelligence. In the United States, even though Congress is not doing much, the White House is trying to position itself in the realm of artificial intelligence. So there's an executive order from February of 2023—which many of us in higher education read because, once again, we're trying to find inspiration for our own rules and regulations—that tells federal agencies that they have to root out bias in the design and use of new technologies, including artificial intelligence, because they have to protect the public from algorithmic discrimination. And we all believe this. In higher education, we believe in being fair and transparent and accountable. I would be surprised if any of us were not concerned about making sure that our technology use, our artificial intelligence use, follows these particular principles as proposed by the Organization for Economic Cooperation and Development and many other bodies of ethics and expertise. Now, the White House also announced new centers—research and development centers with some new national artificial intelligence research institutes. Many of us will collaborate with those in our research projects. There was a call for public assessments of existing generative artificial intelligence systems, like ChatGPT. And the White House is also enacting policies to ensure that the U.S. government—the executive branch—is leading by example when mitigating artificial intelligence risks and harnessing artificial intelligence opportunities. Because, in spite of all the concerns about this, it's all about the opportunities that we hope to achieve with artificial intelligence. And when we look at how specifically we can benefit from artificial intelligence in higher education, well, certainly we can start with new and modified academic offerings. Certainly, we already have graduate degrees in artificial intelligence, machine learning, and many other fields. But I would be surprised if we don't soon add some bachelor's degrees in this field, or significantly modify some of our existing academic offerings to incorporate artificial intelligence in various specialties, our courses, or components of the courses that we teach our students.
We're looking at amazing research opportunities, things that we'll be able to do with artificial intelligence that we couldn't even think about before, that are going to expand our ability to generate new knowledge and contribute to society, with federal funding, with private funding. We're looking at improved knowledge management, something that librarians are always very concerned about: the preservation and distribution of knowledge. The idea would be that artificial intelligence will help us better find the things that we're looking for, the things that we need in order to conduct our academic work. We're certainly looking at new and modified pedagogical approaches, new ways of learning and teaching, including the promise of adaptive learning, something that really can tell students: Hey, you're not getting this particular concept. Why don't you go back and study it in a different way, with a different virtual avatar, using simulations or virtual assistants? In almost every discipline and academic endeavor. We're also very concerned about offering, you know, good value for the money when it comes to education. So we're hoping to achieve extreme efficiencies: better ways to run admissions, better ways to guide students through their academic careers, better ways to coach them toward professional opportunities. And much of this will be possible thanks to artificial intelligence. And also, let's not forget, we still have many underserved students, and they're underserved because they either cannot afford education or maybe they have physical or cognitive disabilities. And artificial intelligence can really help us reach those students and offer them new opportunities to advance their education and fulfill their academic and professional goals. And I think this is a good introduction. I'd love to talk about all the things that can go wrong. I'd love to talk about all the things that we should be doing so that things don't go as wrong as predicted. But I think this is a good way to set the stage for the discussion. FASKIANOS: Fantastic. Thank you so much. So we're going to go to all of you now for your questions and comments, and to share best practices. (Gives queuing instructions.) All right. So I'm going first to a written question from Gabriel Doncel, adjunct faculty at the University of Delaware: How do we incentivize students to approach generative AI tools like ChatGPT for text in ways that emphasize critical thinking and analysis? MOLINA: I always like to start with a difficult question, so I thank you very much, Gabriel Doncel, for that particular question. And, as you know, there are several approaches to the adoption of tools like ChatGPT on campus by students. One of them is to say: No, over my dead body. If you use ChatGPT, you're cheating. Even if you cite ChatGPT, we can consider you to be cheating. And not only that, but some institutions have invested in tools that can detect whether or not something was written with ChatGPT or similar tools. There are other faculty members and other academic institutions that are realizing these tools will be available when these students join the workforce. So our job is to help them do the best that they can by using these particular tools, and to make sure they avoid some of the mishaps that have already happened. There are a number of lawyers who have used ChatGPT to file legal briefs.
And when the judges received those briefs, and read through them, and looked at the citations, they realized that some of the citations were completely made up, were not real cases. Hence, the lawyers faced professional disciplinary action because they used the tool without the professional review that is required. So hopefully we're going to educate our students, and we're going to set policy and guideline boundaries for them to use these, as well as, sometimes, the necessary technical controls for those students who may not be that ethically inclined to follow our guidelines and policies. But I think that to hide our heads in the sand and pretend that these tools are not out there for students to use would be a disservice to our institutions, to our students, and to the mission that we have of training the next generation of knowledge workers. FASKIANOS: Thank you. I'm going to go next to Meena Bose, who has a raised hand. Meena, if you can unmute yourself and identify yourself. Q: Thank you, Irina. Thank you for this very important talk. And my question is a little—(laughs)—it's formative, but really, I have been thinking about what you were saying about the role of AI in academic life, particularly for undergraduates: for admissions, advisement, guidance on curriculum. And I don't want to have my head in the sand about this, as you just said—(laughs)—but it seems to me that any kind of meaningful interaction with students, particularly students who have not had any exposure to college before, depends upon multiple rounds of feedback with faculty members, and the development of mentors, to excel in college and to consider opportunities after. So I'm struggling a little bit to see how AI can be instructive for that part of college life, beyond providing information, I guess. But I guess the web does that already. So welcome your thoughts. Thank you. FASKIANOS: And Meena's at Hofstra University. MOLINA: Thank you. You know, it's a great question. And the idea that everybody is proposing right here is that artificial intelligence companies are not trying, or so they claim, to replace doctors, or architects, or professors, or mentors, or administrators—at least at first. We'll see in the future because, you know, it depends on how it's regulated. They're trying to help precisely those people in those professions, and the people they serve, gain access to more information. And you're right in a sense that that information is already on the web. But we've always had a problem finding that information reliably on the web. And you may remember that when Google came along, I mean, it swept through every other search engine out there—AltaVista, Yahoo, and many others—because, you know, it had a very good search algorithm. And now we're going to the next level. The next level is where you ask ChatGPT in natural human language. You're not trying to combine the three keywords that say, OK, is the economics class required? No, no, you're telling ChatGPT: Hey, listen, I'm in the master's in business administration at Drexel University and I'm trying to take more economics classes. What recommendations do you have for me? And this is where you can get a preliminary answer, and also a caveat there, as most of these search engines—generative AI engines—already have, that tells you: We're not here to replace the experts. Make sure you discuss your questions with the experts. We will not give you medical advice. We will not give you educational advice.
We're just here, to some extent, for guiding purposes and, even now, for experimental and entertainment purposes. So I think you are absolutely right that we have to be very judicious about how we use these tools to support the students. Now, that said, I had the privilege of working for public universities in the state of Connecticut when I was the CIO. I also had the opportunity early in my career to attend a public university in Europe, in Spain, where we were hundreds of students in class. We couldn't get any attention from the faculty. There were no mentors, there were no counselors, or anybody else. Is it better to have nobody to help you, or is it better to have at least some technology guidance that can help you find the information that otherwise is spread throughout many different systems that are like ivory towers—admissions on one side, economics on another, academic advising on another, and everything else? So thank you for a wonderful question and reflection. FASKIANOS: I'm going to take the next written question, from Dr. Russell Thomas, a senior lecturer in the Department of International Relations and Diplomatic Studies at Cavendish University in Uganda: What are the skills and competencies that higher education students and faculty need to develop to think in an AI-driven world? MOLINA: So we could argue here that something very similar has already happened with many information and communication technologies. At first, faculty members did not want to use email, or the web, or many other tools, because they were too busy with their disciplines. And rightly so. They were brilliant economists, or philosophers, or biologists. They didn't have enough time to learn all these new technologies to interact with the students. But eventually they did learn, because they realized that it was the only way to meet the students where they were and to communicate with them in efficient ways. Now, I have to be honest; when it comes to the use of technology—and we'll unpack the numbers—it was part of my doctoral dissertation, where I expanded the technology adoption models that tell you about early adopters, and mainstream adopters, and late adopters, and laggards. But I uncovered a new category at some of the institutions where I worked, called the over-my-dead-body adopters. And these were some of the faculty members who would say: I will never switch word processors. I will never use this technology. It's only four years until I retire, probably eight more until I die. I don't have to do this. And, to be honest, we have a responsibility to understand that those artificial intelligence tools are out there, and to guide the students as to what is the acceptable use of those technologies within the disciplines and the courses that we teach them in. Because they will find those tools available in a very competitive work market, in a competitive labor market, and they can derive some benefit from them. But also, we don't want to shortchange their educational attainment just because they go behind our backs to copy and paste from ChatGPT, learning nothing. Going back to the question by Gabriel Doncel: not learning to exercise critical thinking, using citations and material that is unverified, that was borrowed from the internet without any authority, without any attention to the different points of view.
I mean, if you've used ChatGPT for a while—and I have personally, even to prepare some basic thank-you speeches, which are all very formal; even to contest a traffic ticket in Washington, DC, when I was speeding but didn't want to pay the ticket anyway; even just for research purposes—you will realize that most of the writing from ChatGPT has a very, very common style. Which is: oh, on the one hand people say this, on the other hand people say that. Well, critical thinking will tell you: sure, there are two different opinions, but this is what I think myself, and this is why I think so. And these are some of the skills, the critical thinking skills, that we must continue to teach the students, and not, you know, put blinders around their eyes and say: oh, continue focusing only on the textbook and the website. No, no. Look at the other tools, but use them judiciously. FASKIANOS: Thank you. I'm going to go next to Clemente Abrokwaa. Raised hand, if you can identify yourself, please. Q: Hi. Thanks so much for your talk. I'm from Penn State University. And this is a very important topic, I think. And some of the earlier speakers have already asked the questions I was going to ask. (Laughs.) But one thing that I would like to say is that, as you said, we cannot bury our heads in the sand. No matter what we think, the technology is already here. So we cannot avoid it. My question, though, is what do you think about the use of artificial intelligence by, say, for example, graduate students to write dissertations? You did mention the lawyers that used it to write their briefs, and they were caught. But in dissertations and also in class—for example, you have about forty students. You give a written assignment. When you start grading, you have grading fatigue. And so at some point you lose interest in actually checking. And so I'm kind of concerned about how it will affect the students' desire to actually go and do research without resorting to the use of AI. MOLINA: Well, Clemente, fellow colleague from the state of Pennsylvania, thank you for that, once again, both a question and a reflection here. Listen, many of us wrote our doctoral dissertations—mine at Georgetown. At one point in time, I was so tired of writing about the same topics, following the wonderful advice, but also the whims, of my dissertation committee, that I was this close to outsourcing my thesis to China. I didn't, but I thought about it. And now graduate students are thinking: OK, why am I going through the difficulties of writing this when ChatGPT can do it for me and the deadline is tomorrow? Well, this is what will distinguish the good students and the good professionals from the other ones. And the interesting part is, as you know, when we teach graduate students, we're teaching them critical thinking skills, but also teaching them how to express themselves, you know, either orally or in writing. And writing effectively is fundamental in the professions, but also absolutely critical in academic settings. And anybody who's just copying and pasting from ChatGPT into these documents cannot do that level of writing. But you're absolutely right. Let's say that we have an adjunct faculty member who's teaching a hundred students. Will that person go through every single essay to find out whether students were cheating with ChatGPT? Probably not.
And this is why there are also enterprising people who are using artificial intelligence to find out and tell you whether a paper was written using artificial intelligence. So it's a little bit like this fight among different forces, and business opportunities for all of them. And we've done this before. We've used antiplagiarism tools in the past because we knew that students were copying and pasting using Google Scholar and many other sources. And now oftentimes we run antiplagiarism tools. We didn't write them ourselves. Or we tell the students: you run it yourself and you give the report to me, and make sure you are not accidentally failing to cite things, which could end up jeopardizing your ability to get a graduate degree because your work was not up to snuff with the requirements of our stringent academic programs. So I would argue that the antiplagiarism tools that we're using will, more often than not, and sooner than expected, incorporate the detection of artificial intelligence writeups. And the other interesting part is to tell the students: well, if you do choose to use any of these tools, what are the rules of engagement? Can you ask it to write a paragraph and then cite it, and mention that ChatGPT wrote it? Not to mention, in addition to that, all the issues about artificial intelligence which the courts are deciding now, regarding the intellectual property of those productions. If a song, a poem, a book is written by an artificial intelligence entity, who owns the intellectual property for those works produced by an artificial intelligence machine? FASKIANOS: Good question. We have a lot of written questions. And I'm sure you don't want to just listen to my voice, so please do raise your hands. But we do have a question from one of your colleagues, Pablo, Pepe Barcega, who's the IT director at Drexel: Considering the potential biases and limitations of AI models, like ChatGPT, do you think relying on such technology in the educational domain can perpetuate existing inequalities and reinforce systemic biases, particularly in terms of access, representation, and fair evaluation of students? And Pepe's question got seven upvotes, so we advanced it to the top of the line. MOLINA: All right, well, first I have to wonder whether he used ChatGPT to write the question. But I'm going to leave it at that. Thank you. (Laughter.) It's a wonderful question. It's one of the greatest concerns we have had, those of us who have been working on artificial intelligence digital policy for years—not just this year, when ChatGPT was released, but for years we've been thinking about this. And even before artificial intelligence, in general, with algorithm transparency. And the idea is the following: two things are happening here. One is that we're programming the algorithms using instructions, instructions created by programmers, with all their biases, and their misunderstandings, and their shortcomings, and their lack of context, and everything else. But with artificial intelligence we're doing something even more concerning than that, which is we have some basic algorithms, but then we're feeding a lot of information, a corpus of information, to those algorithms. And the algorithms are fine-tuning the rules based on those.
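To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The data and the frequency "model" are entirely hypothetical and deliberately naive; the point is only that a system whose rules are fine-tuned on skewed historical outcomes will replay that skew in its recommendations.

from collections import Counter

# Entirely hypothetical historical hiring records: (school_tier, was_hired).
# The skew in this toy corpus is invented for illustration only.
history = [
    ("ivy", True), ("ivy", True), ("ivy", True), ("ivy", False),
    ("other", True), ("other", False), ("other", False), ("other", False),
]

# "Training": the model's only learned rule is the hire rate per group,
# estimated from the corpus it is fed.
counts = Counter(history)

def hire_rate(group):
    hired = counts[(group, True)]
    rejected = counts[(group, False)]
    return hired / (hired + rejected)

# "Inference": rank applicant groups by their historical hire rate.
for group in ("ivy", "other"):
    print(group, round(hire_rate(group), 2))  # ivy 0.75, other 0.25

# The model has learned nothing about merit; it simply reproduces the
# bias already present in the data it was fed, which is the pattern
# behind the hiring-algorithm examples discussed below.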
So it's very, very difficult for experts to explain how an artificial intelligence system actually makes decisions, because we know the engine and we know the data that we fed to the engine, but we don't really know how those decisions are being made through neural networks, through all of the different systems and methods that we have for artificial intelligence. Very, very few people understand how those work. And those people are so busy they don't have time to explain how the algorithms work to others, including the regulators. Let's remember some of the failed cases. Amazon tried this early on. And they tried this for selecting employees for Amazon. And they fed in all the resumes. And guess what? It turned out that most of the recommendations were to hire young white people who had gone to Ivy League schools. Why? Because they were feeding in the profiles of their first employees, who had done extremely well at Amazon. Hence, by feeding in the information of past successful employees, only those patterns were there. And so that does away with the diversity that we need across institutions, large and small, public and private, from different countries, from different genders, from different ages, from different ethnicities. All those things went away because the algorithm was promoting one particular profile. Recently I had the opportunity to moderate a panel in Washington, DC, and we had representatives from the Equal Employment Opportunity Commission. And they told us how they investigated a hiring algorithm from a company that was disproportionately recommending that they hire people whose first name was Brian and who had played lacrosse in high school, because, once again, a disproportionate number of people in that company had done that. And the algorithm decided: oh, these must be important characteristics for hiring people at this company. Let's not forget, for example, what happened with facial recognition and artificial intelligence with Amazon Rekognition, the facial recognition software. The American Civil Liberties Union decided: OK, I'm going to submit the pictures of all the members of Congress to this particular facial recognition engine. And it turned out that it misidentified many of them, particularly African Americans, as felons who had been convicted. So all these biases could have really, really bad consequences. Imagine that you're using this to decide whom you admit to your university, and the algorithm is wrong. You know, you are making really biased decisions that will affect the livelihoods of many people, but will also transform society, possibly for the worse, if we don't address this. So this is why the OECD, the European Union, even the White House, everybody is saying: We want this technology. We want to derive the benefits of this technology, while curtailing the abuses. And it's fundamental that we achieve transparency and make sure that these algorithms are not biased against the people who use them. FASKIANOS: Thank you. So I'm going to go next to Emily Edmonds-Poli, who is a professor at the University of San Diego: We hear a lot about providing clear guidelines for students, but for those of us who have not had a lot of experience using ChatGPT it is difficult to know what clear guidelines look like. Can you recommend some sources we might consult as a starting point, or where we might find some sample language? MOLINA: Hmm. Well, certainly, this is what we do in higher education.
We compete for the best students and the best faculty members. And we sometimes compete a little bit to be first to win groundbreaking research. But we tend to collaborate on everything else, particularly when it comes to policy, and guidance, and rules. So there are many institutions, like mine, who have already assembled—and I'm sure that yours has done the same—committees, because assembling committees and subcommittees is something we do very well in higher education, with faculty members, with administrators, even with student representation, to figure out: OK, what should we do about the use of artificial intelligence on our campus? I mentioned before that taking a look at the big aspirational declarations by Meta, and Google, and IBM, and Microsoft could be helpful for these communities. But also, I'm a very active member of an organization known as EDUCAUSE. And EDUCAUSE is for educators—predominantly higher education educators: administrators, staff members, faculty members—to think about the adoption of information technology. And EDUCAUSE has done good work on this front and continues to do good work on this front. So, once again, EDUCAUSE and some of the institutions have already published their guidelines on how to use artificial intelligence and incorporate it within their academic lives. Now, that said, we also know that even though all higher education institutions are the same, they're all different. We all have different values. We all believe in different uses of technology. We trust the students more or less. Hence, it's very important that, whatever inspiration you take, you work internally on campus—as you have done with many other issues in the past—to make sure it really reflects the values of your institution. FASKIANOS: So, Pablo, would you point to a specific college or university that has developed a code of ethics that addresses the use of AI for their academic community beyond your own, but that is publicly available? MOLINA: Yeah, I'm going to be honest; I don't want to put anybody on the spot. FASKIANOS: OK. MOLINA: Because, once again, there are many reasons. But, once again, let me repeat a couple of resources. One of them is from the U.S. Department of Education, from the Office of Educational Technology. And the report is Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, published earlier this year. The other source really is educause.edu. And if you search educause.edu on artificial intelligence, you'll find links to articles, and you'll find links to universities. It would be presumptuous of me to evaluate whose policies are better than others, but I would argue that the general principles of nonbias, transparency, accountability, and also the integration of these tools within the academic life of the institution in a morally responsible way—with concepts like privacy by design, security by design, and responsible computing—all of those are good words to have in there. Now, the other problem with policies and guidelines is that, let's be honest, many of them have no teeth in our institutions. You know, we promulgate them. They're very nice. They look beautiful. They are beautifully written. But oftentimes when people don't follow them, there's not a big penalty. And this is why, in addition to having the policies, educating the campus community is important. But it's difficult to do, because we need to educate them about so many things.
About cybersecurity threats, about sexual harassment, about nondiscrimination policies, about responsible behavior on campus regarding drugs and alcohol, about crime. So many things that they have to learn about. It's hard to add another topic for them to spend their time on, instead of researching the core subject matter that they chose to pursue for their lives. FASKIANOS: Thank you. And we will be sending out a link to this video, the transcript, as well as the resources that you have mentioned. So if you didn't get them, we'll include them in the follow-up email. So I'm going to go to Dorian Brown Crosby, who has a raised hand. Q: Yes. Thank you so much. I put one question in the chat, but I have another question that I would like to go ahead and ask now. So thank you so much for this presentation. You mentioned algorithmic biases affecting individuals. And I appreciate you pointing that out, especially when we talk about facial recognition, and also in terms of forced migration, which is my area of research. But I also wanted you to speak to, or could you talk about, the challenges that some institutions in higher education would have in terms of support for some of the things that you mentioned: potential curricula, or certificates, or other ways that AI would be woven into the new offerings of institutions of higher education. How would that look specifically for institutions that might be challenged to access those resources, such as Historically Black Colleges and Universities? Thank you. MOLINA: Well, a very interesting question, and a really fascinating point of view. Because we all tend to look at things from our own perspective and perhaps not consider the perspective of others: those who have much more money and resources than us, and those who have fewer resources and less funding available. So this is a very interesting line. What is it that we do in higher education when we have these problems? Well, as I mentioned before, we build committees and subcommittees. Usually we also do campus surveys. I don't know why we love doing campus surveys and asking everybody what they think about this. Those are useful tools for discussion. And oftentimes the thing that we also do, that we've done for many other topics, is, well, we hire people and we create new offices—either academic or administrative offices. With all of those, you know, there are certain limitations to how useful and functional they can be. And they also continue to require resources. Resources that, in the end, are paid for by students with, you know, federal financing. But this is the truth of the matter. So if you start creating offices of artificial intelligence on our campuses, however important the work may be on their guidance, and however much extra work can be assigned to them instead of distributed to every faculty and staff member out there, the truth of the matter is that these are not perfect solutions. So what is it that we do? Oftentimes, we work with partners. And our partners love to take—(inaudible)—vendors. But the truth of the matter is that sometimes they have much more expertise on some of these topics.
So, for example, if you're thinking about incorporating artificial intelligence into some of the academic materials that you use in class, well, I'm going to take a guess that if you already work with McGraw Hill in economics, or accounting, or some of the other books and websites that they put out, that you recommend to your students or make mandatory for your students, then you start discussing with them: hey, listen, are you going to use artificial intelligence? How? Are you going to tell me ahead of time? Because, as a faculty member, you may have a choice to decide: I want to work with this publisher and not that particular publisher because of the way they approach this. And let's be honest, we've seen a number of these vendors with major information security problems. McGraw Hill recently left a repository of data misconfigured out there on the internet, and almost anybody could access it. But many others before them, like Chegg and others, were notorious for their information security breaches. Can we imagine that these people are going to adopt artificial intelligence and not do such a good job of securing the information, the privacy, and the nonbiased approaches that we hold dear for students? I think they require a lot of supervision. But in the end, these publishers have the economies of scale for you to recommend those educational materials, instead of developing your own for every course, for every class, and for every institution. So perhaps we're going to have to continue to work together, as we've done in higher education, in consortia, which could be local or regional, could be based on institutions with the same interests, or on student population, to try to do this. And, you know, hopefully we'll get grants, grants from the federal government, that can be used to develop some of the materials and guidelines that are going to help us embrace this: not only to operate better as institutions and fulfill our mission, but also to make sure that our students are better prepared to join society and compete globally, which is what we have to do. FASKIANOS: So I'm going to combine questions. Dr. Lance Hunter, who is an associate professor at Augusta University: There's been a lot of debate regarding whether plagiarism detection software tools like Turnitin can accurately detect AI-generated text. What is your opinion regarding the accuracy of AI text-generation detection plagiarism tools? And then Rama Lohani-Chase, at Union County College, wants recommendations on what plagiarism checkers—or, you know, plagiarism detection tools for AI—you would recommend. MOLINA: Sure. So, number one, I'm not going to endorse any particular company, because if I did I would have to ask them for money, or the other way around. I'm not sure how it works. I could be seen as biased, particularly here. But there are many out there, and your institutions are using them. Sometimes they are integrated with your learning management system. And, as I mentioned, sometimes we ask the students to use them themselves and then either produce the plagiarism report for us or simply check it themselves. I'm going to be honest; when I teach ethics and technology, I tell the students about the antiplagiarism tools at the universities. But I also tell them, listen, if you're cheating in an ethics and technology class, I have failed miserably. So please don't. Take extra time if you have to, but, you know, if you want, use the antiplagiarism tool yourself.
But the question stands and is critical, which is: right now those tools are trying to improve their recognition of artificial-intelligence-written text, but they're not as good as they could be. So, like every other technology and, what I'm going to call, antitechnology used to control the damage of the first technology, it's an escalation where we start trying to identify this. And I think they will continue to do this, and they will be successful in doing this. There are people who have written ad hoc tools using ChatGPT to identify things written by ChatGPT. I tried them. They're remarkably good for the handful of papers that I tried myself, but I haven't conducted enough research to tell you whether they're really effective tools for this. So I would argue that for the time being you must assume that those tools, as we assume all the time, will not catch all of the cases, only some of the most obvious ones. FASKIANOS: So, a question from John Dedie, who is an assistant professor at the Community College of Baltimore County: To combat AI issues, shouldn't we rethink assignments? Instead of papers, have students do PowerPoints, ask students to offer their opinions and defend them? And then there was an interesting comment from Mark Habeeb at Georgetown University School of Foreign Service: Knowledge has been cheap for many years now because it is so readily available. With AI, we have a tool that can aggregate the knowledge and create written products. So, you know, what needs to be the focus now is critical thinking and assessing values. We need to teach our students how to assess and use that knowledge rather than how to find the knowledge and aggregate that knowledge. So maybe you could react to those two—the question and the comment. MOLINA: So let me start with the Georgetown one, not only because he's a colleague of mine. I also teach at Georgetown, and that's where I obtained my doctoral degree a number of years ago. I completely agree. I completely agree with the point that we have to teach new skills. And one of the programs in which I teach at Georgetown is our master's in analysis, which is basically for people who want to work in the intelligence community. And these people have to find the information, and they have to draw inferences, and try to figure out whether it is a nation-state that is threatening the United States, or another actor, or a corporation, or something like that. And they do all of that critical thinking, and intuition, and all the tools that we have developed in the intelligence community for many, many years. And if they suspend their judgment and only use artificial intelligence, they will miss very important information that is critical for national security. And the same is true for something like our flagship school, the School of Foreign Service at Georgetown, one of the best in the world in that particular field, where you want to train the diplomats, and the heads of state, and the great strategic thinkers on policy and politics in the international arena to think precisely not in the mechanical way that a machine can think, but also to connect those dots. And, sure, they should be using those tools in order to, you know, get the most favorable starting position, but they should also always use their critical thinking and their capabilities of analysis in order to produce good outcomes and good conclusions. Regarding redoing the assignments, absolutely true. But that is hard. It is a lot of work.
We're very busy faculty members. We have to grade. We have to be on committees. We have to do research. And now they ask us to redo our entire assessment strategy, with new assignments that we need to grade again and that account for artificial intelligence. And I don't think that any provost out there is saying, you know what? You can take two semesters off to work on this and retool all your courses. That doesn't happen in the institutions that I know of. If you get time off because you're entitled to it, you want to devote that time to research, because that is really what you signed up for when you pursued an academic career, in many cases. I can tell you one thing: here in Europe, where they often look at these problems with fewer resources than we have in the United States, a lot of faculty members at the high school level and at the college level are moving to oral examinations, because it's much harder to cheat with ChatGPT in an oral examination. The faculty will ask you interactive, adaptive questions, like the ones we suffered when we were defending our doctoral dissertations, and they will realize whether or not you know and understand the material. Now, imagine oral examinations for a class of one hundred, two hundred, four hundred. Do you do one at the end of the semester, on a single chosen topic, and run them all? Or do you do several throughout the semester? Do you end up using a ChatGPT virtual assistant to conduct your oral examinations? I think these are complex questions. But certainly redoing our assignments and redoing the way we teach and the way we evaluate our students is perhaps a necessary consequence of the advent of artificial intelligence. FASKIANOS: So next question from Damian Odunze, who is an assistant professor at Delta State University in Cleveland, Mississippi: Who should safeguard against ethical lapses and the misuse of AI by criminals? Should the onus fall on the creators and companies like Apple, Google, and Microsoft to ensure security, and not be passed on to the end users of the product? And I think you mentioned at the top of your remarks, Pablo, how the CEO of OpenAI, the company behind ChatGPT, was urging Congress to put some regulation in place. What is the onus on the makers of ChatGPT to protect against some of this as well? MOLINA: Well, I'm going to recycle more of the material from my doctoral dissertation. In this case it was the Molina cycle of innovation and regulation. It goes like this: basically, there are engineers and scientists who create new information technologies. And then there are entrepreneurs, businesspeople, and executives who figure out, OK, I know how to package this so that people are going to use it, buy it, subscribe to it, or look at it, so that I can sell the advertising to others. And, you know, so it begins, and very, very soon the abuses start. The abuses are that criminals use these platforms for purposes that were not envisioned before. Even the executives, as we've seen with Google, and Facebook, and others, decide to invade people's privacy, because they only have to pay a big fine, and they make much more money than the fines cost, or they expect not to be caught. And what happens in this cycle is that eventually there is so much noise in the media, so many congressional hearings, that regulators step in and try to pass new laws, or the regulatory agencies investigate using the powers given to them.
And then all of these new rules have to be tested in courts of law, which can take years, sometimes going all the way to the Supreme Court. Some of them are even struck down along the way, when courts find they are unconstitutional, or that there is a conflict of laws, and things like that. Now, by the time we regulate these new technologies, not only have many years gone by, but the technologies have changed. The marketing of products and services has changed, the abuses have changed, and the criminals have changed. So this is why we're always living in a loosely regulated space when it comes to information technology. And this is an issue of accountability. We're finding this, for example, with information security. If my phone is hacked, or my computer, or my email, is it the fault of Microsoft, and Apple, and Dell, and everybody else? Why am I the one paying the consequences and not any of these companies? Because it's unregulated. So morally speaking, yes, these companies are accountable. Morally speaking, the users are also accountable, because we're choosing to use these tools and incorporating them professionally. Legally speaking, so far, nobody is accountable, except the lawyers who submitted incorrect briefs in a court of law and were disciplined for it. Other than that, right now, it is a very gray space. So in my mind, it requires everybody. It takes a village to do the morally correct thing. It starts with the companies and the inventors. It involves the regulators, who should do their job and make sure that there's no unnecessary harm created by these tools. But it also involves every company executive, every professional, every student, and every professor who decides to use these tools. FASKIANOS: OK. I'm going to combine a couple of questions, from Dorothy Marinucci and Venky Venkatachalam, about the effect of AI on jobs. Dorothy, who is from Fordham University, read that Germany's best-selling newspaper Bild is reportedly adopting artificial intelligence to replace certain editorial roles in an effort to cut costs. Does this mean that the field of journalism and communication will change? And Venky's question is: One of AI's impacts is automation, leading to the elimination of certain types of jobs. Can you talk about both the elimination of jobs and what new types of jobs you think will be created as AI matures into the business world with more value-added applications? MOLINA: Well, what I like about predicting the future, and I've done this before in conferences and papers, is that when the future comes, ten years from now, people will either not remember what I said, or maybe I was lucky and my prediction was correct. In the specific field of journalism, we've seen the journalism and communications field decimated, because the money it used to make from advertising has dried up. Certainly, a big part of that money took the form of corporate profits, but much of it also went to hiring good journalists and funding investigative journalism; people who could spend six months writing a story now have six hours to write it, because there are no resources. All the advertising money went instead to Facebook, and Google, and many others, because they work very well for advertisers. And now the lifeblood of journalism organizations has been really undermined.
And there's good journalism in other places, in newspapers, but sadly there is a great temptation to replace some of the journalists with artificial intelligence, particularly on the least important pieces. I would argue that editorial pieces are the most important in newspapers, the ones requiring ideology, critical thinking, and much else, whereas pieces on traffic changes or weather patterns, without offending any meteorologists, maybe lend themselves to a more mechanical approach. I would argue that a lot of professions are going to be transformed, because if ChatGPT can write real estate announcements that work very well, you may need fewer people doing this. And yet, I think that what we're going to find is the same thing we found when earlier technology arrived. We all thought that the arrival of computers would mean that everybody would be without a job. Guess what? It meant something different. It meant that in order to do our jobs, we had to learn how to use computers. So I would argue that this is going to be the same case. To be a good doctor, to be a good lawyer, to be a good economist, to be a good knowledge worker, you're going to have to learn how to use whatever artificial intelligence tools are available out there, and use them professionally, within the moral and deontological standards that apply to your particular profession. Those are the kinds of jobs that I think are going to be very important. And, of course, all the technical jobs, as I mentioned. There are tons of people who consider themselves artificial intelligence experts. Only a few at the very top understand these systems. But there are many others in the pyramid who help with preparing these systems, with the support, the maintenance, the marketing, preparing the datasets that go into these particular models, and working with regulators, legislators, and compliance organizations to make sure that the algorithms and the tools are not running afoul of existing regulations. All of those, I think, are going to be interesting jobs that will be part of the arrival of artificial intelligence. FASKIANOS: Great. We have so many questions left and we just couldn't get to them all. I'm going to ask you to reflect on how the use of artificial intelligence in higher education will affect U.S. foreign policy and international relations. I know you touched upon it a little bit in reacting to the comment from our Georgetown University colleague, but any additional thoughts you might want to add before we close? MOLINA: Well, let's be honest: one particular point that applies to education and to everything else is that there is a worldwide race for artificial intelligence progress. The big companies are fighting it out: Google, Meta, Amazon, and many others are really putting resources into trying to be first in this particular race. But it's also a national race. For example, it's very clear that there are executive orders from the United States, as well as regulations and declarations from China, that basically indicate these two big nations are trying to be first in dominating the use of artificial intelligence. And let's be honest, in order to do well in artificial intelligence you need not only the scientists who are going to create those models and refine them, but also the bodies of data that you need to feed these algorithms in order to make them good.
So the barriers to entry for other nations, and the barriers to entry for all but the biggest technology companies, are going to be very, very high. It's not going to be easy for any small company to say: Oh, now I'm a huge player in artificial intelligence. Because even if you may have created an interesting new algorithmic procedure, you don't have the datasets that the huge companies have been able to amass and work on for the longest time. Every time you submit a question to ChatGPT, the ChatGPT experts are using your questions to refine the tool, the same way that, when we were using voice recognition with Apple or Android or other companies, they were using those voices, and our accents, and our mistakes in order to refine their voice recognition technologies. So this is the power. We'll see that the early bird gets the worm: those who are investing, those who are aggressively going for it, and those who are also judiciously regulating this can really do very well in the international arena when it comes to artificial intelligence. And so will their universities, because they will be able to train those knowledge workers, they'll be able to get the money generated from artificial intelligence, and the two will feed back into each other: the advances in the technology will result in more need for students, and more students graduating will propel the industry. And there will always be a fight for talent, where companies and countries will attract those people who really know about these wonderful things. Now, keep in mind that artificial intelligence was the core of this discussion, but there are so many other emerging issues in information technology, and some of them are critical to higher education. So there is still, you know, lots of hype, but we think that virtual reality will have an amazing impact on the way we teach and conduct research and train for certain skills. We think that quantum computing has the ability to revolutionize the way we conduct research, allowing us to do computations that are not even thinkable today. We'll look at things like robotics. And if you ask me what is going to take many jobs away, I would say that robotics can take a lot of jobs away. Now, we thought that there would be no factory workers left because of robots, but that hasn't happened. But keep adding robots with artificial intelligence to serve you a cappuccino, or your meal, or take care of your laundry, or many other things, or maybe clean your hotel room, and you realize, oh, there are lots of jobs out there that no longer will be there. Think about artificial intelligence for self-driving vehicles, boats, planes, cargo ships, commercial airplanes. Think about the thousands of taxi drivers and truck drivers who may end up out of jobs because, listen, the machines drive more safely, and they don't get tired, and they can be driving twenty-four by seven, and they don't require health benefits or retirement. They don't get depressed. They never miss work. Think about many of the technologies out there that have an impact on what we do. But artificial intelligence is a multiplier of technologies, a contributor to many other fields and many other technologies. And this is why we're spending so much time and so much energy thinking about these particular issues. FASKIANOS: Well, thank you, Pablo Molina. We really appreciate it.
Again, my apologies that we couldn't get to all of the questions and comments in the chat, but we appreciate all of you for your questions and, of course, your insights were really terrific, Dr. P. So we will, again, be sending out the link to this video and transcript, as well as the resources that you mentioned during this discussion. I hope you all enjoy the Fourth of July. And I encourage you to follow @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Again, you can send comments, feedback, and suggestions to CFRacademic@CFR.org. And, again, thank you all for joining us. We look forward to your continued participation in CFR Academic programming. Have a great day. MOLINA: Adios. (END)
In this episode, Ryan and Shannon discuss the EU's Artificial Intelligence Act and how it will impact the rest of the world. Please listen.
This week, Conan D'Arcy is joined by Ilana Kunkel to discuss the European Union's new Artificial Intelligence Act, a wide-ranging attempt to regulate the tech sector. They break down the EU's overall mentality towards AI, what the Act is missing, and its likely timeline for implementation. Hosted on Acast. See acast.com/privacy for more information.
The European Parliament has passed the Artificial Intelligence Act, which is aimed at regulating platforms that use AI in different settings. All but four Irish MEPs voted for it, with the remaining four, including Clare Daly and Mick Wallace, abstaining. Fine Gael MEP Deirdre Clune joined Sean to discuss...
Generative AI has garnered significant attention recently due to its unique ability to create novel content designed to mimic humans. ChatGPT is a form of generative AI currently gaining popularity. It is designed to generate human-like text in a chatbot context. This AI-powered chat tool is an example of how generative AI can automate content creation, in this case by generating responses to user input in a chatbot. It has the potential to revolutionise many industries by automating the creation of content, analysing large amounts of data, and improving overall efficiency, which frees up workers' time. However, generative AI's potential impact on the work landscape of the information industry has led to scepticism. There are concerns about job displacement and a loss of human perspective and voice. Another drawback of generative AI is that it reflects society's biases on issues such as gender and race. It can generate fake news, such as 'deepfakes': images or videos created by AI that appear realistic but are false and misleading. Currently, the EU's approach to artificial intelligence centres on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights. In December 2022, the Council adopted its common position on the Artificial Intelligence Act, which aims to ensure that AI systems placed on the EU market and used in the Union are safe and respect existing laws on fundamental rights and Union values. Rewatch this EURACTIV Hybrid Conference, part of the Horizon Europe project AI4TRUST, to find out about the benefits and risks of generative AI. Discussed questions included:
- Is there a place for generative AI in our society?
- What repercussions does generative AI have for the information industry? How does it impact journalism and content creation?
- What safeguards can be put in place to regulate generative AI?
- Does the European Commission's AI Act adequately protect us from the drawbacks of generative AI?
This project has received funding from the European Union's Horizon Europe Programme under Grant Agreement no 101070190. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
In this podcast, Ian Duffy, Partner, and Ciara Anderson, Senior Associate, from our Technology and Innovation Group look at the new regulation around data protection, outsourcing, operational resilience and information security. They look at recent and emerging laws in the space, including the Digital Operational Resilience Act (DORA), the revised Network and Information Security Directive (NIS 2) and the soon-to-be-finalised Artificial Intelligence Act, and how they are relevant to technology services providers. They also look at the current regulatory obligations that apply to fintech providers and how they can ensure they are complying with these. Disclaimer: The contents of this podcast are to assist access to information and do not constitute legal or other advice. Specific advice should be sought in relation to specific cases. If you would like more information on this topic, please contact a member of our team or your usual Arthur Cox contact.
Startup: launching and growing your own innovative company
Grow Digital, EIT Digital's annual event in Brussels: chip geopolitics, the Artificial Intelligence Act, how much Italian VC money there is and where it comes from, Sequoia splits in three, and the week's funding rounds and acquisitions
The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. As people realised that GPT technology was a game-changer, they called for the Act to be reconsidered. Famously, the EU contains no tech giants, so cutting-edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world's most proactive regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today. The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems. It is a risk-based approach. John Higgins joins us in this episode to discuss the AI Act. John is the Chair of the Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU. Follow-up reading: https://www.globaldigitalfoundation.org/ https://artificialintelligenceact.eu/ Topics addressed in this episode include:
*) How different is generative AI from the productivity tools that have come before?
*) Two approaches to regulation compared: a "Franco-German" approach and an "Anglo-American" approach
*) The precautionary principle, for when a regulatory framework needs to be established in order to provide market confidence
*) The EU's preference for regulating applications rather than regulating technology
*) The types of application that matter most - when there is an impact on human rights and/or safety
*) Regulations in the Act compared to the principles that good developers will in any case be following
*) Problems with the lack of information about the data sets used to train LLMs (Large Language Models)
*) Enabling the flow, between the different "providers" within the AI value chain, of information about compliance
*) Two potential alternatives to how the EU aims to regulate AI
*) How an Act passes through EU legislation
*) Conflicting assessments of the GDPR: a sledgehammer to crack a nut?
*) Is it conceivable that LLMs will be banned in Europe?
*) Why are there no tech giants in Europe? Does it matter?
*) Other metrics for measuring the success of AI within Europe
*) Strengths and weaknesses of the EU single market
*) Reasons why the BCS opposed the moratorium proposed by the FLI: impracticality, asymmetry, benefits held back
*) Some counterarguments in favour of the FLI position
*) Projects undertaken by the Global Digital Foundation
*) The role of AI in addressing (as well as exacerbating) hate speech
*) Growing concerns over populism, polarisation, and post-truth
*) The need for improved transparency and improved understanding
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Let's find out what types of use the Act intends to ban, what software certification is, and how ChatGPT is affected! My newsletter has launched! FastLetter: a good source to keep yourself up to date: https://giorgiotaverniti.substack.com/
-------------------------------
INDEX AND SOURCES
AI ACT
00:00 Opening
01:34 What types of use does it ban?
01:59 Certification: a fundamental point
04:21 Is ChatGPT affected by this?
The AI Act and European legislation - Interview with Brando Benifei, MEP: https://relevant.searchon.it/ai-act-e-normativa-europea-intervista-a-brando-benifei-eurodeputato/
A video by @MgpF that I recommend to everyone: https://www.youtube.com/watch?v=JdgcLvAGUcI
Big changes are coming to ChatGPT
OpenAI just announced two big updates to ChatGPT. The first is a soon-to-be-released subscription tier called ChatGPT Business. Designed for enterprises, the plan will follow OpenAI's API data usage policies. That means user data won't, by default, be used to train ChatGPT. The second is a feature that now allows ChatGPT users to turn off their chat history, which will prevent their conversations from being used to train ChatGPT.
We got a startling preview of how AI is going to impact politics
In the U.S., the 2024 presidential election season kicked off with an attack ad generated 100% by artificial intelligence. The ad imagines a future dystopia where President Joe Biden remains in office after next year's results. The images, voices, and video clips are stunningly real and were created with widely available AI tools. And they foreshadow an election season where AI can be used by all parties and actors to generate hyper-realistic synthetic content at scale. At the same time, lawmakers in the U.S. and Europe signaled this week that they're taking more aggressive action to regulate AI. In the U.S., four major federal agencies, including the Federal Trade Commission and the Department of Justice, released a joint statement on their stance toward AI companies. The agencies clarified that they would not treat AI companies differently from other firms when enforcing rules and regulations. In Europe, the European Parliament has reached a deal to move forward on the world's first "AI rulebook," the Artificial Intelligence Act. This is a broad suite of regulations that will govern the use of AI within the European Union, including safeguards against the misuse of these systems and rules that protect citizens from AI risks.
AI's major impact on big tech companies
A recent round of tech earnings calls saw major companies like Microsoft, Google, and Meta posting strong or better-than-expected results, and some of that growth was driven by AI. In Microsoft's case, Azure revenue was up 27% year-on-year, and Microsoft said it was already generating new sales from its AI products. Google was less specific about its AI plans but committed to incorporating generative AI into its products moving forward. Reports have surfaced that Meta is playing catch-up to retool its infrastructure for AI, but it still saw an unexpected increase in sales in the past quarter. At the same time, these companies face enormous pressure from shareholders to get leaner. Some have conducted layoffs already, with more expected to come. And they're all relying on AI to capture efficiencies. We saw a stark example of this in practice with a recent announcement from Dropbox that they're cutting staff by 16%, or 500 people. How should knowledge workers think about this? What steps should we be taking?
Today's rapid-fire topics include Runway Gen-1 for mobile, PwC's $1 billion investment in generative AI, AI and human empathy in healthcare, Replit's funding round, and Hinton's Google exit. Listen to the full episode of the podcast: https://www.marketingaiinstitute.com/podcast-showcase Want to receive our videos faster? SUBSCRIBE to our channel! Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources?
Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute
Will artificial intelligence soon put us all out of work? Of course not. Is the topic of AI currently unsettling quite a few people in the working world? Of course it is. Does it make sense to paint everything black or white? Of course not. Do we need to look at AI systems in a differentiated way? Of course we do. So let's talk in our new podcast episode about what is actually meant by AI, how and where it can provide support, and why nobody should immediately assume that it will replace humans in the world of work.
House Republicans are proposing new actions on the federal debt limit. What did House Speaker Kevin McCarthy say on Wall Street? Artificial intelligence regulations are coming. The European Parliament announced it is working on the Artificial Intelligence Act, saying AI needs serious political attention. Electric vehicles that cost as little as $25,000 are on the way. What could this mean for consumers and carmakers? Apple is offering Apple Card users a savings account option. How much interest can you earn? ⭕️ Watch in-depth videos based on Truth & Tradition at Epoch TV
EXPERTS: PHILIPPE DESSERTINE, Director of the Institut de Haute Finance; GASPARD KOENIG, philosopher and writer, author of "La fin de l'individu"; LAURENCE DEVILLERS, Professor of Artificial Intelligence, Université La Sorbonne, author of "Les robots émotionnels"; NICOLAS BERROD, journalist on the futures desk at "Le Parisien – Aujourd'hui en France". Transport, health, education, security, telephony, the Internet… As artificial intelligence takes an ever larger place in our lives, hundreds of global experts and tech bosses are calling for a pause in its development, citing "major risks for humanity". In a petition published on futureoflife.org last week, they call for a moratorium until safety systems are in place, including new regulatory authorities, oversight of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of managing the "dramatic economic and political disruptions (especially for democracy) that AI will cause". The petition's signatories include the co-founder of Apple, numerous academics, Microsoft engineers, and Elon Musk, owner of Twitter and founder of SpaceX and Tesla. Yoshua Bengio, the Canadian AI pioneer and another signatory, voiced his concerns at a press conference in Montreal: "I don't think society is ready to face this power, the potential for manipulation of populations, for example, which could endanger democracies." "We therefore need to take the time to slow down this commercial race that is under way," he added, calling for these issues to be discussed at the global level, "as we did for energy and nuclear weapons." But what are we talking about? The tech sector is undergoing a profound revolution with the advent of new forms of artificial intelligence. In just a few months, the wave of so-called "generative" AI, led by ChatGPT (text) and Midjourney (images), has shaken many certainties, so unprecedented is the level of realism that AI-produced images have reached. In recent weeks, social networks have been full of images of Emmanuel Macron picking up litter, Pope Francis wrapped in a long white puffer jacket like an American rapper, and Barack Obama and Angela Merkel building sandcastles... In this AI boom, deepfakes, the technology that makes it possible to superimpose one person's face onto another's in an existing video, are also part of the picture. In this context, European Commissioner Thierry Breton, who is currently preparing the European regulation on artificial intelligence (the Artificial Intelligence Act), has said that it should include a response to the concerns raised by the risks of ChatGPT. "AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. People should be informed that they are dealing with a chatbot and not a human being. Transparency is also important with regard to the risk of bias and false information," he explained. So, faced with the technological revolution under way, are we ready? What are the risks and the opportunities for our democracies and our economies? Artificial intelligence (AI) is already very much part of our lives without our even suspecting it.
Where does it apply? In which areas could everything change? Finally, given that according to INSEE "illectronisme", the inability to use digital devices, affected 17% of the population in 2019, how can the digital divide be tackled? BROADCAST: Monday to Saturday at 5:45 p.m. FORMAT: 65 minutes PRESENTERS: Caroline Roux - Axel de Tarlé REBROADCAST: Monday to Friday around 11:40 p.m. DIRECTION: Nicolas Ferraro, Bruno Piney, Franck Broqua, Alexandre Langeard, Corentin Son, Benoît Lemoine PRODUCTION: France Télévisions / Maximal Productions Find C DANS L'AIR online & on social media: INTERNET: francetv.fr FACEBOOK: https://www.facebook.com/Cdanslairf5 TWITTER: https://twitter.com/cdanslair INSTAGRAM: https://www.instagram.com/cdanslair/
With Nina Müller, Ethical Commerce Alliance Director and host of the Ethical Allies podcast. __ This was a pretty active season in terms of regulatory updates and decisions or guidelines coming out of supervisory bodies: Spain's AEPD issued a decision on the use of Google Analytics by the Royal Spanish Academy ("RAE"), becoming the first EU Data Protection Agency to see the glass half full in the use of the widespread digital data collection service (which had been considered high-risk in Denmark, Italy, France, the Netherlands and Austria). It must however be noted that the RAE was only using the most basic version of the tool, without any AdTech integrations or individual user profiling, and in this regard it aligned with the CNIL's long-standing guidelines for the valid use of the tool. At EU level, the Artificial Intelligence Act (which we have covered this quarter in a couple of Masters of Privacy interviews) made fast progress, with the Council adopting its final position. At the same time, new common rules on cybersecurity became a reality with the approval of the NIS2 Directive (or v2 of the Network and Information Security Directive) on November 28th. The updated framework covers incident response, supply chain security and encryption, among other things, leaving less wiggle room for Member States to get creative when it comes to "essential sectors" (such as energy, banking, health, or digital infrastructure). Across the Channel, the UK's Data Protection Agency (ICO) issued brand new guidelines on international data transfers, providing a practical tool for businesses to properly carry out Transfer Risk Assessments and making it clear that either such tool or the guidelines provided by the European Data Protection Board will be considered valid. Already into the new year, the European Data Protection Board (EDPB) issued two important reports: on valid consent in the context of cookie banners (in the hope of agreeing on a common approach in the face of multiple NOYB complaints across the EU) and on the use of cloud-based services by the public sector. The former concluded that the vast majority of DPAs (Supervisory Authorities) did not accept hiding the "Reject All" button in a second layer, which most notably leaves Spain's AEPD as the odd one out. They did all agree on the non-conformity of: a) pre-ticked consent checkboxes on the second layer; b) a reliance on legitimate interest; c) the use of dark patterns in link design or deceptive button colors/contrast; and d) the inaccurate classification of essential cookies. The latter concluded that public bodies across the EU may find it hard to provide supplementary measures when sending personal data to a US-based cloud (as per Schrems II requirements) in the context of some Software as a Service (SaaS) implementations, suggesting that switching to an EEA-sovereign Cloud Service Provider (CSP) would solve the problem and getting many to wonder whether it also refers to US-owned CSPs, which would leave few options on the table, and none able to compete at many levels in terms of features or scale. All of which can easily lead us to the latest update on the EU-US Data Privacy Framework: the EDPB released its non-binding opinion on the status of the EU-US Data Privacy Framework (voicing concerns about proportionality, the data protection review court and bulk data collection by national security agencies). The EU Commission will now proceed to ask EU Member States to approve it, with the hope of issuing an adequacy decision by July 2023.
This would do away with all the headaches derived from the Schrems II ECJ decision (including growing pressure to store personal data in EU-based data centers), were it not for the general impression that a Schrems III challenge looms on the horizon. In the United States, long-awaited new privacy rules in California (CPRA) and Virginia (CDPA) entered into force on January 1st. Although both provide a set of rights in terms of ensuring individual control over personal data being collected across the Internet (opt-out, access, deletion, correction, portability…), California's creates a private right of action that could pave the way for a new avalanche of privacy-related lawsuits. In any case, only companies meeting a minimum threshold in terms of revenue or the number of consumers affected by their data collection practices (both varying across the two states) will have to comply with the new rules. Lastly, Privacy by Design will become ISO standard 31700 on February 8th, finally introducing an auditable process to conform to the seven principles originally laid out by Anne Cavoukian, Ontario (Canada)'s former Information and Privacy Commissioner.
Enforcement updates
It's been interesting to see how continental Data Protection Agencies ("DPAs") keep milking the cow of the ePrivacy Directive's lack of a one-stop shop for US or China-based Big Tech giants. The long-awaited ePrivacy Regulation never arrived to keep this framework in sync with the GDPR (which does have a one-stop shop), and this leaves an opening for any DPA to avoid referring large enforcement cases involving such players to the Irish Data Protection Commissioner ("DPC") whenever cookie consent is involved. This criterion has been further strengthened by the recent conclusions of the EDPB cookie banner task force. Microsoft was the latest major victim of this particular gap (following Meta and Google), receiving a 60-million euro fine from France's DPA (CNIL), which shortly after honored TikTok with a 5m euro fine (once again, due to the absence of a "Reject All" button on its first layer, or "not being as easy to reject cookies as it is to accept them") and, not having had enough, went on to give Apple an 8m euro fine for collecting unique device identifiers of visitors to its App Store without prior consent or notice, in order to serve its own ads (such identifiers being akin to a cookie or local storage system when it comes to Article 5(3) of the ePrivacy Directive). The CNIL's ePrivacy-related enforcement spree did not stop at Big Tech. Voodoo, a leader in hyper-casual mobile games, was also a target, receiving a 3 million euro fine for lack of proper consent when serving an IDFV (the unique identifier "for vendors", which Apple does allow app publishers to set when the IDFA or cross-app identifiers have been declined via the App Tracking Transparency prompt). Putting the ePrivacy Directive aside, and well into pure GDPR domain, Discord received an 800k euro fine (again, at the hands of the CNIL) on the basis of: a) a failure to properly determine and enforce a concrete data retention period; b) a failure to consider Privacy by Design requirements in the development of its products; c) accepting very low security levels for user-created passwords; and d) failing to carry out a Data Protection Impact Assessment (given the volume of data it processed and the fact that the tool has become popular among minors).
And yet, one particular piece of news outshone almost everything else in this category: Ireland's DPC imposed a 390 million euro fine on Meta, following considerable pressure from the EDPB, for relying on the contractual legal basis in order to serve personalized advertising - itself the core business model of its social networks. We had a debate on the matter with Tim Walters (English) and Alonso Hurtado (Spanish) on Masters of Privacy, and published an opinion piece on our blog. This last affair is a good segue into Twitter's latest troubles. Its new owner, Elon Musk, not content with having fired key senior executives in charge of EU privacy compliance (including its Chief Privacy Officer and DPO), has suggested that he will oblige non-paying users to consent to personalized advertising. The Irish DPC (once again, in charge of its supervision under the one-stop-shop rule) asked Twitter for a meeting in the hope of drawing a few red lines. Meanwhile, the Spanish AEPD, still breaking all records in terms of monthly fines, sanctioned UPS (70,000 euros) for handing over a MediaMarkt (consumer electronics) delivery to a neighbor, thus breaching confidentiality duties. This will have a serious impact on the regular practices of courier services in the country. Back in the United States, Epic Games and the FTC agreed to a $520m fine for directly targeting children under the age of 13 with its Fortnite game (a default setting that allows them to engage in voice and text communications with strangers made it worse), as well as for using "dark patterns" in in-game purchases. Separately, in what we believe is the first case of its kind, even counting the EU (the ECJ FashionID case is possibly the closest precedent), BetterHelp received a $7.8m FTC fine for using the Facebook Lookalike Audiences feature (and alternative offerings in the programmatic advertising space, including those of Criteo, Snapchat or Pinterest) to find potential customers on the basis of their similarity to the online mental health service's current user base. This involved sensitive data and followed repeated assurances by BetterHelp that data would in no case be shared with third parties. On the private lawsuits front (especially important in the US), Meta agreed to pay $725m after a class action was brought in California against Facebook on the back of the ever-present Cambridge Analytica scandal. Also, the Illinois Biometric Information Privacy Act (BIPA) kept putting money into the pockets of claimants and class action lawyers, in this case forcing Whole Foods (an upscale organic food supermarket chain owned by Amazon) to settle for $300,000. We have previously covered cases against TikTok, Facebook and Snapchat, although here it was the monitoring, via "voiceprints", of its own employees (rather than its customers) that triggered the lawsuit.
Legitimate Interest strikes back
To finish with this section, very recent developments justify turning our eyes back to the UK and the EU, as there is growing momentum for the acceptance of legitimate interest as a legal basis for purely commercial or direct marketing purposes: While the CJEU decides on a question referred by a Dutch court in January, in a case where the DPA fined a tennis association for relying on legitimate interest to share member details with its sponsors (who then sent commercial offers to them), a UK court (the First-tier Tribunal) has ruled against the ICO (the UK DPA) and in favor of Experian (a well-known data broker) for collecting data about 5.3m people from publicly available sources, including the electoral register, to build customer profiles and subsequently sell them to advertisers. Experian relied on legitimate interest and found it too burdensome to properly inform every single individual (this being the ICO's main point of contention). The decision does appear to indicate that using legitimate interest would not be possible if the original data collection had been based on consent, but even this is not entirely clear. So, just to make it even more clear and simple, the UK Government presented a new draft of the UK Data Protection Bill on March 8th that includes a pre-built shortcut to using legitimate interest without the need for the so-called three-part test (purpose, necessity, balancing). Data controllers can now go ahead with this legal basis if they find their purpose in a non-exhaustive list provided, which includes direct marketing.
Competition and Digital Markets
Google was sued by the Department of Justice for anti-competitive behavior in its dominance of the AdTech stack across the open market (the ads that are shown across the web, beyond its own "walled gardens"), using its dominance of the publisher ad server market (supply side) to further strengthen its stranglehold on the demand side (advertisers, many of them already glued to its Google Ads or DV360 platforms in order to invest in search keywords or YouTube inventory) and, worse, artificially manipulating its own ad exchange to favor publishers at the expense of advertisers, thereby reinforcing the flywheel, as digital media publishers found themselves with even fewer incentives to work with competing ad servers.
Zero-Party Data and Future of Media
(The piece of news below obliges us to combine both categories this season.) The BBC has rolled out its own version of Solid pods to allow its customers to leverage their own data (exported from Netflix, Spotify, and the BBC) in order to obtain relevant recommendations while staying in full control of that data. Perhaps a small step towards individual agency, but a giant one for a digital media ecosystem mostly butchered by the untenable notice-and-consent approach derived from the current legal framework, which takes us full circle back to Elizabeth Renieris' new book.
House of Pain: new EU cyber regulations. NIS2, DORA, the Cyber Resilience and Artificial Intelligence acts; have you started to familiarise yourself with the new EU cyber regulations that are coming into force? In this episode, Robby welcomes Rolf von Roessing, former Vice Chair of ISACA Global and CEO of FORFA Consulting, a German company specialising in senior-level consultancy and advisory work. During their conversation, Rolf provides an introduction to a few new and upcoming EU regulations many are now starting to familiarise themselves with: the Network and Information Security Directive, version 2 (NIS2), the EU Cyber Resilience Act, the Artificial Intelligence Act and the Digital Operational Resilience Act (DORA). Rolf walks us through these upcoming regulations and provides an overview of the main differences between them, who the regulations are for and who they will affect. Feel free to check out the video version of the podcast on ISACA Norway's channel - https://www.youtube.com/@isacanorwaychapter
EU MDR extension
Implementation of the Medical Device Regulation: https://data.consilium.europa.eu/doc/document/ST-15520-2022-INIT/en/pdf
Provisional Agenda, 9th Meeting – Implementation of the Medical Device Regulation (MDR): https://data.consilium.europa.eu/doc/document/ST-15453-2022-INIT/en/pdf
Implementation of the Medical Device Regulation (MDR): EU MDR transition period extension proposal by the European Commission: https://video.consilium.europa.eu/event/en/26353
Erik Vollebregt article: https://medicaldeviceslegal.com/2023/01/01/mdr-and-ivdr-outlook-for-2023/
Implementing rolling plan
Implementation Rolling Plan: Regulation (EU) 2017/745 and Regulation (EU) 2017/746 – latest update: November 2022: https://health.ec.europa.eu/system/files/2022-12/md_rolling-plan_en.pdf
Borderline manual
Manual on borderline and classification under Regulations (EU) 2017/745 and 2017/746 – Version 2 – December 2022: https://health.ec.europa.eu/latest-updates/manual-borderline-and-classification-under-regulations-eu-2017745-and-2017746-version2-december-2022-2022-12-15_en
Team-NB
AI Act for Notified Bodies – Team-NB Position Paper – The designation of notified bodies under the upcoming Artificial Intelligence Act: https://www.team-nb.org/wp-content/uploads/members/M2022/Team-NB PositionPaper-AI Designation-V1-20221216.pdf
Notified Bodies appointed
QMD Services GmbH (NB 2962), 8th Notified Body designated under IVDR (EU) 2017/746: https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=notification.html&ntf_id=320456&version_no=1
ICIM S.P.A., 36th Notified Body designated under MDR (EU) 2017/745: https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=notification.html&ntf_id=320256&version_no=12
11:31 UK Approved bodies
UK approved bodies for medical devices: https://www.gov.uk/government/publications/medical-devices-uk-approved-bodies/uk-approved-bodies-for-medical-devices
Training to attend and Books to read
Green Belt 24th Edition: https://school.easymedicaldevice.com/course/gb24/
EUDAMED Simplified, 28th February 2023: https://eudamed.com/index.php/eudamed-training/
PRRC Training, 28 Feb 2023: https://boumansconsulting.com/prrc-academy-cat/2023-02-28-03-07-in-house-manufacturer-prrc-starter-training/
Erik Vollebregt book (code easymedicaldevice10): https://medicaldeviceslegal.com/2022/10/27/the-2nd-edition-of-the-enriched-mdr-and-ivdr-is-available-now/
MDCG 2022-17
MDCG position paper on 'hybrid audits' – December 2022: https://health.ec.europa.eu/latest-updates/mdcg-2022-17-mdcg-position-paper-hybrid-audits-december-2022-2022-12-06_en
MDCG 2022-18
MDCG position paper on the application of Article 97 MDR to legacy devices for which the MDD or AIMDD certificate expires before the issuance of an MDR certificate: https://health.ec.europa.eu/latest-updates/mdcg-position-paper-application-art97-mdr-legacy-devices-which-mddaimdd-certificate-expires-issuance-2022-12-09_en
MDCG 2022-19 and 2022-20
Performance study application/notification documents under Regulation (EU) 2017/746: https://health.ec.europa.eu/latest-updates/mdcg-2022-19-performance-study-applicationnotification-documents-under-regulation-eu-2017746-2022-12-12_en
Substantial modification of performance study under Regulation (EU) 2017/746: https://health.ec.europa.eu/latest-updates/mdcg-2022-20-substantial-modification-performance-study-under-regulation-eu-2017746-december-2022-2022-12-14_en
MDCG 2022-21
Guidance on Periodic Safety Update Report (PSUR) according to Regulation (EU) 2017/745: https://health.ec.europa.eu/latest-updates/mdcg-2022-21-guidance-periodic-safety-update-report-psur-according-regulation-eu-2017745-december-2022-12-16_en
Switzerland Annex XVI products
Frequently Asked Questions on medical devices – FAQ MD: update of the section "Products without medical purpose": https://www.swissmedic.ch/swissmedic/en/home/medical-devices/regulation-of-medical-devices/faq.html
US Product Codes
Non-Invasive Body Contouring Technologies: https://www.fda.gov/medical-devices/aesthetic-cosmetic-devices/non-invasive-body-contouring-technologies
Augmented Reality and Virtual Reality in Medical Devices: https://www.fda.gov/medical-devices/digital-health-center-excellence/augmented-reality-and-virtual-reality-medical-devices
SFDA Classification
Guidelines for classification of medical devices and supplies: https://www.sfda.gov.sa/sites/default/files/2022-12/MDS–G008.pdf
PODCAST nostalgia
Team-PRRC panel discussion: https://podcast.easymedicaldevice.com/210-2/
Is EU MDR extended? https://podcast.easymedicaldevice.com/211-2/
Grow your LinkedIn profile: https://podcast.easymedicaldevice.com/212-2/
Nearly five years after the implementation of the EU General Data Protection Regulation, Europe is immersed in a digital market strategy that is giving rise to a host of new, interconnected regulation. Among this complexity resides the proposed Artificial Intelligence Act. Originally presented by the European Commission in April 2021, the AI Act is now in the hands of the Council of the European Union and the European Parliament. If passed, this would be the world's first comprehensive, horizontal regulation of AI. On my visit to Brussels for the IAPP Data Protection Congress, I had the opportunity to meet with AI Act Co-rapporteur and Romanian Member of the European Parliament Dragoș Tudorache in his office. During our extended conversation, we discussed the risk framework for the proposal, how the legislation will intersect with existing regulations, like the GDPR, current sticking points with stakeholders, and what this means for privacy and data protection professionals.
Interview with Philipp Adamidis, co-founder and CEO of QuantPi. In today's afternoon episode we welcome Philipp Adamidis, co-founder and CEO of QuantPi, and talk with him about the successfully closed pre-seed financing round of 2.5 million euros. QuantPi has developed a platform that lets companies ensure that the legal, economic, ethical, and reputational risks associated with their AI solutions are identified, assessed, and mitigated. Numerous initiatives to regulate and standardize AI, such as the European Union's Artificial Intelligence Act (AIA), make it necessary for AI-based products to be efficient, safe, and compliant. Under the AIA, for example, companies can be fined up to 6% of their global annual revenue if they cannot explain the decisions made by their artificial intelligence. To this end, the platform collects artifacts and metrics about AI systems along the most important risk and audit dimensions and integrates seamlessly with modern ML and BI tools. After roughly seven years of research, QuantPi was founded in 2020 by leading minds in mathematics, computer science, and economics as a spin-off of the CISPA Helmholtz Center for Information Security and Saarland University. Originally, the goal was merely to make black-box AI understandable through explainable AI. Through research and development, however, the technology has evolved into a holistic platform that can make AI safer and more understandable for the various actors in society, research, and business. The Saarbrücken-based development platform for responsible AI has now raised 2.5 million euros in a pre-seed financing round led by Capnamic. The German early-stage venture capital investor provides its portfolio companies not only with financial support but also with its global network, hands-on support, and mentoring. First Momentum, Ash Fontana, author of "The AI-First Company", ex-Instana founder Mirko Novakovic, and other business angels also participated in the round. QuantPi plans to use the fresh capital to expand the team and begin commercializing the platform, focusing on highly sensitive and mission-critical use cases in financial services, healthcare, and autonomous driving.
Making sense of health data, telemedicine, and adopting AI/ML are some of the newest innovations in healthcare. Intel's Nathan Peper, Head of Strategy and Business Innovations, and Patrick Boisseau, Director General, Strategic Initiatives at MedTech Europe, shared their insight with Michelle Dawn Mooney, host of Health and Life Sciences at the Edge, on the policy challenges related to powering the healthcare network in the EU and U.S. Navigating interoperability in the U.S. is challenging. Currently, there is a large focus on lowering the associated costs of healthcare while improving the patient experience. With a shortage of healthcare workers, this proves difficult. Additionally, hospital networks must remain business-oriented, pushing profit over expense. "Policy is just the first step of admitting a problem we need to address; once you have these emerging innovations, it makes it a lot easier to push beyond the boundaries of policy to better benefit the healthcare system - patients and healthcare workers," says Peper. By expanding broadband to more rural areas of the U.S. and increasing data storage and access regulations, federated learning can assist and potentially provide a robust statistical model for medical data while preserving patient privacy. In Europe, some of the challenges, such as the interoperability of systems, are similar to the U.S. Yet Europe comes with its own unique challenges, including implementing a standardized solution for data transportation, support, and language translations across 27 member states while maintaining internal communication. "Policy is also an enabler, even though sometimes it seems like a constraint… digital technologies are a fantastic enabler and will sooner or later improve the delivery of healthcare, in central facilities, but also at the patient's house. The biggest impact is the change in the relationship between the patient and healthcare professionals… Now, because of digital technology, patients are empowered like never before," explains Boisseau. Current European regulations include the Artificial Intelligence Act, the Data Act, and cybersecurity rules, which help improve the adoption of digital technologies and privacy in Europe. Patients now have more control over information and their conditions, while also having the ability to access a much larger knowledge set. The benefits of digital solutions include assisted diagnostic decision support systems at all levels and improved customer- and patient-centric models. "We are far from deployment, but the existing examples are very promising, and if we go towards a wider adoption, cost will go down; digital health is also one way to reduce cost pressure on hospitals and one way to contribute to managing the shortage of skills," says Boisseau. Learn more about the policy challenges that are defining healthcare in the EU and U.S. by connecting with Nathan Peper and Patrick Boisseau on LinkedIn, or visit MedTech Europe and Intel. Subscribe to this channel on Apple Podcasts, Spotify, and Google Podcasts to hear more from the Intel Internet of Things Group.
The European Parliament and Council are currently negotiating the Artificial Intelligence Act, which introduces a common regulatory and legal framework for Artificial Intelligence (AI) in all domains except the military. The negotiations, however, pose several challenges for legislators. How should the risk categories be established? Do they take into account unintended impacts of AI? What divergences between the public and private sectors could emerge, and how can they be addressed? And how is the AI Act going to help protect fundamental rights and values? We will answer these questions with Maria-Manuel Leitão-Marques, MEP from Portugal, Vice-Chair of the Committee on the Internal Market and Consumer Protection and a member of the OECD Parliamentary Group on AI; and Ilina Georgieva, a research scientist working on AI, cyber regulation and cyber norms at the Netherlands Organisation for Applied Scientific Research (TNO), an independent research organisation. This podcast is the second in the 2022 series on Artificial Intelligence, brought to you by the OECD's Global Parliamentary Network and the European Parliament's Panel for the Future of Science and Technology, also known as STOA. Guests: Maria-Manuel Leitão-Marques, Ilina Georgieva Host: Christopher Mooney To learn more about the Netherlands Organisation for Applied Scientific Research, go to https://www.tno.nl/en/ To learn more about the EU Parliament's Panel for the Future of Science and Technology's work on AI, go to: https://www.europarl.europa.eu/stoa/en/home/highlights To learn more about the OECD Global Parliamentary Network, go to: https://www.oecd.org/parliamentarians/ To learn more about the OECD's work on AI, go to: oecd.ai
France, which holds the presidency of the Council of the European Union, has presented its vision and proposed several changes to the text. As Euractiv reports, there is a sense that France fully intends to weigh in on certain aspects of the Artificial Intelligence Act, notably regarding the use of artificial intelligence by law enforcement. Judging by the proposed amendments, France's objective appears to be not to block this use, but rather to "offer greater flexibility" to law enforcement. Read the article on Siècle Digital. See Acast.com/privacy for privacy and opt-out information.
Frans van Bruggen: Preparing for the EU's Artificial Intelligence Act
The Artificial Intelligence Act is still on the EU's legislative drawing board, but the debate over how the new law will be enforced has already begun. National data-protection authorities say they're best placed to police the new rules; but some governments in the bloc are already starting to set up designated AI regulators — a move that may hamper coordinated EU oversight. Also on today's podcast: Why the makers of connected cars aren't keen to emulate the crash-or-crash-through approach to data collection embraced by digital platforms.
Governments around the world are looking at their legal frameworks and how they apply to the digital technologies and platforms that have brought widespread disruptive change to their economies, societies and politics. Most governments are aware that their regulations are inadequate to address the challenges of an industry that crosses borders and pervades all aspects of daily life. Three regulatory approaches are emerging: the restrictive regime of the Chinese state; the lax, free-market approach of the United States; and the regulatory frameworks of the European Union, which are miles ahead of those of any other Western democratic country. In this episode of Big Tech, host Taylor Owen speaks with Mark Scott, the chief technology correspondent at Politico, about the state of digital technology and platform regulations in Europe. Following the success of implementing the General Data Protection Regulation, which went into effect in 2018, the European Parliament currently has three big policy proposals in the works: the Digital Services Act, the Digital Markets Act and the Artificial Intelligence Act. Taylor and Mark discuss how each of these proposals will impact the tech sector, their potential for adoption across Europe, and how many other nations, including Canada, are modelling similar regulations within their own countries.
The European Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification of AI systems with different requirements and obligations tailored to a 'risk-based approach'. Some AI systems presenting 'unacceptable' risks would be prohibited. A wide range of 'high-risk' AI systems would be authorised, but subject to a set of requirements and obligations in order to gain access to the EU market. AI systems presenting only 'low or minimal risk' would be subject to very light transparency obligations. In this podcast, we talk about the EU Artificial Intelligence Act, the first ever comprehensive attempt at regulating the uses and risks of this emerging technology. - Original publication on the EP Think Tank website - Subscription to our RSS feed in case you have your own RSS reader - Podcast available on Deezer, iTunes, TuneIn, Stitcher, YouTube Source: © European Union - EP
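To make the tiering described above concrete, here is a minimal, purely illustrative Python sketch of the proposal's four risk levels. The tier names and their broad legal consequences follow the Commission's proposal; the RiskTier and classify names and the example use cases mapped to each tier are our own hypothetical assumptions for illustration, not anything defined in the Act.

```python
from enum import Enum


class RiskTier(Enum):
    # Tier names follow the Commission's proposal; the summaries are paraphrases.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to requirements and obligations before EU market access"
    LIMITED = "permitted, subject to light transparency obligations"
    MINIMAL = "permitted, largely unregulated"


# Hypothetical mapping of example use cases to tiers, loosely based on
# examples commonly cited around the proposal -- not an authoritative list.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI managing critical infrastructure": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Look up an example use case; default to MINIMAL if unknown (illustrative only)."""
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)


if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        tier = classify(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```

Running the sketch prints each example use case with its tier, which mirrors the proposal's core idea: obligations scale with the risk a system poses, not with the underlying technology.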
Tonya Hall questions Dr. Benjamin Mueller, senior analyst at the Center for Data Innovation, about the pros and cons of the EU's proposed Artificial Intelligence Act. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, we are going to talk about the regulation of artificial intelligence and machine learning to understand what businesses need to think about from a regulatory perspective. Our special guest, Ben, talks about the global context of regulations around AI and the complexity of the parallel 'race to AI' and 'race to regulation'. In particular, we look at the proposed Artificial Intelligence Act from the European Union and consider its impact on innovation. We explore what businesses and DPOs need to consider when building, using, or deploying machine learning or artificial intelligence systems. As we enter what is arguably the start of our journey into a new era of innovation with huge benefits to humankind, this podcast will follow developments in and around the regulation of artificial intelligence and machine learning. GDPR Now! is brought to you by Data Protection 4 Business & This Is DPO. www.dpo4business.co.uk www.thisisdpo.co.uk Special Guest: Benjamin Mueller. Ben is a senior policy analyst at the Center for Data Innovation, focusing on AI and technology governance. Ben's AI Act explainer is here: https://datainnovation.org/2021/05/the-artificial-intelligence-act-a-quick-explainer/
In this episode of our TECHPLACE™ Talk series, Danielle Ochs and Jenn Betts are joined by Colleen DeRosa, Stephen Riga, and Justin Tarka to address new guidance relating to employers' use of artificial intelligence. In particular, the speakers discuss the Federal Trade Commission's (FTC) recent guidance in the United States and the European Commission's proposal for the Artificial Intelligence Act.
The European Commission has presented a Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). The act is intended to support the development of European AI solutions and to foster the European approach of trustworthy and compliant development. In this episode we assess the document further. The panel consists of O.J. Gstrein, Gloria Gonzalez Fuster, Cornelia Kutterer, and Sofia Ranchordas. Links: Proposal: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence Consultation: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Kunstliche-Intelligenz-ethische-und-rechtliche-Anforderungen_de Cornelia Kutterer (https://blogs.microsoft.com/eupolicy/author/corneliakutterer/), O.J. Gstrein (https://europainstitut.de/en/faculty-research/faculty/team-g/gstrein), Gloria Gonzalez Fuster (https://lsts.research.vub.be/en/gloria-gonz%C3%A1lez-fuster) and Sofia Ranchordas (https://www.sofiaranchordas.com/).
Alex Moltzau, who works on AI policy and ethics at the Norwegian Artificial Intelligence Research Consortium (NORA), gives us a lightning-quick overview of the new EU document "Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)". And, as promised in the episode, here is the link: alexmoltzau.medium.com.
This week on the AI Business podcast, we look at the draft European regulation on artificial intelligence, a.k.a. the Artificial Intelligence Act. This long-expected piece of legislation will be the first attempt to regulate AI at a supranational level – but does it go far enough to meet the aim of stopping AI systems that pose a 'clear threat' to citizens' rights and livelihoods? It is not just a draft, but a declaration of intent – the proposed policy offers a vision that's very different from both the relaxed regulatory approach seen in the US and the embrace of AI for the purposes of the state practiced in China. The EU framework proposes to categorize AI systems in terms of their impact and the risk they pose. 'Unacceptable risk' would cover systems deemed to be a "clear threat to the safety, livelihoods, and rights of people" – such as systems designed to manipulate human behavior, or those used for 'social scoring'. The 'high-risk' category would cover systems for critical infrastructure and some systems for law enforcement. 'Limited risk' and 'minimal risk' categories would cover products like chatbots, AI-enabled video games, and spam filters. The draft appears to take a strong position on biometric surveillance systems in public spaces: at first sight these seem to be banned, but the document lists a large number of potential exceptions. We're not the only ones confused by this; the EU's chief data protection supervisor is confused too. We also cover: Gonzo the Cat! Bernie memorabilia! Reasons to distrust the intelligence services! Apple vs Facebook! As always, you can find the people responsible for the circus podcast online: Max Smolaks (@maxsmolax) Sebastian Moss (@SebMoss) Tien Fu (@tienchifu) Ben Wodecki (@benwodecki)