Podcasts about digital policy

  • 98 podcasts
  • 126 episodes
  • 34m avg duration
  • 1 weekly episode
  • Latest episode: Jun 10, 2025

POPULARITY (2017–2024)


Latest podcast episodes about digital policy

Irish Tech News Audio Articles
ISPCC announces global project to prevent online child sexual exploitation and abuse

Jun 10, 2025 · 4:26

The project, spearheaded by Greek non-profit child welfare organisation The Smile of the Child, will be co-created by children and young people to ensure their voices are heard.

The ISPCC is honoured to announce its participation in a worldwide project designed to transform how we prevent and respond to online child sexual exploitation and abuse. Safe Online, a global fund dedicated to eradicating online child sexual exploitation and abuse, is funding the project, called "Sandboxing and Standardising Child Online Redress". The COR Sandbox project will establish a first-of-its-kind mechanism to advance child online safety through collaboration across sectors, borders and generations. The project is led by The Smile of the Child, Greece's premier child welfare organisation, and ISPCC is a partner alongside the Young and Resilient Research Centre at Western Sydney University, Child Helpline International and the Centre for Digital Policy at University College Dublin.

Sandboxes bring together industry, regulators and customers in a safe space to test innovative products and services without incurring regulatory sanctions; they are mainly used in the finance sector to test new services, and the EU is increasingly encouraging their use in high technology and artificial intelligence. Through the participation of youth, platforms, regulators and online safety experts, this first regulatory sandbox for child digital wellbeing will provide consistent, systemic care and redress for children harmed online, based on their rights under the United Nations Convention on the Rights of the Child (UNCRC). Getting reporting and redress right means that we can keep track of harms and identify systemic risk. Co-designing the reporting and redress process with young people as equitable participants can help us understand what they expect from the reporting process and what remedies are fair for them, putting Article 12 of the UNCRC into action.

The project also benefits from the guidance of renowned digital safety experts, including Project Lead and Scientific Coordinator Ioanna Noula, PhD, an international expert on tech policy and children's rights; pioneering online safety and youth rights advocate Anne Collier; youth rights and participation expert Amanda Third, PhD, of the Young and Resilient Research Centre; international innovation management consultant Nicky Hickman; IT innovation and startup founder Jez Goldstone; and leading child online wellbeing scholar Tijana Milosevic, PhD.

ISPCC Head of Policy and Public Affairs Fiona Jennings said: "This project is a wonderful example of what we can achieve when we collaborate and listen to children and young people. Having robust online reporting mechanisms in place is a key policy objective for ISPCC and this project will go a long way towards making the online world safer for children and young people to participate in."

Project lead Ioanna Noula said: "ISPCC's contribution to a project which seeks to build coherence around the issue of online redress will be a catalyst for real and substantial change in the area of online reporting. Helplines play a key role in flagging illegal and/or harmful content. As the experts in listening and responding to children, ISPCC can provide insight from an Irish context to help spearhead the implementation of the Digital Services Act and support the wellbeing of children online."

See more stories here. More about Irish Tech News: Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to ...

SSPI
Better Satellite World: India Rising – Building Bridges in the Global Space Economy

May 30, 2025 · 52:26

In this Better Satellite World podcast, based on the May 2025 edition of the New York Space Business Roundtable, leaders from Pixxel, Dhruva Space, TMT Law Practice, VegaMX, and the US-India Business Council have a candid conversation about opportunity, access, and alignment in India's rapidly growing space sector. Our speakers include:
• Chetan Bikkina, Chief Commercial Officer, Dhruva Space Private Limited
• Jay Gullish, Executive Director, Digital Policy, U.S.-India Business Council (USIBC)
• Abhishek Malhotra, Senior Advocate, Chamber of Abhishek Malhotra
• Vivek Mital, CEO, VegaMX Inc.
• Aakash Parekh, Chief Commercial Officer, Pixxel
India is rapidly transforming its space sector, moving from a government-led model dominated by ISRO to a new era of public-private collaboration and global engagement. Recent policy reforms - including liberalized FDI caps, the launch of IN-SPACe as a commercial regulator, and the Indian Space Policy 2023 - are opening doors for private enterprise and international investment. At the same time, Indian companies are stepping into global supply chains, developing satellite constellations, launch vehicles, and Earth observation capabilities that are redefining India's position in the global space economy. Yet foreign participation still requires careful navigation of regulatory, legal, and strategic frameworks shaped by India's evolving priorities.

The Agenda with Steve Paikin (Audio)
Is Data Collection Undermining Human Rights?

May 21, 2025 · 24:32

In "We, the Data: Human Rights in the Digital Age," author Wendy H. Wong makes the case that the collection and tracking of our data by Big Tech comes at a cost to our humanity. She's a professor of political science and principal's research chair at the University of British Columbia and her book won the 2024 Balsillie Prize for Public Policy. She joins Steve Paikin to discuss the link between data and human rights. See omnystudio.com/listener for privacy information.

Afternoon Drive with John Maytham
Fired Over a Post? Why labour law needs to catch up with social media

May 7, 2025 · 8:05

John Maytham is joined by Heike Hartnick, a recent Master of Laws graduate from the University of the Western Cape and candidate legal practitioner, to unpack the legal grey areas surrounding dismissals for misconduct outside the workplace, particularly on social media.
Follow CapeTalk on: Facebook: www.facebook.com/CapeTalk | TikTok: www.tiktok.com/@capetalk | Instagram: www.instagram.com/capetalkza | YouTube: www.youtube.com/@CapeTalk567 | X: www.x.com/CapeTalk
See omnystudio.com/listener for privacy information.

Global Minnesota
A Primer on AI

Apr 30, 2025 · 33:48

Global Minnesota's Great Decisions foreign policy discussion groups make international topics accessible to people all across the state. Anyone can join or start a group to begin discussing important global topics in your community. One of this year's Great Decisions topics is AI and National Security. Artificial Intelligence has the potential to change large sections of the global economy, social interactions, and geopolitical dynamics. On May 19, Global Minnesota will be hosting an in-depth discussion all about the intersection of AI and national security with one of our newest Great Decisions speakers, Ren Bin Lee Dixon. This episode is a primer on the upcoming conversation with Ren Bin to help bring you up to speed on all things AI. She is an Artificial Intelligence policy researcher with a Master's in Public Policy, specializing in AI governance, from the Humphrey School of Public Affairs at the University of Minnesota. As a Research Fellow at the Center for AI and Digital Policy, she provided policy recommendations to governments and multilateral organizations, shaping frameworks for responsible AI governance. Ren Bin also collaborates with the Center for Security and Emerging Technology on policy briefs addressing AI harm.
Links:
• Upcoming event: AI & National Security
• Great Decisions Discussion Groups
• Podcast episode: Great Decisions Discussion Groups
This interview was recorded on April 21, 2025.

Conecta Ingeniería
5G, la nueva revolución tecnológica (5G, the new technological revolution)

Apr 25, 2025 · 54:54

We can't stop talking about 5G, but what exactly is it? How will it change our lives? Francisco Hortigüela (Director General at AMETIC) and Amalia Pelegrín Martínez-Canales (Director General of Digital Policy & Talent at AMETIC) explain what the rollout of the fifth-generation mobile network will mean: it will change forever the way we communicate and transform our daily habits, as objects that are part of our everyday lives, from household appliances to cars, will be able to connect with us and/or with each other. They also analyse the impact and potential this revolution will have on our society at the industrial level, since it opens up a multitude of new possibilities, from robotizing an entire factory to carrying out maintenance services remotely.

Disruption Talks by Netguru
Ep. 177. Implementing AI Agents, Getting Started and Tackling Key Challenges – with Microsoft, Wise and Center for AI and Digital Policy

Apr 9, 2025 · 32:17

From planning to pitfalls: how do you actually get started with AI agents? Listen in for a real-world take on adoption, risks, and navigating early implementation challenges.
Speakers:
• Marc Teipel, Azure App Innovation & AI Specialist – Enterprise at Microsoft
• Pat Szubryt, AI Policy Lead at Center for AI and Digital Policy
• Egor Kraev, Former Head of AI at Wise
Host: Justyna Kałębasiak, Delivery Lead at Netguru

Canadian Government Executive Radio
CGE Radio: Digital Trust Starts with Modern and Accessible Government Infrastructure

Apr 7, 2025 · 17:19

In this episode of CGE Radio, J. Richard Jones sits down with EY Canada's Janice Horne, Jen Mossop Scott, and Rohit Boolchandani to explore how digital credentials could play a pivotal role in strengthening Canada's economy and restoring its global competitiveness. With Canada navigating a tense trade dispute with the U.S. and seeking ways to enhance productivity, resiliency, and interprovincial trade, our guests unpack how secure and trustworthy digital identity systems can be a key enabler of growth and unity. We dive into:
• Why global rankings matter and how Canada can climb back up
• Lessons from India's Aadhaar system and what Canada could adopt
• How to get started in a complex digital policy landscape
• Ensuring citizen trust, privacy, and equity in adoption
Tune in for an insightful discussion on how governments can put citizen-centric digital trust at the heart of Canada's economic future.

Shipping Forum Podcast
2025 3rd Capital Link Cyprus Business Forum | Technology Gateway to Europe, Middle East and North Africa – Keynote Remarks

Apr 4, 2025 · 15:11

KEYNOTE REMARKS – CYPRUS: A TECHNOLOGY GATEWAY TO EUROPE, MIDDLE EAST & NORTH AFRICA
H.E. Nicodemos Damianou, Deputy Minister of Research, Innovation & Digital Policy – Republic of Cyprus
2025 3rd Capital Link Cyprus Business Forum, Friday, April 4, 2025, Metropolitan Club, New York City
Organized in cooperation with the Cyprus Union of Shipowners and supported by the Deputy Shipping Ministry of the Republic of Cyprus and Invest Cyprus, this premier forum will foster an open dialogue on Cyprus's business and investment landscape, highlighting its openness and competitiveness on the global stage. The event convenes a distinguished delegation of government officials from Cyprus and industry leaders from the private sector to address key topics, including security and stability, energy, cyber technology, banking and finance, and shipping. For further information visit: https://forums.capitallink.com/cyprus/2025/overview.html

Shipping Forum Podcast
2025 3rd Capital Link Cyprus Business Forum | Technology Gateway to Europe, Middle East and North Africa

Apr 4, 2025 · 26:31

CYPRUS: A TECHNOLOGY GATEWAY TO EUROPE, MIDDLE EAST & NORTH AFRICA
Keynote Remarks: H.E. Nicodemos Damianou, Deputy Minister of Research, Innovation & Digital Policy – Republic of Cyprus
Moderator: Mr. Marios A. Cosma, Managing Partner – Treppides
Panel Discussion:
• H.E. Nicodemos Damianou, Deputy Minister of Research, Innovation & Digital Policy – Republic of Cyprus
• Mr. Evan Kotsovinos, Vice President of Engineering and General Manager – Google
• Mr. Giorgos Zacharia, Co-CEO – Insurify; Member – Cyprus AI Taskforce
• Mr. Simon Liepold, Senior Director, United Nations and International Organizations – Microsoft
• Mr. Andreas Panayi, Founder – Kinisis Ventures Ltd; Investment Advisory Committee member – Kinisis Ventures Fund
2025 3rd Capital Link Cyprus Business Forum, Friday, April 4, 2025, Metropolitan Club, New York City
Organized in cooperation with the Cyprus Union of Shipowners and supported by the Deputy Shipping Ministry of the Republic of Cyprus and Invest Cyprus, this premier forum will foster an open dialogue on Cyprus's business and investment landscape, highlighting its openness and competitiveness on the global stage. The event convenes a distinguished delegation of government officials from Cyprus and industry leaders from the private sector to address key topics, including security and stability, energy, cyber technology, banking and finance, and shipping. For further information visit: https://forums.capitallink.com/cyprus/2025/overview.html

C-Suite Market Update
2025 3rd Capital Link Cyprus Business Forum | Technology Gateway to Europe, Middle East and North Africa

Apr 4, 2025 · 26:31

CYPRUS: A TECHNOLOGY GATEWAY TO EUROPE, MIDDLE EAST & NORTH AFRICA
Keynote Remarks: H.E. Nicodemos Damianou, Deputy Minister of Research, Innovation & Digital Policy – Republic of Cyprus
Moderator: Mr. Marios A. Cosma, Managing Partner – Treppides
Panel Discussion:
• H.E. Nicodemos Damianou, Deputy Minister of Research, Innovation & Digital Policy – Republic of Cyprus
• Mr. Evan Kotsovinos, Vice President of Engineering and General Manager – Google
• Mr. Giorgos Zacharia, Co-CEO – Insurify; Member – Cyprus AI Taskforce
• Mr. Simon Liepold, Senior Director, United Nations and International Organizations – Microsoft
• Mr. Andreas Panayi, Founder – Kinisis Ventures Ltd; Investment Advisory Committee member – Kinisis Ventures Fund
2025 3rd Capital Link Cyprus Business Forum, Friday, April 4, 2025, Metropolitan Club, New York City
Organized in cooperation with the Cyprus Union of Shipowners and supported by the Deputy Shipping Ministry of the Republic of Cyprus and Invest Cyprus, this premier forum will foster an open dialogue on Cyprus's business and investment landscape, highlighting its openness and competitiveness on the global stage. The event convenes a distinguished delegation of government officials from Cyprus and industry leaders from the private sector to address key topics, including security and stability, energy, cyber technology, banking and finance, and shipping. For further information visit: https://forums.capitallink.com/cyprus/2025/overview.html

C-Suite Market Update
2025 3rd Capital Link Cyprus Business Forum | Technology Gateway to Europe, Middle East and North Africa – Keynote Remarks

Apr 4, 2025 · 15:11

KEYNOTE REMARKS – CYPRUS: A TECHNOLOGY GATEWAY TO EUROPE, MIDDLE EAST & NORTH AFRICA
H.E. Nicodemos Damianou, Deputy Minister of Research, Innovation & Digital Policy – Republic of Cyprus
2025 3rd Capital Link Cyprus Business Forum, Friday, April 4, 2025, Metropolitan Club, New York City
Organized in cooperation with the Cyprus Union of Shipowners and supported by the Deputy Shipping Ministry of the Republic of Cyprus and Invest Cyprus, this premier forum will foster an open dialogue on Cyprus's business and investment landscape, highlighting its openness and competitiveness on the global stage. The event convenes a distinguished delegation of government officials from Cyprus and industry leaders from the private sector to address key topics, including security and stability, energy, cyber technology, banking and finance, and shipping. For further information visit: https://forums.capitallink.com/cyprus/2025/overview.html

The Gateway
AI Policy and Regulation

Mar 1, 2025 · 46:59

In this episode, The Gateway was thrilled to welcome Mélissa M'Raidi-Kechichian (they/them), a Research and Advocacy Fellow at the Center for AI and Digital Policy. Mélissa is a social entrepreneur and civic tech practitioner working at the intersection of technology, AI regulation, and advocacy. As the founder of Activists Of Tomorrow, they focus on how digital spaces can be used by everyday people to bring meaningful and lasting change to their community. Mélissa is an expert in AI policy, frameworks, and regulation, and has previously worked in the fields of AI and digital policy, civic technology, and digital identity. They have held several consulting positions in the private sector and are part of the AI ethics Advisory Panel of the Canadian Digital Governance Council. In their free time, Mélissa hosts the podcast they created, Activists of Tech — The Responsible Tech podcast, exploring the intersection of technology and social justice. You can follow it here: https://rss.com/podcasts/activistsoftech/

The Brave Marketer
The EU's Approach to Digital Policy and Lessons Learned From The GDPR

Feb 12, 2025 · 50:36

Kai Zenner, Head of Office and Digital Policy Adviser for German Member of the European Parliament Axel Voss, discusses the emerging regulatory landscape for artificial intelligence in Europe and its implications for innovation and consumer safety. He also discusses implementation hurdles of the EU AI Act, specifically the shortage of AI experts and the complexity of enforcement across 27 member states.
Key Takeaways:
• Challenges with the AI Act (such as vague use cases, balancing innovation with regulation, and the impact on SMEs)
• Lessons from GDPR, including upcoming changes being considered that could impact data privacy
• Horizontal legislative approaches and their implications
• Future prospects for AI regulation in Europe
Guest Bio:
Kai Zenner is Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament, focusing on AI, privacy, and the EU's digital transition. He is involved in negotiations on the AI Act, AI Liability Directive, ePrivacy Regulation, and GDPR revision. A member of the OECD.AI Network of Experts and the World Economic Forum's AI Governance Alliance, Zenner also served on the UN's High-Level Advisory Body on AI. He was named Best MEP Assistant in 2023 and ranked #13 in Politico's Power 40 for his influence on EU digital policy.
About this Show:
The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API.
Music by: Ari Dvorin
Produced by: Sam Laliberte

The Future of Supply Chain
Episode 95: Cyber Fortress: Securing the Digital Lifelines of Global Supply Chains with SAP's Niall Brennan

Feb 5, 2025 · 24:03 · Transcription available

This week, Niall Brennan - former FBI agent turned SAP cybersecurity strategist - details the critical role of securing supply chains amid rising cyber threats. With 22 years of FBI experience in counterterrorism, cybercrime, and intelligence, Brennan now bridges public-private partnerships to safeguard SAP's digital ecosystem. He discusses threat intelligence sharing, incident response, regulatory compliance, and anticipatory resilience strategies, highlighting the need for "personality-resistant" global collaboration to neutralize vulnerabilities in an era of rapid technological disruption. Come join us as we discuss the Future of Supply Chain.

The Privacy Advisor Podcast
Digital policy 2024: A year in review with Omer Tene

Dec 13, 2024 · 41:57

It's hard to believe we've reached the final weeks of 2024, a year filled with policy and legal developments across the map. From the continued emergence of AI governance to location privacy enforcement, and from children's online safety to novel forms of privacy litigation, this was no doubt a year that kept privacy and AI governance pros very busy. One such professional is Goodwin Partner Omer Tene. He's been immersed in many of these thorny issues and, as always, has thoughts about what's transpired in 2024 and what that means for the year ahead. I caught up with Tene to discuss the year in digital policy. Here's what he had to say.

Ralph Nader Radio Hour
AI: Can Frankenstein Be Tamed?

Nov 30, 2024 · 71:40

Ralph welcomes Marc Rotenberg, founder and president of the Center for AI and Digital Policy, to fill us in on the latest international treaty aimed at putting guardrails on the potential Frankenstein monster that is Artificial Intelligence. Plus, as we get to the end of the Medicare enrollment period, we put out one last warning for listeners to avoid the scam that is Medicare Advantage.

Marc Rotenberg is the founder and president of the Center for AI and Digital Policy, a global organization focused on emerging challenges associated with Artificial Intelligence. He serves as an expert advisor on AI policy to many organizations including the Council of Europe, the Council on Foreign Relations, the European Parliament, the Global Partnership on AI, the OECD, and UNESCO.

"What troubles me is the gap between an increasingly obscure, technical, and complex technology—abbreviated into 'AI'—and public understanding. You know, when motor vehicles came and we tried to regulate them and did, people understood motor vehicles in their daily lives. When solar energy started coming on, they saw solar roof panels. They could see it, they could understand it, they could actually work putting solar panels on roofs of buildings. This area is just producing a massively expanding gap between the experts from various disciplines, and the power structure of corporatism, and their government servants and the rest of the people in the world." – Ralph Nader

"The difference between these two types of [AI] systems is that with the old ones we could inspect them and interrogate them. If one of the factors being used for an outcome was, for example, race or nationality, we could say, well, that's impermissible and you can't use an automated system in that way. The problem today with the probabilistic systems that US companies have become increasingly reliant on is that it's very difficult to actually tell whether those factors are contributing to an outcome. And so for that reason, there are a lot of computer scientists rightly concerned about the problem of algorithmic bias." – Marc Rotenberg

"[The sponsors of California SB 1047] wanted companies that were building these big complicated systems to undertake a safety plan, identify the harms, and make those plans available to the Attorney General… In fact, I work with many governments around the world on AI regulation and this concept of having an impact assessment is fairly obvious. You don't want to build these large complex systems without some assessment of what the risk might be." – Marc Rotenberg

"We've always understood that when you create devices that have consequences, there has to be some circuit breaker. The companies didn't like that either. [They said] it's too difficult to predict what those scenarios might be, but that was almost precisely the point of the legislation, you see, because if those scenarios exist and you haven't identified them yet, you choose to deploy these large foundational models without any safety mechanism in place, and all of us are at risk. So I thought it was an important bill and not only am I disappointed that the governor vetoed it, but as I said, I think he made a mistake. This is not simply about politics. This is actually about science, and it's about the direction these systems are heading." – Marc Rotenberg

"That's where we are in this moment—opaque systems that the experts don't understand, increasingly being deployed by organizations that also don't understand these systems, and an industry that says, 'don't regulate us.' This is not going to end well." – Marc Rotenberg

In Case You Haven't Heard with Francesco DeSantis
News 11/27/24

1. Last week, the International Criminal Court issued arrest warrants for Israeli Prime Minister Netanyahu and former Defense Minister Yoav Gallant. According to a statement from ICC prosecutor Karim Khan, the international legal body found reasonable grounds to believe that each has committed war crimes and crimes against humanity, including the use of starvation as a method of warfare and intentionally directing attacks against civilians. This news has been met with varied reactions throughout the world, which have been meticulously documented by Just Security. The United States, which is under no obligation to honor the warrant as it is not a party to the Rome Statute, has said it "fundamentally rejects" the judgment and has called the issuing of warrants "outrageous." Canada, which is party to the Rome Statute, has vowed to uphold its treaty obligations despite its close ties to Israel. Germany, however, another signatory to the Rome Statute, has suggested that it would not honor the warrants. In a statement, Congresswoman Rashida Tlaib said the warrants are "long overdue" and signal that "the days of the Israeli apartheid government operating with impunity are ending." One can only hope that is true.

2. On November 21st, 19 Senators voted for at least one of the three Joint Resolutions of Disapproval regarding additional arms transfers to Israel. As Jewish Voice for Peace Action puts it, "this is an unprecedented show of Senate opposition to President Biden's disastrous foreign policy of unconditional support for the Israeli military." The 19 Senators include Independents Bernie Sanders and Angus King, progressive Democrats like Elizabeth Warren, Chris Van Hollen and Raphael Warnock, and Democratic caucus leaders like Dick Durbin, among many others. Perhaps the most notable supporter, however, is Senator Jon Ossoff of Georgia, whom Ryan Grim notes is the only Democrat representing a state Trump won and who is up for reelection in 2026 to vote for the resolution. Ossoff cited President Reagan's decision to withhold cluster munitions during the IDF occupation of Beirut in a floor speech explaining his vote. The Middle East Eye reports that the Biden Administration deployed Democratic Senate Leader Chuck Schumer to whip votes against the JRDs.

3. Last week, we covered H.R. 9495, aka the "nonprofit killer" bill targeting pro-Palestine NGOs. Since then, the bill has passed the House. Per the Guardian, the bill passed 219-184, with fifteen Democrats crossing the aisle to grant incoming President Trump the unilateral power to obliterate any non-profit organization he dislikes, a list sure to be extensive. Congressman Jamie Raskin is quoted saying "A sixth-grader would know this is unconstitutional…They want us to vote to give the president Orwellian powers and the not-for-profit sector Kafkaesque nightmares." The bill now moves to the Senate, where it is unlikely to pass while Democrats cling to control. Come January, however, Republicans will hold a decisive majority in the upper chamber.

4. President-elect Donald Trump has announced his selection of Congresswoman Lori Chavez-DeRemer as his pick for Secretary of Labor. Chavez-DeRemer is perhaps the most pro-labor Republican in Congress, with the AFL-CIO noting that she is one of only three Republicans to cosponsor the PRO Act and one of eight to cosponsor the Public Service Freedom to Negotiate Act. Chavez-DeRemer was reportedly the favored choice of Teamsters President Sean O'Brien, who controversially became the first ever Teamster to address the RNC earlier this year. While her selection has been greeted with cautious optimism by many labor allies, anti-labor conservatives are melting down at the prospect. Akash Chougule of Americans for Prosperity accused Trump of giving "a giant middle finger to red states" by "picking a teachers union hack" and urged Senate Republicans to reject the nomination.

5. Unfortunately, most of Trump's selections are much, much worse. Perhaps worst of all, Trump has chosen Mehmet Cengiz Öz – better known as Dr. Oz – to lead the Center for Medicare and Medicaid Services. Beyond his lack of qualifications and history of promoting crackpot medical theories, Oz is a longtime proponent of pushing more seniors into privatized Medicare Advantage, or "Disadvantage," plans, per Yahoo! Finance. This report notes that the Heritage Foundation's Project 2025 called for making Medicare Advantage the default health program for seniors.

6. According to CNN, Brazilian police have arrested five people who conspired to assassinate leftist President Luiz Inácio Lula da Silva, better known as Lula, in 2022. This assassination plot was allegedly cooked up even before Lula took office, and included plans to kill Lula's Vice President Geraldo Alckmin and Supreme Court Justice Alexandre de Moraes. The conspirators included a former high-ranking Bolsonaro advisor and military special forces personnel. Reuters reports investigators have discovered evidence that Bolsonaro himself was involved in the scheme.

7. In more news from Latin America, Drop Site reports that the United States and Colombia engaged in a secretive agreement to allow the country's previous U.S.-backed conservative President Ivan Duque to utilize the Israeli Pegasus spyware for internal surveillance in the country. Details of the transaction and of the utilization of the spyware remain "murky," but American and Colombian officials maintain it was used to target drug-trafficking groups and not domestic political opponents. Just two months ago, Colombia's leftist President Gustavo Petro delivered a televised speech revealing details of this shadowy arrangement, including that the Duque government flew $11 million in cash from Bogotá to Tel Aviv. As Drop Site notes, "In Colombia, there's a long legacy of state intelligence agencies surveilling political opposition leaders. With the news that the U.S. secretly helped acquire and deploy powerful espionage software in their country, the government is furious at the gross violation of their sovereignty. They fear that Colombia's history of politically motivated surveillance, backed by the U.S. government, lives on to this day."

8. Following the Democrats' electoral wipeout, the race for new DNC leadership is on. Media attention has mostly been focused on the race to succeed Jamie Harrison as DNC Chair, but POLITICO is out with a story on James Zogby's bid for the DNC vice chair seat. Zogby, a longtime DNC member, Bernie Sanders ally and president of the Arab American Institute, has criticized the party's position on Israel, particularly the Kamala Harris campaign's refusal to allow a Palestinian-American speaker at this year's convention. He called the move "unimaginative, overly cautious and completely out of touch with where voters are." This report notes Zogby's involvement in the 2016 DNC Unity Reform Commission and his successful push to strip substantial power away from the so-called superdelegates.

9. Speaking of Democratic Party rot, the Lever reports that in its final days the Biden Administration is handing corporations a "get out of jail free card." A new Justice Department policy dictates that the government will essentially look the other way at corporate misconduct, even if the company has "committed multiple crimes, earned significant profit from their wrongdoing, and failed to self-disclose the misconduct — as long as the companies demonstrate they 'acted in good faith' to try to come clean." This is the logical endpoint of the longstanding Biden-era soft-touch approach intended to encourage corporations to self-police, an idea that is patently absurd on its face. Public Citizen's corporate crime expert Rick Claypool described the policy as "bending over backward to protect corporations."

10. Finally, on November 23rd lawyer and former progressive congressional candidate Brent Welder posted a fundraising email from Bernie Sanders that immediately attracted substantial interest for its strong language. In this note, Sanders writes "The Democrats ran a campaign protecting the status quo and tinkering around the edges…Will the Democratic leadership learn the lessons of their defeat and create a party that stands with the working class[?]…unlikely." The email ends with a list of tough questions, including "should we be supporting Independent candidates who are prepared to take on both parties?" Many on the Left read this as Bernie opening the door to a "dirty break" with the Democratic Party, perhaps even an attempt to form some kind of independent alliance or third party. In a follow-up interview with John Nichols in the Nation, Sanders clarified that he is not calling for the creation of a new party, but "Where it is more advantageous to run as an independent, outside of the Democratic [Party]…we should do that." Whether anything will come of this remains to be seen, but if nothing else the severity of his rhetoric reflects the intensity of dissatisfaction with the Democratic Party in light of their second humiliating defeat at the hands of a clownish, fascistic game show host. Perhaps a populist left third party is a far-fetched, unachievable goal. On the other hand, how many times can we go back to the Democratic Party expecting different results? Something has got to give, or else the few remaining pillars of our democracy will wither and die under sustained assault by the Right and their corporate overlords.

This has been Francesco DeSantis, with In Case You Haven't Heard.

Get full access to Ralph Nader Radio Hour at www.ralphnaderradiohour.com/subscribe

AI, Government, and the Future by Alan Pentz
AI Transparency and Human Rights with Christabel Randolph of the Center for AI and Digital Policy

Nov 20, 2024 · 39:03

In this episode of AI, Government, and the Future, host Marc Leh is joined by Christabel Randolph, Associate Director at the Center for AI and Digital Policy, to discuss AI transparency, governance frameworks, and the intersection of artificial intelligence with human rights.

Futureproof with Jonathan McCrea
Extra: Regeneration - Virtual Worlds

Nov 19, 2024 · 15:07

Peter Lynch, Lecturer in Computer Games at TU Dublin (@WeAreTUDublin)
Aphra Kerr, Professor of Information & Communication Studies and a senior adviser at the UCD Centre for Digital Policy (@AphraK, @DigitalPolicyIE)

Intrigue Outloud
Intrigue Events: Securing Tomorrow – The Future of Cyber Threats and Global Defense

Oct 28, 2024 · 68:57

Welcome back to Intrigue Events! The space for geopolitical discussion and exploration is often relegated to dusty rooms, with jargony conversations and one too many uses of the word 'tripolarity.' At Intrigue Media, we're here to change that. Our mission is to discover, contextualize, and analyze the consequences of global political events. Intrigue Events transforms these insights into vibrant, engaging experiences where professionals connect, hear exclusive insights, and engage in dynamic discussions. On October 24th we hosted an event in partnership with Samsung at their Future Center in Washington DC: "Securing Tomorrow: The Future of Cyber Threats and Global Defense." Our incredible guests from the State Department, DARPA, and SentinelOne offered great insight into the growing role of cybersecurity in a geopolitically active world. Enjoy!
Chapters:
• 0:00–2:00 Opening Remarks from Intrigue's Helen Zhang
• 2:00–4:30 Remarks from Eric Tamarkin, Director & Senior Public Policy Counsel at Samsung
• 4:30–27:50 Liesyl Franz, Deputy Assistant Secretary for International Cyberspace Security, Bureau of Cyberspace and Digital Policy at the Department of State
• 27:50–47:45 Dr. Matt Turek, Deputy Director, Information Innovation Office, Defense Advanced Research Projects Agency (DARPA)
• 47:45–1:08:56 Brandon Wales, Vice President of Cybersecurity Strategy, SentinelOne and former Executive Director at the Cybersecurity and Infrastructure Security Agency (CISA)
Subscribe to International Intrigue, the free 5-minute global news briefing: https://www.internationalintrigue.io/

All Things Policy
All Things Digital Policy

Oct 22, 2024 · 45:14

Why do governments, including India's, struggle in the cybersecurity domain? In the context of global AI governance, wherein the US and EU are pushing for soft-touch and hard-touch regulation respectively, what is the Indian approach to AI governance? How can we secure supply chains? To find answers, Lokendra, Rijesh and Bharath sit down with Michael (Mike) R. Nelson, senior fellow in the Carnegie Asia Program, in this episode of All Things Policy. Mike leverages his nearly four decades of experience working in internet and digital policy to illuminate the discussion of these questions. They also discuss key insights from Carnegie's 2024 report "Korea's Path to Digital Leadership," lessons India can learn from the Korean experience, and how India fares on the parameters employed in the report. All Things Policy is a daily podcast on public policy brought to you by the Takshashila Institution, Bengaluru. Find out more on our research and other work here: https://takshashila.org.in/ Check out our public policy courses here: https://school.takshashila.org.in

Cyprus Beat
October 21 Daily News Briefing

Oct 21, 2024 · 4:28

In today's episode: Interior Minister Constantinos Ioannou will be travelling to Apia, Samoa, for the Commonwealth Heads of Government Meeting taking place between 21 and 26 October, an announcement said on Sunday. Elsewhere, Cyprus' government portal was the victim of a cyberattack on Sunday, but the authorities coped with the incident successfully, the Deputy Ministry of Research, Innovation and Digital Policy said in a statement. Also, firefighters rescued three people stuck in their car at the Larnaca Salt Lake, fire services spokesman Andreas Kettis said on Sunday. All this and much more in today's Daily News Briefing, brought to you by The Cyprus Mail.

Global Insights
Digital Transformation: Washington's Rewired Diplomacy in Africa

Jul 30, 2024 · 42:10

Visit us at Network2020.org. In December 2022, the U.S.–Africa Leaders Summit launched the Digital Transformation with Africa (DTA) initiative, catalyzing over $350 million in investment and $450 million in finance mobilization. The DTA is expected to expand digital access and literacy and strengthen digital enabling environments across Africa. Many experts also see this as an opportunity for the U.S. to counterbalance China in technology investment on the continent. What has been accomplished since the DTA launched? What perspectives do African countries have on this initiative? How is the private sector engaging with the DTA, and how can it do so effectively? How does the DTA fit into geopolitical competition on the continent? Join us for a conversation with Dr. Jane Munga, Fellow in the Africa Program at the Carnegie Endowment for International Peace; Mr. Rob Floyd, Director for Innovation and Digital Policy at the African Center for Economic Transformation; and Ms. Pren-Tsilya Boa-Guehe, Google's Head for Pan-African Institutions, Government Affairs & Public Policy.

The Daily Scoop Podcast
Biden's AI Executive Order milestones; USAID's new digital policy

Jul 29, 2024 · 5:25

The White House marked nine months since the signing of President Biden's executive order on artificial intelligence with a new voluntary safety commitment from Apple and several new completed actions on the technology across the government. Apple's agreement to safety, testing, and transparency measures outlined by the Biden administration brings the total number of AI companies that have signed on to the commitments to 16. These commitments were initially announced last year and include companies such as Meta, OpenAI, IBM, and Adobe. Federal agencies have completed a number of actions required within 270 days of the executive order's issuance, including the first technical guidelines from the AI Safety Institute, initial guidance for agencies on AI training data, and a national security memo on AI. The national security memo was sent to the president, with non-classified portions to be made available later. Another expected report from the Department of Commerce's National Telecommunications and Information Administration will address the risks and benefits of dual-use foundation models. Previous actions under the order included using the Defense Production Act for safety measures, setting up a National AI Research Resource, and launching an AI Talent Surge initiative.
In other news, USAID has released a 10-year digital policy aimed at guiding the international development agency's approach to emerging technologies in partner countries, from boosting internet access to embracing artificial intelligence. Administrator Samantha Power emphasized the need for U.S. leadership in promoting global internet connectivity and countering the misuse of technology by authoritarian governments. USAID plans to double its investment in its technology team and is promoting its new site, digitaldevelopment.org, which features ongoing work on digital ecosystem assessments.
The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.

AI, Government, and the Future by Alan Pentz
AI Ethics and National Security with Nidhi Sinha of the Center for AI and Digital Policy

Jun 5, 2024 · 33:04

In this episode of AI, Government, and the Future, host Max Romanik is joined by Nidhi Sinha, a research fellow at the Center for AI and Digital Policy, to discuss the ethical challenges of AI in national security, such as predictive policing and cyber surveillance. They explore how to balance innovation with individual rights and the role of AI in shaping global equity. Nidhi shares insights from her extensive experience to illuminate how democratic societies can manage AI's impact responsibly.

Live at America's Town Hall
Constitutional Challenges in the Age of AI

May 21, 2024 · 61:45

Tech policy experts Mark Coeckelbergh, author of the new book Why AI Undermines Democracy and What To Do About It, Mary Anne Franks of George Washington University Law School, and Marc Rotenberg of the Center for AI and Digital Policy explored the evolving relationship between artificial intelligence and constitutional principles and suggested strategies to protect democratic values in the digital age. This conversation was moderated by Thomas Donnelly, chief content officer at the National Constitution Center. This program was made possible through the generous support of Citizen Travelers, the nonpartisan civic engagement initiative of Travelers.
Resources:
• Mark Coeckelbergh, Why AI Undermines Democracy and What To Do About It (2024)
• Center for AI and Digital Policy (CAIDP), "Universal Guidelines for AI"
• CAIDP, "Artificial Intelligence and Democratic Values"
• Mary Anne Franks, Fearless Speech: Breaking Free from the First Amendment (forthcoming Oct. 2024)
• "Tougher AI Policies Could Protect Taylor Swift—And Everyone Else—From Deepfakes," Scientific American (Feb. 8, 2024)
• Marc Rotenberg, "Human Rights Alignment: The Challenge Ahead for AI Lawmakers" (Dec. 2023)
• EU General Data Protection Regulation (GDPR), https://gdpr-info.eu/
• "U.S. Senate Will Debate Three Bipartisan Bills Addressing the Use of AI in Elections," Democracy Docket (May 14, 2024)
• OECD Principles on AI
• Marc Rotenberg, "The Imperative for a UN Special Rapporteur on AI and Human Rights," Vol. 1 (2024)
• Mark Coeckelbergh, "The case for global governance of AI: arguments, counter-arguments, and challenges ahead" (May 2024)
• Bipartisan Senate AI Working Group Report
• Council of Europe and AI
• Council of Europe AI Treaty
Stay Connected and Learn More:
• Questions or comments about the show? Email us at programs@constitutioncenter.org
• Continue the conversation by following us on social media @ConstitutionCtr
• Sign up to receive Constitution Weekly, our email roundup of constitutional news and debate
• Subscribe, rate, and review wherever you listen
• Join us for an upcoming live program or watch recordings on YouTube
• Support our important work: Donate

Federal Drive with Tom Temin
State Department pursues digital solidarity with like-minded countries

May 17, 2024 · 10:07

The State Department has released what you might call a diplomacy strategy for the digital world. It seeks what officials call digital solidarity with other countries. It even has a four-part action plan. For details, Federal Drive Host Tom Temin spoke with the Senior Adviser to the State Department's Bureau of Cyberspace and Digital Policy, Adam Segal. Learn more about your ad choices. Visit megaphone.fm/adchoices

RTÉ - Drivetime
Launch of the Report of the Task Force on Safe Participation in Political Life

May 15, 2024 · 7:01

A survey on abuse encountered by TDs and Senators has found that 94% of respondents experienced some form of threat, harassment, abuse or violence during the course of their work. Elizabeth Farries, Director of the UCD Centre for Digital Policy and a co-author of the survey, outlines the findings.

The Lawfare Podcast
Lawfare Daily: The U.S. International Cyberspace and Digital Policy Strategy with Adam Segal

May 13, 2024 · 43:45

On May 6, the U.S. State Department unveiled its U.S. International Cyberspace and Digital Policy Strategy. Lawfare's Fellow in Technology Policy and Law, Eugenia Lostri, discussed the new strategy with Adam Segal, Senior Advisor in the Bureau of Cyberspace and Digital Policy. They talked about how the strategy fits with other cyber actions from the Biden administration, what the principle of digital solidarity looks like in practice, and how to future-proof these initiatives. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Irish Tech News Audio Articles
UCD Centre for Digital Policy Unveil AI Video Series

Apr 22, 2024 · 3:22

The UCD Centre for Digital Policy, with the support of Minister of State for Trade Promotion, Digital and Company Regulation Dara Calleary TD and Microsoft, today announced the release of a newly created AI video series to help build AI policy understanding and capabilities among policymakers, developers and others. Bringing expert academic, legal, industry, political and policy expertise and insights together, the five short videos provide a solid base for anyone interested in deepening their knowledge and understanding of this dynamic technology and social policy space. Contributors include Minister Dara Calleary, AI Ambassador Patricia Scanlon, and Drs. Elizabeth Farries and Susan Leavy from UCD; AI Advisory Council member Barry Scannell; and TrialView's Stephen Dowling.
The video series builds on a collaboration between UCD and Microsoft, which saw the introduction of the Microsoft-UCD Digital Policy Programme at UCD in 2020 with the goal of building digital policy capability amongst the public and private sector in Ireland and across the wider EU. The announcement was made at the Digital Ireland Conference organised by the Department of Enterprise, Trade and Employment today in Dublin Castle. The event sought to underline Ireland's position as a digital leader at the heart of European and global digital developments and demonstrate the Government's commitment to drive greater clarity, coherence and cooperation in digital in Ireland.
Welcoming the release of the AI video series, Minister Dara Calleary TD said: "Ireland can lead in responsible AI and innovative AI and be at the core of AI innovation in Europe. As we look ahead, skilling up in AI will give people the skills and confidence to deal with and manage AI. Skills are also crucial to understanding ethical AI and person-centred AI, which are two key principles of Ireland's national AI strategy."
Dr Elizabeth Farries from the UCD Centre for Digital Policy said: "Communication and comprehension need to occur along every point of the AI supply and development chain. We need communication and understanding of ethics from researchers and developers to Governments embracing these technologies. That is why we recommend capacity building for policymakers and developers alike through education, including the programmes offered at UCD Centre for Digital Policy."
James O'Connor, Microsoft Ireland Site Lead and Vice President of Microsoft Global Operations Service Center, said: "AI is a transformative technology that has huge potential to empower workers, businesses and communities across Ireland. As the use of AI tools and technologies accelerates, it is important that both the policy opportunities and challenges created by the technology are well understood. By providing insights from a wide-ranging set of experts across academia, policy and industry, the new AI video series produced in collaboration with the UCD Centre for Digital Policy can help to deepen understanding in these key areas and ensure responsible AI principles are put into practice."
The AI video series, along with a similar series on cyber security produced last year, is available to view at www.digitalpolicy.ie.

Highlights from The Pat Kenny Show
We discuss the UN resolution on AI

Mar 25, 2024 · 8:16

A new UN resolution on AI promises to start a new wave of action on countries' ability to safeguard human rights, protect personal data and monitor AI for risks. However, there have been mixed reactions among experts as to what it will actually achieve. To discuss further, Pat spoke to Elizabeth Farries, Director of DigitalPolicy.ie and Assistant Professor at UCD, and Barry O'Sullivan, Professor of AI at University College Cork.

Tools and Weapons with Brad Smith
U.S. Ambassador Nate Fick: Choosing a radio over a rifle in combat

Jan 8, 2024 · 38:12

As the United States' first Ambassador-at-Large for Cyberspace and Digital Policy, Nathaniel Fick is leading a tech-centered global diplomatic mission. Nate brings extraordinary depth to this important role in contemporary foreign policy – not as a career diplomat, but from a wide range of experiences: a Classics graduate from Dartmouth, a Marine leader in Afghanistan and Iraq, a venture capitalist, and CEO of a cybersecurity firm. As we kick off 2024, we discuss his priorities for the year ahead, why he'd always choose his radio over his rifle, the parallels between philosophy and AI policy, and an inspiring call for each of us to find time for national service. Click here for the full transcript.

AI, Government, and the Future by Alan Pentz
Navigating the Global Web of AI Regulation with Yasin Tokat, Policy Group Member at the Center for AI and Digital Policy

Dec 6, 2023 · 31:02

Yasin Tokat, Policy Group Member at the Center for AI and Digital Policy, joins this episode of AI, Government, and the Future by Alan Pentz to unveil the global challenges in AI regulation and how to solve them. They dive into the divergent paths of AI regulation in the US and EU, the risks of AI in cyberattacks, and the urgent need for global collaboration in AI policy and data privacy.

FP's First Person
Why America Has a New Tech Ambassador

Nov 10, 2023 · 46:42

The State Department has a new Bureau of Cyberspace and Digital Policy, and it's run by Nathaniel Fick, a former cybersecurity executive and Marine. Ambassador Fick joined the Biden administration to make sure that every department's digital policy is joined up, and his job is to make sure the White House can combat threats emerging from cyberspace and AI in the best possible way. Fick joins Ravi Agrawal to share his vision for this new bureau.
Suggested reading:
• Ravi Agrawal: Why America Has a New Tech Ambassador
• Rishi Iyengar: Biden Turns a Few More Screws on China's Chip Industry
• Rishi Iyengar: Inside the White House-Backed Effort to Hack AI
Learn more about your ad choices. Visit megaphone.fm/adchoices

AI, Government, and the Future by Alan Pentz
How the EU's New AI Act Can Change Everything for Tech Companies with Michael Kolain, Advisor for Digital Policy at Fraktion Bündnis 90/Die Grünen im Deutschen Bundestag

Nov 8, 2023 · 26:21

Michael Kolain, Advisor for Digital Policy at Fraktion Bündnis 90/Die Grünen im Deutschen Bundestag, joins this episode of AI, Government, and the Future by Alan Pentz to share thoughts on the forthcoming AI Act. They dive into the history and current state of AI regulation in the EU, the impact the AI Act will have on innovation, as well as the importance of safety and transparency in AI products.

Highlights from Newstalk Breakfast
Should a 'collective decision' be made not to buy smartphones for children?

Highlights from Newstalk Breakfast

Play Episode Listen Later Nov 7, 2023 4:28


Education Minister Norma Foley is bringing a memo to Cabinet that encourages parents to avoid buying smartphones for children in primary school. Speaking to Newstalk Breakfast was Dr Elizabeth Farries, Director of the UCD Centre for Digital Policy.

RTL Today - In Conversation with Lisa Burke
Observe the Moon Night, News and Connectivity, 20/10/2023

RTL Today - In Conversation with Lisa Burke

Play Episode Listen Later Oct 20, 2023 57:47


NASA invites us to look at the moon tonight as part of a global initiative. MyConnectivity Luxembourg helps us better connect. And Sasha Kehoe goes over the week's news.

Observe the Moon Night - 21 October
Julie Anne Fooshee joins me online from blustery Scotland to talk about International Observe the Moon Night with NASA. Now a PhD student in Science Communication and Public Engagement at the University of Edinburgh, Julie Anne has already spent over a decade working in the field, and was on the organising committee to celebrate the first ever International Observe the Moon Night back in 2009. From standing outside, taking a look towards the sky and noticing the moon, to attending or hosting an event, or joining a global livestream, the point is to come together with other lunar enthusiasts and gaze upon our near-Earth neighbour. The moon has spurred lunar science and exploration, some intensely challenging undertakings. But the moon is also celebrated in arts and culture. And NASA wants everyone to get involved at every level. The first step is simply to register with NASA, even as an individual observer. NASA will be collecting photos on Flickr; there's a Facebook page and livestreams. Tonight also happens to be the peak visibility of the Orionid meteor shower from Halley's comet, so it's a win-win: you can also observe fragments left by this famous comet.
https://moon.nasa.gov/observe-the-moon-night/participate/connect/
https://www.flickr.com/groups/observethemoon2023
https://www.facebook.com/observethemoon/
https://moon.nasa.gov/observe-the-moon-night/participate/live-streams/
#ObserveTheMoon @NASAMoon

MyConnectivity
MyConnectivity is a joint initiative of the Ministry of Media, Connectivity and Digital Policy and LU-CIX (the Luxembourg Internet Exchange) and has existed since December 2021. Julien Larios is the Technical Director of MyConnectivity and Marc Lis is the Head of Marketing and Communication. There is also an Advisory Community. It matters because connectivity is a key enabler for our increasingly digital society. Homes and businesses need to upgrade in order to be future-proof: limitless bandwidth, the switch from copper, and so on. We are moving into a world of AI and smart buildings.
https://www.instagram.com/myconnectivity
https://www.facebook.com/myconnectivity.gie
https://www.linkedin.com/company/myconnectivity
www.myconnectivity.lu

Listen on Today Radio Saturdays at 11am, Sundays at noon and Tuesdays at 10am, or subscribe to the podcast on Apple and Spotify. Get in touch with Lisa here.

Innovation Files
'Regulation by Outrage' is a Detriment to Emerging Technologies, With Patrick Grady

Innovation Files

Play Episode Listen Later Sep 11, 2023 20:52


Policy regarding new technologies can be reactionary, confused, and focused on the wrong things. Rob and Jackie sat down with Patrick Grady, former policy analyst at ITIF's Center for Data Innovation, to discuss what the European Union policymaking process can teach us about regulating emerging tech.
Mentioned: Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (European Commission, April 2021).
Related: Patrick Grady, The AI Act Should Be Technology-Neutral (Center for Data Innovation, February 2023); Ashley Johnson, Restoring US Leadership on Digital Policy (ITIF, July 2023).

Resilient Cyber
S5E1: Amit Elazari - Convergence of Technology & Digital Policy

Resilient Cyber

Play Episode Listen Later Sep 1, 2023 40:05


- For those who haven't met you yet or come across your work, can you tell us a bit about your background?
- First off, tell us a bit about OpenPolicy. What is the organization's mission, and why did you found it?
- Why do you think it's important for there to be tight collaboration and open communication between businesses, startups and policymakers?
- Some often say that policy is written by those unfamiliar with the technology it governs or the impact of the regulation, and that it has unintended consequences. Do you think this occurs, and how do we go about avoiding it?
- You were recently involved in the launch of the U.S. Cyber Trust Mark program for IoT labeling. Can you tell us a bit about that?
- We're seeing increased calls and efforts for regulating technology and software, especially around software supply chain security, Secure-by-Design products and not leaving risk to the consumers. How do we balance the regulatory push without stifling innovation, which is often the concern?
- I recently saw you launch your own show and interview Jim Dempsey, who I've interviewed in the past. Among other topics, you touched on the recent SEC rule changes and the increased push for cybersecurity to be a key consideration and activity in governing publicly traded companies. Why do you think we're seeing such a push?
- For those looking to learn more about OpenPolicy, and your efforts around digital policy and regulation, where can folks learn more and potentially even get involved?

CFR On the Record
Higher Education Webinar: Implications of Artificial Intelligence in Higher Education

CFR On the Record

Play Episode Listen Later Jun 27, 2023


Pablo Molina, associate vice president of information technology and chief information security officer at Drexel University and adjunct professor at Georgetown University, leads the conversation on the implications of artificial intelligence in higher education.

FASKIANOS: Welcome to CFR's Higher Education Webinar. I'm Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for joining us. Today's discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Pablo Molina with us to discuss implications of artificial intelligence in higher education. Dr. Molina is chief information security officer and associate vice president at Drexel University. He is also an adjunct professor at Georgetown University. Dr. Molina is the founder and executive director of the International Applied Ethics in Technology Association, which aims to raise awareness of ethical issues in technology. He regularly comments on stories about privacy, the ethics of tech companies, and laws related to technology and information management. And he's received numerous awards relating to technology and serves on the board of the Electronic Privacy Information Center and the Center for AI and Digital Policy. So Dr. P, welcome. Thank you very much for being with us today. Obviously, AI is on the top of everyone's mind, with ChatGPT coming out and being in the news, and so many other stories about how AI is going to change the world. So I thought you could focus in specifically on how artificial intelligence will change and is influencing higher education, and what you're seeing, the trends in your community.

MOLINA: Irina, thank you very much for the opportunity, to the Council on Foreign Relations, to be here and express my views. Thank you, everybody, for taking time out of your busy schedules to listen to this. And hopefully, I'll have the opportunity to learn much from your questions and answer some of them to the best of my ability. Well, since I'm a professor too, I like to start by giving you homework. And the homework is this: I do not know how much people know about artificial intelligence. In my opinion, anybody who has ever used ChatGPT considers herself or himself an expert. To some extent, you are, because you have used one of the first publicly available artificial intelligence tools out there and you know more than those who haven't. So if you have used ChatGPT, or Google Bard, or other services, you already have a leg up to understand at least one aspect of artificial intelligence, known as generative artificial intelligence. Now, if you want to learn more about this, there's a big textbook about this big. I'm not endorsing it. All I'm saying, for those people who are very curious, there are two great academics, Russell and Norvig. They're in their fourth edition of a wonderful book that covers every technical aspect of artificial intelligence, called Artificial Intelligence: A Modern Approach. And if you're really interested in how artificial intelligence can impact higher education, I recommend a report by the U.S. Department of Education that was released earlier this year in Washington, DC by the Office of Educational Technology. It's called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
So if you do all these things and you read all these things, you will hopefully transition from being whatever expert you were before—a pandemic and Ukrainian war expert—to an artificial intelligence expert. So how do I think that all these wonderful things are going to affect artificial intelligence? Well, as human beings, we tend to overestimate the impact of technology in the short run and really underestimate the impact of technology in the long run. And I believe this is also the case with artificial intelligence. We're in a moment where there's a lot of hype about artificial intelligence. It will solve every problem under the sky. But it will also create the most catastrophic future and dystopia that we can imagine. And possibly neither one of these two is true, particularly if we regulate and use these technologies and develop them following some standard guidelines that we have followed in the past, for better or worse.

So how is artificial intelligence affecting higher education? Well, number one, there is a great lack of regulation and legislation. So, for example, OpenAI released ChatGPT. People started trying it. And all of a sudden there were people like here, where I'm speaking to you from, in Italy. I'm in Rome on vacation right now. And the Italian data protection agency said: Listen, we're concerned about the privacy of this tool for citizens of Italy. So the company agreed to establish some rules, some guidelines and guardrails on the tool. And then it reopened to the Italian public, after being closed for a while. The same thing happened with the Canadian data protection authorities. In the United States, well, not much has happened, except that one of the organizations on whose board I serve, the Center for Artificial Intelligence and Digital Policy, earlier this year, in March of 2023, filed a sixty-four-page complaint with the Federal Trade Commission. Which is basically us asking the Federal Trade Commission: You do have the authority to investigate how these tools can affect U.S. consumers. Please do so, because this is your purview, and this is your responsibility. And we're still waiting on the agency to declare what the next steps are going to be.

If you look at other bodies of legislation or regulation on artificial intelligence that can help us guide artificial intelligence, well, you can certainly pay attention to the U.S. Congress. And what is the U.S. Congress doing? Yeah, pretty much that, not much, to be honest. They listened to Sam Altman, the CEO of OpenAI, the company behind ChatGPT, who recently testified before Congress, urging Congress to regulate artificial intelligence. Which is quite clever on his part. So it was on May 17 that he testified that we could be facing catastrophic damage ahead if artificial intelligence technology is not regulated in time. He also sounded the alarm about counterfeit humans, meaning that these machines could replace what we think a person is, at least virtually. And he also warned about the end of factual evidence, because with artificial intelligence anything can be fabricated. Not only that, but he pointed out that artificial intelligence could start wars and destroy democracy. Certainly very, very grim predictions. And before this, many of the companies were self-regulating for artificial intelligence. If you look at Google, Microsoft, Facebook—now Meta—all of them have their own artificial intelligence self-guiding principles. Most of them were very aspirational.
Those could help us in higher education because, at the very least, they can help us create our own policies and guidelines for our community members—faculty, staff, students, researchers, administrators, partners, vendors, alumni—anybody who happens to interact with our institutions of higher learning.

Now, what else is happening out there? Well, we have tons, tons of laws that have to do with technology and regulations. Things like the Gramm-Leach-Bliley Act, the Securities and Exchange Commission rules, Sarbanes-Oxley. Federal regulations like FISMA, the Cybersecurity Maturity Model Certification, the Payment Card Industry standards; there is the Computer Fraud and Abuse Act; there is the Budapest Convention; and cybersecurity insurance providers will tell us what to do and what not to do about technology. We have state laws and many privacy laws. But, to be honest, very few artificial intelligence laws. And it's groundbreaking in Europe that the European parliamentarians have agreed to discuss the Artificial Intelligence Act, which could be the first one really to be passed at this level in the world, after some efforts by China and other countries. And, if adopted, it could be a landmark change in the adoption of artificial intelligence.

In the United States, even though Congress is not doing much, the White House is trying to position itself in the realm of artificial intelligence. So there's an executive order from February of 2023—which many of us in higher education read because, once again, we're trying to find inspiration for our own rules and regulations—that tells federal agencies that they have to root out bias in the design and use of new technologies, including artificial intelligence, because they have to protect the public from algorithmic discrimination. And we all believe this. In higher education, we believe in being fair and transparent and accountable. I would be surprised if any of us is not concerned about making sure that our technology use, our artificial intelligence use, does not follow these particular principles as proposed by the Organization for Economic Cooperation and Development and many other bodies of ethics and expertise.

Now, the White House also announced new centers—research and development centers with some new national artificial intelligence research institutes. Many of us will collaborate with those in our research projects. A call for public assessments of existing generative artificial intelligence systems, like ChatGPT. And it is also trying to enact, or is enacting, policies to ensure that the U.S. government—the executive branch—is leading by example when mitigating artificial intelligence risks and harnessing artificial intelligence opportunities. Because, in spite of all the concerns about this, it's all about the opportunities that we hope to achieve with artificial intelligence.

And when we look at how specifically we can benefit from artificial intelligence in higher education, well, certainly we can start with new and modified academic offerings. We already have graduate degrees on artificial intelligence, and machine learning, and many others. But I would be surprised if we don't even add some bachelor's degrees in this field, or we don't modify significantly some of our existing academic offerings to incorporate artificial intelligence in various specialties, our courses, or components of the courses that we teach our students.
We're looking at amazing research opportunities, things that we'll be able to do with artificial intelligence that we couldn't even think about before, that are going to expand our ability to generate new knowledge to contribute to society, with federal funding, with private funding. We're looking at improved knowledge management, something that librarians are always very concerned about—the preservation and distribution of knowledge. The idea is that artificial intelligence will help us better find the things that we're looking for, the things that we need in order to conduct our academic work. We're certainly looking at new and modified pedagogical approaches, new ways of learning and teaching, including the promise of adaptive learning, something that really can tell students: Hey, you're not getting this particular concept. Why don't you go back and study it in a different way, with a different virtual avatar, using simulations or virtual assistance? In almost every discipline and academic endeavor. We're also very concerned about offering, you know, good value for the money when it comes to education. So we're hoping to achieve extreme efficiencies: better ways to run admissions, better ways to guide students through their academic careers, better ways to coach them into professional opportunities. And much of this will be possible thanks to artificial intelligence. And also, let's not forget this, but we still have many underserved students, and they're underserved because they either cannot afford education or maybe they have physical or cognitive disabilities. And artificial intelligence can really help us reach those students and offer them new opportunities to advance their education and fulfill their academic and professional goals. And I think this is a good introduction. And I'd love to talk about all the things that can go wrong. I'd love to talk about all the things that we should be doing so that things don't go as wrong as predicted. But I think this is a good way to set the stage for the discussion.

FASKIANOS: Fantastic. Thank you so much. So we're going to go to all of you now for your questions and comments, share best practices. (Gives queuing instructions.) All right. So I'm going first to Gabriel Doncel, who has a written question; he's adjunct faculty at the University of Delaware: How do we incentivize students to approach generative AI tools like ChatGPT for text in ways that emphasize critical thinking and analysis?

MOLINA: I always like to start with a difficult question, so I thank you very much, Gabriel Doncel, for that particular question. And, as you know, there are several approaches to the adoption of tools like ChatGPT on campus by students. One of them is to say: No, over my dead body. If you use ChatGPT, you're cheating. Even if you cite ChatGPT, we can consider you to be cheating. And not only that, but some institutions have invested in tools that can detect whether or not something was written with ChatGPT or similar tools. There are other faculty members and other academic institutions that are realizing these tools will be available when these students join the workforce. So our job is to help them do the best that they can by using these particular tools, and to make sure they avoid some of the mishaps that have already happened. There are a number of lawyers who have used ChatGPT to file legal briefs.
And when the judges received those briefs, and read through them, and looked at the citations, they realized that some of the citations were completely made up, were not real cases. Hence, the lawyers faced professional disciplinary action because they used the tool without the professional review that is required. So hopefully we're going to educate our students and we're going to set policy and guideline boundaries for them to use these, as well as sometimes the necessary technical controls for those students who may not be that ethically inclined to follow our guidelines and policies. But I think that to hide our heads in the sand and pretend that these tools are not out there for students to use would be a disservice to our institutions, to our students, and to the mission that we have of training the next generation of knowledge workers.

FASKIANOS: Thank you. I'm going to go next to Meena Bose, who has a raised hand. Meena, if you can unmute yourself and identify yourself.

Q: Thank you, Irina. Thank you for this very important talk. And my question is a little—(laughs)—it's formative, but really—I have been thinking about what you were saying about the role of AI in academic life. And I don't—particularly for undergraduates, for admissions, advisement, guidance on curriculum. And I don't want to have my head in the sand about this, as you just said—(laughs)—but it seems to me that any kind of meaningful interaction with students, particularly students who have not had any exposure to college before, depends upon kind of multiple feedback with faculty members, development of mentors, to excel in college and to consider opportunities after. So I'm struggling a little bit to see how AI can be instructive for that part of college life, beyond kind of providing information, I guess. But I guess the web does that already. So welcome your thoughts. Thank you.

FASKIANOS: And Meena's at Hofstra University.

MOLINA: Thank you. You know, it's a great question. And the idea that everybody is proposing right here is that artificial intelligence companies are not trying—at least at first; we'll see in the future because, you know, it depends on how it's regulated—they're not trying, or so they claim, to replace doctors, or architects, or professors, or mentors, or administrators. They're trying to help precisely those people in those professions, and the people they serve, gain access to more information. And you're right in a sense that that information is already on the web. But we've always had a problem finding that information regularly on the web. And you may remember that when Google came along, I mean, it swept through every other search engine out there—AltaVista, Yahoo, and many others—because, you know, it had a very good search algorithm. And now we're going to the next level. The next level is where you ask ChatGPT in human, natural language. You're not trying to combine the three words that say, OK, is the economics class required? No, no, you're telling ChatGPT, hey, listen, I'm in the master's in business administration at Drexel University and I'm trying to take more economics classes. What recommendations do you have for me? And this is where you can have a preliminary answer, and also a caveat there, as most of these search engines—generative AI engines—already have, that tells you: We're not here to replace the experts. Make sure you discuss your questions with the experts. We will not give you medical advice. We will not give you educational advice.
We're just here, to some extent, for guiding purposes and, even now, for experimental and entertainment purposes. So I think you are absolutely right that we have to be very judicious about how we use these tools to support the students. Now, that said, I had the privilege of working for public universities in the state of Connecticut when I was the CIO. I also had the opportunity early in my career to attend a public university in Europe, in Spain, where we were hundreds of students in class. We couldn't get any attention from the faculty. There were no mentors, there were no counselors, or anybody else. Is it better to have nobody to help you, or is it better to have at least some technology guidance that can help you find the information that otherwise is spread throughout many different systems that are like ivory towers—admissions on one side, economics on the other, academic advising on the other, and everything else? So thank you for a wonderful question and reflection.

FASKIANOS: I'm going to take the next question, written, from Dr. Russell Thomas, a senior lecturer in the Department of International Relations and Diplomatic Studies at Cavendish University in Uganda: What are the skills and competencies that higher education students and faculty need to develop to think in an AI-driven world?

MOLINA: So we could argue here that something very similar has happened already with many information technologies and communication technologies. At first, faculty members did not want to use email, or the web, or many other tools because they were too busy with their disciplines. And rightly so. They were brilliant economists, or philosophers, or biologists. They didn't have enough time to learn all these new technologies to interact with the students. But eventually they did learn, because they realized that it was the only way to meet the students where they were and to communicate with them in efficient ways. Now, I have to be honest; when it comes to the use of technology—and we'll unpack the numbers—it was part of my doctoral dissertation, when I expanded the technology adoption models that tell you about early adopters, and mainstream adopters, and late adopters, and laggards. But I uncovered a new category at some of the institutions where I worked, called the over-my-dead-body adopters. And these were some of the faculty members who say: I will never switch word processors. I will never use this technology. It's only forty years until I retire, probably eighty more until I die. I don't have to do this. And, to be honest, we have a responsibility to understand that those artificial intelligence tools are out there, and to guide the students as to what is the acceptable use of those technologies within the disciplines and the courses that we teach them in. Because they will find those available in a very competitive work market, in a competitive labor market, because they can derive some benefit from them. But also, we don't want to shortchange their educational attainment just because they go behind our backs to copy and paste from ChatGPT, learning nothing. Going back to the question by Gabriel Doncel: not learning to exercise critical thinking, using citations and material that is unverified, that was borrowed from the internet without any authority, without any attention to the different points of view.
I mean, if you've used ChatGPT for a while—and I have personally, even to prepare some basic thank-you speeches, which are all very formal; even to contest a traffic ticket in Washington, DC, when I was speeding but didn't want to pay the ticket anyway; even just for research purposes—you will realize that most of the writing from ChatGPT has a very, very common style. Which is: oh, on the one hand people say this, on the other hand people say that. Well, critical thinking will tell you: sure, there are two different opinions, but this is what I think myself, and this is why I think it. And these are some of the skills, the critical thinking skills, that we must continue to teach the students—and not, you know, put blinders on their eyes and say, oh, continue focusing only on the textbook and the website. No, no. Look at the other tools, but use them judiciously.

FASKIANOS: Thank you. I'm going to go next to Clemente Abrokwaa. Raised hand, if you can identify yourself, please.

Q: Hi. Thanks so much for your talk. It's something that has been—I'm from Penn State University. And this is a very important topic, I think. And some of the earlier speakers have already asked the questions I was going to ask. (Laughs.) But one thing that I would like to say is that, as you said, we cannot bury our heads in the sand. No matter what we think, the technology is already here. So we cannot avoid it. My question, though, is what do you think about the use of artificial intelligence by, say, for example, graduate students to write dissertations? You did mention the lawyers who used it to write their briefs, and they were caught. But in dissertations and also in class—for example, you have about forty students. You give a written assignment. When you start grading, you have grading fatigue. And so at some point you lose interest in actually checking. And so I'm kind of concerned about how it will affect the students' desire to actually go and research without resorting to the use of AI.

MOLINA: Well, Clemente, fellow colleague from the state of Pennsylvania, thank you for that, once again, both a question and a reflection here. Listen, many of us wrote our doctoral dissertations—mine at Georgetown. At one point in time, I was so tired of writing about the same topics, following the wonderful advice, but also the whims, of my dissertation committee, that I was this close to outsourcing my thesis to China. I didn't, but I thought about it. And now graduate students are thinking, OK, why am I going through the difficulties of writing this when ChatGPT can do it for me and the deadline is tomorrow? Well, this is what will distinguish the good students and the good professionals from the other ones. And the interesting part is, as you know, when we teach graduate students we're teaching them critical thinking skills, but we're also teaching them how to express themselves, you know, either orally or in writing. And writing effectively is fundamental in the professions, but also absolutely critical in academic settings. And anybody who's just copying and pasting from ChatGPT into these documents cannot do that level of writing. But you're absolutely right. Let's say that we have an adjunct faculty member who's teaching a hundred students. Will that person go through every single essay to find out whether students were cheating with ChatGPT? Probably not.
And this is why there are also enterprising people who are using artificial intelligence to find out and tell you whether a paper was written using artificial intelligence. So it's a little bit like this fight between different forces, and business opportunities for all of them. And we've done this before. We've used antiplagiarism tools in the past because we knew that students were copying and pasting from Google Scholar and many other sources. And now oftentimes we run antiplagiarism tools. We didn't write them ourselves. Or we tell the students, you run it yourself and you give it to me. And make sure you are not accidentally failing to cite things, which could end up jeopardizing your ability to get a graduate degree because your work was not up to snuff with the requirements of our stringent academic programs. So I would argue that these antiplagiarism tools that we're using will, more often than not, and sooner than expected, incorporate the detection of artificial intelligence writeups. And also the interesting part is to tell the students: well, if you do choose to use any of these tools, what are the rules of engagement? Can you ask it to write a paragraph, and then you cite it, and you mention that ChatGPT wrote it? Not to mention, in addition to that, all the issues about artificial intelligence which the courts are deciding now regarding the intellectual property of those productions. If a song, a poem, a book is written by an artificial intelligence entity, who owns the intellectual property for those works produced by an artificial intelligence machine?

FASKIANOS: Good question. We have a lot of written questions. And I'm sure you don't want to just listen to my voice, so please do raise your hands. But we do have a question from one of your colleagues, Pablo, Pepe Barcega, who's the IT director at Drexel: Considering the potential biases and limitations of AI models, like ChatGPT, do you think relying on such technology in the educational domain can perpetuate existing inequalities and reinforce systemic biases, particularly in terms of access, representation, and fair evaluation of students? And Pepe's question got seven upvotes; we advanced it to the top of the line.

MOLINA: All right, well, first I have to wonder whether he used ChatGPT to write the question. But I'm going to leave it at that. Thank you. (Laughter.) It's a wonderful question. This is one of the greatest concerns we have had, those of us who have been working on artificial intelligence digital policy for years—not just this year when ChatGPT was released, but for years we've been thinking about this. And even before artificial intelligence, in general with algorithm transparency. And the idea is the following: two things are happening here. One is that we're programming the algorithms using instructions, instructions created by programmers, with all their biases, and their misunderstandings, and their shortcomings, and their lack of context, and everything else. But with artificial intelligence we're doing something even more concerning than that, which is we have some basic algorithms but then we're feeding a lot of information, a corpus of information, to those algorithms. And the algorithms are fine-tuning the rules based on that.
So it's very, very difficult for experts to explain how an artificial intelligence system actually makes decisions, because we know the engine and we know the data that we fed to the engine, but we don't really know how those decisions are being made through neural networks, through all of the different systems and methods that we have for artificial intelligence. Very, very few people understand how those work. And those people are so busy that they don't have time to explain how the algorithm works to others, including the regulators. Let's remember some of the failed cases. Amazon tried this early, for selecting employees for Amazon. And they fed it all the resumes. And guess what? It turned out that most of the recommendations were to hire young white people who had gone to Ivy League schools. Why? Because the resumes of their first employees were feeding those descriptions, and those employees had done extremely well at Amazon. Hence, by feeding in the information of past successful employees, only candidates like those were recommended. And that puts away the diversity that we need for different academic institutions, large and small, public and private, from different countries, from different genders, from different ages, from different ethnicities. All those things went away because the algorithm was promoting one particular profile.

Recently I had the opportunity to moderate a panel in Washington, DC, and we had representatives from the Equal Employment Opportunity Commission. And they told us how they investigated a hiring algorithm from a company that was disproportionately recommending that they hire people whose first name was Brian and who had played lacrosse in high school because, once again, a disproportionate number of people in that company had done that. And the algorithm decided, oh, these must be important characteristics for hiring people at this company. Let's not forget, for example, what happened with facial recognition and artificial intelligence with Amazon Rekognition, you know, the facial recognition software: the American Civil Liberties Union decided, OK, I'm going to submit the pictures of all the congressmen to this particular facial recognition engine. And it turned out that it misidentified many of them, particularly African Americans, as felons who had been convicted. So all these biases could have really, really bad consequences. Imagine that you're using this to decide whom you admit to your university, and the algorithm is wrong. You know, you are making really biased decisions that will affect the livelihood of many people, but will also transform society, possibly for the worse, if we don't address this. So this is why the OECD, the European Union, even the White House—everybody is saying: We want this technology. We want to derive the benefits of this technology, while curtailing the abuses. And it's fundamental that we achieve transparency and make sure that these algorithms are not biased against the people who use them.

FASKIANOS: Thank you. So I'm going to go next to Emily Edmonds-Poli, who is a professor at the University of San Diego: We hear a lot about providing clear guidelines for students, but for those of us who have not had a lot of experience using ChatGPT it is difficult to know what clear guidelines look like. Can you recommend some sources we might consult as a starting point, or where we might find some sample language?

MOLINA: Hmm. Well, certainly this is what we do in higher education.
We compete for the best students and the best faculty members. And we sometimes compete a little bit to be first to win groundbreaking research. But we tend to collaborate on everything else, particularly when it comes to policy, and guidance, and rules. So there are many institutions, like mine, that have already assembled—I'm sure that yours has done the same—committees, because assembling committees and subcommittees is something we do very well in higher education, with faculty members, with administrators, even with student representation, to figure out: OK, what should we do about the use of artificial intelligence on our campus? I mentioned before that taking a look at the big aspirational declarations by Meta, and Google, and IBM, and Microsoft could be helpful for these communities. But also, I'm a very active member of an organization known as EDUCAUSE. And EDUCAUSE is for educators—predominantly higher education administrators, staff members, and faculty members—to think about the adoption of information technology. And EDUCAUSE has done good work on this front and continues to do good work on this front. So once again, EDUCAUSE and some of the institutions have already published their guidelines on how to use artificial intelligence and incorporate it within their academic lives. Now, that said, we also know that even though all higher education institutions are the same, they're all different. We all have different values. We all believe in different uses of technology. We trust the students more or less. Hence, it's very important that whatever inspiration you take, you work internally on campus—as you have done with many other issues in the past—to make sure it really reflects the values of your institution.

FASKIANOS: So, Pablo, would you point to a specific college or university that has developed a code of ethics that addresses the use of AI for their academic community beyond your own, but that is publicly available?

MOLINA: Yeah, I'm going to be honest, I don't want to put anybody on the spot.

FASKIANOS: OK.

MOLINA: Because, once again, there are many reasons. But, once again, let me repeat a couple of resources. One of them is from the U.S. Department of Education, from the Office of Educational Technology. And the article is Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, published earlier this year. The other source really is educause.edu. And if you look at educause.edu on artificial intelligence, you'll find links to articles, you'll find links to universities. It would be presumptuous of me to evaluate whose policies are better than others, but I would argue that the general principles of nonbias, transparency, accountability, and also the integration of these tools within the academic life of the institution in a morally responsible way—with concepts like privacy by design, security by design, and responsible computing—all of those are good words to have in there. Now, the other problem with policies and guidelines is that, let's be honest, many of those have no teeth in our institutions. You know, we promulgate them. They're very nice. They look beautiful. They are beautifully written. But oftentimes when people don't follow them, there's not a big penalty. And this is why, in addition to having the policies, educating the campus community is important. But it's difficult to do because we need to educate them about so many things.
About cybersecurity threats, about sexual harassment, about nondiscrimination policies, about responsible behavior on campus regarding drugs and alcohol, about crime. So many things that they have to learn about. It's hard to add another topic for them to spend their time on, instead of researching the core subject matter that they chose to pursue for their lives.

FASKIANOS: Thank you. And we will be sending out a link to this video, the transcript, as well as the resources that you have mentioned. So if you didn't get them, we'll include them in the follow-up email. So I'm going to go to Dorian Brown Crosby, who has a raised hand.

Q: Yes. Thank you so much. I put one question in the chat, but I have another question that I would like to go ahead and ask now. So thank you so much for this presentation. You mentioned algorithmic biases with individuals. And I appreciate you pointing that out, especially when we talk about facial recognition, also in terms of forced migration, which is my area of research. But I also wanted you to speak to, or could you talk about, the challenges that some institutions in higher education would have in terms of support for some of the things that you mentioned in terms of potential curricula, or certificates, or other ways that AI would be woven into the new offerings of institutions of higher education. How would that look specifically for institutions that might be challenged to access those resources, such as Historically Black Colleges and Universities? Thank you.

MOLINA: Well, a very interesting question, and a really fascinating point of view. Because we all tend to look at things from our own perspective and perhaps not consider the perspective of others—those who have much more money and resources than us, and those who have fewer resources and less funding available. So this is a very interesting line. What is it that we do in higher education when we have these problems? Well, as I mentioned before, we build committees and subcommittees. Usually we also do campus surveys. I don't know why we love doing campus surveys and asking everybody what they think about this. Those are useful tools for discussion. And oftentimes the thing that we do also, that we've done for many other topics: well, we hire people and we create new offices—either academic or administrative offices. With all of those, you know, there are certain limitations to how useful and functional they can be. And they also continue to require resources. Resources that, in the end, are paid for by students with, you know, federal financing. But this is the truth of the matter. So if you start creating offices of artificial intelligence on our campuses, however important the work may be on their guidance and however much extra work can be assigned to them instead of distributed to all the faculty and staff members out there, the truth of the matter is that these are not perfect solutions. So what is it that we do? Oftentimes, we work with partners. And our partners love to take—(inaudible)—vendors. But the truth of the matter is that sometimes they have much more expertise on some of these topics.
So, for example, if you're thinking about incorporating artificial intelligence into some of the academic materials that you use in class, well, I'm going to take a guess that if you already work with McGraw Hill in economics, or accounting, or some of the other books and websites that you recommend to your students or make mandatory for your students, you start discussing with them: hey, listen, are you going to use artificial intelligence? How? Are you going to tell me ahead of time? Because, as a faculty member, you may have a choice to decide: I want to work with this publisher and not that particular publisher because of the way they approach this. And let's be honest, we've seen a number of these vendors with major information security problems. McGraw Hill recently left a repository of data misconfigured out there on the internet, and almost anybody could access it. But many others before them, like Chegg and others, were notorious for their information security breaches. Can we imagine that these people are going to adopt artificial intelligence and not do such a good job of securing the information, the privacy, and the nonbiased approaches that we hold dear for students? I think they require a lot of supervision. But in the end, these publishers have the economies of scale for you to recommend those educational materials instead of developing your own for every course, for every class, and for every institution. So perhaps we're going to have to continue to work together, as we've done in higher education, in consortia, which could be local or regional, based on institutions of the same interest, or on student population, to try to do this. And, you know, hopefully we'll get grants from the federal government that can be used to develop some of the materials and guidelines that are going to help us embrace this—not only to operate better as institutions and fulfill our mission, but also to make sure that our students are better prepared to join society and compete globally, which is what we have to do.

FASKIANOS: So I'm going to combine questions. Dr. Lance Hunter, who is an associate professor at Augusta University: There's been a lot of debate regarding whether plagiarism detection software tools like Turnitin can accurately detect AI-generated text. What is your opinion regarding the accuracy of AI text generation detection plagiarism tools? And then Rama Lohani-Chase, at Union County College, wants recommendations on what plagiarism checkers—or, you know, plagiarism detection for AI—you would recommend.

MOLINA: Sure. So, number one, I'm not going to endorse any particular company, because if I do that I would ask them for money, or the other way around. I'm not sure how it works. I could be seen as biased, particularly here. But there are many out there, and your institutions are using them. Sometimes they are integrated with your learning management system. And, as I mentioned, sometimes we ask the students to use them themselves and then either produce the plagiarism report for us or simply check the results themselves. I'm going to be honest; when I teach ethics and technology, I tell the students about the antiplagiarism tools at the universities. But I also tell them, listen, if you're cheating in an ethics and technology class, I have failed miserably. So please don't. Take extra time if you have to, but—you know, and if you want, use the antiplagiarism tool yourself.
But the question stands and is critical, which is: right now those tools are trying to improve the recognition of artificial intelligence-written text, but they're not as good as they could be. So, like every other technology and—what I'm going to call—antitechnology, used to control the damage of the first technology, it is an escalation where we start trying to identify this. And I think they will continue to do this, and they will be successful in doing it. There are people who have written ad hoc tools using ChatGPT to identify things written by ChatGPT. I tried them. They're remarkably good for the handful of papers that I tried myself, but I haven't conducted enough research myself to tell you if they're really effective tools for this. So I would argue that for the time being you must assume that those tools, as we assume all the time, will not catch all of the cases, only some of the most obvious ones.

FASKIANOS: So a question from John Dedie, who is an assistant professor at the Community College of Baltimore County: To combat AI issues, shouldn't we rethink assignments? Instead of papers, have students do PowerPoints, ask students to offer their opinions and defend them? And then there was an interesting comment from Mark Habeeb at Georgetown University School of Foreign Service: Knowledge has been cheap for many years now because it is so readily available. With AI, we have a tool that can aggregate the knowledge and create written products. So, you know, what needs to be the focus now is critical thinking and assessing values. We need to teach our students how to assess and use that knowledge rather than how to find and aggregate that knowledge. So maybe you could react to those two—the question and the comment.

MOLINA: So let me start with the Georgetown one, not only because he's a colleague of mine. I also teach at Georgetown, and it's where I obtained my doctoral degree a number of years ago. I completely agree. I completely agree with the point that we have to teach new skills. And one of the programs in which I teach at Georgetown is our master's in analysis, which is basically for people who want to work in the intelligence community. And these people have to find the information and they have to draw inferences, and try to figure out whether it is a nation-state that is threatening the United States, or another actor, or a corporation, or something like that. And they use all of that critical thinking, and intuition, and all the tools that we have developed in the intelligence community for many, many years. And if they suspend their judgment and only use artificial intelligence, they will miss very important information that is critical for national security. And the same is true for something like our flagship school, the School of Foreign Service at Georgetown, one of the best in the world in that particular field, where you want to train the diplomats, and the heads of state, and the great strategic thinkers on policy and politics in the international arena to think precisely not in the mechanical way that a machine can think, but also to connect those dots. And, sure, they should be using those tools in order to, you know, get the most favorable starting position. But they should also always use their critical thinking and their capabilities of analysis in order to produce good outcomes and good conclusions. Regarding redoing the assignments, absolutely true. But that is hard. It is a lot of work.
We're very busy faculty members. We have to grade. We have to be on committees. We have to do research. And now they ask us to redo our entire assessment strategy, with new assignments that we need to grade again and account for artificial intelligence. And I don't think that any provost out there is saying, you know what? You can take two semesters off to work on this and retool all your courses. That doesn't happen in the institutions that I know of. If you get time off because you're entitled to it, you want to devote that time to doing research, because that is really what you signed up for when you pursued an academic career, in many cases. I can tell you one thing: here in Europe, where oftentimes they look at these problems with fewer resources than we do in the United States, a lot of faculty members at the high school level and at the college level are moving to oral examinations, because it's much harder to cheat with ChatGPT in an oral examination. Because they will ask you interactive, adaptive questions—like the ones we suffered when we were defending our doctoral dissertations. And the faculty members will realize whether or not you know the material and you understand the material. Now, imagine oral examinations for a class of one hundred, two hundred, four hundred. Do you do one for the entire semester, with one chosen topic, and run them? Or do you do several throughout the semester? Do you end up using a ChatGPT virtual assistant to conduct your oral examinations? I think these are complex questions. But certainly redoing our assignments, and redoing the way we teach and the way we evaluate our students, is perhaps a necessary consequence of the advent of artificial intelligence.

FASKIANOS: So, next question from Damian Odunze, who is an assistant professor at Delta State University in Cleveland, Mississippi: Who should safeguard against ethical concerns and the misuse of AI by criminals? Should the onus fall on the creators and companies like Apple, Google, and Microsoft to ensure security and not pass it on to the end users of the product? And I think you mentioned at the top, in your remarks, Pablo, how the CEO of OpenAI was urging Congress to put into place some regulation. What is the onus on ChatGPT's makers to protect against some of this as well?

MOLINA: Well, I'm going to recycle more of the material from my doctoral dissertation. In this case it was the Molina cycle of innovation and regulation. It goes like this: basically, there are engineers and scientists who create new information technologies. And then there are entrepreneurs and businesspeople and executives who figure out: OK, I know how to package this so that people are going to use it, buy it, subscribe to it, or look at it, so that I can sell the advertisement to others. And, you know, this begins, and very, very soon the abuses start. And the abuses are that criminals are using these platforms for reasons that were not envisioned before. Even the executives, as we've seen with Google, and Facebook, and others, decide to invade the privacy of the people, because they only have to pay a big fine, but they make much more money than the fines, or they expect not to be caught. And what happens in this cycle is that eventually there is so much noise in the media, so many congressional hearings, that eventually regulators step in and try to pass new laws, or the regulatory agencies try to investigate using the powers given to them.
And then all of these new rules have to be tested in courts of law, which could take years by the time a case reaches, sometimes, all the way to the Supreme Court. Some of them are even knocked down on the way to the Supreme Court, when courts realize this is not constitutional, or it's a conflict of laws, and things like that. Now, by the time we regulate these new technologies, not only have many years gone by, but the technologies have changed. The marketing products and services have changed, the abuses have changed, and the criminals have changed. So this is why we're always living in a loosely regulated space when it comes to information technology. And this is an issue of accountability. We're finding this, for example, with information security. If my phone is hacked, or my computer, my email, is it the fault of Microsoft, and Apple, and Dell, and everybody else? Why am I the one paying the consequences and not any of these companies? Because it's unregulated. So, morally speaking, yes, these companies are accountable. Morally speaking, the users are also accountable, because we're choosing to use these tools and incorporating them professionally. Legally speaking, so far, nobody is accountable except the lawyers who submitted briefs that were not correct in a court of law and were disciplined for that. But other than that, right now, it is a very gray space. So in my mind, it requires everybody. It takes a village to do the morally correct thing. It starts with the companies and the inventors. It involves the regulators, who should do their job and make sure that there's no unnecessary harm created by these tools. But it also involves every company executive, every professional, every student, and every professor who decides to use these tools.

FASKIANOS: OK. I'm going to combine a couple of questions, from Dorothy Marinucci and Venky Venkatachalam, about the effect of AI on jobs. Dorothy—she's from Fordham University—says she read something about Germany's best-selling newspaper Bild reportedly adopting artificial intelligence to replace certain editorial roles in an effort to cut costs. Does this mean that the field of journalism and communication will change? And Venky's question is: One of AI's impacts is in the area of automation, leading to the elimination of certain types of jobs. Can you talk about both the elimination of jobs and what new types of jobs you think will be created as AI matures into the business world with more value-added applications?

MOLINA: Well, what I like about predicting the future—and I've done this before in conferences and papers—is that, you know, when the future comes ten years from now, people will either not remember what I said, or, you know, maybe I was lucky and my prediction was correct. The specific field of journalism—and we've seen it—the journalism and communications field has been decimated, because the money that they used to make with advertising—and, you know, certainly a big part of that was in the form of corporate profits, but much of it also went into hiring good journalists and investigative journalism—these people could spend six months writing a story, when right now they have six hours to write a story, because there are no resources. And all the advertisement money went instead to Facebook, and Google, and many others, because they work very well for advertisements. But now the lifeblood of journalism organizations has been really, you know, undermined.
And there's good journalism in other places, in newspapers, but sadly there is a great temptation to replace some of the journalists with artificial intelligence, particularly on the least important pieces. I would argue that editorial pieces, the ones requiring ideology and critical thinking, are the most important in newspapers, whereas pieces that tell you about traffic changes or weather patterns (without offending any meteorologists) may allow a more mechanical approach. I would argue that a lot of professions are going to be transformed, because if ChatGPT can write real estate announcements that work very well, you may need fewer people doing this. And yet, I think that what we're going to find is the same thing we found when computers arrived. We all thought that the arrival of computers would mean that everybody would be without a job. Guess what? It meant something different: in order to do our jobs, we had to learn how to use computers. I would argue that this is going to be the same case. To be a good doctor, to be a good lawyer, to be a good economist, to be a good knowledge worker, you're going to have to learn how to use whatever artificial intelligence tools are available out there, and to use them professionally, within the moral and professional-ethics constraints that apply to your particular field. Those are the kinds of jobs that I think are going to be very important.

And, of course, all the technical jobs, as I mentioned. There are tons of people who consider themselves artificial intelligence experts, but only a few at the very top truly understand these systems. There are many others in the pyramid who help prepare these systems: the support, the maintenance, the marketing, preparing the datasets that go into these particular models, and working with regulators, legislators, and compliance organizations to make sure that the algorithms and the tools are not running afoul of existing regulations. All of those, I think, are going to be interesting jobs that will be part of the arrival of artificial intelligence.

FASKIANOS: Great. We have so many questions left that we just couldn't get to them all. I'm going to ask you to reflect on how the use of artificial intelligence in higher education will affect U.S. foreign policy and international relations. I know you touched upon it a little bit in reacting to the comment from our Georgetown University colleague, but any additional thoughts you might want to add before we close?

MOLINA: Well, let's be honest: one point that applies to education and to everything else is that there is a race, a worldwide race, for artificial intelligence progress. The big companies (Google, Meta, Amazon, and many others) are really putting resources into it, trying to be first in this particular race. But it's also a national race. For example, it's very clear from executive orders in the United States, as well as regulations and declarations from China, that these two big nations are trying to be first in dominating the use of artificial intelligence. And let's be honest, in order to do well in artificial intelligence you need not only the scientists who are going to create and refine those models, but also the bodies of data to feed these algorithms in order to make them good.
So the barriers to entry for other nations, and for all but the biggest technology companies, are going to be very, very high. It's not going to be easy for any small company to say: Oh, now I'm a huge player in artificial intelligence. Because even if you have created an interesting new algorithmic procedure, you don't have the datasets that the huge companies have been able to amass and work on for the longest time. Every time you submit a question to ChatGPT, the people behind ChatGPT are using your questions to refine the tool, the same way that, when we were using voice recognition with Apple or Android or other companies, they were using our voices, our accents, and our mistakes to refine their voice recognition technologies. So this is the power. The early bird gets the worm: those who are investing, those who are aggressively going for it, and those who are judiciously regulating it can really do very well in the international arena when it comes to artificial intelligence. And so will their universities, because they will be able to train those knowledge workers, they will be able to capture the money generated by artificial intelligence, and the two will feed back into each other: advances in the technology will create more demand for students, and more graduating students will propel the industry. And we'll always have a fight for talent, in which companies and countries compete to attract the people who really know about these wonderful things.

Now, keep in mind that artificial intelligence was the core of this conversation, but there are so many other emerging issues in information technology, and some of them are critical to higher education. There is still a lot of hype, but we think that virtual reality will have an amazing impact on the way we teach, the way we conduct research, and the way we train for certain skills. We think that quantum computing has the ability to revolutionize the way we conduct research, allowing us to do computations that are not even thinkable today. And look at robotics: if you ask me what is going to take many jobs away, I would say that robotics can take a lot of jobs away. We thought that there would be no factory workers left because of robots, and that hasn't happened. But combine robots with artificial intelligence to serve you a cappuccino, or your meal, or take care of your laundry, or clean your hotel room, and you realize, oh, there are lots of jobs out there that will no longer be there. Think about artificial intelligence for self-driving vehicles: boats, planes, cargo ships, commercial airplanes. Think about the thousands of taxi drivers and truck drivers who may end up out of jobs because, listen, the machines drive more safely, they don't get tired, they can drive twenty-four by seven, and they don't require health benefits or retirement. They don't get depressed. They never miss. Think about the many technologies out there that have an impact on what we do. Artificial intelligence is a multiplier of technologies, a contributor to many other fields and many other technologies. And this is why we're spending so much time and so much energy thinking about these particular issues.

FASKIANOS: Well, thank you, Pablo Molina. We really appreciate it.
Again, my apologies that we couldn't get to all of the questions and comments in the chat, but we appreciate all of you for your questions. And, of course, your insights were really terrific, Dr. P. We will, again, be sending out the link to this video and transcript, as well as the resources that you mentioned during this discussion. I hope you all enjoy the Fourth of July. And I encourage you to follow @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Again, you can send comments, feedback, and suggestions to CFRacademic@CFR.org. And, again, thank you all for joining us. We look forward to your continued participation in CFR Academic programming. Have a great day.

MOLINA: Adios.

(END)

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis
The EU AI Act effect: Background, blind spots, opportunities and roadmap. Featuring Aleksandr Tiulkanov, AI, Data & Digital Policy Counsel

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis

Play Episode Listen Later Jun 19, 2023 73:47


The EU Parliament just voted to bring the EU AI Act regulation into effect. If GDPR is anything to go by, that's a big deal. Here's the background, what it's likely to affect and how, its blind spots, what happens next, and how you can prepare based on what we know. Article published on Orchestrate all the Things.

Coffee Talk with SURGe
Coffee Talk with SURGe: 2022-APR-05 State Department, Elections, Spring4Shell, Certs, Lapsus$, RSAC

Coffee Talk with SURGe

Play Episode Listen Later Jun 14, 2023 31:36


Grab a cup of coffee and join Ryan Kovar, Audra Streetman, and Mick Baccio for another episode of Coffee Talk with SURGe. You can watch the episode livestream here. This week the team discussed the takedown of Hydra, the U.S. State Department's new Bureau of Cyberspace and Digital Policy, and a coordinated phishing campaign targeting U.S. election officials in the lead-up to the 2022 midterm elections. Mick and Ryan competed in a 60-second charity challenge to explain the current situation regarding the Spring4Shell vulnerability. They also discussed the recent arrest of teenagers in connection with the Lapsus$ criminal hacking group and the importance of ethics in cybersecurity.

The Periphery
AI, Diversity, and Democratic Values (with Merve Hickok, Director at the Center for AI and Digital Policy)

The Periphery

Play Episode Listen Later May 31, 2023 45:23


This week, we discuss developments in AI with Merve Hickok, Director at the Center for AI and Digital Policy. Join our conversation about regulation, diversity, and the future of AI use!

Democracy Now! Video
Artificial Intelligence History, How It Embeds Bias, Displaces Workers, as Congress Lags on Regulation

Democracy Now! Video

Play Episode Listen Later May 18, 2023


Part 2 of our interview with Marc Rotenberg, executive director of the Center for AI and Digital Policy.

Better Innovation
Season 6, Ep. 8 - Yolanda Jinxin Ma: The United Nations of Digital Inclusion

Better Innovation

Play Episode Listen Later May 1, 2023 79:50


Today's guest exemplifies what it truly means to ethically harness data and technology to transform the most vulnerable nations in the world. Yolanda Jinxin Ma, Head of Digital Policy and Global Partnerships at the United Nations Development Programme, joins Jeff on today's show to discuss digital transformation in the UN's international development work. Yolanda leads the development and implementation of the UNDP's digital strategy and advises on multi-disciplinary initiatives in numerous countries, including public sector digital transformation, innovative financing for sustainable development, the proliferation of digital public goods, and cross-sector public-private partnerships. Yolanda has a unique perspective on today's digital, social, political, and economic landscape. Listen in as Jeff and Yolanda explore what it means to transform societies in an era of digital division and rapid technological change, and how nations can overcome the challenges that accompany such important work.

The Bogosity Podcast
Bogosity Podcast for 02 April 2023

The Bogosity Podcast

Play Episode Listen Later Apr 2, 2023 24:01


News of the Bogus:
14:33 – Biggest Bogon Emitter: Center for AI and Digital Policy
19:02 – Idiot Extraordinaire: Alvin Bragg

This Week's Quote: “When you want to help people, you tell them the truth. When you want to help yourself, you tell them what they want to hear.” —Thomas Sowell

TechTank
Technology adoption in Africa: current and future use cases for development

TechTank

Play Episode Listen Later Dec 12, 2022 33:29


On this episode, host Nicol Turner Lee explores technology adoption in Africa and universal access throughout the continent with guests Yolanda Jinxin Ma, head of Digital Policy and Global Partnerships at the United Nations Development Programme, Addisu Lashitew, Global Economy and Development nonresident fellow at the Brookings Institution, and Jane Munga, Africa Program fellow at Carnegie who focuses on technology policy.

Confluence
Canada's Digital Policy Landscape with Taylor Owen

Confluence

Play Episode Listen Later Sep 22, 2022 38:16


How do we regulate the Internet when there are so many components and so many ways it can go wrong? That's what host Rana Sarkar discusses with guest Taylor Owen on this week's episode. Taylor Owen is the Beaverbrook Chair in Media, Ethics and Communications, founding director of the Centre for Media, Technology and Democracy, and an Associate Professor in the Max Bell School of Public Policy at McGill University. In their conversation, Rana and Taylor dive into the challenges of monitoring content online, frameworks for tech regulation, how the information ecosystem has changed in the past decade, and the tensions that Web3 may create for future tech regulation. The episode also links back to earlier conversations on Confluence about global tech policy, as Rana and Taylor share updates on Canadian tech policy and discuss the role that Canada can play in global tech regulation.

LINKS:
Taylor Owen Website
Taylor Owen Twitter
Big Tech podcast

The CyberWire
Doxing, trolling, and censorship in a hybrid war. Borat RAT. State's Bureau of Cyberspace and Digital Policy. National Supply Chain Integrity Month. Wild youth. Hey spooks: brown bag it like the GRU.

The CyberWire

Play Episode Listen Later Apr 4, 2022 29:50


Doxing, trolling, and censorship in a hybrid war. Western organizations remain on alert for a Russian cyber campaign. Known Russian threat actors continue operations against Ukraine proper. Borat RAT described. Welcome the US State Department's Bureau of Cyberspace and Digital Policy. National Supply Chain Integrity Month. Your wild ways will break your mother's heart. Rick Howard weighs in on Shields Up. Josh Ray from Accenture on ideological differences on underground forums. And fast food as an OPSEC issue (and an OSINT source). For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/11/64

Selected reading.
Ukraine intelligence leaks names of 620 alleged Russian FSB agents (Security Affairs)
Anonymous leaked 15 GB of data allegedly stolen from the Russian Orthodox Church (Security Affairs)
Listen Now: Deputy national security adviser talks about the risk of Russia waging cyberwar (NPR One)
Inside Cyber Front Z, the 'People's Movement' Spreading Russian Propaganda (Vice)
Ukraine Accuses Russia of Using WhatsApp Bot Farm to Ask Military to Surrender (Vice)
'It's like 1937': Informants denounce anti-Ukraine war Russians (The Telegraph)
Cyber Espionage Actor Deploying Malware Using Excel (Bank Info Security)
New Borat remote access malware is no laughing matter (BleepingComputer)
Deep Dive Analysis – Borat RAT (Cyble)
Establishment of the Bureau of Cyberspace and Digital Policy (United States Department of State)
Supply Chain Integrity Month (CISA)
April is National Supply Chain Integrity Month.
As Russia Plots Its Next Move, an AI Listens to the Chatter (Wired)
Data leak from Russian delivery app shows dining habits of the secret police (The Verge)