It doesn't look good. What used to be the AI Safety Summits were perhaps the most promising thing happening towards international coordination for AI Safety. This one was centrally coordination against AI Safety. In November 2023, the UK Bletchley Summit on AI Safety set out to let nations coordinate in the hopes that AI might not kill everyone. China was there, too, and included. The practical focus was on Responsible Scaling Policies (RSPs), where commitments were secured from the major labs, and on laying the foundations for new institutions. The summit ended with The Bletchley Declaration (full text included at link), signed by all key parties. It was the usual diplomatic drek, as is typically the case for such things, but it centrally said there are risks, and so we will develop policies to deal with those risks. And it ended with a commitment [...]
---
Outline:
(02:03) An Actively Terrible Summit Statement
(05:45) The Suicidal Accelerationist Speech by JD Vance
(14:37) What Did France Care About?
(17:12) Something To Remember You By: Get Your Safety Frameworks
(24:05) What Do We Think About Voluntary Commitments?
(27:29) This Is the End
(36:18) The Odds Are Against Us and the Situation is Grim
(39:52) Don't Panic But Also Face Reality
---
First published: February 12th, 2025
Source: https://www.lesswrong.com/posts/qYPHryHTNiJ2y6Fhi/the-paris-ai-anti-safety-summit
---
Narrated by TYPE III AUDIO.
In this week's episode, Kim Beesley and Jen Morris, community representatives from the SPA ethics board, speak about AI in health care and, in particular, speech pathology practice. As service users, Kim and Jen share what they are excited about, and how they would like to be assured that ethical reasoning is at the centre of any decision-making about how and when to use AI. Speech Pathology Australia acknowledge the Traditional Custodians of lands, seas and waters throughout Australia, and pay respect to Elders past, present and future. We recognise that the health and social and emotional wellbeing of Aboriginal and Torres Strait Islander peoples are grounded in continued connection to culture, country, language and community and acknowledge that sovereignty was never ceded.
SPA member resource: Artificial intelligence in speech pathology position statement: https://soundcloud.com/speechpathologyaustralia/ethical-ai-in-speech-pathology-part-2-community-voices-s06-e15?si=6142434801f043c7a35251343aeeec34&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing
Resources:
Register for the SPA 2024 Conference to see the AI in ethics debate on Wednesday, May 29th, 2024: https://www.speechpathologyaustralia.org.au/Public/Public/CPD-events/SPA-conference-2024/Home/SPA-Conference-Home.aspx
You can find part one of this series here: https://soundcloud.com/speechpathologyaustralia/ethical-ai-in-speech-pathology-part-1?si=02e9891f778748698dafa9935f8a50db&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing
The Bletchley Declaration: https://www.industry.gov.au/publications/bletchley-declaration-countries-attending-ai-safety-summit-1-2-november-2023
WHO Global report on AI in health: https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use
Free access to audio transcripts for all Speak Up Podcast episodes is available via the Association's Learning Hub:
1. Go to: www.speechpathologyaustralia.org.au/Public/…e616542.
2. Filter – Format – Podcast – Search
3. Select the podcast of your choice
4. Enrol (you will need to sign in or create an account)
5. Add to cart – Proceed to checkout – Submit
6. You will receive an email Order Confirmation with a link back to the Learning Hub
7. The Podcast and transcript will be available in your Learning Centre
For further enquiries, please email learninghub@speechpathologyaustralia.org.au
In this week's episode, Professor Emma Power, from UTS, speaks about AI in speech pathology practice. Emma speaks about the opportunities that using Generative AI can provide and the ethical considerations that we need to be thinking about. This conversation will continue next week when we hear from consumers about their perceptions of using AI, as well as hearing about the new SPA position statement, Artificial Intelligence in Speech Pathology: Ethical Considerations. Speech Pathology Australia acknowledge the Traditional Custodians of lands, seas and waters throughout Australia, and pay respect to Elders past, present and future. We recognise that the health and social and emotional wellbeing of Aboriginal and Torres Strait Islander peoples are grounded in continued connection to culture, country, language and community and acknowledge that sovereignty was never ceded.
Artificial intelligence in speech pathology position statement: https://soundcloud.com/speechpathologyaustralia/ethical-ai-in-speech-pathology-part-2-community-voices-s06-e15?si=6142434801f043c7a35251343aeeec34&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing
Resources:
The Conversation article: https://theconversation.com/will-ai-tech-like-chatgpt-improve-inclusion-for-people-with-communication-disability-196481
Resources that Emma mentions:
Otter AI (meeting note taker): https://get.otter.ai/otter_ai_chat/?utm_source=google_ads&utm_medium=search&utm_campaign=search-prospecting-nonbrand-ai-phrase-au-otteraichat-maxconv-02092024&utm_term=note%20taker%20ai&gad_source=1&gclid=EAIaIQobChMI8NvJo-DNhQMVWaVmAh1Log6PEAAYASAAEgIpuPD_BwE
Chat GPT: https://chat.openai.com/
HeyGen: https://www.heygen.com/?sid=rewardful&via=tri-u-le-th-anh&gad_source=1&gclid=EAIaIQobChMI1--g8uHNhQMVCaVmAh0ENwDGEAAYASAAEgLiE_D_BwE
Yoodli AI speech coach: https://www.youtube.com/@yoodli/videos
The Bletchley Declaration: https://www.industry.gov.au/publications/bletchley-declaration-countries-attending-ai-safety-summit-1-2-november-2023
WHO Global report on AI in health: https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use
Digital NSW: https://www.digital.nsw.gov.au/
Free access to audio transcripts for all Speak Up Podcast episodes is available via the Association's Learning Hub:
1. Go to: www.speechpathologyaustralia.org.au/Public/…e616542.
2. Filter – Format – Podcast – Search
3. Select the podcast of your choice
4. Enrol (you will need to sign in or create an account)
5. Add to cart – Proceed to checkout – Submit
6. You will receive an email Order Confirmation with a link back to the Learning Hub
7. The Podcast and transcript will be available in your Learning Centre
For further enquiries, please email learninghub@speechpathologyaustralia.org.au
Lester Kiewit speaks to Nazareen Ebrahim, CEO of Naz Consulting International, about the forthcoming second global AI Safety Summit, set to take place 21-22 May and hosted by South Korea. It will build on the "Bletchley Declaration" signed at the first summit, an agreement between countries - including the U.S. and China - to cooperate on AI safety. See omnystudio.com/listener for privacy information.
The second global AI Safety Summit, co-hosted by Britain and South Korea, is encountering difficulties with lower turnout and complex questions. While the first summit resulted in the "Bletchley Declaration" on AI safety, consensus on regulation has proven challenging. Thorny issues, such as copyright, data scarcity, and environmental impact, now overshadow existential threats. OpenAI CEO Sam Altman emphasizes the need for an energy breakthrough and seeks to raise funds for computer chip production. However, experts caution against relying solely on scientific breakthroughs and financing. With a decrease in attendance, attention has shifted from existential risks to the resources necessary for AI development. Despite these challenges, the goal of the summit is to make progress in AI safety and innovation. --- Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message
For those following the regulation of artificial intelligence, there is no doubt passage of the AI Act in the EU is likely top of mind. But proposed policies, laws and regulatory developments are taking shape in many corners of the world, including in Australia, Brazil, Canada, China, India, Singapore and the U.S. Not to be left behind, the U.K. held a highly touted AI Safety Summit late last year, producing the Bletchley Declaration, and the government has been quite active in what the IAPP Research and Insights team describes as a “context-based, proportionate approach to regulation.” In the upper chamber of the U.K. Parliament, Lord Holmes, a member of the influential House of Lords Select Committee on Science and Technology, introduced a private member's bill late in 2023 that proposes the regulation of AI. The bill also just received a second reading in the House of Lords on 22 March. Lord Holmes spoke of AI's power at a recent IAPP conference in London. While there, I had the opportunity to catch up with him to learn more about his Artificial Intelligence (Regulation) Bill and what he sees as the right approach to guiding the powers of this burgeoning technology.
Discusses recent developments in the regulation of artificial intelligence. Our guest today is Brenda Leong, who is a Partner at Luminos.Law and an adjunct faculty member teaching privacy and information security at George Mason University.
Additional resources:
· White House Fact Sheet on the Executive Order on Safe, Secure, and Trustworthy AI: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
· European Union AI Act overview: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
· The Bletchley Declaration: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
· G7 Guiding Principles and Code of Conduct for AI: https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process
· Luminos.Law: https://luminos.law/
· CITI Program's Essentials of Responsible AI course: https://about.citiprogram.org/course/essentials-of-responsible-ai/
What was achieved at the recent AI Safety Summit, hosted by the United Kingdom? How do the geopolitics of technology now impact economies and societies? And how can governments equip themselves better to handle these complex changes? In this episode, senior UK public servant Jonathan Black joins Jennifer Jackett to talk about AI, and the policy responses to it from governments across the globe. Jonathan Black is a Heywood Fellow at the Blavatnik School of Government, University of Oxford. Prior to this appointment he was the UK Prime Minister's representative for the recent AI Safety Summit, G7 and G20 Sherpa and Deputy National Security Advisor. Jennifer Jackett is a Sir Roland Wilson Scholar at the ANU National Security College.
Show notes:
ANU National Security College academic programs: find out more
AI Safety Summit 2023: find out more
The Bletchley Declaration: find out more
We'd love to hear from you! Send in your questions, comments, and suggestions to NatSecPod@anu.edu.au. You can tweet us @NSC_ANU and be sure to subscribe so you don't miss out on future episodes. The National Security Podcast is available on Acast, Apple Podcasts, Spotify, and wherever you get your podcasts. Hosted on Acast. See acast.com/privacy for more information.
It's been a whirlwind month, and not in the way that one might have predicted. Opening with the UK's AI Safety Summit and ending with the anniversary of ChatGPT, November was already set to be a big month for the field of AI, particularly when it came to global legislation. As the month progressed, it turned out that November had even more planned, in the form of a huge upheaval at OpenAI. In this episode, Jane and Rory look at some of the standout moments of November 2023 and draw a line between them all to pick out a trend for the month.
Read more:
UK aims to be an AI leader with November safety summit
Can the UK's AI Safety Summit succeed?
Bletchley Declaration draws cautious approval
The UK's hollow AI Safety Summit has only emphasized global divides
Rishi Sunak's stance on AI goes against the demands of businesses
Sam Altman joins Microsoft to lead new "advanced AI research team"
OpenAI chaos: What next for Sam Altman and the beleaguered tech darling?
Sam Altman makes triumphant return to OpenAI after days of chaos
Salesforce looks to poach outbound OpenAI staff with "full cash" compensation offer
GRU's Sandworm implicated in campaign against Danish electrical power providers. Paris wastewater agency hit by cyberattack. LockBit hits Boeing. Bletchley Declaration represents a consensus starting point for AI governance. The US Executive Order on artificial intelligence is out. Guest Austin Reid of ABS Group discusses ship and shore challenges for security and the current and emerging regulatory landscape. On the Learning Lab, Dragos' Mark Urban presents part 1 of 3 discussing building automation systems with Dragos' Daniel Gaeta and Zach Spencer.
Control Loop News Brief.
GRU's Sandworm implicated in campaign against Danish electrical power providers: The attack against Danish critical infrastructure (SektorCERT); Exclusive: This pizza box-sized equipment could be key to Ukraine keeping the lights on this winter (CNN)
Paris wastewater agency hit by cyberattack: Greater Paris wastewater agency dealing with cyberattack (The Record); Cyberattaque D'Ampleur Au SIAAP (SIAAP)
Iranian hacktivists claim an attack on a Pennsylvania water utility: Iranian-Linked Cyber Army Had Partial Control Of Aliquippa Water System (BeaverCountian.com); Municipal Water Authority of Aliquippa hacked by Iranian-backed cyber group (CBS News)
LockBit hits Boeing: Ransomware groups rack up victims among corporate America (CyberScoop); #StopRansomware: LockBit 3.0 Ransomware Affiliates Exploit CVE 2023-4966 Citrix Bleed Vulnerability (CISA)
Bletchley Declaration represents a consensus starting point for AI governance: Can Rishi Sunak's big summit save us from AI nightmare? (BBC); The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 (Gov.uk)
The US Executive Order on artificial intelligence is out: Administration Actions on AI (AI.gov)
Control Loop Interview. Guest is Austin Reid of ABS Group discussing ship and shore challenges for security and the current and emerging regulatory landscape.
Control Loop Learning Lab. On the Learning Lab, Mark Urban discusses building automation systems in part 1 of 3 with Dragos' Daniel Gaeta, ICS/OT Cybersecurity Senior Solutions Architect, and Zach Spencer, Senior Enterprise Account Executive.
Control Loop OT Cybersecurity Briefing. A companion monthly newsletter is available through free subscription and on the CyberWire's website.
Curious about the future of privacy in Australia? Join us as we delve into the world of privacy regulation in Australia. Our expert panel shares their thoughts on the Government's response to the Privacy Act Review Report. It's a follow-up to episode #22 ‘Privacy is Not Dead'. Returning guest Anna Johnston, founder and Principal of Salinger Privacy, is joined by Ryan Black, Head of Policy for the Tech Council of Australia, and Kate Bower, a fellow at the UTS Human Technology Institute, currently on sabbatical from CHOICE as Consumer Data Advocate. The panel discuss:
- highlights and gaps in the Government's response
- the definition of ‘personal information'
- the future of the small business and political exemptions
- legitimate business interests, fair and reasonable use tests, and best interest duties
- interoperability with international privacy frameworks
- privacy and artificial intelligence
- the urgency of reform
- what happens next
Relevant Links:
Salinger Privacy blog, including Anna Johnston's 2015 article about the need to reform the definition of personal information: https://www.salingerprivacy.com.au/2015/04/19/bradley-coopers-taxi-ride-a-lesson-in-privacy-risk/
Arvind Narayanan's “No Silver Bullet” paper about de-identification: http://randomwalker.info/publications/no-silver-bullet-de-identification.pdf
Arvind Narayanan's AI Snake Oil Substack: https://www.aisnakeoil.com
This Week in Digital Trust podcast, latest episode featuring an interview with Johanna: https://elevenm.com.au/podcast/episode/86-johanna-weaver-the-future-of-tech-policy/
The Future of Privacy Forum: https://fpf.org/
UTS Human Technology Institute's State of AI Governance report: https://www.uts.edu.au/human-technology-institute/news/report-launch-state-ai-governance-australia
DP-REG joint submission to the Department of Industry, Science and Resources (DISR) consultation on the ‘Safe and responsible AI in Australia' Discussion Paper: https://www.accc.gov.au/about-us/media/media-updates/digital-platform-regulators-make-joint-submission-on-ai
US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
UK's consultation on a pro-innovation approach to AI regulation (web-ready PDF, publishing.service.gov.uk)
The Bletchley Declaration by Countries Attending the AI Safety Summit (November 2023): https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
China Global AI Governance Initiative: http://gd.china-embassy.gov.cn/eng/zxhd_1/202310/t20231024_11167412.htm
They discuss the Bletchley Declaration, the AI safety agreement signed by 28 countries, and some of the key points that came from it. They smoke the AJ Fernandez San Lotano Oval Maduro, paired with cask-strength Balcones Straight Bourbon Whiskey finished in wine casks. https://www.msn.com/en-us/news/world/the-bletchley-declaration-is-no-game-changer-but-it-s-a-solid-start-to-the-global-fight-for-ai-safety/ar-AA1jdTvg
This week we dip back into the postbag to look at some more listener questions. First up we return to our episode looking at recent shifts in abortion rates – is the narrative of ‘it's my body and I'll do what I want' what is truly driving increases in abortion figures in recent years, or is that a bit of a myth? We also take a closer look into recent reports that expose how cutting-edge artificial intelligence models are being trained by incredibly underpaid and exploited workers in the developing world. How should we as Christians respond to what is being claimed as the exploitation of workers around the globe in the name of technological advancement that seeks to benefit humanity? Should governments moderate this kind of employment or is there an argument that digital technology is actually positively transforming economic outlook in the third world? Finally we wrap up today's episode considering if the UK government's recent AI Safety Summit is meaningless ‘motherhood and apple pie' platitudes and, if so, how can we actually push for meaningful regulation?
- The WIRED article on the underpaid workers from poorer nations helping train AI data sets: https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/
- The UK government's Bletchley Declaration on AI safety: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
- Subscribe to the Matters of Life and Death podcast: https://pod.link/1509923173
- If you want to go deeper into some of the topics we discuss, visit John's website: http://www.johnwyatt.com
- For more resources to help you explore faith and the big questions, visit: http://www.premierunbelievable.com
This week, Tristan and Tasia unpack the highlights and key outcomes from the UK's recent AI Safety Summit, where global leaders, tech giants, and AI researchers convened to forge the Bletchley Declaration. We dive into the debate over open-source AI, the commitments from major tech companies to preemptively test AI products, and the challenges of aligning international AI regulations, particularly around frontier AI. Join us as we discuss whether these collaborative efforts to secure a safe future with AI will actually be effective, given various competing geopolitical priorities and relentless technological advancements. Also: another Microsoft AI whoopsie-doopsie.
On Monday, October 30, 2023, the U.S. White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Two days later, a policy paper was issued by the U.K. government entitled The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. It was signed by 29 countries, including the United States and China, the global leaders in AI research. In this Fully Connected episode, Daniel and Chris parse the details and highlight key takeaways from these documents, especially the extensive and detailed executive order, which has the force of law in the United States.
It's said that great power comes with great responsibility, but what if that power is not in human hands at all? This week marked a historic milestone with a global conference on AI risks, culminating in the landmark Bletchley Declaration. Signed and sealed by over 25 nations, this pivotal agreement promises to steer the direction of AI's future. But are we focusing too much on the long-term impact of AI? Joining Anton to answer this question is Professor Barry O'Sullivan, the European Union Delegate to the Global Partnership on AI, and Professor of AI at UCC.
Progress in artificial intelligence is enormous, and the discussions have long since stopped being only about the opportunities. On the contrary: more and more experts are voicing the concern that a technology could spin out of control here, one more powerful than any other IT idea that has ever existed. But what dangers loom? How big are the risks really? This week the British government issued invitations to Bletchley Park, a historic site where Alan Turing once helped crack the Nazis' Enigma code. Government representatives from the West and China met there, along with entrepreneurs like Elon Musk and Sam Altman, and numerous renowned experts. What did they talk about? What came of it? What is in the "Bletchley Declaration"? We discuss all of this in this episode.
Elon Musk declared artificial intelligence “one of the most disruptive forces in history” in a sit-down conversation with British Prime Minister Rishi Sunak that dove into the dangers and opportunities of AI on Thursday, capping off the UK's inaugural AI Safety Summit. “AI will be a force for good most likely,” Musk said. “But the probability of it going bad is not zero percent.” The two men spoke in an interview-style chat from a stage at Lancaster House, a government venue in central London often used for diplomatic purposes, before the conversation was opened up to questions from journalists. The conversation was then posted for streaming on Musk's personal account on X, the social media site formerly known as Twitter, which he owns. Musk was present throughout the two-day event held mainly at Bletchley Park, the headquarters for the Allied Forces codebreaking program during World War II, along with US Vice President Kamala Harris, OpenAI CEO Sam Altman, and other notable politicians and global tech leaders. Chinese officials were also present at the event. “I'm glad to see at this point that people are taking AI seriously,” Musk said to Sunak on Thursday. “Thanks for this summit. I think it will go down in history as quite important.” Musk unpacked several predictions for AI, including a future where no jobs would be necessary and AI companionship would be one of the highest forms of friendship. In office for just over a year, Sunak has restored some calm to British politics, but has also faced challenges over his elite background, having studied at the exclusive Winchester College and at Oxford and Stanford universities. Before entering politics, he worked for banks and hedge funds, including Goldman Sachs. On the first day of the AI event, more than 25 countries and the European Union signed the Bletchley Declaration, agreeing to work together to create a united approach to oversight in efforts to deploy AI technology in a “human-centric, trustworthy and responsible” way, underscoring the “potential for serious, even catastrophic, harm” that many tech leaders have expressed concern over.
Musk and world leaders
Musk's conversation with Sunak is one of numerous chats with world leaders the SpaceX and Tesla CEO has joined in the past few months. It showed his growing influence in geopolitical affairs as well as various technology and industrial sectors. Musk met with Israeli Prime Minister Benjamin Netanyahu in September, weeks before the outbreak of the Israel-Hamas war. Netanyahu met with Musk to discuss artificial intelligence and antisemitism on the Musk-owned social media platform X, formerly known as Twitter, at a time Musk was warring with the Anti-Defamation League. Musk also sat down with French President Emmanuel Macron numerous times in the past year. Other big names that recently made time for Musk in their schedule include Italian Prime Minister Giorgia Meloni in June, Indian Prime Minister Narendra Modi in June and Turkish President Recep Tayyip Erdogan in September. At the event Thursday, Musk noted that if the AI powerhouses such as the United States and the UK “are aligned on AI safety, that is a good thing.” And he suggested other global powers such as China should also remain involved in discussions.
He compared AI to a magic genie and noted that fairy tales about magic genies that grant wishes “don't end well” and warn people to “be careful what you wish for.” Musk has increasingly become a fixture in international affairs, making headlines not only for his meetings with heads of state but also for the provision – or lack thereof – of SpaceX's Starlink satellite services in war-torn regions.
Starlink in war zones
The billionaire received backlash this week for pledging to provide aid organizations in Gaza with Starlink satellite service as the besieged strip struggles with internet connectivity. The Israeli Minister of Communications Shlomo Karhi said on X that Hamas “will use it for terrorist activities.” “Perhaps Musk would be willing to condition it with the release of our abducted babies, sons, daughters, elderly people. All of them! By then, my office will cut any ties with starlink,” Karhi posted. Musk made the announcement on X after US Democratic Rep. Alexandria Ocasio-Cortez said that “cutting off all communication to a population of 2.2 million is unacceptable.” “Starlink will support connectivity to internationally recognized aid organizations in Gaza,” Musk said in a post replying to Ocasio-Cortez. In Walter Isaacson's new biography of the eccentric billionaire, titled “Elon Musk,” it was revealed that Musk secretly ordered his engineers not to turn on his company's Starlink satellite communications network near the Crimean coast last year to avoid supporting a Ukrainian sneak attack on the Russian naval fleet. “There was an emergency request from government authorities to activate Starlink all the way to Sevastopol. The obvious intent being to sink most of the Russian fleet at anchor,” Musk posted on X in September. Musk's decision, which left Ukrainian officials begging him to turn the satellites on, was driven by an acute fear that Russia would respond to a Ukrainian attack on Crimea with nuclear weapons, a fear driven home by Musk's conversations with senior Russian officials, according to Isaacson. “If I had agreed to their request, then SpaceX would be explicitly complicit in a major act of war and conflict escalation,” Musk tweeted in September.
Sunak's AI Summit
The billionaire's presence at the UK's summit brought an increased level of media attention to an event that is key to Sunak's hopes for global AI regulation. Sunak and Musk discussed how digital super-intelligence could affect the public and require regulation the same way industries such as aviation and cars require regulation. “I agree with the vast majority of regulations,” Musk said. “A referee is a good thing.” At the same time, Musk reiterated his “fairly utopian” belief that AI could create an “age of abundance” with “no shortage of goods and services.” He says AI could lead to a future where “no job is needed” and people enjoy a universal high income. He mentioned a world of AI tutors and companionship for people like his son, who has learning disabilities and difficulty making friends. The next AI safety summits are set to be hosted by Korea and France and are scheduled for 2024. - by Jennifer Korn, CNN. See omnystudio.com/listener for privacy information.
Generated by Tailor. Get your own personalized daily podcast! Sign up for free. In this episode, we discuss key tech news stories, including global efforts around AI safety, warnings about the theft of technology, and the endless possibilities – and risks – of AI. We also cover recent shake-ups in the world of venture capital, the legacy of The Beatles, and new initiatives to democratize cryptocurrency trading. Plus, we explore how a rapidly expanding Indian tech market is driving collaboration and innovation. Music: Mosaic [Electro] by Hardcore Scm. Licensed under: http://creativecommons.org/licenses/by/3.0/
News articles cited in this episode:
- ‘Existential to who?' US VP Kamala Harris urges focus on near-term AI risks https://www.politico.eu/article/existential-to-who-us-vp-kamala-harris-urges-focus-on-near-term-ai-risks/?utm_source=RSS_Feed&utm_medium=RSS&utm_campaign=RSS_Syndication
- British PM Rishi Sunak secures ‘landmark' deal on AI testing https://www.politico.eu/article/british-pm-rishi-sunak-secures-landmark-deal-on-ai-testing/?utm_source=RSS_Feed&utm_medium=RSS&utm_campaign=RSS_Syndication
- Rishi Sunak says tech giants 'can't mark their own homework' on Artificial Intelligence as he praises Elon Musk for warning about its 'risks' ahead of his talks with billionaire (but PM denies he's lining up a post-No10 career) https://www.dailymail.co.uk/news/article-12699017/Rishi-Sunak-says-tech-giants-mark-homework-Artificial-Intelligence-praises-Elon-Musk-warning-risks-ahead-talks-billionaire-PM-denies-hes-lining-post-No10-career.html?ns_mchannel=rss&ns_campaign=1490&ito=1490
- The Bletchley Declaration: 29 countries form coalition to tackle risks of advanced AI https://economictimes.indiatimes.com/tech/technology/the-bletchley-declaration-29-countries-form-coalition-to-tackle-risks-of-advanced-ai/articleshow/104909094.cms
- Decoding AI: Rishi Sunak's call for action https://www.politico.eu/podcast/decoding-ai-rishi-sunaks-call-for-action/?utm_source=RSS_Feed&utm_medium=RSS&utm_campaign=RSS_Syndication
- Kamala Harris seizes agenda as Rishi Sunak's AI summit kicks off https://www.politico.eu/article/us-vp-kamala-harris-seizes-agenda-as-rishi-sunaks-ai-summit-kicks-off/?utm_source=RSS_Feed&utm_medium=RSS&utm_campaign=RSS_Syndication
- What to expect from AI Safety Summit today as science secretary warns of Terminator-style ‘loss of control' https://www.independent.co.uk/news/uk/politics/ai-safety-summit-sunak-elon-musk-b2440259.html
Bletchley Declaration represents a consensus starting point for AI governance. Lazarus Group prospects blockchain engineers with KANDYKORN. Boeing investigates ‘cyber incident' affecting parts business. NodeStealer's use in attacks against Facebook accounts. Citrix Bleed vulnerability exploited in the wild. MuddyWater spearphishes Israeli targets in the interest of Hamas. India to investigate alleged attacks on iPhones. Tim Starks from the Washington Post on the SEC's case against SolarWinds. In today's Threat Vector segment, David Moulton from Unit 42 is joined by Matt Kraning of the Cortex Expanse Team for a look at Attack Surface Management. And Venomous Bear rolls out some new tools. On the Threat Vector segment, David Moulton, Director of Thought Leadership for Unit 42, is joined by Matt Kraning, CTO of the Cortex Expanse Team. They dive into the latest Attack Surface Management Report. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/210
Threat Vector: Read the Attack Surface Management Report. Please share your thoughts with us for future Threat Vector segments by taking our brief survey. To learn what is top of mind each month from the experts at Unit 42, sign up for their Threat Intel Bulletin.
Selected reading:
The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 (GOV.UK)
US Vice President Harris calls for action on "full spectrum" of AI risks (Reuters)
Elastic catches DPRK passing out KANDYKORN (Elastic Security Labs)
North Korean Hackers Targeting Crypto Experts with KANDYKORN macOS Malware (The Hacker News)
Lazarus used ‘Kandykorn' malware in attempt to compromise exchange — Elastic (Cointelegraph)
An info-stealer campaign is now targeting Facebook users with revealing photos (Record)
Mass Exploitation of 'Citrix Bleed' Vulnerability Underway (SecurityWeek)
MuddyWater eN-Able spear-phishing with new TTPs | Deep Instinct Blog (Deep Instinct)
Centre's Cyber Watchdog CERT-In To Probe iPhone "Hacking" Attempt Charges (NDTV.com)
Over the Kazuar's Nest: Cracking Down on a Freshly Hatched Backdoor Used by Pensive Ursa (Aka Turla) (Unit 42)
Learn more about your ad choices. Visit megaphone.fm/adchoices
Join our Discord: https://discord.gg/TRrgAyeM
Buy the merch: https://www.thisdayinaimerch.com/
This week the AI guys unpack the White House's sweeping executive order on regulating AI - will this lead to the death of open-source models? They also discuss the vague and fluffy Bletchley Declaration signed by world leaders, why Geoffrey Hinton just won't stop fearmongering, and introduce some hilarious new merch including a life-size shower curtain! Tune in for hot takes on the AI ethics debate, prompt engineering tricks, and key insights on the future of language models.
CHAPTERS:
=====
00:00 - King Charles on AI (Cold Open)
00:20 - Thoughts on White House AI Executive Order
23:09 - The Bletchley Declaration & AI Safety Summit
38:04 - LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2 & They Killed Tay!
48:34 - Adversarial Attacks and Defenses in Large Language Models: Old and New Threats Paper
51:51 - Mike proposes What The Future of AI Computing Might Look Like
55:00 - Leaked: The Secret Prompt Powering ChatGPT's New Multi-Tool Mode (and How to Hack It)
1:01:39 - Anthropic Have Raised More Billions & Our Merch Store!
SOURCES:
======
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
https://www.aisnakeoil.com/p/what-the-executive-order-means-for
https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
https://www.businessinsider.com/sam-altman-and-demis-hassabis-just-want-to-control-ai-2023-10
https://twitter.com/ylecun/status/1718263147591573949?s=20
https://twitter.com/ldjconfirmed/status/1718456393026490523
Leaked Prompt: https://raw.githubusercontent.com/spdustin/ChatGPT-AutoExpert/main/_system-prompts/all_tools.md
https://www.cnbc.com/2023/10/27/google-commits-to-invest-2-billion-in-openai-competitor-anthropic.html
PAPERS:
======
https://arxiv.org/pdf/2310.17688.pdf
https://arxiv.org/pdf/2310.20624.pdf
https://arxiv.org/pdf/2310.19737v1.pdf
Israel's justification for airstrikes on a Palestinian refugee camp has sparked a debate over the rules of war. Elon Musk and Kamala Harris are in the UK as Britain publishes the Bletchley Declaration, part of the work to find a way to mitigate the dangers of AI. And a pause on rate hikes - but what comes next for the global economy? Visit the Thomson Reuters Privacy Statement for information on our privacy and data protection practices. You may also visit megaphone.fm/adchoices to opt out of targeted advertising.
The US central bank maintains its pause on raising interest rates, while leaving the door open for future hikes, as inflation remains well above its 2 percent target. Also, 28 countries and the EU sign the "Bletchley Declaration" calling for international cooperation on ensuring AI safety. Plus the fate of FTX founder Sam Bankman-Fried lies in a jury's hands after the closing arguments in his fraud trial.
A world-first commitment to safe and responsible Artificial Intelligence development has been signed by 29 countries including the UK, US, China and Australia, as well as the EU. New Zealand has not signed the Bletchley Declaration as it wasn't represented at the A.I. Safety Summit in the UK this week. The declaration affirms that, for the good of all, A.I. should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible. But Canterbury University's Senior Lecturer in Data Science, Giulio Dalla Riva, says it's a good start. Dalla Riva spoke to Ingrid Hipkiss.
AI for Everyone:
Early release. We usually drop episodes on a Sunday morning, but the UK AI Safety Summit just concluded and we felt that we needed to give you all the updates straight away.
Uncover the World of AI with Myles Dhillon
Artificial Intelligence can feel like an intricate labyrinth, especially with AI, Chat GPT, and Machine Learning consistently making headlines. Are you concerned about falling behind? Well, you're not alone! I'm Myles Dhillon, a non-tech enthusiast deeply fascinated by AI's transformative impact on our lives. Join me as I break down the week's top AI news, covering Artificial Intelligence, Machine Learning, Large Language Models (LLM), and updates from tech giants like Apple, Google, OpenAI, Tesla, and Microsoft. In just 10 minutes, I'll provide you with a concise, digestible update, ensuring you stay informed without sifting through lengthy headlines. Whether you're a tech novice or a seasoned AI enthusiast, 'AI for Everyone' is your shortcut to comprehending the ever-evolving AI landscape. Subscribe now to join the AI conversation!
UK AI Safety Summit:
Inside the UK AI Safety Summit. Get a front-row seat to the UK AI Safety Summit where the biggest names in AI gathered. Elon Musk, Sam Altman, Demis Hassabis from Google DeepMind, and more were in attendance. The lineup included heavyweights like Microsoft President Brad Smith, Meta AI's Yann LeCun and Nick Clegg, and many other luminaries from Amazon, OpenAI, Nvidia, IBM, and beyond. Discover the details of the 'Bletchley Declaration' that marks a historic milestone in international AI governance. The US, EU, China, and over 25 countries are now united in overseeing AI risks. Explore the implications of this collaboration and the optimism it brings for the future. Elon Musk's 80/20 perspective on AI's benefits and dangers and the potential for global AI cooperation will be explored. Stay informed about the event that shapes the AI landscape worldwide.
Asteroid Mining:
Unveiling the Space Resources Frontier. SpaceX is gearing up to launch a mission to the Moon's surface as part of the Artemis program. Simultaneously, the smaller spacecraft, Brokkr-2, created by Astroforge, is on its way to an asteroid. Discover the exciting world of asteroid mining and its potential to revolutionize resource acquisition. Astroforge recently completed a successful test-fire of the rockets crucial for the Brokkr-2 mission, bringing us closer to the first private mission beyond Earth and the Moon. As Earth's resources become scarcer and the demand for metals soars, asteroids in our solar system hold the key. These celestial bodies are rich in essential metals like cobalt, nickel, and platinum-group metals, often in higher concentrations than on Earth. Astroforge is at the forefront of commercializing space mining. We delve into the challenges, risks, and rewards of this spacefaring endeavor. Join us to explore the cosmos and the future of mining.
Chat PDF:
Revolutionizing Legal Practices with PDF and AI. Welcome to Chat PDF, where we explore how the convergence of PDF uploads and Large Language Models (LLM) is reshaping the legal profession. Discover how the legal landscape is evolving with the power of AI. We'll delve into the impact of using PDF uploads to interact with LLMs, making legal processes more efficient and dynamic. This transformation is not just a game-changer; it's a game-evolver! Join us for insightful discussions, expert interviews, and practical insights into the future of the legal field.
Unearth the potential of PDFs and AI as we journey through this dynamic intersection. Subscribe now to stay ahead in the legal tech realm! Get in touch with Myles at mylesdhillon@gmail.com
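At its core, the "chat with a PDF" workflow the episode describes is simple: extract the document's text, then hand it to a language model together with the user's question. Below is a minimal sketch of that pattern in Python, assuming the pypdf and openai packages and an API key in the environment; the file name contract.pdf, the model name, and the prompt wording are illustrative placeholders, not any particular product's implementation.

```python
# Minimal "chat with a PDF" sketch: extract text, then ask an LLM about it.
# Assumes `pip install pypdf openai` and OPENAI_API_KEY set in the environment.
from pypdf import PdfReader
from openai import OpenAI


def pdf_to_text(path: str) -> str:
    """Concatenate the extracted text of every page in the PDF."""
    reader = PdfReader(path)
    # extract_text() can return None for image-only pages, hence `or ""`.
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def ask_about_pdf(path: str, question: str) -> str:
    """Send the PDF text plus a question to a chat model and return its answer."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative placeholder; any chat model works
        messages=[
            {"role": "system",
             "content": "Answer using only the document provided."},
            {"role": "user",
             "content": f"Document:\n{pdf_to_text(path)}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical example file and question.
    print(ask_about_pdf("contract.pdf", "What are the termination clauses?"))
```

Production tools typically layer retrieval on top of this (chunking long documents and sending only the most relevant passages to the model), since context windows are finite.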
① Australian Prime Minister Anthony Albanese will start a four-day visit to China. What will be on his agenda? (00:43) ② Twenty-eight countries, including the US, UK and China, have agreed to cooperate over AI development. How significant is this commitment? (13:57) ③ Japan starts third round of nuclear-contaminated water discharge. But questions remain regarding TEPCO's management and transparency. (24:32) ④ US Federal Reserve holds interest rates at 22-year high. (33:49) ⑤ Flying Tigers veterans are visiting China. We spoke to one of them who just spent his 103rd birthday in Beijing. (50:56)
The Bletchley Declaration points to the opportunities and risks of AI; OpenAI rolls out the GPT-4 (All Tools) model; open-source language models could make bioterrorism easier; and Midjourney can now generate custom styles. heise.de/ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast
At a recent summit held at Bletchley Park in the U.K., leaders from 28 countries, including the U.S. and China, signed a declaration acknowledging the potential risks and dangers of artificial intelligence. The declaration, known as "The Bletchley Declaration", urges international cooperation in ensuring AI technology remains human-centric, trustworthy, and responsible. While no specific policy goals were outlined, follow-up meetings are scheduled to continue the conversation around cooperation on AI safety. --- Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message Support this podcast: https://podcasters.spotify.com/pod/show/tonyphoang/support
Here in England we've just had the first AI Safety Summit, a huge gathering of governments from around the world to discuss artificial intelligence. And to discuss, in particular, the topics of risk, safety, privacy and various related problems. A document called the Bletchley Declaration was drawn up, a sort of joint statement by 29 countries. I analyse it in this video, offering some reflections on the points I find most interesting. If you want to read it in full, you can find it here.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Bletchley Declaration on AI Safety, published by Hauke Hillebrandt on November 1, 2023 on The Effective Altruism Forum. The Bletchley Declaration was just released at the AI Safety Summit.
Tl;dr: The declaration underscores the transformative potential and risks of AI. Countries, including major global powers, commit to harnessing AI's benefits while addressing its challenges, especially the dangers of advanced "frontier" AI models. Emphasizing international collaboration, the declaration calls for inclusive, human-centric, and responsible AI development. Participants advocate for transparency, research, and shared understanding of AI safety risks, with plans to reconvene in 2024.
Full text:
Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community's efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.
AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.
Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.
Particular safety risks arise at the 'frontier' of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today's most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent.
These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the...