The EU bets €200 billion on AI
"Plug, baby, plug!" – so declared French President Emmanuel Macron on the first day of the AI Action Summit in Paris. His message was clear: it is time to give the European AI industry a serious boost. The Grand Palais was filled to bursting with tech executives and top politicians from around the world, among them European Commission President Ursula von der Leyen. She revealed why the conference has changed its name from AI Safety Summit to AI Action Summit – and yes, there was money in the goody bag. Macron is pouring a full 109 billion euros into French AI, but he was not the only one arriving with big numbers. The next day, von der Leyen took the stage and followed suit with European investments. But was it all just debate and toasts? What actually happened among the researchers and companies? Did they get what they wanted, or were their conversations about entirely different things than money and geopolitics? We talk to Nikolaj Munch Andersen, in-house AI specialist at the Danish Ministry of Foreign Affairs, who spent two days with the researchers at the French polytechnic university – where the elite of the French tech industry is trained. Listen to this episode of Techtopia, where we take you along to the AI Action Summit in Paris! Featuring: Nikolaj Munch Andersen, AI specialist, chief consultant, Ministry of Foreign Affairs. Links: EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence | Shaping Europe's digital future; Artificial Intelligence Action Summit
Feds derail Raptor Train
Newmark creates Volunteer Network for Civil Cyber Defense
US to host global AI safety summit
Thanks to today's episode sponsor, Conveyor. Does the next security questionnaire that hits your inbox make you want to throw your laptop out the window? If so, don't do it. You should check out Conveyor first. Conveyor is the market leader in instant, generative AI answers to entire security questionnaires, no matter the format they are in. Yes, that's right. Upload any file - Excels, Word docs, and even PDFs - for instant processing, and tackle any portal-based questionnaire with a browser extension that auto-scrolls and fills in answers for you. Try a free proof of concept today at www.conveyor.com. Get the story behind the headlines at CISOSeries.com.
In this episode of Tech Talks Daily, I'm joined by Dr. Marc Warner, the visionary founder of Faculty, a company dedicated to deploying safe AI systems that merge human expertise with artificial intelligence to deliver exceptional performance. With a rich background that spans over a decade of working with government agencies and leading brands, Marc is at the forefront of helping organizations harness the power of AI to make better decisions. Before founding Faculty, Marc was a Marie Curie Research Fellow in Physics at Harvard University, and his academic work has been featured in prestigious journals like Nature. Recently, he was one of the few London business leaders selected to attend the AI Safety Summit at Bletchley Park. In our conversation, Marc makes a grounded and pragmatic case for the regulation of AI, emphasizing the importance of what he terms “mundane” or sensible AI regulation. He argues that while AI is often overhyped in the short term, it represents the most significant technological transformation of our time. Over the next decade, Marc believes that every business will need to evolve into a tech-driven AI business to survive and thrive. Those who lead in AI safety, he suggests, will not only protect their organizations but also set the standard for the industry, while those who remain on the sidelines risk falling behind. Marc also shares insights into Faculty's innovative AI solutions, which have had a profound impact on various sectors. From enabling the large-scale moderation of terrorist content at the request of the UK Prime Minister, to powering NHS pandemic forecasting and optimizing millions of call center interactions, Faculty's AI applications demonstrate the tangible benefits of integrating AI into business strategies. Marc stresses that AI should not be siloed but instead woven into the fabric of an organization, enabling better human decisions and driving measurable business outcomes. Throughout our discussion, Marc underscores the need for humility when engaging AI experts and the boldness required to overcome organizational barriers. He advocates for aligning AI initiatives with core business strategy rather than pursuing disconnected AI strategies, which often lead to wasted resources and missed opportunities. Join us as we explore the future of AI with Dr. Marc Warner and discuss how businesses can effectively integrate AI to not only stay competitive but also lead in this new era of technological advancement. How will your business adapt to the AI-driven future, and what steps can you take today to ensure you're on the right path? Tune in to discover Marc's expert perspective on navigating these challenges.
This episode from Web3 with a16z Crypto is all about innovation on a global scale, exploring both ecosystem and individual talent levels. We examine what works and what doesn't, how certain regions evolve into startup hubs and economic powerhouses, and what constitutes entrepreneurial talent. We also discuss the nature of ambition, the journey to finding one's path, and broader mindsets for navigating risk, reward, and dynamism across various regions, with a particular focus on London and Europe.
Joining us is Matt Clifford, who has played a pivotal role in the London entrepreneurial and tech ecosystem since 2011 and is the Chair of Entrepreneur First and of the UK's Advanced Research and Invention Agency (ARIA). Before this episode was recorded, Matt served as the Prime Minister's representative for the AI Safety Summit at Bletchley Park. Recently, he was appointed by the UK Science Secretary to deliver an “AI Opportunities Action Plan” to the UK government.
This episode was recorded live from Andreessen Horowitz's first international office in London. For more on our efforts and additional content, visit a16zcrypto.com/uk.
Resources:
Find Matthew on Twitter: https://x.com/matthewclifford
Find Sonal on Twitter: https://x.com/smc90
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Dems Desperate to Stop Musk and Trump Interview
https://www.audacy.com/989word
The Tara Show
Follow us on Social Media
Join our Live Stream Weekdays - 6am to 10am
Facebook: https://www.facebook.com/989word
Rumble: https://rumble.com/c/c-2031096
X: https://twitter.com/989word
Instagram: https://www.instagram.com/989word/
"Red Meat, Greenville." 08/13/24
Photo caption: BLETCHLEY, ENGLAND - NOVEMBER 1: Tesla, X (formerly known as Twitter) and SpaceX CEO Elon Musk attends the first plenary session on Day 1 of the AI Safety Summit at Bletchley Park on November 1, 2023 in Bletchley, England. The UK Government are hosting the AI Safety Summit, bringing together international governments, leading AI companies, civil society groups and experts in research to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. (Photo by Leon Neal/Getty Images)
We all know about the potential threats as AI becomes more advanced. They range from spreading disinformation and undermining our democracy (as discussed in our previous episode with Miles Taylor) to completely upending the job market. But could AI be used to radically transform the way bureaucracy works (making it more efficient) and help us Order the Disorder? To discuss whether AI could be used as an ‘Ordering Force' if regulated properly, Jason is joined by Marc Warner. Marc is the founder of Faculty.AI and has worked with many UK government agencies and leading multinational brands to implement impactful AI solutions. He sat on the Prime Minister's AI Council, helped the NHS use AI to save thousands of lives during the pandemic, and attended the first ever AI Safety Summit. Jason and Marc discuss: the role AI could play in fixing Britain's broken public services, whether a Starmer-led Britain could become an AI superpower, and how the world desperately needs global co-ordination to keep AI from turning into a Disordering technology. Twitter: @DisorderShow Subscribe to our Substack: https://natoandtheged.substack.com/ Producer: George McDonagh Exec Producer: Neil Fearn
Show Notes Links
Listen to our previous episode with Miles Taylor, Ep25. Could Artificial Intelligence-powered disinformation campaigns cause electoral mayhem? https://pod.link/1706818264/episode/42c97d2971c72d251b59b92d47d6c0ed
Read: ‘Using AI to transform public services' (foreword written by Marc & Tony Blair) https://www.institute.global/insights/politics-and-governance/governing-in-the-age-of-ai-a-new-model-to-transform-the-state
Watch: ‘Human Led AI' - a talk by Marc https://www.gresham.ac.uk/watch-now/human-led-ai
Read: ‘AI Could Save (the UK) Government £200 Billion Over Five Years' https://www.institute.global/insights/news/ai-could-save-government-gbp200-billion-over-five-years
Learn more about your ad choices. Visit podcastchoices.com/adchoices
with @matthewclifford @smc90
This special episode is all about regional innovation — at both a systems and a people level. We cover what does and doesn't work in making certain places become hubs of innovation and economic growth (aka “innovation ecosystems”). But we also discuss — going back and forth between the structural and individual — when to intervene for entrepreneurial talent; the nature of ambition, yearning, and finding one's path; and more broadly, mindsets for navigating risk/reward and dynamism in different regions including London and Europe. We also discuss new ways of funding breakthrough R&D at a national level, tech trends of interest including crypto, and much more.
Our special guest — in conversation with editor in chief Sonal Chokshi, who also brought him to the a16z Podcast over 8 years ago in its first-ever UK roadshow in December 2015 — is Matt Clifford, who's played an important role in the London entrepreneurial and tech ecosystem since 2011. Matt is the Chair of Entrepreneur First (which he co-founded with Alice Bentinck over a decade ago); and is also the Chair of the UK's Advanced Research and Invention Agency (ARIA). [Before this episode was recorded, Matt was also the Prime Minister's representative for the AI Safety Summit — which he helped organize at Bletchley Park (the historic home of computing in the UK); after this episode was recorded, Matt was appointed by the UK secretary of science to deliver an “AI Opportunities Action Plan” to the UK government, which was just announced last week.]
Fittingly, this episode was recorded live from Andreessen Horowitz's first international office, in London; for more on our efforts there, and other content from there, please visit a16zcrypto.com/uk.
As a reminder: None of the following should be taken as investment, legal, business, or tax advice; please see a16z.com/disclosures for more important information -- including a link to a list of our investments.
Less than a year after the UK hosted the first AI Safety Summit, all eyes turn to France for the next global AI summit. President Macron chose AI leader Anne Bouverot as Special Envoy to organize the AI Action Summit. In this episode, Bouverot, who holds a PhD in AI research, shares her team's plans, including – for the first time – the official dates and goals of the AI Action Summit.
In Episode #38, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France's role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun's influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI's future. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity. Max Winga's “A Stark Warning About Extinction” https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22 For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures RESOURCES: SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
TIMESTAMPS:
**Concerns about AI Risks in France (00:00:00)**
**Optimism in AI Solutions (00:01:15)**
**Introduction to the Episode (00:01:51)**
**Max Winga's Powerful Clip (00:02:29)**
**AI Safety Summit Context (00:04:20)**
**Personal Journey into AI Safety (00:07:02)**
**Commitment to AI Risk Work (00:21:33)**
**France's AI Sacrifice (00:21:49)**
**Impact of Efforts (00:21:54)**
**Existential Risks and Choices (00:22:12)**
**Underestimating Impact (00:22:25)**
**Researching AI Risks (00:22:34)**
**Weak Counterarguments (00:23:14)**
**Existential Dread Theory (00:23:56)**
**Global Awareness of AI Risks (00:24:16)**
**France's AI Leadership Role (00:25:09)**
**AI Policy in France (00:26:17)**
**Influential Figures in AI (00:27:16)**
**EU Regulation Sabotage (00:28:18)**
**Committee's Risk Perception (00:30:24)**
**Concerns about France's AI Development (00:32:03)**
**International AI Treaties (00:32:36)**
**Sabotaging AI Safety Summit (00:33:26)**
**Quality of France's AI Report (00:34:19)**
**Misleading Risk Analyses (00:36:06)**
**Comparison to Historical Innovations (00:39:33)**
**Rhetoric and Misinformation (00:40:06)**
**Existential Fear and Rationality (00:41:08)**
**Position of AI Leaders (00:42:38)**
**Challenges of Volunteer Management (00:46:54)**
In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France's role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun's influence in French society and government? And would France even join an international treaty? Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity. For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures RESOURCES: SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
TIMESTAMPS
Trust in AI Awareness in France (00:00:00) Discussion on France being uninformed about AI risks compared to other countries with AI labs.
International Treaty Concerns (00:00:46) Speculation on France's reluctance to sign an international AI safety treaty.
Personal Reflections on AI Risks (00:00:57) Speaker reflects on the dilemma of believing in AI risks and choosing between action or enjoyment.
Underestimating Impact (00:01:13) The tendency of people to underestimate their potential impact on global issues.
Researching AI Risks (00:01:50) Speaker shares their journey of researching AI risks and finding weak counterarguments.
Critique of Counterarguments (00:02:23) Discussion on the absurdity of opposing views on AI risks and societal implications.
Existential Dread and Rationality (00:02:42) Connection between existential fear and irrationality in discussions about AI safety.
Shift in AI Safety Focus (00:03:17) Concerns about the diminishing focus on AI safety in upcoming summits.
Quality of AI Strategy Report (00:04:11) Criticism of a recent French AI strategy report and plans to respond critically.
Optimism about AI Awareness (00:05:04) Belief that understanding among key individuals can resolve AI safety issues.
Power Dynamics in AI Decision-Making (00:05:38) Discussion on the disproportionate influence of a small group on global AI decisions.
Cultural Perception of Impact (00:06:01) Reflection on societal beliefs that inhibit individual agency in effecting change.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introduction to French AI Policy, published by Lucie Philippon on July 4, 2024 on LessWrong.
This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragement.
Disclaimer: The French policy landscape is in rapid flux after President Macron called a snap election, with voting rounds on 30 June and 7 July. The situation is still unfolding, and the state of French AI policy may be significantly altered.
At various AI governance events, I noticed that most people had a very unclear vision of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would mean for the next AI Safety Summit in France. The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions, and how they influence international AI policy efforts. My knowledge comes from hanging around AI safety circles in France for a year and a half, and from working since January with the French government on AI governance. Therefore, I'm confident in the facts, but less in the interpretations, as I'm no policy expert myself.
Generative Artificial Intelligence Committee
The first major development in AI policy in France was the creation of a committee advising the government on generative AI questions. This committee was created in September 2023 by former Prime Minister Elisabeth Borne.[1] The goals of the committee were:
Strengthening AI training programs to develop more AI talent in France
Investing in AI to promote French innovation on the international stage
Defining appropriate regulation for different sectors to protect against abuses
The committee was composed of notable academics and companies in the French AI field. Here are its notable members:
Co-chairs:
Philippe Aghion, an influential French economist specializing in innovation. He thinks AI will give a major productivity boost and that the EU should invest in major research projects on AI and disruptive technologies.
Anne Bouverot, chair of the board of directors of ENS, the most prestigious scientific college in France. She was later nominated as lead organizer of the next AI Safety Summit. She is mainly concerned about the risks of bias and discrimination from AI systems, as well as risks of concentration of power.
Notable members:
Joëlle Barral, scientific director at Google
Nozha Boujemaa, co-chair of the OECD AI expert group and Digital Trust Officer at Decathlon
Yann LeCun, VP and Chief AI Scientist at Meta, generative AI expert. He is a notable skeptic of catastrophic risks from AI.
Arthur Mensch, founder of Mistral. He is a notable skeptic of catastrophic risks from AI.
Cédric O, consultant, former Secretary of State for Digital Affairs. He invested in Mistral and worked to loosen the regulations on general systems in the EU AI Act.
Martin Tisné, board member of Partnership on AI. He will lead the "AI for good" track of the next Summit.
See the full list of members in the announcement: Comité de l'intelligence artificielle générative.
"AI: Our Ambition for France"
In March 2024, the committee published a report highlighting 25 recommendations to the French government regarding AI. An official English version is available.
The report makes recommendations on how to make France competitive and a leader in AI by investing in training, R&D, and compute. The report does not anticipate future developments, treating the current capabilities of AI as a fixed point to work from. The authors don't consider the future capabilities of AI models, and they are overly dismissive of AI risks. Some highlights from the report: It dismisses most risks from AI, including catastrophic risks, saying that concerns are overblown. They compare fear of...
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Matt Clifford is the Co-Founder of Entrepreneur First (EF), the leading global talent investor and incubator. EF has incubated startups worth over $10bn, including Cleo, Tractable and Aztec Protocol. Matt is also Chair of ARIA, the UK's Advanced Research and Invention Agency, and advises the UK government on AI; in 2023 he served as the Prime Minister's Representative for the AI Safety Summit at Bletchley Park.
In Today's Episode with Matt Clifford We Discuss:
1. The Most Important Questions in AI:
Are we seeing diminishing returns where more compute does not lead to a significant increase in performance?
What is required to reach a new S curve? What do we need to see in GPT-5?
Why does Matt believe that search is one of the biggest opportunities in AI today?
2. The Biggest Opportunities in AI Today:
How does Matt see the future for society in a world of autonomous agents?
What is the single biggest opportunity around agents that no one has solved?
Is society ready for agentic behaviours to replace the core of human labour?
How does warfare change in a world of AI? Does AI favour states and good actors or criminals and bad actors when it comes to offence and defence?
3. China and the Race to Win the AI War:
Does Matt believe that China is two years behind the US in terms of AI capability?
What are Matt's biggest lessons from spending time with the CCP in China working on AI policy?
In what way is the CCP more sophisticated in its thinking on AI than people think?
What is the bull and the bear case for China in the race for AI?
What is the core impact of US export controls on chips for China's ability to build in AI?
Does a Trump vs a Biden election change the playing field with China?
4. What Makes Truly Great Founders:
Does Matt agree that the best founders always start an entrepreneurial activity when they are young?
What is more important: the biggest strength of one of the founders, or the combined skills of the founding team?
What did EF believe about founders and founder chemistry that they no longer believe?
Does Matt believe that everyone can be a founder? What are the two core traits required?
For those following the regulation of artificial intelligence, there is no doubt passage of the AI Act in the EU is likely top of mind. But proposed policies, laws and regulatory developments are taking shape in many corners of the world, including in Australia, Brazil, Canada, China, India, Singapore and the U.S. Not to be left behind, the U.K. held a highly touted AI Safety Summit late last year, producing the Bletchley Declaration, and the government has been quite active in what the IAPP Research and Insights team describes as a “context-based, proportionate approach to regulation.” In the upper chamber of the U.K. Parliament, Lord Holmes, a member of the influential House of Lords Select Committee on Science and Technology, introduced a private member's bill late in 2023 that proposes the regulation of AI. The bill also just received a second reading in the House of Lords on 22 March. Lord Holmes spoke of AI's power at a recent IAPP conference in London. While there, I had the opportunity to catch up with him to learn more about his Artificial Intelligence (Regulation) Bill and what he sees as the right approach to guiding the powers of this burgeoning technology.
The World Wide Web launched in the public domain on April 30, 1993, a little over 30 years ago. It was a major technological leap forward for humanity. It was a game changer, full of possibility… and uncertainty. Experts are reminding us a lot lately that artificial intelligence (AI) has also been around for many decades. Nevertheless, much like the Internet in the 1990s, ChatGPT becoming publicly available in November 2022 represents another paradigm shift for humanity and its relationship with technology. One billion ChatGPT web visits took place following its launch. According to PwC, AI is predicted to contribute $15.7 trillion to the global economy by 2030. Yes, the stakes are high. Yes, it's a game changer. Yes, it's full of possibility… and uncertainty. Last month, the International Monetary Fund (IMF) released a study predicting that AI will affect close to 40 percent of all jobs. For some, it will be beneficial, boosting their productivity. For almost everyone else, their jobs are at risk. This report was published as business and political leaders from around the world prepared to gather in Davos, Switzerland, for the World Economic Forum, where AI took center stage. Highlighting the apprehension around this “disruptive” technology, the response from governments has been surprisingly swift. A number of countries signed a declaration on the safe development of the technology at an AI Safety Summit hosted by the UK late last year. And we're seeing increased regulation around the world, including in the European Union, China, and the U.S., that is, in the world's largest economies. As businesses across all sectors explore AI's potential, they must also wade through its unknowns and navigate evolving regulation. In other words, they must innovate and use AI responsibly. Our guest today is Jon Iwata. He is an Executive Fellow at the Yale School of Management where he co-leads a program studying the leadership implications of stakeholder capitalism. He also directs the Data & Trust Alliance, a not-for-profit organization established in 2020 by CEOs of major companies including American Express, Johnson & Johnson, Nike, Pfizer, Starbucks, and Walmart. The Alliance develops and promotes the adoption of responsible data and AI practices. Among his various accolades and accomplishments, Mr. Iwata is also a co-inventor on a U.S. patent for a nanotechnology process for atomic-scale semiconductors.
Resources:
About Jon Iwata
The Data & Trust Alliance
AI Will Transform the Global Economy. Let's Make Sure It Benefits Humanity. (IMF, January 2024)
AI - artificial intelligence - at Davos 2024: What to know (WEF, January 2024)
AI and the Legal World: A Revolution Happening in Real Time (Brand & New, November 2023)
Will AI Take Your Job? (INTA Daily News, May 2023)
How AI Will Impact Trademarks (INTA Daily News, May 2023)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing StakeOut.AI, published by Harry Luk on February 17, 2024 on The Effective Altruism Forum.
We are excited to announce the launch of a new advocacy nonprofit, StakeOut.AI.
The mission statement of our nonprofit: StakeOut.AI fights to safeguard humanity from AI-driven risks. We use evidence-based outreach to inform people of the threats that advanced AI poses to their economic livelihoods and personal safety. Our mission is to create a united front for humanity, driving national and international coordination on robust solutions to AI-driven disempowerment. We pursue this mission via partnerships (e.g., with other nonprofits, content creators, and AI-threatened professional associations) and media-based awareness campaigns (e.g., traditional media, social media, and webinars).
Our modus operandi is to tell the stories of the AI industry's powerless victims, such as:
people worldwide, especially women and girls, who have been victimized by nonconsensual deepfake pornography of their likenesses
unemployed artists whose copyrighted hard work was essentially stolen by AI companies, without their consent, in order to train their economic AI replacements
parents who fear that their children will be economically replaced, and perhaps even replaced as a species, by "highly autonomous systems that outperform humans at most economically valuable work" (OpenAI's mission)
We connect these victims' stories to powerful people who can protect them. Who are the powerful people? The media, the governments, and most importantly: the grassroots public.
StakeOut.AI's motto: The Right AI Laws, to Right Our Future. We believe AI has great potential to help humanity. But like all other industries that put the public at risk, AI must be regulated. We must unite, as humans have done historically, to work towards ensuring that AI helps humanity flourish rather than cause our devastation. By uniting globally with a single voice to express our concerns, we can push governments to pass the right AI laws that can right our future.
However, StakeOut.AI's Safer AI Global Grassroots United Front movement isn't for everybody. It's not for those who don't mind being enslaved by robot overlords. It's not for those whose first instincts are to avoid making waves, rather than to help the powerless victims tell their stories to the people who can protect them. It's not for those who say they 'miss the days' when only intellectual elites talked about AI safety. It's not for those who insist, even after years of trying, that attempting to solve technical AI alignment while continuing to advance AI capabilities is the only way to prevent the threat of AI-driven human extinction. It's not for those who think the public is too stupid to handle the truth about AI. No matter how much certain groups say they are trying to 'shield' regular folks for their 'own good,' the regular folks are learning about AI one way or another. It's also not for those who are indifferent to the AI industry's role in invading privacy, exploiting victims, and replacing humans. So to help save your time, please stop reading this post if any of the above statements reflect your views. But, if you do want transparency and accountability from the AI industry, and you desire a moral and safe AI environment for your family and for future generations, then the United Front may be for you.
By prioritizing high-impact projects over fundraising in our early months, we at StakeOut.AI were able to achieve five publicly known milestones for AI safety: researched a 'scorecard' evaluating various AI governance proposals, which was presented by Professor Max Tegmark at the first-ever international AI Safety Summit in the U.K. (as part of The Future of Life Institute's governance proposal for the Summit), raised awareness, such as by holding a ...
From products like ChatGPT to resource allocation and cancer diagnoses, artificial intelligence will impact nearly every part of our lives. We know the potential benefits of AI are enormous, but so are the risks, including chemical and bioweapons attacks, more effective disinformation campaigns, AI-enabled cyber-attacks, and lethal autonomous weapons systems. Policymakers have taken steps to address these risks, but industry and civil society leaders are warning that these efforts still fall short. Last year saw a flurry of efforts to regulate AI. In October, the Biden administration issued an executive order to encourage “responsible” AI development, in November, the U.K. hosted the world's first global AI Safety Summit to explore how best to mitigate some of the greatest risks facing humanity, and in December European Union policymakers passed a deal imposing new transparency requirements on AI systems. Are efforts to regulate AI working? What else needs to be done? That's the focus of our show today. It's clear we are at an inflection point in AI governance – where innovation is outpacing regulation. But while States face a common problem in regulating AI, approaches differ and prospects for global cooperation appear limited. There is no better expert to navigate this terrain than Robert Trager, Senior Research Fellow at Oxford University's Blavatnik School of Government, Co-Director of the Oxford Martin AI Governance Initiative, and International Governance Lead at the Centre for the Governance of AI.
Show Notes:
Robert Trager (@RobertTrager)
Brianna Rosen (@rosen_br)
Paras Shah (@pshah518)
Just Security's Symposium on AI Governance: Power, Justice, and the Limits of the Law
Just Security's Artificial Intelligence coverage
Just Security's Autonomous Weapons Systems coverage
Music: “The Parade” by “Hey Pluto!” from Uppbeat: https://uppbeat.io/t/hey-pluto/the-parade (License code: 36B6ODD7Y6ODZ3BX)
Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)
In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That's a new book on a vitally important subject. The book's front cover carries this endorsement from Professor Max Tegmark of MIT: “A captivating, balanced and remarkably up-to-date book on the most important issue of our time.” There's also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: “The most accessible and engaging introduction to the risks of AI that I've read.” Calum and David had lots of questions ready to put to the book's author, Darren McKee, who joined the recording from Ottawa in Canada. Topics covered included Darren's estimates for when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There are also Darren's recommendations on the principles and actions needed to reduce that likelihood.
Selected follow-ups:
Darren McKee's website
The book Uncontrollable
Darren's podcast The Reality Check
The Lazarus Heist on BBC Sounds
The Chair's Summary of the AI Safety Summit at Bletchley Park
The Statement on AI Risk by the Center for AI Safety
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For our in-person episode on 2023, with Karin Rudolph, we chat about the Future of Life Institute letter, the existential risk of AI, TESCREAL, Geoffrey Hinton's resignation from Google, the AI Safety Summit, the EU AI Act and legislating AI, neural rights and more...
We are honoured to have as our guest in this episode Professor Stuart Russell. Stuart is professor of computer science at the University of California, Berkeley, and the traditional way to introduce him is to say that he literally wrote the book on AI. Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig, was first published in 1995, and the fourth edition came out in 2020. Stuart has been urging us all to take seriously the dramatic implications of advanced AI for longer than perhaps any other prominent AI researcher. He also proposes practical solutions, as in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control. In 2021 Stuart gave the Reith Lectures, and was awarded an OBE. But the greatest of his many accolades was surely in 2014 when a character with a background remarkably like his was played in the movie Transcendence by Johnny Depp. The conversation covers a wide range of questions about future scenarios involving AI, and reflects on changes in the public conversation following the FLI's letter calling for a moratorium on more powerful AI systems, and following the global AI Safety Summit held at Bletchley Park in the UK at the beginning of November.
Selected follow-ups:
Stuart Russell's page at Berkeley
Center for Human-Compatible Artificial Intelligence (CHAI)
The 2021 Reith Lectures: Living With Artificial Intelligence
The book Human Compatible: Artificial Intelligence and the Problem of Control
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Confessions of a Recent GWWC Pledger (Boxing Day Giving?!), published by Harry Luk on December 25, 2023 on The Effective Altruism Forum.
TLDR: I pledged to Giving What We Can (GWWC) in early September. But because we transitioned from a dual income to a single income in late June, we had been postponing the 10% tithing. As a result, we also procrastinated on giving to effective charities, even after pledging in September. Black Friday (late November) was when we paid off the "donation debt" to Jesus. We are surrounded by others who sacrificially love and give, and that's why we were empowered to do it too. We encourage others to pledge or give this giving season, perhaps doing the counter-cultural thing and making Boxing Day about giving.
Introduction
In September of this year, I decided to take the Giving What We Can (GWWC) pledge. As a Christian, I have been tithing 10% for years. With GWWC, I am redirecting these donations to highly effective charities, aiming to support 'the least of these' or interventions that can most cost-effectively improve the world, thereby maximizing the impact of my limited resources. This commitment was more than financial; it was a profound expression of faith. Our family's shift from a stable dual income to a more restrictive single income since late June introduced many uncertainties when I made this pledge. The transition to a single income in an expensive city like Vancouver has been challenging, especially considering that the three co-founders of StakeOut.AI, including myself, have been effectively volunteering - Peter for nearly six months part-time, I for almost 3.5 months full-time, and Amy for 1.5 months full-time. As of this writing, we still haven't fundraised, because we have prioritized impact and project advancement. A couple of example projects we have completed:
Contributions to researching the 'scorecard' of AI governance proposals (found on page 3 of The Future of Life Institute's proposal) presented at the first-ever international AI Safety Summit.
Co-hosted a Zoom webinar where we advised Hollywood actors on how AI will likely affect their industry. We also have plans for continued collaboration with Hollywood actors to advocate for banning deepfake pornography, a detrimental issue that has victimized many young schoolgirls.
By sharing this journey, I hope to inspire a conversation about faith, stewardship, and the impact of intentional giving. This post is an exploration of faith and trust, and my understanding of Christian giving as a joyful expression of faith. Giving has brought an unexpected peace and a deeper trust in God's provision.
Our Financial Challenge is a Fraction of What Many Others Endure
"Where do you need God's comfort today?" This question from my Daily Refresh in YouVersion resonated with me, especially after reading 2 Corinthians 1:3-7. This passage speaks volumes about comfort in troubles, a theme that deeply aligns with my current life chapter. [3] Praise be to the God and Father of our Lord Jesus Christ, the Father of compassion and the God of all comfort, [4] who comforts us in all our troubles, so that we can comfort those in any trouble with the comfort we ourselves receive from God. [5] For just as we share abundantly in the sufferings of Christ, so also our comfort abounds through Christ.
[6] If we are distressed, it is for your comfort and salvation; if we are comforted, it is for your comfort, which produces in you patient endurance of the same sufferings we suffer. [7] And our hope for you is firm, because we know that just as you share in our sufferings, so also you share in our comfort. As I mentioned earlier, since early September, I have embarked on a journey of starting a grassroots movement, the Safer AI Global Grassroots United Front. Honestly, it's been more than a full-tim...
In today's episode: Following the recent AI Safety Summit hosted by British Prime Minister Rishi Sunak, Bart Hogeveen speaks with the European Union's Senior Envoy for Digital to the United States, Gerard de Graaf. They discuss the EU's approach to AI regulation and how it differs from the US and other governments. They also discuss which uses of AI the EU thinks should be limited or prohibited and why, as well as provide suggestions for Australia's efforts to regulate AI. Finally, Alex Caples speaks to Australian Federal Police (AFP) Commander Helen Schneider. They discuss the AFP and Monash University initiative 'My Pictures Matter', which uses artificial intelligence to help combat child exploitation. They also explore the importance of using an ethically sourced database to train the AI tool that is used in the project, as well as outline how people can get involved in the campaign and help end child exploitation in Australia and overseas. Mentioned in this episode: https://mypicturesmatter.org/ Guests: Bart Hogeveen, Gerard de Graaf, Alex Caples, Helen Schneider Music: "Think Different" by Scott Holmes, licensed with permission from the Independent Music Licensing Collective - imlcollective.uk
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On 'Responsible Scaling Policies' (RSPs), published by Zvi on December 6, 2023 on LessWrong.
This post was originally intended to come out directly after the UK AI Safety Summit, to give the topic its own deserved focus. One thing led to another, and I am only doubling back to it now.
Responsible Deployment Policies
At the AI Safety Summit, all the major Western players were asked: What are your company policies on how to keep us safe? What are your responsible deployment policies (RDPs)? Except that they call them Responsible Scaling Policies (RSPs) instead. I deliberately say deployment rather than scaling. No one has shown what I would consider close to a responsible scaling policy in terms of what models they are willing to scale and train. Anthropic at least does however seem to have something approaching a future responsible deployment policy, in terms of how to give people access to a model if we assume it is safe for the model to exist at all and for us to run tests on it. And we have also seen plausibly reasonable past deployment decisions from OpenAI regarding GPT-4 and earlier models, with extensive and expensive and slow red teaming, including prototypes of ARC (they just changed names to METR, but I will call them ARC for this post) evaluations. I also would accept as alternative names any of Scaling Policies (SPs), AGI Scaling Policies (ASPs) or even Conditional Pause Commitments (CPCs). For existing models we know about, the danger lies entirely in deployment. That will change over time.
I am far from alone in my concern over the name; here is another example:
Oliver Habryka: A good chunk of my concerns about RSPs are specific concerns about the term "Responsible Scaling Policy". I also feel like there is a disconnect and a bit of a Motte-and-Bailey going on where we have like one real instance of an RSP, in the form of the Anthropic RSP, and then some people from ARC Evals who have I feel like more of a model of some platonic ideal of an RSP, and I feel like they are getting conflated a bunch. … I do really feel like the term "Responsible Scaling Policy" clearly invokes a few things which I think are not true:
How fast you "scale" is the primary thing that matters for acting responsibly with AI
It is clearly possible to scale responsibly (otherwise what would the policy govern)
The default trajectory of an AI research organization should be to continue scaling
ARC Evals defines an RSP this way: An RSP specifies what level of AI capabilities an AI developer is prepared to handle safely with their current protective measures, and conditions under which it would be too dangerous to continue deploying AI systems and/or scaling up AI capabilities until protective measures improve.
I agree with Oliver that this paragraph should be modified to 'claims they are prepared to handle' and 'they claim it would be too dangerous.' This is an important nitpick.
Nate Soares has thoughts on what the UK asked for, which could be summarized as 'mostly good things, better than nothing, obviously not enough' - and of course it was never going to be enough, and also Nate Soares is the world's toughest crowd.
How the UK Graded the Responses
How did various companies do on the requests? Here is how the UK graded them. That is what you get if you were grading on a curve, one answer at a time. Reality does not grade on a curve.
Nor is one question at a time the best method. My own analysis, and that of others I trust, agrees that this relatively underrates OpenAI, which clearly had the second-best set of policies by a substantial margin; one source even put them on par with Anthropic, although I disagree with that. Otherwise the relative rankings seem correct. Looking in detail, what to make of the responses? That will be the next few sections. Answers ranged from Anthropic's att...
In this episode, Nathan chats with Josh Albrecht, CTO of Imbue. They discuss how to create agents for reasoning, reliability, and robustness. If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period.
RECOMMENDED PODCAST: Every week investor and writer of the popular newsletter The Diff, Byrne Hobart, and co-host Erik Torenberg discuss today's major inflection points in technology, business, and markets – and help listeners build a diversified portfolio of trends and ideas for the future. Subscribe to “The Riff” with Byrne Hobart and Erik Torenberg: https://www.youtube.com/@TheRiffPodcast
SPONSORS:
Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for a $1/month trial period: https://shopify.com/cognitive
MasterClass: https://masterclass.com/zen – get two memberships for the price of one. Learn from the best to become your best. Learn how to negotiate a raise with Chris Voss or manage your relationships with Esther Perel. Boost your confidence and find practical takeaways you can apply to your life and at work. If you own a business or are a team leader, use MasterClass to empower and create future-ready employees and leaders. Moment of Zen listeners will get two memberships for the price of one at https://masterclass.com/zen
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
X/SOCIAL: @labenz (Nathan), @eriktorenberg (Erik), @CogRev_Podcast
TIMESTAMPS:
(00:00:00) – Episode Preview
(00:07:14) – What does it mean to be a research company?
(00:10:25) – How is the reasoning landscape these days and how might it evolve?
(00:11:03) – Data quality is highly important
(00:21:15) – What's the difference between good features and a good world model?
(00:27:31) – The impact of new modalities on reasoning
(00:29:15) – How much can reasoning and knowledge be separated?
(00:45:13) – Imbue demo and are they building their own LLMs or using others?
(00:49:37) – Does Imbue have a deal with Nvidia?
(00:57:48) – Carbs framework
(01:12:57) – Imbue's involvement with policy and AI safety
(01:16:23) – Takeaways from AI Safety Summit and Biden's Order
GRU's Sandworm implicated in campaign against Danish electrical power providers. Paris wastewater agency hit by cyberattack. LockBit hits Boeing. Bletchley Declaration represents a consensus starting point for AI governance. The US Executive Order on artificial intelligence is out. Guest Austin Reid of ABS Group discusses ship and shore challenges for security and the current and emerging regulatory landscape. On the Learning Lab, Mark Urban begins a three-part discussion of building automation systems with Dragos' Daniel Gaeta and Zach Spencer.
Control Loop News Brief.
GRU's Sandworm implicated in campaign against Danish electrical power providers.
The attack against Danish critical infrastructure (SektorCERT)
Exclusive: This pizza box-sized equipment could be key to Ukraine keeping the lights on this winter (CNN)
Paris wastewater agency hit by cyberattack.
Greater Paris wastewater agency dealing with cyberattack (The Record)
Cyberattaque D'Ampleur Au SIAAP (SIAAP)
Iranian hacktivists claim an attack on a Pennsylvania water utility.
Iranian-Linked Cyber Army Had Partial Control Of Aliquippa Water System (BeaverCountian.com)
Municipal Water Authority of Aliquippa hacked by Iranian-backed cyber group (CBS News)
LockBit hits Boeing.
Ransomware groups rack up victims among corporate America (CyberScoop)
#StopRansomware: LockBit 3.0 Ransomware Affiliates Exploit CVE 2023-4966 Citrix Bleed Vulnerability (CISA)
Bletchley Declaration represents a consensus starting point for AI governance.
Can Rishi Sunak's big summit save us from AI nightmare? (BBC)
The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 (Gov.uk)
The US Executive Order on artificial intelligence is out.
Administration Actions on AI (AI.gov)
Control Loop Interview. Guest is Austin Reid of ABS Group discussing ship and shore challenges for security and the current and emerging regulatory landscape.
Control Loop Learning Lab. Mark Urban discusses building automation systems in part 1 of 3 with Dragos' Daniel Gaeta, ICS/OT Cybersecurity Senior Solutions Architect, and Zach Spencer, Senior Enterprise Account Executive.
Control Loop OT Cybersecurity Briefing. A companion monthly newsletter is available through free subscription and on the CyberWire's website.
Our 142nd episode with a summary and discussion of last week's big AI news. Apologies for this one coming out after a pause; episodes will resume being released regularly as of this week. Read our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai
Timestamps + Links:
(00:00) Intro / Banter
Tools & Apps
(03:00) Introducing PlayHT 2.0 Turbo ⚡️ - The Fastest Generative AI Text-to-Speech API
(07:15) YouTube Music now lets you make your own playlist art with AI
(09:23) Sick of meetings? Microsoft's new AI assistant will go in your place
(11:54) Anthropic brings Claude AI to more countries, but still no Canada (for now)
Applications & Business
(14:55) Humanoid robots face a major test with Amazon's Digit pilots
(18:40) Figure 01 humanoid takes first public steps
(22:31) AI-generating music app Riffusion turns viral success into $4M in funding
(23:35) ChatGPT Creator Partners With Abu Dhabi's G42 in Middle East AI Push
(25:00) AMD Scores Two Big Wins: Oracle Opts for MI300X, IBM Asks for FPGAs
(26:38) Alibaba, Tencent among investors in China's rival to OpenAI with $341 million funding
(30:35) AI companies drive demand for office space in tech hubs, new study finds
(32:13) OpenAI is in talks to sell shares at an $86 billion valuation
Projects & Open Source
(35:00) Introducing Video-To-Text and Pegasus-1 (80B)
(39:35) Adept Releases Fuyu-8B for Multimodal AI Agents
(42:03) MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning
(44:53) Meta's Habitat 3.0 simulates real-world environments for intelligent AI robot training
(48:22) DeepMind UniSim simulates reality to train robots, game characters
(49:13) Jina AI Launches World's First Open-Source 8K Text Embedding, Rivaling OpenAI
(51:13) Llemma: An Open Language Model For Mathematics
Research & Advancements
(53:22) Eliciting Human Preferences with Language Models
(57:23) New Nvidia AI agent, powered by GPT-4, can train robots
(01:01:38) Unveiling the General Intelligence Factor in Language Models: A Psychometric Approach
(01:04:48) AgentTuning: Enabling Generalized Agent Abilities for LLMs
(01:09:51) Contrastive Preference Learning: Learning from Human Feedback without RL
(01:11:25) ‘Mind-blowing' IBM chip speeds up AI
Policy & Safety
(01:14:57) GM Cruise unit suspends all driverless operations after California ban
(01:18:52) AI researchers uncover ethical, legal risks to using popular data sets
(01:22:22) AI Safety Summit: day 1 and 2 programme
(01:25:23) Anthropic's AI chatbot Claude is posting lyrics to popular songs, lawsuit claims
(01:26:38) Mike Huckabee says Microsoft and Meta stole his books to train AI
(01:27:10) Clearview AI Successfully Appeals $9 Million Fine in the U.K.
(01:28:11) North Korea experiments with AI in cyber warfare: US official
(01:30:17) OpenAI forms new team to assess ‘catastrophic risks' of AI
UK poised to establish global advisory group on AI
Synthetic Media & Art
(01:32:22) This new data poisoning tool lets artists fight back against generative AI
(01:34:32) Amazon now lets advertisers use generative AI to pretty up their product shots
(01:36:36) The Beatles: ‘final' song Now and Then to be released thanks to AI technology
This week we dip back into the postbag to look at some more listener questions. First up we return to our episode looking at recent shifts in abortion rates – is the narrative of ‘it's my body and I'll do what I want' truly what is driving increases in abortion figures in recent years, or is that a bit of a myth? We also take a closer look into recent reports that expose how cutting-edge artificial intelligence models are being trained by incredibly underpaid and exploited workers in the developing world. How should we as Christians respond to what is being claimed as the exploitation of workers around the globe in the name of technological advancement that seeks to benefit humanity? Should governments moderate this kind of employment, or is there an argument that digital technology is actually positively transforming the economic outlook of the third world? Finally we wrap up today's episode considering whether the UK government's recent AI Safety Summit amounted to meaningless ‘motherhood and apple pie' platitudes and, if so, how we can actually push for meaningful regulation.
- The WIRED article on the underpaid workers from poorer nations helping train AI data sets https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/
- The UK government's Bletchley Declaration on AI safety https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
- Subscribe to the Matters of Life and Death podcast: https://pod.link/1509923173
- If you want to go deeper into some of the topics we discuss, visit John's website: http://www.johnwyatt.com
- For more resources to help you explore faith and the big questions, visit: http://www.premierunbelievable.com
Amazon introduces a beta of virtual voice narration through KDP, as the AI Safety Summit fails to achieve much of substance. Welcome to Self-Publishing News with ALLi News editor Dan Holloway, bringing you the latest in indie publishing news and commentary. Find more author advice, tips and tools at our Self-Publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally. About the Host: Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines and has competed in the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
British Prime Minister Rishi Sunak convened a global meeting on regulating AI safety at Bletchley Park, the iconic stately home north of London where Alan Turing led the team that cracked the German Enigma code. Twenty-nine countries, including China, attended. What was accomplished? The US, EU and China have already created their own regulatory regimes, so what was Sunak's endgame? --- Send in a voice message: https://podcasters.spotify.com/pod/show/james-herlihy/message
On Monday, October 30, 2023, the U.S. White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Two days later, a policy paper was issued by the U.K. government entitled The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. It was signed by 29 countries, including the United States and China, the global leaders in AI research. In this Fully Connected episode, Daniel and Chris parse the details and highlight key takeaways from these documents, especially the extensive and detailed executive order, which has the force of law in the United States.
Elon Musk warns about artificial intelligence, Democrats divided over Israel war, and pioneering trans doctor warns about treatments for minors. Get the facts first with Morning Wire.
On 2 November 2023, Rishi Sunak closed his global AI Safety Summit at Bletchley Park by interviewing the richest man on Earth, Elon Musk. The mood was deferential (the PM towards the tech billionaire). Was Sunak eyeing up a post-politics job in San Francisco, some wondered, or calculating that Musk's Twitter might be an effective campaigning tool come 2024? In this week's audio long read, the New Statesman contributing writer Quinn Slobodian examines the origins of Sunak's "fanboy-ish enthusiasm" for the billionaire tech disruptors. These lie, he writes, in the publication of a 1997 business book: The Sovereign Individual: How to Survive and Thrive During the Collapse of the Welfare State, by the American venture capitalist James Dale Davidson and William Rees-Mogg, father of Jacob. The book has become cult reading among tech leaders and influential on the alt-right: its world view of a libertarian internet and the rise of economic freeports and tax havens chimed with a wealthy elite who saw a chance to get much, much richer. In Sunak, Slobodian argues, we see the arrival of the sovereign individual in Downing Street: "a ‘two-fer', as they say in America: both its first Silicon Valley prime minister and its first hedge fund prime minister". Written by Quinn Slobodian and read by Will Lloyd. This article originally appeared in the 2 November 2023 issue of the New Statesman; you can read the text version here. If you enjoyed this episode, you might also enjoy Sam Bankman-Fried and the effective altruism delusion by Sophie McBain. Hosted on Acast. See acast.com/privacy for more information.
This week, electric vehicle sales are in a slump. Last year, the competition among EV buyers was fierce, with consumers paying premium prices to drive one off the lot. But despite federal tax credits aimed at making them more affordable, the red-hot EV market isn’t so hot anymore. Plus, a year into ads on Netflix, the company is reporting that 15 million monthly active users are watching, and rewards for binging your favorite shows are in the works. But first, we'll dive into the U.K.'s AI Safety Summit at historic Bletchley Park this week. Marketplace's Lily Jamali is joined by Joanna Stern, senior personal technology columnist at The Wall Street Journal, for her take on those stories.
Who is Hamas; US ideas about who governs Gaza; Israel signaling to adversaries; the US signaling to allies; Saudi Arabia may still be seeking normalization with Israel; China's presence at an AI meeting; and Marcus wants to turn the tables.
Please subscribe and leave a review on Apple Podcasts, Spotify, or your podcast player of choice.
Contribute to a future episode by sending us an email or leaving a voicemail.
Further Reading: The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023
See all Cheap Talk episodes
Bletchley Declaration represents a consensus starting point for AI governance. Lazarus Group prospects blockchain engineers with KANDYKORN. Boeing investigates ‘cyber incident' affecting parts business. NodeStealer's use in attacks against Facebook accounts. Citrix Bleed vulnerability exploited in the wild. MuddyWater spearphishes Israeli targets in the interest of Hamas. India to investigate alleged attacks on iPhones. Tim Starks from the Washington Post on the SEC's case against SolarWinds. In today's Threat Vector segment, David Moulton from Unit 42 is joined by Matt Kraning of the Cortex Expanse Team for a look at Attack Surface Management. And Venomous Bear rolls out some new tools.
On the Threat Vector segment, David Moulton, Director of Thought Leadership for Unit 42, is joined by Matt Kraning, CTO of the Cortex Expanse Team. They dive into the latest Attack Surface Management Report.
For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/210
Threat Vector: Read the Attack Surface Management Report. Please share your thoughts with us for future Threat Vector segments by taking our brief survey. To learn what is top of mind each month from the experts at Unit 42, sign up for their Threat Intel Bulletin.
Selected reading.
The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 (GOV.UK)
US Vice President Harris calls for action on "full spectrum" of AI risks (Reuters)
Elastic catches DPRK passing out KANDYKORN (Elastic Security Labs)
North Korean Hackers Targeting Crypto Experts with KANDYKORN macOS Malware (The Hacker News)
Lazarus used ‘Kandykorn' malware in attempt to compromise exchange — Elastic (Cointelegraph)
An info-stealer campaign is now targeting Facebook users with revealing photos (Record)
Mass Exploitation of 'Citrix Bleed' Vulnerability Underway (SecurityWeek)
MuddyWater eN-Able spear-phishing with new TTPs | Deep Instinct Blog (Deep Instinct)
Centre's Cyber Watchdog CERT-In To Probe iPhone "Hacking" Attempt Charges (NDTV.com)
Over the Kazuar's Nest: Cracking Down on a Freshly Hatched Backdoor Used by Pensive Ursa (Aka Turla) (Unit 42)
Learn more about your ad choices. Visit megaphone.fm/adchoices
Paul Breitbarth of Catawiki and Dr. K Royal connect with Woodrow Hartzog, Professor of Law at the Boston University School of Law, who also holds academic roles at Washington University, Harvard and Stanford. His research focuses on privacy, media, and technology. Recently, Professor Hartzog testified before the Judiciary Committee of the U.S. Senate in a hearing on Oversight and Legislation on Artificial Intelligence. Last summer, Serious Privacy released an episode on artificial intelligence in the wake of the European Parliament's adoption of the EU AI Act. And although negotiations in Europe are still ongoing, it seems agreement on this new law is close. In recent weeks, the White House has released a blueprint for an AI Bill of Rights, as well as an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. And on the day we record this episode, 1 November 2023, the UK Government hosted an AI Safety Summit at Bletchley Park. If you have comments or questions, find us on LinkedIn, Twitter @podcastprivacy @euroPaulB @heartofprivacy and email podcast@seriousprivacy.eu. Rate and review us! Proudly sponsored by TrustArc. Learn more about the TRUSTe Data Privacy Framework verification and upcoming webinars. #heartofprivacy #europaulb #seriousprivacy #privacy #dataprotection #cybersecuritylaw #CPO #DPO #CISO
Dara Calleary, Minister for Trade Promotion, Digital and Company Regulation, discusses the challenges and opportunities posed by artificial intelligence from the inaugural AI Safety Summit in the UK.
Heller, Piotr; www.deutschlandfunk.de, Forschung aktuell. Direct link to the audio file.
Rishi Sunak has convened a global summit of world leaders and tech executives to discuss how the power of artificial intelligence can be safely harnessed. Dan Milmo reports. Help support our independent journalism at theguardian.com/infocus
The AI Breakdown: Daily Artificial Intelligence News and Discussions
With the M3 Max chip, Apple is finally pitching how it's going to compete in AI. Also on this episode, a preview of the AI Safety Summit in the UK.
Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown
Interested in the opportunity mentioned in today's show? jobs@breakdown.network
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/
On the first day of the AI Safety Summit, hosted by British Prime Minister Rishi Sunak, 28 countries agreed to work together to share understanding of the dangers posed by artificial intelligence, acknowledging that substantial risks could arise from its use.
Business owners react to the Covid inquiry, plus the latest on the UK's AI Safety Summit.
UK correspondent Matt Dathan joins Kathryn to talk about the revelations emerging from the Covid inquiry, including some foul-mouthed rants in messages from Dominic Cummings, the former aide to PM Boris Johnson. The target of his ire, former deputy cabinet secretary Helen MacNamara, has herself testified today that a "macho" culture harmed the UK's pandemic response. Meanwhile, some of the biggest tech companies are attending Prime Minister Rishi Sunak's summit on the risks of artificial intelligence.
Figures at the centre of government decisions during the coronavirus pandemic give evidence to the UK Covid-19 inquiry. First up was Lee Cain, a senior aide to former Prime Minister Boris Johnson. Then came Dominic Cummings, Mr Johnson's chief adviser until he resigned in late 2020. The inquiry heard that the pandemic was the wrong crisis for Boris Johnson's skillset, that the government had "no plan" to help vulnerable people in lockdown, and that diary entries read out suggest Mr Johnson believed old people should get the virus to protect others. To talk us through the latest revelations, Adam is joined by Chris Mason and former Downing Street Director of Communications Guto Harri. And Adam meets Secretary of State for Science, Innovation and Technology Michelle Donelan to look ahead to this week's AI Safety Summit. You can join our Newscast online community here: https://tinyurl.com/newscastcommunityhere Today's Newscast was presented by Adam Fleming. It was made by Chris Gray with Alex Collins, Gemma Roper and Sam McLaren. The technical producer was Gareth Jones. The editors are Jonathan Aspinwall and Sam Bonham.
Diplomatic contacts surge ahead of anticipated escalation in the Israel-Hamas war; the United Kingdom (UK) hosts its first artificial intelligence (AI) Safety Summit to develop strategies that mitigate the risks of AI; Chinese Foreign Minister Wang Yi arrives in Washington, DC to speak with U.S. Secretary of State Antony Blinken and U.S. National Security Advisor Jake Sullivan; and Pakistan's former Prime Minister Imran Khan possibly faces the death penalty.
Mentioned on the Podcast:
"The Future of the Israel-Hamas War, With Linda Robinson," The President's Inbox
"The Middle East, Including the Palestinian Question: Vote on Competing Draft Resolutions," What's In Blue
For an episode transcript and show notes, visit The World Next Week at: https://www.cfr.org/podcasts/diplomacy-intensifies-israel-hamas-war-uks-ai-safety-summit-chinas-foreign-minister-visits
Washington AI Network digs into the latest on AI policy. Moderated by host Tammy Haddad, this episode features Paul Rennie OBE, Head of the Global Economy Group at the British Embassy, discussing UK Prime Minister Rishi Sunak's first-ever AI Safety Summit on Nov. 1-2.
Paul Rennie, the Head of the Global Economy Group at the British Embassy in Washington D.C., joins this week's episode of In AI We Trust? to discuss the upcoming U.K. AI Safety Summit, the U.K.'s approach to AI regulation, and the international regulatory landscape of AI. Tune in to learn more about who is participating in the upcoming Summit, what it means to be a responsible AI actor today, and how AI can be used to promote global good.
This is the second episode in which we discuss the upcoming Global AI Safety Summit taking place on the 1st and 2nd of November at Bletchley Park in England. We are delighted to have as our guest in this episode one of the hundred or so people who will attend that summit – Connor Leahy, a German-American AI researcher and entrepreneur. In 2020 he co-founded EleutherAI, a non-profit research institute which has helped develop a number of open source models, including Stable Diffusion. Two years later he co-founded Conjecture, which aims to scale AI alignment research. Conjecture is a for-profit company, but the focus is still very much on figuring out how to ensure that the arrival of superintelligence is beneficial to humanity, rather than disastrous.
Selected follow-ups:
https://www.conjecture.dev/
https://www.linkedin.com/in/connor-j-leahy/
https://www.gov.uk/government/publications/ai-safety-summit-programme/ai-safety-summit-day-1-and-2-programme
https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html
An open event at Wilton Hall, Bletchley, on the afternoon before the AI Safety Summit starts: https://www.meetup.com/london-futurists/events/296765860/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
The US has once again tightened restrictions on chip exports and other AI engagements with China. NLW explores the potential consequences, as well as drama around the AI Safety Summit.
Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/
Meta has released its LLM for coding, Code Llama, in numerous versions. NLW explores the community discussion, including some interesting data around an unreleased version trained on synthetic data that seemed to outperform all the others. Before that on the Brief: Spain starts an AI agency, the UK announces more details of its AI Safety Summit, and new AI models emerge from South Korea and China.
Today's Sponsor: Supermanage - AI for 1-on-1's - https://supermanage.ai/breakdown
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/