Gabriele Mazzini, Team Leader - AI Act at the European Commission, discusses the risk-based approach the Commission took when crafting specific rules for the Artificial Intelligence Act (versus simply opposing the technology as a whole). He also discusses the complexities involved in regulating emerging technologies that evolve at a much faster pace than the legislation itself.

Key Takeaways:
- Recommendations put forward for regulating emerging technologies within the AI Act
- What the process has been like for the development of the AI Act, including the key players
- Where regulation in this space can be most helpful despite the complexities involved

Guest Bio:
Gabriele Mazzini is the architect and lead author of the European Commission's proposal for the Artificial Intelligence Act (AI Act); at the Commission he has focused on the legal and policy questions raised by new technologies since August 2017. Before joining the European Commission, Gabriele held several positions in the private sector in New York and served in the European Parliament and the EU Court of Justice. He holds an LLM from Harvard Law School, a PhD in Italian and Comparative Criminal Law from the University of Pavia, and a Law Degree from the Catholic University in Milan. He is qualified to practice law in Italy and New York.

About this Show:
The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.
The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
Our 154th episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

Correction: Andrey mentioned "State space machines"; he meant "State space models"

Timestamps + links:
(00:00:00) Intro / Banter

Tools & Apps
(00:02:06) Google Releases Gemini, an A.I.-Driven Chatbot and Voice Assistant
(00:05:56) Copilot gets a big redesign and a new way to edit your AI-generated images
(00:09:40) Arc Search's AI responses launched as an unfettered experience with no guardrails
(00:12:40) Brilliant Labs's Frame glasses serve as multimodal AI assistant
(00:15:30) Stability AI launches SVD 1.1, a diffusion model for more consistent AI videos
(00:16:18) OpenAI launches ChatGPT app for Apple Vision Pro

Applications & Business
(00:19:04) A Waymo robotaxi hit a cyclist in San Francisco – here's what happened
(00:23:30) Canon plans to disrupt chipmaking with low-cost "stamp" machine
(00:27:33) US industry group calls for multilateral chip export controls to address disadvantage over Korea, other allies
(00:30:10) U.S. blocks shipment of 24 Nvidia AI GPUs to China over concerns about self-driving truck company
(00:32:24) Nvidia reportedly selects Intel Foundry Services for GPU packaging production – could produce over 300,000 H100 GPUs per month

Projects & Open Source
(00:37:23) Allen Institute for AI launches open and transparent OLMo large language model
(00:42:46) Meet 'Smaug-72B': The new king of open-source AI
(00:47:02) Introducing Qwen1.5
(00:50:54) Hugging Face launches open source AI assistant maker to rival OpenAI's custom GPTs
(00:53:20) Apple releases 'MGIE', a revolutionary AI model for instruction-based image editing

Research & Advancements
(00:54:30) Learning Universal Predictors
(01:01:00) Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks
(01:04:40) MusicRL: Aligning Music Generation to Human Preferences
(01:05:47) FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design
(01:09:06) AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents
(01:12:36) Specialized Language Models with Cheap Inference from Limited Domain Data

Policy & Safety
(01:13:22) EU's AI Act passes last big hurdle on the way to adoption
(01:17:04) Building an early warning system for LLM-aided biological threat creation
(01:23:44) FCC votes to ban scam robocalls that use AI-generated voices
(01:24:33) Biden administration names a director of the new AI Safety Institute
(01:26:23) OpenAI's GPT-4 finally meets its match: Scots Gaelic smashes safety guardrails

Synthetic Media & Art
(01:28:04) AI poisoning tool Nightshade received 250,000 downloads in 5 days: 'beyond anything we imagined'
(01:30:24) Labeling AI-Generated Images on Facebook, Instagram and Threads
(01:33:06) OpenAI is adding new watermarks to DALL-E 3
(01:34:38) Following lawsuit, rep admits "AI" George Carlin was human-written
(01:36:20) Outro
Welcome to "AI Lawyer Talking Tech," where we delve into the intersection of artificial intelligence and the legal world. In today's episode, we're exploring the latest innovations and challenges shaping the legal landscape in 2024. From the increasing use of AI tools in estate planning to the implications of generative AI in copyright law, it's clear that technology is rapidly transforming the way legal professionals operate. Join us as we delve into the top 10 legal issues that private companies need to consider, as well as the ethical and practical considerations of AI in legal practice. Stay tuned for an insightful discussion on the impact of AI on the legal profession and the evolving role of technology in shaping the future of law.

- What Private Companies Need To Consider in 2024: Top 10 Legal Issues (06 Feb 2024, National Law Review)
- 'It's Been A Very Good Year,' Says the Winner of Last Year's TECHSHOW Startup Alley, Universal Migrator (06 Feb 2024, LawSites)
- About Last Week (06 Feb 2024, Mike McBride Online)
- The Family Renovation Project: A Blueprint For Integrating Legal Teams In AI Initiatives (06 Feb 2024, Above The Law)
- UnitedLex Source Code Analysis Uncovers Crucial Evidence Leading to a 5G Standard Patent Infringement Verdict Awarding $67.5 Million, Plus Ongoing Royalties (06 Feb 2024, InvestorsObserver)
- Artificial Intelligence and Copyright Law: The NYT v. OpenAI – Fair Use Implications of Generative AI (06 Feb 2024, JD Supra)
- If Articling Didn't Exist, Would We Invent It? (06 Feb 2024, Slaw)
- USC Gould aims to expand expertise in teaching, researching artificial intelligence (06 Feb 2024, USC News)
- B.C. Supreme Court Ruling to Shape Use of AI Models in Legal Sphere – Experts Weigh In (06 Feb 2024, Cryptopolitan)
- Pro Bono: Year in Review 2023 (06 Feb 2024, Norton Rose Fulbright)
- Generation tech (06 Feb 2024, Law Society Gazette)
- Philly sheriff's campaign takes down bogus 'news' stories posted to site that were generated by AI (05 Feb 2024, Connecticut Post)
- How to Successfully Implement HubSpot at Your Law Firm (05 Feb 2024, National Law Review)
- Sexual Harassment Allegations Pile Up At Annual Legal Tech Conference (05 Feb 2024, Above The Law)
- Making the business case for AI: Five starting points (05 Feb 2024, Financial Thomson Reuters)
- The Justice Gap in Legal Tech: A Tale of Two Conferences and the Implications for A2J (05 Feb 2024, LawSites)
- AffiniPay Adds In-Person Payments to Product Suite (05 Feb 2024, PYMNTS.com)
- Legal AI tools and AI assistants: must-have for legal professionals (05 Feb 2024, Financial Thomson Reuters)
- AI in 2024: What Every GC Needs to Know (05 Feb 2024, National Law Review)
- The new world is here for the legal industry… are you ready or not? (05 Feb 2024, Financial Thomson Reuters)
- All Eyes on AI: Regulatory, Litigation, and Transactional Developments – Q4 2023 (05 Feb 2024, JD Supra)
- AI for good: How one entrepreneur is tackling the confusion around family leave law (05 Feb 2024, Thomson Reuters Institute)
- MIT: What babies can teach AI (06 Feb 2024, Stephen's Lighthouse)
- What Private Companies Need To Consider in 2024: Top 10 Legal Issues (06 Feb 2024, Schiff Hardin)
- New Jersey's Consumer Data Privacy Statute - What You Need to Know (06 Feb 2024, Gibbons)
- Leading Digital Lawyer Rafi Azim-Khan Joins Crowell's London and San Francisco Offices (06 Feb 2024, Crowell & Moring)
- The pre-final text of the EU's AI Act leaked online (06 Feb 2024, White & Case)
- New Hampshire Joins Data Protection Trend, Passes Comprehensive Data Protection Law (06 Feb 2024, Benesch)
- FTC focused on competition and dominance in generative AI space (06 Feb 2024, Hogan Lovells)
- President Biden's Executive Order Enables Agencies to Address Key Artificial Intelligence Risks, Privacy & Cybersecurity Law Report (05 Feb 2024, Andrews Kurth)
- Artificial Intelligence and Copyright Law: The NYT v. OpenAI – Fair Use Implications of Generative AI (05 Feb 2024, Baker Donelson Bearman Caldwell & Berkowitz PC)
- New Jersey Enacts Comprehensive Data Privacy Law (05 Feb 2024, White & Case)
SpaceX will reap the benefits of new Dragon research opportunity; Spotify's podcast exclusive days are over as Joe Rogan's show expands to other platforms; EU's AI Act passes last big hurdle on the way to adoption Learn more about your ad choices. Visit megaphone.fm/adchoices
The world's first major regulation on artificial intelligence should almost be done and dusted – but France is causing trouble. FRANCE 24's Tech Editor Peter O'Brien tells us more.
Paris Marx is joined by Timnit Gebru to discuss the past year in AI hype, how AI companies have shaped regulation, and tech's relationship to Israel's military campaign in Gaza. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is produced by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.

Also mentioned in this episode:
- Billy Perrigo reported on OpenAI lobbying to water down the EU's AI Act.
- Nitasha Tiku wrote about the push to train students in a particular idea of AI.
- Politico has been doing a lot of reporting on the influences on AI policy in the US and UK.
- OpenAI made a submission in the UK to try to get permission to train on copyrighted material.
- Arab workers in the tech industry fear the consequences of speaking out for Palestinian rights.
- 972 Magazine reported on Israel's use of AI to increase its targets in Gaza.
- Jack Poulson chronicles the growing ties between military and tech.
- Timnit mentioned No Tech for Apartheid, Antony Loewenstein's The Palestine Laboratory, and Malcolm Harris' Palo Alto.

Support the show
Niels Lunde is joined by his boss, Bjarne Corydon, editor-in-chief and CEO of Børsen. The two kick off the season with their take on the most important challenges facing the business world in 2024.
03.17: Geopolitics: On industrial policy and state subsidies as a new and toxic condition for a small country.
16.54: Geopolitics: On the balance between understanding political considerations and creating commercial value.
21.50: Artificial intelligence: On the risk that the EU's AI Act becomes a European own goal.
27.00: Artificial intelligence: On companies' next move, which will be going from fascination to practice.
32.06: The green transition: On an emerging global competition over economies of scale, which opens up an uncomfortable Danish dilemma. Case: Ørsted.
42.35: The national economy: On a recession that may or may not be coming.
47.00: Companies: On why we are keeping an extra eye on Novo Nordisk, A.P. Møller-Mærsk, and Coop this year.
56.30: Christiansborg: On the key to understanding Danish politics in the coming months.
1.08.00: Executive Summary.
Producer: Peter-Emil Witt. Get Niels Lunde's newsletter: sign up at borsen.dk/nyhedsbreve.
Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we'll hear the most about in 2024 is the remarkable exploit used to compromise several generations of Apple iPhone. The question I think we'll be asking for the next year is simple: How could an attack like this be introduced without Apple's knowledge and support? We don't get to this question until near the end of the episode, and I don't claim great expertise in exploit design, but it's very hard to see how such an elaborate compromise could be slipped past Apple's security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation. Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can't get patents as “inventors.” But it's quite possible that they'll make a whole lot of technology “obvious” and thus unpatentable. Paul Stephan joins us to note that National Institute of Standards and Technology (NIST) has come up with some good questions about standards for AI safety. Jeffery notes that U.S. lawmakers have finally woken up to the EU's misuse of tech regulation to protect the continent's failing tech sector. Even the continent's tech sector seems unhappy with the EU's AI Act, which was rushed to market in order to beat the competition and is therefore flawed and likely to yield unintended and disastrous consequences. A problem that inspires this week's Cybertoonz. Paul covers a lawsuit blaming AI for the wrongful denial of medical insurance claims. As he points out, insurers have been able to wrongfully deny claims for decades without needing AI. Justin Sherman and I dig deep into a NYTimes article claiming to have found a privacy problem in AI. 
We conclude that AI may have a privacy problem, but extracting a few email addresses from ChatGPT doesn't prove the case. Finally, Jeffery notes an SEC “sweep” examining the industry's AI use. Paul explains the competition law issues raised by app stores – and the peculiar outcome of litigation against Apple and Google. Apple skated in a case tried before a judge, but Google lost before a jury and entered into an expensive settlement with other app makers. Yet it's hard to say that Google's handling of its app store monopoly is more egregiously anticompetitive than Apple's. We do our own research in real time in addressing an FTC complaint against Rite Aid for using facial recognition to identify repeat shoplifters. The FTC has clearly learned Paul's dictum, “The best time to kick someone is when they're down.” And its complaint shows a lack of care consistent with that posture. I criticize the FTC for claiming without citation that Rite Aid ignored racial bias in its facial recognition software. Justin and I dig into the bias data; in my view, if FTC documents could be reviewed for unfair and deceptive marketing, this one would lead to sanctions. The FTC fares a little better in our review of its effort to toughen the internet rules on child privacy, though Paul isn't on board with the whole package. We move from government regulation of Silicon Valley to Silicon Valley regulation of government. Apple has decided that it will now require a judicial order to give government's access to customers' “push notifications.” And, giving the back of its hand to crime victims, Google decides to make geofence warrants impossible by blinding itself to the necessary location data. Finally, Apple decides to regulate India's hacking of opposition politicians and runs into a Bharatiya Janata Party (BJP) buzzsaw. Paul and Jeffery decode the EU's decision to open a DSA content moderation investigation into X. 
We also dig into the welcome failure of an X effort to block California's content moderation law. Justin takes us through the latest developments in Cold War 2.0. China is hacking our ports and utilities with intent to disrupt (as opposed to spy on) them. The U.S. is discovering that derisking our semiconductor supply chain is going to take hard, grinding work. Justin looks at a recent report presenting actual evidence on the question of TikTok's standards for boosting content of interest to the Chinese government.

And in quick takes:
- I celebrate the end of the Reign of Mickey Mouse in copyright law
- Paul explains why Madison Square Garden is still able to ban lawyers who have sued the Garden
- I note the new short-term FISA 702 extension
- Paul predicts that the Supreme Court will soon decide whether police can require suspects to provide police with phone passcodes
- And Paul and I quickly debate Daphne Keller's amicus brief for Frances Fukuyama in the Supreme Court's content moderation cases

Download 486th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
In this episode, we dive into recent news in tech, gaming and all things Miami Art Week. Today's episode is sponsored by Chainalysis.

Avery and Sam discuss recent stories in tech regulation, gaming and brands at Miami Art Week. They explore topics like the EU's AI Act, the leaked trailer for Grand Theft Auto 6, brands participating in the metaverse and games, and how brands can capture the high-value Art Basel audience.

Links mentioned from the podcast:
- Decrypt Media Inc. and Rug Radio Merge to Create Global Web3 Publishing Company
- The EU Just Passed Sweeping New Rules to Regulate AI
- GTA 6: Grand Theft Auto VI could smash revenue records

Follow us on Twitter: Sam Ewen, Avery Akkineni, CoinDesk, Vayner3

From our sponsor: Chainalysis is the blockchain data platform. We provide data, software, services, and research to government agencies, web3 companies, financial institutions, and insurance and cybersecurity companies. Our data powers investigation, compliance, and business intelligence software that has been used to solve some of the world's most high-profile criminal cases. For more information, visit www.chainalysis.com.

"Gen C" features hosts Sam Ewen and Avery Akkineni, with editing by Jonas Huck. Executive produced by Jared Schwartz and produced by Uyen Truong. Our theme music is "1882" by omgkirby x Channel Tres with editing by Doc Blust. Artwork by Nicole Marie Rincon.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Kara and Scott discuss Tucker Carlson's new streaming service, Elon letting Alex Jones back on X, and whether the criticism of Google's Gemini demo video is warranted. Then, UPenn's president resigns after that Congressional testimony last week, will the other presidents follow? Plus, the EU moves forward with its landmark AI Act. Finally, our Friend of Pivot is "Pod Save America" co-host and "Pollercoaster" host, Dan Pfeiffer. Dan gives us insight into the latest election poll numbers and shares some advice for Biden and the Democrats. Follow Dan on X at @danpfeiffer. Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode, we discuss three important updates in the tech industry. Firstly, the tech sector is showing stability, which is good news for the overall industry. Despite a slight dip in tech employment, there is high demand for AI talent. Secondly, we delve into the global shift towards regulating artificial intelligence, with the EU's AI Act and the US's AI policies being notable examples. We also explore how tech is transforming the labor market and creating opportunities for IT services. Lastly, we present the November jobs report, which reveals a robust job market in the US, with strong employment and wage growth. Despite cooler job growth, the economy remains strong, supporting consumer spending during the holidays. The unemployment rate is at its lowest in over 50 years, and workers are seeing real wage gains. These positive indicators may influence the Federal Reserve's decision on future rate hikes.

Three things to know today:
00:00 Tech Sector Shows Stability: AI Talent in High Demand Despite Overall Tech Employment Dip
04:16 From EU's AI Act to US AI Policies: A Global Shift Towards Regulating Artificial Intelligence
09:43 Tech's Transformation of the Labor Market: Opportunities for IT Services

Supported by: https://skykick.com/mspradio/
Want to take my class? https://www.itspu.com/all-classes/classes/navigating-emerging-technologies-for-msps/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Support the show on Patreon: https://patreon.com/mspradio/
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
In this episode, Steve speaks with Dragos Tudorache, one of the members of the European Parliament who is responsible for writing the EU's AI Act. Dragos explains the thought process that went into developing the new law and tells Steve what organisations can expect and how they can prepare for its implementation. Mentioned in and related to this episode: ISF Podcast: Ellie Pavlick - Balancing the Risk and Reward of AI ISF Podcast: The Ethical Dilemma of AI & Innovation ISF Podcast: Beyond Buzzwords: AI, ML, and the Future of Cyber ISF Podcast: Mo Gawdat: Rethinking the Paradigm of Artificial and Human Intelligence ISF Analyst Insight Podcast Read the transcript of this episode Subscribe to the ISF Podcast wherever you listen to podcasts Connect with us on LinkedIn and Twitter From the Information Security Forum, the leading authority on cyber, information security, and risk management.
Senior Practice Director Conan D'Arcy and Associate Natasha Dixon discuss the latest developments in the EU AI Act negotiations, including potential compromises on foundation models and biometric identification. They weigh in on the million-dollar question: will negotiators be able to overcome their differences and reach an agreement by the end of the year? If you or your business are exposed to the trends discussed in this episode, you can reach out to Conan or the wider Global Counsel team by clicking here. Hosted on Acast. See acast.com/privacy for more information.
Frankfurt Book Fair prompts European writers' groups to call for the EU's AI Act to be tough on transparency. Welcome to Self-Publishing News with ALLi News editor Dan Holloway, bringing you the latest in indie publishing news and commentary. Find more author advice, tips and tools at our Self-publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts, and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally. About the Host: Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines and competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
On this episode of The AI Moment, host Mark Beccue examines two trends in AI regulations and takes a look under the hood of Microsoft Copilot to understand how Microsoft is addressing built-in challenges for LLMs. The discussion covers: The key Generative AI trends – AI regulations. Analysis of the status and arc of the EU's AI Act and copyright/IP issues linked to Adobe's proposed FAIR Act. A company we like doing AI. We take a look under the hood of Microsoft's Copilot to understand how Microsoft is addressing the built-in challenges for LLMs. How will Copilot navigate LLM issues with accuracy, bias, privacy and hallucinations?
In this chapter, we discuss the ethical landscape of using generative AI. Issues around privacy, data ownership, and potential biases are highlighted. We also cover existing guidelines like the EU's AI Act and ACM's Statement of Principles, helping businesses and individuals navigate the ethical complexities of this technology. Download the whitepaper here: https://www.raccoons.be/resources/whitepapers/generative-ai-playbook
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #27: Portents of Gemini, published by Zvi on August 31, 2023 on LessWrong.

By all reports, and as one would expect, Google's Gemini looks to be substantially superior to GPT-4. We now have more details on that, and also word that Google plans to deploy it in December; Manifold gives it 82% to happen this year and a similar probability of being superior to GPT-4 on release. I indeed expect this to happen on both counts. This is not too long from now, but also this is AI #27 and Bard still sucks; Google has been taking its sweet time getting its act together. So now we have both the UK Summit and Gemini coming up within a few months, as well as major acceleration of chip shipments. If you are preparing to try and impact how things go, now might be a good time to get ready and keep your powder dry. If you are looking to build cool new AI tech and capture mundane utility, be prepared on that front as well.

Table of Contents
- Introduction.
- Table of Contents. Bold sections seem most relatively important this week.
- Language Models Offer Mundane Utility. Summarize, take a class, add it all up.
- Language Models Don't Offer Mundane Utility. Not reliably or robustly, anyway.
- GPT-4 Real This Time. History will never forget the name, Enterprise.
- Fun With Image Generation. Watermarks and a faster SDXL.
- Deepfaketown and Botpocalypse Soon. Wherever would we make deepfakes?
- They Took Our Jobs. Hey, those jobs are only for our domestic robots.
- Get Involved. Peter Wildeford is hiring. Send in your opportunities, folks!
- Introducing. Sure, Graph of Thoughts, why not?
- In Other AI News. AI gives paralyzed woman her voice back, Nvidia invests.
- China. New blog about AI safety in China, which is perhaps a thing you say?
- The Best Defense. How exactly would we defend against bad AI with good AI?
- Portents of Gemini. It is coming in December. It is coming in December.
- Quiet Speculations. A few other odds and ends.
- The Quest for Sane Regulation. CEOs to meet with Schumer, EU's AI Act.
- The Week in Audio. Christiano and Leahy give talks, Rohit makes his case.
- Rhetorical Innovation. Some relatively promising attempts.
- Llama No One Stopping This. Meta to open source all Llamas no matter what.
- No One Would Be So Stupid As To. Bingo, sir.
- Aligning a Smarter Than Human Intelligence is Difficult. Davidad has a plan.
- People Are Worried About AI Killing Everyone. Roon, the better critic we need.
- Other People Are Not As Worried About AI Killing Everyone. Consciousness?
- The Wit and Wisdom of Sam Altman. Do you feel lucky? Well, do ya?
- The Lighter Side. The big time.

Language Models Offer Mundane Utility

A class on the economics of ChatGPT, complete with podcast recording. More like this, please, no matter my quibbles. I especially don't think survey courses, in economics or elsewhere, are the way to go. Focus on what matters and do something meaningful rather than try to maximize gesturing. If you let me teach students with other majors one economics class, teaching them the basics of micro and then using that to explore what matters sounds like a great plan. So is getting students good at using LLMs.

Use algorithmic instructions to let LLMs accurately do tasks like 19-digit addition.

Summarize writing. It seems GPT-4 summaries are potentially more accurate than human ones. We encountered two practical problems:
- Not following instructions. Bigger models were better at following instructions. We had to use another LLM to understand the outputs of the smaller LLMs and work out if it said A or B was the answer.
- Ordering bias. Given A and B, are you more likely to suggest A simply because it is first? One way to test this is to swap the ordering and see how many times you say A both times or B both times.
Once we dealt with these problems we saw:
- Human: 84% (from past research)
- gpt-3.5-turbo: 67.0% correct (seemed to h...
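The ordering-bias check described above can be sketched in a few lines: ask the same A/B comparison twice with the candidates swapped, and count how often the judge picks the same position rather than the same answer. This is a minimal sketch, not the authors' actual harness; `judge` is a hypothetical callable standing in for an LLM API call that takes two candidates and returns "A" or "B".

```python
def position_bias_rate(judge, pairs):
    """Fraction of pairs where the judge's verdict tracks position, not
    content: it answered 'A' both times or 'B' both times even though
    the candidates were swapped between the two calls."""
    biased = 0
    for first, second in pairs:
        pick_normal = judge(first, second)   # A = first, B = second
        pick_swapped = judge(second, first)  # A = second, B = first
        # An order-insensitive judge flips its letter when the options swap,
        # so matching letters across the two calls signals position bias.
        if pick_normal == pick_swapped:
            biased += 1
    return biased / len(pairs)

# Toy judge that always answers "A" (maximally position-biased):
always_first = lambda a, b: "A"
print(position_bias_rate(always_first, [("x", "y"), ("p", "q")]))  # 1.0
```

A rate near 0 suggests the judge's picks flip with the ordering as they should; a rate near 1 means it is mostly choosing a slot, not a summary.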
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #27: Portents of Gemini, published by Zvi on August 31, 2023 on LessWrong. By all reports, and as one would expect, Google's Gemini looks to be substantially superior to GPT-4. We now have more details on that, and also word that Google plans to deploy it in December, Manifold gives it 82% to happen this year and similar probability of being superior to GPT-4 on release. I indeed expect this to happen on both counts. This is not too long from now, but also this is AI #27 and Bard still sucks, Google has been taking its sweet time getting its act together. So now we have both the UK Summit and Gemini coming up within a few months, as well as major acceleration of chip shipments. If you are preparing to try and impact how things go, now might be a good time to get ready and keep your powder dry. If you are looking to build cool new AI tech and capture mundane utility, be prepared on that front as well. Table of Contents Introduction. Table of Contents. Bold sections seem most relatively important this week. Language Models Offer Mundane Utility. Summarize, take a class, add it all up. Language Models Don't Offer Mundane Utility. Not reliably or robustly, anyway. GPT-4 Real This Time. History will never forget the name, Enterprise. Fun With Image Generation. Watermarks and a faster SDXL. Deepfaketown and Botpocalypse Soon. Wherever would we make deepfakes? They Took Our Jobs. Hey, those jobs are only for our domestic robots. Get Involved. Peter Wildeford is hiring. Send in your opportunities, folks! Introducing. Sure, Graph of Thoughts, why not? In Other AI News. AI gives paralyzed woman her voice back, Nvidia invests. China. New blog about AI safety in China, which is perhaps a thing you say? The Best Defense. How exactly would we defend against bad AI with good AI? 
Portents of Gemini. It is coming in December. It is coming in December.
Quiet Speculations. A few other odds and ends.
The Quest for Sane Regulation. CEOs to meet with Schumer, EU's AI Act.
The Week in Audio. Christiano and Leahy give talks, Rohit makes his case.
Rhetorical Innovation. Some relatively promising attempts.
Llama No One Stopping This. Meta to open source all Llamas no matter what.
No One Would Be So Stupid As To. Bingo, sir.
Aligning a Smarter Than Human Intelligence is Difficult. Davidad has a plan.
People Are Worried About AI Killing Everyone. Roon, the better critic we need.
Other People Are Not As Worried About AI Killing Everyone. Consciousness?
The Wit and Wisdom of Sam Altman. Do you feel lucky? Well, do ya?
The Lighter Side. The big time.

Language Models Offer Mundane Utility

A class on the economics of ChatGPT, complete with podcast recording. More like this, please, no matter my quibbles. I especially don't think survey courses, in economics or elsewhere, are the way to go. Focus on what matters and do something meaningful rather than try to maximize gesturing. If you let me teach students with other majors one economics class, teaching them the basics of micro and then using that to explore what matters sounds like a great plan. So is getting students good at using LLMs.

Use algorithmic instructions to let LLMs accurately do tasks like 19-digit addition.

Summarize writing. It seems GPT-4 summaries are potentially more accurate than human ones. We encountered two practical problems:

Not following instructions. Bigger models were better at following instructions. We had to use another LLM to understand the outputs of the smaller LLMs and work out if it said A or B was the answer.

Ordering bias. Given A and B, are you more likely to suggest A simply because it is first? One way to test this is to swap the ordering and see how many times you say A both times or B both times.
Once we dealt with these problems we saw: Human: 84% (from past research) gpt-3.5-turbo: 67.0% correct (seemed to h...
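The swap test for ordering bias described above can be sketched in a few lines of Python. This is a minimal illustration under assumed names, not the evaluators' actual harness: `always_first` is a stub standing in for a hypothetical model callable that returns "A" or "B", and the prompt format is invented for the example.

```python
def ask(model, question, opt1, opt2):
    # Present the two options in a fixed A/B order and ask the model to pick.
    prompt = f"{question}\nA: {opt1}\nB: {opt2}\nAnswer with A or B."
    return model(prompt)

def ordering_bias_check(model, question, opt_x, opt_y):
    """Query twice with the options swapped.

    Returns "consistent" if the model picks the same underlying option
    both times; otherwise reports which *position* it stuck to.
    """
    first = ask(model, question, opt_x, opt_y)   # A = opt_x, B = opt_y
    second = ask(model, question, opt_y, opt_x)  # A = opt_y, B = opt_x
    # Same letter both times means the model followed position, not content.
    if first == second:
        return "first" if first == "A" else "second"
    return "consistent"

def always_first(prompt):
    # Stub model with pure positional bias: it always answers "A".
    return "A"

print(ordering_bias_check(always_first, "Which summary is more accurate?",
                          "summary X", "summary Y"))
# prints: first
```

Running this over many question pairs and counting how often the result is "first" or "second" rather than "consistent" gives a rough estimate of how much of the model's apparent accuracy is positional.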
Resources for this episode available at 505updates.com. From Edwin Kwan in Sydney, Australia: As small households embrace green energy and install solar panels, they could unknowingly be exposing their home networks. Your home solar system could be connecting to the manufacturer's servers to send data that is displayed on the app. From Olimpiu Pop in Transylvania, Romania: The European AI Act is the first of its kind in the world. Being the first, it creates a precedent and a benchmark for those that will follow, but it also imposes explicit obligations on foundational model providers like OpenAI and Google. From Katy Craig in San Diego, California: In a bizarre case of unauthorized access, a 53-year-old man from California has been charged for meddling with a water treatment facility's critical software. From Hillary Coover in Washington, DC: Today, WIRED released an article discussing Discord's new "Family Center" safety settings designed for teens. Discord launched the Family Center, aiming to strike a balance between safety and teen privacy. From Marcel Brown in St. Louis, Missouri: July 12th, 1990. Nintendo releases the original Final Fantasy video game for its Nintendo Entertainment System in North America. Final Fantasy helped to popularize the genre and has gone on to spawn one of the most well-known RPG franchises in history. From Sourced Network Production in New York City. "It's 5:05". I'm Pokie Huang. Today is Wednesday, July 12th. Here's the full story behind today's cyber security and open source headlines...
Our 129th episode with a summary and discussion of last week's big AI news! Check out our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai

Timestamps:
(00:00:00) Intro / Banter
Tools & Apps
(00:05:40) These 'A.I. humans' are letting gamers modify their voices in real time
(00:07:48) New Clippy app gives us a taste of AI in Windows 11
(00:10:40) ChatGPT on iOS now comes with Bing built-in
Applications & Business
(00:11:50) Junk websites filled with AI-generated text are pulling in money from programmatic ads
(00:14:28) AI startups buck funding winter, grow share in '23
(00:18:35) The TSA will use facial recognition in over 400 airports
(00:20:18) OpenAI Chooses London for Its First Corporate Office Outside US
(00:21:24) Unity shares rise 12% after company announces A.I. marketplace
(00:22:40) Fundings
Inflection lands $1.3B investment to build more ‘personal' AI
Runway, a startup building generative AI for content creators, raises $141M
Typeface, which is building generative AI for brands, raises $100M at a $1B valuation
Celestial AI raises $100M to transfer data using light-based interconnects
Gleamer, which provides AI software for radiologists, raises $29.5M
Parrot, an AI-powered transcription platform that turns speech into text, raises $11M Series A
(00:26:45) Acquisitions
Databricks Strikes $1.3 Billion Deal for Generative AI Startup MosaicML
Meituan buys founder's months-old ‘OpenAI for China' for $234M
As the generative AI craze rages on, Ramp acquires customer support startup Cohere.io
Nvidia Acquired AI Startup That Shrinks Machine-Learning Models
Projects & Open Source
(00:30:51) Unraveling GPU Inference Costs for Fine-tuned Open-source Models V/S Closed Platforms
Research & Advancements
(00:33:48) Researchers From LinkedIn And UC Berkeley Propose A New Method To Detect AI-Generated Profile Photos
(00:35:35) China's Baidu claims its Ernie Bot beats ChatGPT on key tests as A.I. race heats up
(00:38:47) LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
(00:40:21) Self-Improving Robots: Embracing Autonomy in Robot Learning
(00:41:48) RoboCat: A self-improving robotic agent
(00:45:05) NVIDIA H100 GPUs Set Standard for Generative AI in Debut MLPerf Benchmark
Policy & Safety
(00:46:50) From ChatGPT to executive orders: Inside the White House's urgent push to regulate AI
(00:53:50) US considering new restrictions on AI chip exports to China, Wall Street Journal reports
(00:57:10) 17 fatalities, 736 crashes: The shocking toll of Tesla's Autopilot
(01:00:32) European companies claim the EU's AI Act could ‘jeopardise technological sovereignty'
Synthetic Media and Art
(01:04:22) Adobe Stock creators aren't happy with Firefly, the company's ‘commercially safe' gen AI tool
(01:08:27) Adobe indemnity clause designed to ease enterprise fears about AI-generated art
(01:12:17) Outro
Thanks for tuning in to Intellicast! In this episode, Brian Lamar and Producer Brian discuss conferences and some news stories from the past few weeks (yes, we know it's been a while since our last episode!) So, without further ado, let's jump right in. The episode kicks off with some feedback we received on our last episode! In it, we had a conversation on the conference in Hilton Head, which was a unique location, but it was expensive, hard to get to, etc. Our listener recommended that maybe we need to go to some second-tier cities (Ex: Charlotte, New Orleans, Nashville, Philadelphia, Cincinnati) for our conferences. The reasoning being that these cities are cheaper, and the airports have less travel hassle/cancellations and are easier to navigate. What do you think? The guys also give an update on the upcoming conferences in the next few months and even some in 2024! In our first news story, the guys discuss Momentive's announcement that they will be returning to its original name, SurveyMonkey. As expected, Brian Lamar is practically giddy over this news. In the next news story, Brian and Brian chat about Cint and Publisher's Clearing House's partnership to increase access to respondents with their new platform, PCH Consumer Insights. This leads Brian Lamar to a bit of a soapbox rant about how things like this change panels. The next news story follows a similar thread with Kantar selling its specialist healthcare business to M3 Global Research. After this, the discussion turns to artificial intelligence. The European Union has approved a draft version of the EU's AI Act which sets out plans to regulate the use of AI technology. The plans will enact bans on many uses of AI and require AI systems to comply with transparency requirements. In the last story, the guys discuss QuestionPro's acquisition of PathosAI, an AI platform that identifies stated and non-stated emotions, which leads to another fired-up rant from Brian Lamar about name changes.
EMI's annual report on the sample industry, The Sample Landscape: 2023 Edition, is now available! Get your copy here: https://emi-rs.com/the-sample-landscape/ Watch our latest webinar, The Sample Landscape 2023: State of the Industry, on-demand! Want to catch up on our blogs? Click here. Missed one of our webinars or want to get some of our whitepapers and reports? You can find it all on our Resources page on our website here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Over the past few months, there have been a range of voices calling for the urgent regulation of artificial intelligence. Comparisons to the problems of nuclear proliferation abound, so perhaps it's no surprise that some want a new international body similar to the International Atomic Energy Agency (IAEA). But when it comes to AI and global governance, there's already a lot in play: from ethics councils to various schemes for industry governance, activity on standards, international agreements, and legislation that will have international impact, such as the EU's AI Act. To help get his head around the complicated, evolving ecology of global AI governance, Justin Hendrix spoke to two of the three authors of a recent paper in the Annual Review of Law and Social Science that attempts to take stock of and explore the tensions between different approaches, including Michael Veale, an associate professor in the Faculty of Laws at University College London, where he works on the intersection of computer science, law, and policy; and Robert Gorwa, a postdoctoral researcher at the Berlin Social Science Center, a large publicly-funded research institute in Germany.
The surge in artificial intelligence tools has led to increased calls for regulation. And one place where those rules are beginning to take shape is Europe. WSJ Brussels-based reporter Kim Mackrael sat down with Tech News Briefing's Zoe Thomas to talk about the draft legislation pushed forward by the EU parliament. Julie Chang hosts.
If your startup serves influencers, you may want to update your resume. Plus: America's new number one “draft” pick, the EU's AI Act, and a whole lot more. Join our hosts Jacob Cohen and Juliet Bennett Rylah as they take you through our most interesting stories of the day. Follow us on social media: TikTok: https://www.tiktok.com/@thdspod Instagram: https://www.instagram.com/thdspod/ Thank You For Listening to The Hustle Daily Show. Don't forget to hit Subscribe or Follow us on Apple Podcasts so you never miss an episode! If you want this news delivered to your inbox, join millions of others and sign up for The Hustle Daily newsletter, here: https://thehustle.co/email/ Plus! Your engagement matters to us. If you are a fan of the show, be sure to leave us a 5-Star Review on Apple Podcasts https://podcasts.apple.com/us/podcast/the-hustle-daily-show/id1606449047 (and share your favorite episodes with your friends, clients, and colleagues). “The Hustle Daily Show” is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Produced by Darren Clarke.
The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. As people realised that GPT technology was a game-changer, they called for the Act to be reconsidered. Famously, the EU contains no tech giants, so cutting-edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world's most proactive regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today. The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems. It takes a risk-based approach. John Higgins joins us in this episode to discuss the AI Act. John is the Chair of the Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU.

Follow-up reading:
https://www.globaldigitalfoundation.org/
https://artificialintelligenceact.eu/

Topics addressed in this episode include:
*) How different is generative AI from the productivity tools that have come before?
*) Two approaches to regulation compared: a "Franco-German" approach and an "Anglo-American" approach
*) The precautionary principle, for when a regulatory framework needs to be established in order to provide market confidence
*) The EU's preference for regulating applications rather than regulating technology
*) The types of application that matter most: when there is an impact on human rights and/or safety
*) Regulations in the Act compared to the principles that good developers will in any case be following
*) Problems with the lack of information about the data sets used to train LLMs (Large Language Models)
*) Enabling the flow, between the different "providers" within the AI value chain, of information about compliance
*) Two potential alternatives to how the EU aims to regulate AI
*) How an Act passes through EU legislation
*) Conflicting assessments of the GDPR: a sledgehammer to crack a nut?
*) Is it conceivable that LLMs will be banned in Europe?
*) Why are there no tech giants in Europe? Does it matter?
*) Other metrics for measuring the success of AI within Europe
*) Strengths and weaknesses of the EU single market
*) Reasons why the BCS opposed the moratorium proposed by the FLI: impracticality, asymmetry, benefits held back
*) Some counterarguments in favour of the FLI position
*) Projects undertaken by the Global Digital Foundation
*) The role of AI in addressing (as well as exacerbating) hate speech
*) Growing concerns over populism, polarisation, and post-truth
*) The need for improved transparency and improved understanding

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
As generative AI tools, like ChatGPT, continue to gain popularity, concerns about privacy and data protection have also arisen. Our panel features three distinguished experts in the field of privacy and generative AI who will explore the legal, technical, and ethical challenges related to the use of AI systems.Throughout the discussion, our panelists will share their experiences and insights into the complex landscape of privacy, data protection, and generative AI. We will explore the legal and regulatory requirements that must be considered, as well as the technical and ethical considerations of implementing AI systems.Our panelists will also discuss the efforts being made by regulators around the world to address the challenges related to the use of AI systems. We will discuss the EU's AI Act, the US National Institute of Standards and Technology's AI Risk Management Framework, and the California Privacy Protection Agency's upcoming rulemaking on automated decision-making under state law.
Consensus Gentium took home the Grand Jury prize for SXSW Immersive 2023, and it's the most immersive phone-based experience that I've ever had. The Latin title translates to “If everyone believes it, it must be true,” and it's a near-term speculative sci-fi piece that explores what China's social scoring system might look like in the context of the United States, where mobility is restricted by algorithms but can be expanded if citizens agree to be surveilled by a phone app. The experience automatically shifts between mobile app videos onboarding you into a surveillance state juxtaposed with Facetime calls where your face appears in the lower right corner, and text messages pop up seamlessly throughout the experience as you jump between different tasks and cut scenes that build up the world and immerse you deeper into what director Karen Palmer describes as a “reality simulator” set in an entirely plausible near future. The piece is also tagged with the logline of “the film that watches you back,” as it integrates facial tracking technologies and an eye gaze mechanic that allows you to make a few conscious and unconscious choices throughout the piece. Some of the themes of the piece are around algorithmic bias in facial detection algorithms and how that impacts marginalized communities (and why the EU's AI Act bans facial detection in certain contexts like police enforcement). It also explores agency, self-determination, and biometric threats to freedom of thought as you are scored on a spectrum between how compliant you are to the state versus any deviant or resistant behaviors. Overall, using the multi-channel communication affordances of the phone, Palmer is able to create an entirely plausible portal into this near-future world, which I found deeply, deeply immersive.
It's no wonder that this piece took home the top prize as the SXSW jurors seemed to agree with that sentiment, while this piece also presciently covers many relevant topics around AI, bias, and threats to our cognitive liberty — notably I spoke to Palmer on the same day that Nita Farahany's book officially released titled The Battle for Your Brain, which I had a chance to unpack with Farahany ahead of SXSW.
This episode of the Cyberlaw Podcast is dominated by stories about possible cybersecurity regulation. David Kris points us first to an article by the leadership of the Cybersecurity and Infrastructure Security Administration in Foreign Affairs. Jen Easterly and Eric Goldstein seem to take a tough line on “Why Companies Must Build Safety Into Tech Products.” But for all the tough language, one word, “regulation,” is entirely missing from the piece. Meanwhile, the cybersecurity strategy that the White House has been reportedly drafting for months seems to be hung up over how enthusiastically to demand regulation. All of which seems just a little weird in a world where Republicans hold the House. Regulation is not likely to be high on the GOP to-do list, so calls for tougher regulation are almost certainly more symbolic than real. Still, this is a week for symbolic calls for regulation. David also takes us through a National Telecommunications and Information Administration (NTIA) report on the anticompetitive impact of Apple's and Google's control of their mobile app markets. The report points to many problems and opportunities for abuse inherent in their headlock on what apps can be sold to phone users. But, as Google and Apple are quick to point out, they do play a role in regulating app security, so breaking the headlock could be bad for cybersecurity. In any event, practically every recommendation for action in the report is a call for Congress to step in, which is almost certainly a nonstarter for reasons already given. Not to be outdone on the phony regulation beat, Jordan Schneider and Sultan Meghji explore some of the policy and regulatory proposals for AI that have been inspired by the success of ChatGPT. The EU's AI Act is coming in for lots of attention, mainly from parts of the industry that want to be regulation-free. Sultan and I trade observations about who'll be hollowed out first by ChatGPT, law firms or investment firms.
Sultan also tells us why the ION ransomware hack matters. Jordan and Sultan find a cybersecurity angle to The Great Chinese Balloon Scandal of 2023. And I offer an assessment of Matt Taibbi's story about the Hamilton 68 “Russian influence” reports. If you have wondered what the fuss was about, do not expect mainstream media to tell you; the media does not come out looking good in this story. Unfortunately for Matt Taibbi, he does not look much better than the reporters his story criticizes. David thinks it is a balanced and moderate take, for which I offer an apology and a promise to do better next time.