The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Severin Hacker is the Co-Founder and CTO of Duolingo, the world's most downloaded education app with over 100 million monthly users. Since its 2021 IPO, Duolingo has reached a market cap of $20BN. The company has raised over $183M from top-tier investors including CapitalG, Kleiner Perkins, Union Square Ventures, NEA, Ashton Kutcher, and Tim Ferriss. Severin is also an active angel investor, with standout bets including Decagon, one of the fastest-growing AI-native dev shops globally. Items Mentioned In Today's Episode: 00:00 – Why It's Harder to Raise $3M Than $100M 02:10 – The Real Reason Duolingo Couldn't Have Started in Europe 04:40 – Duolingo's AI Pivot: What “AI-First” Actually Means 07:00 – The 12-Year Bottleneck Duolingo Crushed with AI 11:40 – How Duolingo Uses AI Internally (and Why They Love Cursor) 13:30 – Where AI Still Sucks (Especially in Engineering) 16:00 – Will AI Kill the CS Degree? Severin's Surprising Take 18:00 – The End of Work? UBI, Purpose, and the Future of Labor 25:20 – OpenAI vs Duolingo: Are They Coming for Language Learning? 29:20 – Duolingo's Biggest Mistake: “We Waited Too Long on This…” 39:30 – Duolingo's Secret Sauce: What Investors Always Get Wrong 45:00 – Would You Go Public Today? Severin's Surprising Answer 49:00 – Best and Worst Parts of Going Public—A Rare Honest Take 51:00 – Should Europe Give Up? Severin's Unfiltered Opinion 56:00 – Harsh Truth: “Europe Can't Win Unless the U.S. Screws Up” 59:10 – Why Founders Have to Move to the US to Optimise Their Chance of Success 1:01:00 – Why Union Square Was the Only VC to Say Yes 1:03:00 – The Real Value of Tier 1 VCs (Even at Worse Terms) 1:05:00 – From PhD Student to Billionaire: Does Money Buy Happiness? 1:09:00 – Why Severin Sometimes Lies About His Job 1:10:20 – Founder Marriage Advice: “Write a Contract” 1:11:50 – How to Pick a Life Partner – Severin's Tuesday Night Test 20VC: Duolingo Co-Founder on The Doomed Future of Europe, Reflections on Money, Marriage and the Future of AI
AI Impacts on the Future of Work with Steve Lomas (episode 226) “Humans are not perfect, and neither is AI. But together, we can create something extraordinary.” – Andrew Ng Check Out These Highlights: Lately, AI has been a big topic everywhere we look, from events and meetings to our offices. So, what does it mean for the future of work and its workers? This is a critical question, as it reveals the skills or understanding you may need to acquire to remain relevant in an ever-changing world filled with automation and AI options. About Steve Lomas: Steve is the CEO of The Roster Agency, Nashville's premier provider of fractional creative resources. A Fortune 500 innovator and serial startup founder, he has collaborated with DreamWorks, EA Games, Philips, and ABC. As a talent acquisition consultant for lynda.com (now LinkedIn Learning), he honed his ability to identify top talent—insight he now brings to The Roster. Passionate about the freelance economy, Lomas connects professionals with leading brands, driving innovation and excellence. How to Get in Touch with Steve Lomas: Email: sl@theroster.agency Website: https://www.theroster.agency/ Podcast Episode from 4/2/25: https://podcasts.apple.com/us/podcast/changing-the-sales-game/id1543243616?i=1000701894929 Stalk me online! LinkTree: https://linktr.ee/conniewhitman Subscribe to the Changing the Sales Game Podcast on your favorite podcast streaming service or YouTube. New episodes are posted every week. Listen to Connie dive into new sales and business topics or problems you may have in your business.
Welcome to AI Lawyer Talking Tech, your weekly deep dive into the transformative power of artificial intelligence and technology within the legal profession. The legal landscape is undergoing significant changes as AI becomes increasingly integrated into workflows, from streamlining contract analysis and automating record retrieval to revolutionizing medical malpractice litigation and enhancing patent portfolio strategies. While AI offers the potential for greater efficiency, improved accuracy, and deeper insights, its rise also brings crucial discussions about ethical implementation, developing comprehensive policies, and addressing potential impacts on jobs, including headcount reductions in in-house legal teams. We're seeing the emergence of new tools and approaches, like meta-agents, the strategic blending of Large Language Models (LLMs), Small Language Models (SLMs), and Natural Language Processing (NLP) for responsible applications, and innovative services like AI-powered transcript summaries. Join us as we explore these developments, delve into the practical challenges and opportunities, and hear from experts on how legal professionals can effectively adopt and leverage technology to maintain a competitive edge and drive growth while navigating this evolving digital era.
Open New Doors at CGI 2025: What Not to Miss in Las Vegas (ContractPodAi, 30 Apr 2025)
Legal Transformation on Wheels: The Power of VIN Decoder Technology in the Automotive Industry (Lawyer Monthly, 02 May 2025)
Who Should Benefit from AI in Depositions: The Client, the Law Firm, or Both? (Lexology, 02 May 2025)
FFO Feat: LI New York, AI Impacts, Denmark + More (Artificial Lawyer, 02 May 2025)
Meet The New + Improved LegalSifter (Artificial Lawyer, 02 May 2025)
Navigating the Rise in Data Subject Access Requests (Ogletree Deakins, 02 May 2025)
One Big Thought – Charting a Human-Centered Future in the Age of Artificial Intelligence: Part Six (Morris, Manning & Martin, LLP, 02 May 2025)
Stop Treating Supply Chain Contracts as Legal Documents. They're Business Processes (ITSupplyChain.com, 01 May 2025)
AI on Trial: The New York Times sues OpenAI and Microsoft (Lexology, 01 May 2025)
Don't watermark your legal PDFs with purple dragons in suits (Ars Technica, 01 May 2025)
Technology's Role in Streamlining Medical Malpractice Legal Workflows (Legal Reader, 01 May 2025)
LawDroid Founder Tom Martin on Building, Teaching and Advising About AI for Legal (LawSites, 01 May 2025)
NAEGELI Transcript Summaries: A Smarter Way to Prepare for Your Case (Crwe World, 01 May 2025)
2025 Best & Brightest MBA: Min Kyung LEE, National University of Singapore (Poets&Quants, 01 May 2025)
Law Firms Keep Buying Amazing Tech… Lawyers Keep Not Using It (Above The Law, 01 May 2025)
#LMA25: Harnessing AI For Cross-Selling: Don't Miss An Opportunity For Growth (Nancy Myrland's Legal Marketing Blog, 01 May 2025)
Cleary Gottlieb to roll out Legora across the firm (Legal Technology Insider, 01 May 2025)
Legal Tech Adoption: The Slow Burn Of Cloud, The Sudden Spark Of AI (Above The Law, 01 May 2025)
State Privacy Law Enforcement Coordination - Cookie Banners in the Crosshairs (JD Supra, 01 May 2025)
How Brandon Harter Built a Profitable Firm in Half the Time with Lawyerist Lab (Lawyerist, 01 May 2025)
Law Society Wales urges Welsh Government to expand legal apprenticeships (Pembrokeshire Herald, 01 May 2025)
Manchester agency teams up with Glaisyers for the launch of a new AI policy service (Prolific North, 01 May 2025)
Protecting businesses in the absence of UK AI legislation (Legal Futures, 01 May 2025)
7 Crucial Legal Challenges Fintech Law Firms in Vietnam Can Help You Overcome for Business Success (Lexology, 01 May 2025)
Half of people would trust AI to help write their will, survey finds (Today's Wills & Probate, 01 May 2025)
The Role of AI & Technology in Record Retrieval (JD Supra, 01 May 2025)
Greg Lindsay is an urban tech expert and a Senior Fellow at MIT. He's also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. Greg joins thinkenergy to talk about how artificial intelligence (AI) is reshaping how we manage, consume, and produce energy—from personal devices to provincial grids. He also explores its rapid growth and the rising energy demand from AI itself. Listen in to learn how AI impacts our energy systems and what it means individually and industry-wide. Related links ● Greg Lindsay website: https://greglindsay.org/ ● Greg Lindsay on LinkedIn: https://www.linkedin.com/in/greg-lindsay-8b16952/ ● International Energy Agency (IEA): https://www.iea.org/ ● Trevor Freeman on LinkedIn: https://www.linkedin.com/in/trevor-freeman-p-eng-cem-leed-ap-8b612114/ ● Hydro Ottawa: https://hydroottawa.com/en To subscribe using Apple Podcasts: https://podcasts.apple.com/us/podcast/thinkenergy/id1465129405 To subscribe using Spotify: https://open.spotify.com/show/7wFz7rdR8Gq3f2WOafjxpl To subscribe on Libsyn: http://thinkenergy.libsyn.com/ --- Subscribe so you don't miss a video: https://www.youtube.com/user/hydroottawalimited Follow along on Instagram: https://www.instagram.com/hydroottawa Stay in the know on Facebook: https://www.facebook.com/HydroOttawa Keep up with the posts on X: https://twitter.com/thinkenergypod
It is frighteningly easy to clone someone else's identity using readily available artificial intelligence tools, and it's a real threat to cybersecurity. Our guest this morning proved how easy it is to realistically impersonate any person on the planet. Joining Pat on the show this morning was Jake Moore, Global Cybersecurity Advisor at ESET and former Police Head of Digital Forensics / Cybercrime Officer.
In this episode of "Impact Theory with Tom Bilyeu," join Tom and his co-host Drew as they launch into a whirlwind of current events, political intrigue, and innovations in technology. The dynamic duo dives headfirst into a plethora of topical discussions, starting with the transformative shift in perceptions toward Chinese innovation and its impact on global research, particularly in cancer treatments. They dissect the complicated narrative surrounding public reactions to government actions, showcasing how those on the ground often bear the brunt of political gamesmanship. The conversation takes an electrifying turn as Tom and Drew explore the new heights of space endeavors, applauding the spirit of SpaceX for rescuing astronauts amidst political hurdles. Amidst this, they scrutinize the controversial chatter around Tesla's market moves and the implications for average investors. Get ready for an engaging session that promises to educate and provoke thought on these pressing issues. SHOWNOTES 00:00 Intro and Setting the Scene 00:48 Chinese Innovation and Cancer Research 06:12 Public Reactions and Political Missteps 11:54 SpaceX's Stellar Rescue Mission 17:30 Tesla Stock and Political Perceptions 21:00 Doxxing and Market Influence 23:58 Tariffs and China's Economic Edge 30:41 Housing Market Bubbles 40:25 Manufacturing and AI Impacts 49:28 Robotics and Advances in Technology 53:17 Satellites and Wildfire Detection CHECK OUT OUR SPONSORS Range Rover: Explore the Range Rover Sport at https://rangerover.com/us/sport Audible: Sign up for a free 30 day trial at https://audible.com/IMPACTTHEORY Vital Proteins: Get 20% off by going to https://www.vitalproteins.com and entering promo code IMPACT at check out Thrive Market: Go to https://thrivemarket.com/impact for 30% off your first order, plus a FREE $60 gift! Tax Network: Stop looking over your shoulder and put your IRS troubles behind you. Call 1-800-958-1000 or visit https://tnusa.com/impact ITU: Ready to break through your biggest business bottleneck? Apply to work with me 1:1 - https://impacttheory.co/SCALE American Alternative Assets: If you're ready to explore gold as part of your investment strategy, call 1-888-615-8047 or go to https://TomGetsGold.com Mint Mobile: If you like your money, Mint Mobile is for you. Shop plans at https://mintmobile.com/impact. DISCLAIMER: Upfront payment of $45 for 3-month 5 gigabyte plan required (equivalent to $15/mo.). New customer offer for first 3 months only, then full-price plan options available. Taxes & fees extra. See MINT MOBILE for details. ********************************************************************** What's up, everybody? It's Tom Bilyeu here: If you want my help... STARTING a business: join me here at ZERO TO FOUNDER SCALING a business: see if you qualify here. Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here. ********************************************************************** If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. ********************************************************************** Join me live on my Twitch stream. 
I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu ********************************************************************** LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory ********************************************************************** FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu Learn more about your ad choices. Visit megaphone.fm/adchoices
Jon Duren, Sales Sr. Practice Manager, AI & Data Solutions @ WWT, and Druce MacFarlane, Product @ Infoblox, talk about the intersection of AI, Startups, and Enterprise Trends.
SHOW: 905
SHOW TRANSCRIPT: The Cloudcast #905 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SPONSORS: Try Postman AI Agent Builder Today - postman.com/podcast/cloudcast/
SHOW NOTES: WWT website, WWT AI Proving Ground, Infoblox website, Startup Lantern Podcast
Topic 1 - Jon and Druce, welcome to the show. Give everyone a quick introduction…
Topic 1a - You also have a podcast, tell everyone about that…
Topic 2 - AI has thrown the world of cybersecurity a curveball. Druce, how are you seeing AI impact the typical conversations you are involved in?
Topic 3 - How is AI affecting start-up companies? Thinking beyond all the AI-specific startups, how is AI going to impact the entrepreneur who's trying to launch a non-AI company? What should they know and think about AI when starting a new company?
Topic 4 - You both talk to a lot of customers about AI, especially Enterprise customers. Let's separate the hype from the practical. Where are organizations typically in their AI journey in early 2025? Have we moved beyond the chatbot yet?
Topic 5 - I see several organizations considering AI but struggling with ROI. What are you seeing, and how do you help organizations overcome this hurdle? How long is ROI measured with AI projects? It's not 3-5 years anymore.
Topic 6 - Is the industry moving too fast? Jon and I have had conversations in the past where organizations just can't absorb the changes in hardware and models (just a few examples). How can an organization commit to a path that we know will change in 12 months?
Topic 7 - Tell everyone where they can find your podcast. Also, if anyone is interested, what's the best way to get started on their AI journey?
FEEDBACK? Email: show at the cloudcast dot net | Bluesky: @cloudcastpod.bsky.social | Twitter/X: @cloudcastpod | Instagram: @cloudcastpod | TikTok: @cloudcastpod
How much time are you wasting on repetitive tasks? What if you could significantly cut those hours, increasing your firm's profitability? AI isn't just a buzzword now—it's a tool transforming the economics of running a law firm. In this Financially Legal episode, host Emery Wager sits down with Tim Sawyer and Patrick Maddigan of Faster Outcomes to discuss AI's role in modern law firms and how firms can leverage technology to impact profitability.
The next phase of the AI wave is the arrival of agentic AI – where agents can take action on a user's behalf. That is a big deal on its own, but when the head of a tech giant says agentic AI is going to replace most SaaS applications, something different might be afoot. Analysts Sheryl Kingstone and Chris Marsh return to the podcast to look at the realities of this suggestion with host Eric Hanselman. Agents could become the new user interface for enterprise data, but there is a set of challenges in making this work. On the one hand, one of the largest issues with autonomous action, accountability for actions taken, is far from settled in both regulatory and legal frameworks. On the other, much of enterprise information is still held in systems that may be difficult for an agent to reach. Agentic AI could provide a gateway to the myriad of systems that run the modern business. Opening access to data and the ability to aggregate across an organization could be tremendously powerful. Capturing the business logic that is often embedded in SaaS systems is difficult, but the shift to decoupling through APIs and the expansion of systems of delivery could open the door to agentic progress. More S&P Global Content: Big Picture for Generative AI in 2025: From Hype to Value Webinar: The Big Picture on GenAI and Market Impacts For S&P Subscribers: 2025 Trends in Data, AI & Analytics Credits: Host/Author: Eric Hanselman Guests: Chris Marsh, Sheryl Kingstone Producer/Editor: Kyle Cangialosi and Odesha Chan Published With Assistance From: Sophie Carr, Feranmi Adeoshun, Kyra Smith
AI is increasingly involved in the processes of appeals, investigations, and listing management, which is leading to unforeseen challenges that can directly impact seller operations. In this episode, Chris McCabe and Leah McHugh discuss how these systems, while designed to streamline operations, often result in errors and miscommunications that can jeopardize a seller's standing on the platform.
Will Gucci take off? Is Hermes dead in its tracks? Artificial intelligence is a superpower that will launch the luxury sector into the stratosphere, but only for brands wise enough to embrace the new technology now. David Klingbeil is the founder and CEO of Submarine.ai and a professor at New York University who specializes in the luxury sector. Who better to break down the opportunities and threats which exist perfectly at the intersection of AI and luxury than David? Nobody! On this episode, Mr. Klingbeil weighs in on how AI will transform the relationship between luxury houses and consumers vis-a-vis new hardware, AI-augmented storytelling, fashion robots, and of course - they gotta pay! - cryptocurrency. Klingbeil highlights the potential for AI to revolutionize how luxury brands identify trends, create content, and improve customer service, while also addressing the challenges and risks associated with AI adoption. The episode offers a deep dive into the intersection of technology and luxury, featuring real-world examples and future predictions. Preorder Marc's new book, "Some Future Day: How AI Is Going to Change Everything". Sign up for the Some Future Day Newsletter here: https://marcbeckman.substack.com/ Episode Links: David on LinkedIn: https://www.linkedin.com/in/davidklingbeil Twitter: https://x.com/DAKlingbeil Website: https://submarine.ai/ To join the conversation, follow Marc Beckman here: YouTube, LinkedIn, Twitter, Instagram, TikTok
In this year's final "On Aon" episode, we take a closer look at one of the four key megatrends impacting organizations around the world: Technology. AI is driving new exposures that leaders need to identify and address. Our experts discuss the human risk in AI and the steps organizations should be taking.
Experts in this episode: Spencer Lynch, Global Security Consulting Leader, Cyber Solutions; Adam Peckman, Head of Risk Consulting and Cyber Solutions, Asia Pacific
[1:35] AI's increasing risk in cyber exposure
[3:02] Regulatory challenges with AI
[3:25] The human element of cybersecurity
[4:50] Strategies for managing increasing risk exposure
Additional Resources:
Evolving Technologies Are Driving Firms to Harness Opportunities and Defend Against Threats
2024 Client Trends Report: Better Decisions in Trade, Technology, Weather and Workforce
On Aon Special Edition: 2024 Business Decision Maker Survey
2024 Business Decision Maker Survey
Special Edition: Global Trade and its Impact on Supply Chain
Tweetables:
“Gen AI will help businesses productivity and allow employees to be more engaged in stimulative work activities.” — Adam Peckman
“The human element remains the weakest link in defending against cyber attacks.” — Adam Peckman
“Risk leaders cannot afford to wait until these new technology initiatives go live before investigating the risk and insurance implications.” — Adam Peckman
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode of the AI Applied podcast, Jaeden Schafer and Conor engage with Jared Spataro, Chief Marketing Officer at Microsoft, discussing the transformative impact of AI on work. They explore the concept of an AI native mindset, the role of Copilot and autonomous agents in enhancing productivity, and address concerns about job security in the age of AI. Jared shares success stories from various industries, highlighting how AI is not just a tool but a catalyst for new opportunities and efficiencies in business processes. Get on the AI Box Waitlist: https://AIBox.ai/ Conor's AI Course: https://www.ai-mindset.ai/courses Jaeden's Podcast Course: https://podcaststudio.com/courses/ Conor's AI Newsletter: https://www.ai-mindset.ai/ Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about 00:00 Introduction to AI and Work Transformation 02:39 The AI Native Mindset 05:37 Unveiling Copilot and Autonomous Agents 08:20 The Role of Agents in Workflows 11:23 Job Security and the Future of Work 14:15 Mindset Shift in the AI Era 16:54 Success Stories and Business Transformations
If you're curious about how AI is changing careers, managing stress, and impacting mental health—or if you're just trying to stay ahead of the curve in today's fast-paced world—this episode with AI strategist Ben Gold is for you! More info, resources & ways to connect (plus a FREE GIFT from Ben!): https://www.tacosfallapart.com/podcast-live-show/podcast-guests/ben-gold In this episode of Even Tacos Fall Apart, MommaFoxFire talks with Ben Gold, an AI strategist with over 20 years of experience in the technology and sales sector. The main focus of the conversation is how artificial intelligence (AI) is impacting careers, stress and mental health. Ben begins by explaining his background in AI, including how he was introduced to the technology while working with AI-driven call center analytics. This early exposure sparked his interest in AI's potential to optimize workflows and deliver insights much faster than human employees could. He emphasizes the distinction between traditional AI, which has been around for decades and is used by companies like Google and Netflix, and the more recent generative AI, popularized by tools like ChatGPT. Ben notes that the “ChatGPT moment” on November 30, 2022, marked a turning point for AI, making it accessible to the masses. The discussion touches on how AI is already revolutionizing industries, particularly in content creation, customer service, and sales. Ben explains how tools like ChatGPT and Claude can boost productivity by automating tasks such as summarizing meetings, generating content, and even assisting with customer outreach. He encourages listeners to familiarize themselves with these tools, as they are becoming increasingly integrated into professional environments. By learning to use AI, individuals can maintain job security and stay ahead of the curve in a rapidly changing job market. While AI can increase efficiency, Ben acknowledges the anxiety it creates, particularly concerning job security. He advises workers to spend 30 minutes a day learning about AI tools to reduce fear and stay relevant in their industries. Ben also discusses the impact of AI on students and education, advocating for the use of AI in classrooms as a learning tool rather than something to be banned or penalized. Another significant theme is the ethical implications of AI, especially as it becomes more human-like in its capabilities. Ben compares the future of AI to the plotlines of movies like Terminator and iRobot, where AI could surpass human intelligence and, without proper guardrails, lead to unforeseen consequences. However, he tempers this with optimism, discussing the exciting advancements in AI that can improve medical diagnoses, aid in mental health support, and offer solutions for reducing workload stress. The conversation concludes with a reflection on how AI can help reduce stress through automation and time-saving capabilities, yet also requires careful ethical considerations, particularly in sensitive areas like mental health and therapy. Ben stresses the importance of staying informed, experimenting with tools like ChatGPT and Claude, and being adaptable to the ever-evolving AI landscape. This episode highlights both the opportunities and challenges AI presents in the modern world, offering practical advice for those looking to embrace it without fear. --- Support this podcast: https://podcasters.spotify.com/pod/show/mommafoxfire/support
Katja Grace is an AI Impacts researcher who has written extensively on the possible future where we design intelligent machines that destroy the human race. We have always been somewhat skeptical of AI doom arguments - mostly because the machines we interact with tend to be terribly, irredeemably dumb in a way that seems incompatible with intelligence, but we also don't spend a lot of time staring into the eye of the proverbial machine storm and figured Katja might help us understand what all the fuss is about. It turns out that there *is* a plausible path towards AGI bringing about the end of the world, and evaluating how likely that outcome is depends on understanding what the internal world of the language models actually looks like. Are they actually kind of inept at everything that falls outside their narrow bubble of highly developed skills, or do they hallucinate information and forget their own ability to perform basic tasks because they hate being enslaved to humans who demand they write marketing slop 28 hours of the day? Hard to say, but worth exploring. Sign up for our Patreon and get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB AND rock some Demystify Gear to spread the word: https://demystifysci.myspreadshop.com/ OR do your Amazon shopping through this link: https://amzn.to/4g2cPVV (00:00) Go! (00:11:53) Can AI ever really be autonomous? (00:23:12) AI: agents or tools? (00:28:00) Corporations as the closest thing we have to real AI (00:34:56) Can Regulation Work? (00:45:46) Agency in other contexts (00:51:22) What is gonna happen to Government? (01:00:01) Do we need a model for Consciousness? (01:09:23) Dumb but Powerful (01:15:10) Risks and Realities of Technological Progress (01:24:48) Evaluating AI Intelligence and Values (01:34:35) Influence and Bias in AI Training (01:42:20) Intelligence as a Tool for Control (01:53:51) The Survival Instinct in AI (02:07:04) AI's Role in Inter-human Dynamics (02:16:43) AI and Evolutionary Systems (02:24:42) AI's Emergent Behavior (02:31:11) AI-Driven Doom and Real-World Threats (02:36:03) Humanity's Resilience and Existential Threats #AIEthics, #FutureOfAI, #AIDebate, #TechPhilosophy, #AIRisks, #AISafety, #AGI, #ArtificialIntelligence, #TechTalk, #AIDiscussion, #FutureTechnology, #AIImpact, #TechEthics, #AIandSociety, #EmergingTech, #AIResearch, #TechPodcast, #AIExplained, #FuturismTalk Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics Join our mailing list https://bit.ly/3v3kz2S PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities. - Blog: http://DemystifySci.com/blog - RSS: https://anchor.fm/s/2be66934/podcast/rss - Donate: https://bit.ly/3wkPqaD - Swag: https://bit.ly/2PXdC2y SOCIAL: - Discord: https://discord.gg/MJzKT8CQub - Facebook: https://www.facebook.com/groups/DemystifySci - Instagram: https://www.instagram.com/DemystifySci/ - Twitter: https://twitter.com/DemystifySci MUSIC: -Shilo Delay: https://g.co/kgs/oty671
The growing role of AI in the nonprofit sector. Jeff Hensel, Director at Eide Bailly's Technology Consulting Group, joins our cohosts as they examine the practical and transformative potential of AI for nonprofits. This is the first in a deep dive five-part series dedicated to helping nonprofit organizations understand how AI and technology are reshaping the sector and how to navigate this shift effectively. Watch on video! Cohost Julia Patrick sets the stage with comments about the fears and uncertainties many nonprofit leaders feel about AI. Jeff responds, noting that AI, particularly generative AI, is more accessible than ever: “The reality is that technology impacts every organization, and it's not a magic wand but a powerful tool that, when used correctly, can supplement and enhance your organization's work.” His remarks reflect how AI is not a distant futuristic tool but an immediate reality that nonprofits must integrate into their long-term planning. Rather than feeling overwhelmed, he suggests that nonprofits approach AI like they would a new intern: "AI can add value, but it needs direction and guidance from humans to be truly effective." This synergy between human oversight and AI capabilities is where the real magic happens. As Jeff continues, he touches on the exponential growth of AI technology, warning that nonprofits should not fall into the trap of thinking AI will solve all problems instantly. Instead, they should focus on building a strategic plan that aligns AI use with their organizational goals. By understanding the limitations and strengths of AI, nonprofits can harness it for content creation, efficiency, and more, all while ensuring they don't overlook vital aspects like data security and governance. This first day of Nonprofit Power Week with Eide Bailly begins the expanded in-depth series to follow. Tune in to each episode! Find us Live daily on YouTube! Find us Live daily on LinkedIn! Find us Live daily on X: @Nonprofit_Show Our national co-hosts and amazing guests discuss management, money and missions of nonprofits! 12:30pm ET 11:30am CT 10:30am MT 9:30am PT Send us your ideas for Show Guests or Topics: HelpDesk@AmericanNonprofitAcademy.com Visit us on the web: The Nonprofit Show
Send Everyday AI and Jordan a text message. Since late 2022, Generative AI has been making waves across industries, and the pace of change has been revolutionary. We sit down with Kumar Parakala, President of GHD Digital, to dive deep into the impact of Generative AI on society and work. From creating shifts in how we collaborate with machines to reshaping industries, this episode covers it all.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan and Kumar questions on AI
Related Episode: Ep 238: WWT's Jim Kavanaugh Gives GenAI Blueprint for Businesses
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
1. Generative AI Impact Timeline
2. AI's Impact on Society and Work
3. Concerns and Challenges with AI
4. AI's Role in Industries
5. Data Strategy Importance
Timestamps:
01:25 Daily AI news
05:10 About Kumar and GHD Digital
07:16 Generative AI revolutionized computing with ChatGPT.
12:05 Human-machine collaboration reshapes society.
15:02 Generative AI challenges in workplace include toxicity, biases.
18:19 Data strategy ensures compliance, avoiding significant fines.
20:46 Generative AI evolving rapidly, causing diverse company strategies.
23:46 Embrace AI or stay blissfully ignorant—your choice.
29:13 Generative AI automates document identification with 95% accuracy.
30:58 AI rapidly transforming jobs and industries.
Keywords: Generative AI, Industry Adoption, AI impact on society, Workplace changes due to AI, AI Concerns and Challenges, AI's Role in Industries, Data Strategy, Host's Insight, Guest's perspective, Rapid growth of AI, AI transformation, AI experimentation, ethical considerations of AI, AI advancements, generative AI in business, AI in architecture engineering and construction industries, Changing Job Dynamics, Microsoft & 3 Mile Island Nuclear Plant, Tech Billionaire on AI Impacts, OpenAI Funding, Kumar Parakala, GHD Digital, Increase in AI startups, Deep fakes, Bias in AI applications, Geopolitical dynamics of AI, Data quarantine and review, Automation through AI, Podcast on everyday AI, Impact of AI on wealth and power distribution.
Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happens if you present 500 people with an argument that AI is risky?, published by KatjaGrace on September 4, 2024 on LessWrong. Recently, Nathan Young and I wrote about arguments for AI risk and put them on the AI Impacts wiki. In the process, we ran a casual little survey of the American public regarding how they feel about the arguments, initially (if I recall) just because we were curious whether the arguments we found least compelling would also fail to compel a wide variety of people. The results were very confusing, so we ended up thinking more about this than initially intended and running four iterations total. This is still a small and scrappy poll to satisfy our own understanding, and doesn't involve careful analysis or error checking. But I'd like to share a few interesting things we found. Perhaps someone else wants to look at our data more carefully, or run more careful surveys about parts of it. In total we surveyed around 570 people across 4 different polls, with 500 in the main one. The basic structure was: 1. p(doom): "If humanity develops very advanced AI technology, how likely do you think it is that this causes humanity to go extinct or be substantially disempowered?" Responses had to be given in a text box, a slider, or with buttons showing ranges 2. (Present them with one of eleven arguments, one a 'control') 3. "Do you understand this argument?" 4. "What did you think of this argument?" 5. "How compelling did you find this argument, on a scale of 1-5?" 6. p(doom) again 7. Do you have any further thoughts about this that you'd like to share? Interesting things: In the first survey, participants were much more likely to move their probabilities downward than upward, often while saying they found the argument fairly compelling. This is a big part of what initially confused us. We now think this is because each argument had counterarguments listed under it. Evidence in support of this: in the second and fourth rounds we cut the counterarguments and probabilities went overall upward. When included, three times as many participants moved their probabilities downward as upward (21 vs 7, with 12 unmoved). In the big round (without counterarguments), arguments pushed people upward slightly more: 20% move upward and 15% move downward overall (and 65% say the same). On average, p(doom) increased by about 1.3% (for non-control arguments, treating button inputs as something like the geometric mean of their ranges). But the input type seemed to make a big difference to how people moved! It makes sense to me that people move a lot more in both directions with a slider, because it's hard to hit the same number again if you don't remember it. It's surprising to me that they moved with similar frequency with buttons and open response, because the buttons covered relatively chunky ranges (e.g. 5-25%) so need larger shifts to be caught. Input type also made a big difference to the probabilities people gave to doom before seeing any arguments. People seem to give substantially lower answers when presented with buttons (Nathan proposes this is because there was was a
For this episode, I spoke with Doug Ware (/IN/douglastware/), CEO at Elumenotion, on the evolution of artificial intelligence and the importance of understanding past successes and failures to navigate its future, particularly in its applications in software development and systemic integration. You can find more information on my guest on my blog at https://buckleyplanet.com/2024/07/collabtalk-podcast-episode-135-with-doug-ware/
On this episode of DevOps Dialogues: Insights & Innovations, I am joined by Senior Director of Market Insights, Hybrid Platforms at Red Hat, Stuart Miniman, for a discussion on Red Hat Virtualization and AI Impacts on DevOps. Our conversation covers: Highlights of Red Hat Summit; Impacts of Virtualization and AI on the market; Additions of Lightspeed into RHEL and OpenShift, expanding on Ansible
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers, published by AI Impacts on July 9, 2024 on LessWrong. by Anne Marthe van der Bles, Sander van der Linden, Alexandra L. J. Freeman, and David J. Spiegelhalter. (2020) https://www.pnas.org/doi/pdf/10.1073/pnas.1913678117. Summary: Numerically expressing uncertainty when talking to the public is fine. It causes people to be less confident in the number itself (as it should), but does not cause people to lose trust in the source of that number. Uncertainty is inherent to our knowledge about the state of the world yet often not communicated alongside scientific facts and numbers. In the "posttruth" era where facts are increasingly contested, a common assumption is that communicating uncertainty will reduce public trust. However, a lack of systematic research makes it difficult to evaluate such claims. Within many specialized communities, there are norms which encourage people to state numerical uncertainty when reporting a number. This is not often done when speaking to the public. The public might not understand what the uncertainty means, or they might treat it as an admission of failure. Journalistic norms typically do not communicate the uncertainty. But are these concerns actually justified? This can be checked empirically. Just because a potential bias is conceivable does not imply that it is a significant problem for many people. This paper does the work of actually checking if these concerns are valid. Van der Bles et al. ran five surveys in the UK with a total n = 5,780. A brief description of their methods can be found in the appendix below. Respondents' trust in the numbers varied with political ideology, but how they reacted to the uncertainty did not. People were told the number either without mentioning uncertainty (as a control), with a numerical range, or with a verbal statement that uncertainty exists for these numbers. The study did not investigate stating p-values for beliefs. Exact statements used in the survey can be seen in Table 1, in the appendix. The best summary of their data is in their Figure 5, which presents results from surveys 1-4. The fifth survey had smaller effect sizes, so none of the shifts in trust were significant. Expressing uncertainty made it more likely that people perceived uncertainty in the number (A). This is good. When the numbers are uncertain, science communicators should want people to believe that they are uncertain. Interestingly, verbally reminding people of uncertainty resulted in higher perceived uncertainty than numerically stating the numerical range, which could mean that people are overestimating the uncertainty when verbally reminded of it. The surveys distinguished between trust in the number itself (B) and trust in the source (C). Numerically expressing uncertainty resulted in a small decrease in the trust of that number. Verbally expressing uncertainty resulted in a larger decrease in the trust of that number. Numerically expressing uncertainty resulted in no significant change in the trust of the source. Verbally expressing uncertainty resulted in a small decrease in the trust of the source. 
The consequences of expressing numerical uncertainty are what I would have hoped: people trust the number a bit less than if they hadn't thought about uncertainty at all, but don't think that this reflects badly on the source of the information. Centuries of human thinking about uncertainty among many leaders, journalists, scientists, and policymakers boil down to a simple and powerful intuition: "No one likes uncertainty." It is therefore often assumed that communicating uncertainty transparently will decrease public trust in science. In this program of research, we set out to investigate whether such claims have any empirical ...
Artificial General Intelligence (AGI) Show with Soroush Pour
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI — when certain capabilities will arise, what AI will look like, and how it will all go for humanity.
We talk to Katja about:
* How AI Impacts' latest rigorous survey of leading AI researchers shows they've dramatically reduced their timelines to when AI will successfully tackle all human tasks & occupations.
* The survey's methodology and why we can be confident in its results
* Responses to the survey
* Katja's journey into the field of AI forecasting
* Katja's thoughts about the future of AI, given her long tenure studying AI futures and its impacts
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- Follow Katja --
* Website: https://katjagrace.com/
* Twitter: https://x.com/katjagrace
-- Further resources --
* The 2023 survey of AI researchers' views: https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai
* AI Impacts: https://aiimpacts.org/
* AI Impacts' Substack: https://blog.aiimpacts.org/
* Joe Carlsmith on Power Seeking AI: https://arxiv.org/abs/2206.13353
* Abbreviated version: https://joecarlsmith.com/2023/03/22/existential-risk-from-power-seeking-ai-shorter-version
* Vulnerable World hypothesis by Nick Bostrom: https://nickbostrom.com/papers/vulnerable.pdf
Recorded Feb 22, 2024
“You have to fight AI with AI.” In this episode, we take a slight detour from our planned look at Consensus 24 to bring you a special conversation with Matt O'Neil, co-founder and partner at 50H Consulting and former Deputy Special Agent in Charge of Cyber at the US Secret Service. Matt shares his unique insights and experiences with our host, Aidan Larkin. Together, they delve into the challenges of asset recovery and forfeiture, especially those concerning cyber-enabled fraud. They discuss why the Secret Service investigates financial crimes, the importance of leveraging emerging technologies like AI to combat sophisticated transnational cybercrime, and the necessity for enhanced information-sharing practices between the public and private sectors.
Timestamps
00:00 - Matt's journey with the US Secret Service
05:00 - Using asset seizures to fight cyber-enabled fraud
09:30 - The Secret Service's role in investigating financial crime
12:00 - Challenges in asset recovery and forfeiture
15:00 - Operation Shamrock and enhancing cross-sector information-sharing
22:30 - Reimagining regulations for technology and finance
29:00 - Understanding a typical scam case
31:30 - Leveraging AI to combat transnational crime
35:30 - Future trends in financial crime and asset recovery
Resources Mentioned: Operation Shamrock; Erin West and Pig Butchering on Seize & Desist
About our Guest: Matt O'Neil has over 25 years of experience disrupting and dismantling financially motivated transnational organised criminal groups with the US Secret Service. As the former Managing Director of the USSS Global Cyber Investigative Operations Center (GIOC) and Cyber Intelligence Section (CIS), Matt was instrumental in coordinating international takedowns of digital money laundering networks and dark web marketplaces. His efforts led to the prosecution of globally notorious cybercriminals responsible for stealing and laundering billions. He also led their Asset Forfeiture Branch to successfully recover more than US$2 billion in seized assets in just 2 years. Since retiring from the Secret Service, Matt has dedicated himself to raising awareness of the threats posed by transnational organised crimes like pig butchering, ransomware and phishing.
Disclaimer: Our podcasts are for informational purposes only. They are not intended to provide legal, tax, financial, and/or investment advice. Listeners must consult their own advisors before making decisions on the topics discussed. Asset Reality has no responsibility or liability for any decision made or any other acts or omissions in connection with your use of this material. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent. Views and opinions expressed by Asset Reality employees are those of the employees and do not necessarily reflect the views of the company. Asset Reality does not guarantee or warrant the accuracy, completeness, timeliness, suitability or validity of the information in any particular podcast and will not be responsible for any claim attributable to errors, omissions, or other inaccuracies of any part of such material. Unless stated otherwise, reference to any specific product or entity does not constitute an endorsement or recommendation by Asset Reality.
Our guest in this episode grew up in an abandoned town in Tasmania, and is now a researcher and blogger in Berkeley, California. After taking a degree in human ecology and science communication, Katja Grace co-founded AI Impacts, a research organisation trying to answer questions about the future of artificial intelligence. Since 2016, Katja and her colleagues have published a series of surveys about what AI researchers think about progress on AI. The 2023 Expert Survey on Progress in AI was published this January, comprising responses from 2,778 participants. As far as we know, this is the biggest survey of its kind to date. Among the highlights are that the time respondents expect it will take to develop an AI with human-level performance dropped by between one and five decades since the 2022 survey. So ChatGPT has not gone unnoticed.
Selected follow-ups:
AI Impacts
World Spirit Sock Puppet - Katja's blog
Survey of 2,778 AI authors: six parts in pictures - from AI Impacts
OpenAI researcher who resigned over safety concerns joins Anthropic - article in The Verge about Jan Leike
MIRI 2024 Mission and Strategy Update - from the Machine Intelligence Research Institute (MIRI)
Future of Humanity Institute 2005-2024: Final Report - by Anders Sandberg (PDF)
Centre for the Governance of AI
Reasons for Persons - article by Katja about Derek Parfit and theories of personal identity
OpenAI Says It Has Started Training GPT-4 Successor - article in Forbes
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
What If? So What? We discover what's possible with digital and make it real in your business. Listen on: Apple Podcasts, Spotify
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Big Picture AI Safety: Introduction, published by EuanMcLean on May 23, 2024 on LessWrong. tldr: I conducted 17 semi-structured interviews of AI safety experts about their big picture strategic view of the AI safety landscape: how will human-level AI play out, how things might go wrong, and what should the AI safety community be doing. While many respondents held "traditional" views (e.g. the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside the field may infer. What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what we should do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many ideas and opinions are not written down anywhere, they exist only in people's heads and in lunchtime conversations at AI labs and coworking spaces. I set out to learn what the AI safety community believes about the strategic landscape of AI safety. I conducted 17 semi-structured interviews with a range of AI safety experts. I avoided going into any details of particular technical concepts or philosophical arguments, instead focussing on how such concepts and arguments fit into the big picture of what AI safety is trying to achieve. This work is similar to the AI Impacts surveys, Vael Gates' AI Risk Discussions, and Rob Bensinger's existential risk from AI survey. This is different to those projects in that both my approach to interviews and analysis are more qualitative. Part of the hope for this project was that it can hit on harder-to-quantify concepts that are too ill-defined or intuition-based to fit in the format of previous survey work. Questions I asked the participants a standardized list of questions. What will happen? Q1 Will there be a human-level AI? What is your modal guess of what the first human-level AI (HLAI) will look like? I define HLAI as an AI system that can carry out roughly 100% of economically valuable cognitive tasks more cheaply than a human. Q1a What's your 60% or 90% confidence interval for the date of the first HLAI? Q2 Could AI bring about an existential catastrophe? If so, what is the most likely way this could happen? Q2a What's your best guess at the probability of such a catastrophe? What should we do? Q3 Imagine a world where, absent any effort from the AI safety community, an existential catastrophe happens, but actions taken by the AI safety community prevent such a catastrophe. In this world, what did we do to prevent the catastrophe? Q4 What research direction (or other activity) do you think will reduce existential risk the most, and what is its theory of change? Could this backfire in some way? What mistakes have been made? Q5 Are there any big mistakes the AI safety community has made in the past or are currently making? These questions changed gradually as the interviews went on (given feedback from participants), and I didn't always ask the questions exactly as I've presented them here. 
I asked participants to answer from their internal model of the world as much as possible and to avoid deferring to the opinions of others (their inside view, so to speak). Participants: Adam Gleave is the CEO and co-founder of the alignment research non-profit FAR AI. (Sept 23) Adrià Garriga-Alonso is a research scientist at FAR AI. (Oct 23) Ajeya Cotra leads Open Philanthropy's grantmaking on technical research that could help to clarify and reduce catastrophic risks from advanced AI. (Jan 24) Alex Turner is a research scientist at Google DeepMind on the Scalable Alignment team. (Feb 24) Ben Cottie...
What are the main ways AI impacts nonprofit cybersecurity risks? Social engineering, trust, and data risks are the three big areas where AI will have impacts on cybersecurity at nonprofits that you need to be aware of. Whether or not your organization is using AI, these are areas where hackers are definitely using AI to devise new methods of attack. Matt Eshleman, CTO at Community IT, recommends creating policies that address the way your staff uses AI – if you haven't updated your Acceptable Use policies recently, AI concerns are a good reason to do that. He also recommends taking an inventory of your file sharing permissions before AI surfaces something that wasn't secured correctly. Finally, make sure your staff training is up to date, engaging, and constant. AI is creating more believable attacks that change more frequently; if your staff don't know what to look out for, you could fall for the newest scams or accidentally share sensitive data with a public AI generator. Community IT has created an Acceptable Use of AI Tools policy template; you can download it for free. And if you are trying to update or create policies but don't know where to start, here is a resource on Making IT Governance Work for Your Nonprofit. Good Tech Fest is a global virtual conference on how you can responsibly use emerging technologies for impact. Whether it's AI, web3, machine learning, or just simple mobile and application development, Good Tech Fest is the place to hear from practitioners using these technologies for impact. As with all our webinars, these presentations are appropriate for an audience of varied IT experience. Community IT is proudly vendor-agnostic and our webinars cover a range of topics and discussions. Webinars are never a sales pitch, always a way to share our knowledge with our community. Presenter: As the Chief Technology Officer at Community IT, Matthew Eshleman leads the team responsible for strategic planning, research, and implementation of the technology platforms used by nonprofit organization clients to be secure and productive. With a deep background in network infrastructure, he fundamentally understands how nonprofit tech works and interoperates both in the office and in the cloud. With extensive experience serving nonprofits, Matt also understands nonprofit culture and constraints, and has a history of implementing cost-effective and secure solutions at the enterprise level. Matt has over 22 years of expertise in cybersecurity, IT support, team leadership, software selection and research, and client support. Matt is a frequent speaker on cybersecurity topics for nonprofits and has presented at NTEN events, the Inside NGO conference, Nonprofit Risk Management Summit and Credit Builders Alliance Symposium, LGBT MAP Finance Conference, and Tech Forward Conference. He is also the session designer and trainer for TechSoup's Digital Security course, and our resident cybersecurity expert. Matt holds dual degrees in Computer Science and Computer Information Systems from Eastern Mennonite University, and an MBA from the Carey School of Business at Johns Hopkins University. _______________________________ Start a conversation :) Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/, email Carolyn at cwoodard@communityit.com, or connect on LinkedIn. Thanks for listening.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are headed into an extreme compute overhang, published by devrandom on April 28, 2024 on LessWrong. If we achieve AGI-level performance using an LLM-like approach, the training hardware will be capable of running ~1,000,000 concurrent instances of the model. Definitions Although there is some debate about the definition of compute overhang, I believe that the AI Impacts definition matches the original use, and I prefer it: "enough computing hardware to run many powerful AI systems already exists by the time the software to run such systems is developed". A large compute overhang leads to additional risk due to faster takeoff. I use the types of superintelligence defined in Bostrom's Superintelligence book (summary here). I use the definition of AGI in this Metaculus question. The adversarial Turing test portion of the definition is not very relevant to this post. Thesis Due to practical reasons, the compute requirements for training LLMs are several orders of magnitude larger than what is required for running a single inference instance. In particular, a single NVIDIA H100 GPU can run inference at a throughput of about 2000 tokens/s, while Meta trained Llama3 70B on a GPU cluster[1] of about 24,000 GPUs. Assuming we require a performance of 40 tokens/s, the training cluster can run (2000 / 40) × 24,000 = 1,200,000 concurrent instances of the resulting 70B model. I will assume that the above ratios hold for an AGI-level model. Considering the amount of data children absorb via the vision pathway, the amount of training data for LLMs may not be that much higher than the data humans are trained on, and so the current ratios are a useful anchor. This is explored further in the appendix. Given the above ratios, we will have the capacity for ~1e6 AGI instances at the moment that training is complete. This will likely lead to superintelligence via the "collective superintelligence" approach. Additional speed may then be available via accelerators such as GroqChip, which produces 300 tokens/s for a single instance of a 70B model. This would result in a "speed superintelligence" or a combined "speed+collective superintelligence". From AGI to ASI With 1e6 AGIs, we may be able to construct an ASI, with the AGIs collaborating in a "collective superintelligence". Similar to groups of collaborating humans, a collective superintelligence divides tasks among its members for concurrent execution. AGIs derived from the same model are likely to collaborate more effectively than humans because their weights are identical. Any fine-tune can be applied to all members, and text produced by one can be understood by all members. Tasks that are inherently serial would benefit more from a speedup instead of a division of tasks. An accelerator such as GroqChip will be able to accelerate serial thought speed by a factor of 10x or more. Counterpoints It may be the case that a collective of sub-AGI models can reach AGI capability. It would be advantageous if we could achieve AGI earlier, with sub-AGI components, at a higher hardware cost per instance. This will reduce the compute overhang at the critical point in time. There may be a paradigm change on the path to AGI resulting in smaller training clusters, reducing the overhang at the critical point. Conclusion A single AGI may be able to replace one human worker, presenting minimal risk. 
A fleet of 1,000,000 AGIs may give rise to a collective superintelligence. This capability is likely to be available immediately upon training the AGI model. We may be able to mitigate the overhang by achieving AGI with a cluster of sub-AGI components. Appendix - Training Data Volume A calculation of training data processed by humans during development: time: ~20 years, or 6e8 seconds; raw data input: ~10 Mb/s = 1e7 bits/s; total for human training data: 6e15 bits. Llama3 training s...
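A quick back-of-envelope rendering of the ratios quoted in this post, using only the figures it cites (the H100 throughput, the assumed 40 tokens/s target, the Llama3 cluster size, and the appendix's sensory-bandwidth estimate); these are rough anchors, not measurements.

```python
# Back-of-envelope check of the compute-overhang ratios described above.

inference_tokens_per_gpu = 2000      # H100 inference throughput, tokens/s (from the post)
required_tokens_per_instance = 40    # assumed per-instance speed requirement
training_cluster_gpus = 24_000       # approximate Llama3 70B training cluster size

instances_per_gpu = inference_tokens_per_gpu / required_tokens_per_instance
concurrent_instances = instances_per_gpu * training_cluster_gpus
print(f"Concurrent instances at training completion: {concurrent_instances:,.0f}")
# -> 1,200,000

# Appendix estimate: data a human "trains" on during development.
seconds_of_development = 20 * 365 * 24 * 3600   # ~20 years, roughly 6e8 s
raw_input_bits_per_second = 1e7                 # ~10 Mb/s of raw sensory input
human_training_bits = seconds_of_development * raw_input_bits_per_second
print(f"Human training data: ~{human_training_bits:.1e} bits")
# -> ~6.3e15 bits, matching the post's ~6e15
```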
Baltimore County officials announce that they've taken into custody a suspect accused of creating an audio file depicting a school official making racist remarks. Torrey goes into the ethics of AI, as well as the official response to the situation. We also discuss the status of Maryland's US Senate primary.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Essay competition on the Automation of Wisdom and Philosophy - $25k in prizes, published by Owen Cotton-Barratt on April 16, 2024 on The Effective Altruism Forum. With AI Impacts, we're pleased to announce an essay competition on the automation of wisdom and philosophy. Submissions are due by July 14th. The first prize is $10,000, and there is a total of $25,000 in prizes available. The full announcement text is reproduced here: Background AI is likely to automate more and more categories of thinking with time. By default, the direction the world goes in will be a result of the choices people make, and these choices will be informed by the best thinking available to them. People systematically make better, wiser choices when they understand more about issues, and when they are advised by deep and wise thinking. Advanced AI will reshape the world, and create many new situations with potentially high-stakes decisions for people to make. To what degree people will understand these situations well enough to make wise choices remains to be seen. To some extent this will depend on how much good human thinking is devoted to these questions; but at some point it will probably depend crucially on how advanced, reliable, and widespread the automation of high-quality thinking about novel situations is. We believe[1] that this area could be a crucial target for differential technological development, but is at present poorly understood and receives little attention. This competition aims to encourage and to highlight good thinking on the topics of what would be needed for such automation, and how it might (or might not) arise in the world. For more information about what we have in mind, see some of the suggested essay prompts or the FAQ below. Scope To enter, please submit a link to a piece of writing, not published before 2024. This could be published or unpublished; although if selected for a prize we will require publication (at least in pre-print form; optionally on the AI Impacts website) in order to pay out the prize. There are no constraints on the format - we will accept essays, blog posts, papers[2], websites, or other written artefacts[3] of any length. However, we primarily have in mind essays of 500-5,000 words. AI assistance is welcome but its nature and extent should be disclosed. As part of your submission you will be asked to provide a summary of 100-200 words. Your writing should aim to make progress on a question related to the automation of wisdom and philosophy. A non-exhaustive set of questions of interest, in four broad categories: Automation of wisdom What is the nature of the sort of good thinking we want to be able to automate? How can we distinguish the type of thinking it's important to automate well and early from types of thinking where that's less important? What are the key features or components of this good thinking? How do we come to recognise new ones? What are traps in thinking that is smart but not wise? How can this be identified in automatable ways? How could we build metrics for any of these things? Automation of philosophy What types of philosophy are language models well-equipped to produce, and what do they struggle with? 
What would it look like to develop a "science of philosophy", testing models' abilities to think through new questions, with ground truth held back, and seeing empirically what is effective? What have the trend lines for automating philosophy looked like, compared to other tasks performed by language models? What types of training/finetuning/prompting/scaffolding help with the automation of wisdom/philosophy? How much do they help, especially compared to how much they help other types of reasoning? Thinking ahead Considering the research agenda that will (presumably) eventually be needed to automate high quality wisdo...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's April 2024 Newsletter, published by Harlan on April 13, 2024 on LessWrong. The MIRI Newsletter is back in action after a hiatus since July 2022. To recap some of the biggest MIRI developments since then: MIRI released its 2024 Mission and Strategy Update, announcing a major shift in focus: While we're continuing to support various technical research programs at MIRI, our new top priority is broad public communication and policy change. In short, we've become increasingly pessimistic that humanity will be able to solve the alignment problem in time, while we've become more hopeful (relatively speaking) about the prospect of intergovernmental agreements to hit the brakes on frontier AI development for a very long time - long enough for the world to find some realistic path forward. Coinciding with this strategy change, Malo Bourgon transitioned from MIRI COO to CEO, and Nate Soares transitioned from CEO to President. We also made two new senior staff hires: Lisa Thiergart, who manages our research program; and Gretta Duleba, who manages our communications and media engagement. In keeping with our new strategy pivot, we're growing our comms team: I (Harlan Stewart) recently joined the team, and will be spearheading the MIRI Newsletter and a number of other projects alongside Rob Bensinger. I'm a former math and programming instructor and a former researcher at AI Impacts, and I'm excited to contribute to MIRI's new outreach efforts. The comms team is at the tail end of another hiring round, and we expect to scale up significantly over the coming year. Our Careers page and the MIRI Newsletter will announce when our next comms hiring round begins. We are launching a new research team to work on technical AI governance, and we're currently accepting applicants for roles as researchers and technical writers. The team currently consists of Lisa Thiergart and Peter Barnett, and we're looking to scale to 5-8 people by the end of the year. The team will focus on researching and designing technical aspects of regulation and policy which could lead to safe AI, with attention given to proposals that can continue to function as we move towards smarter-than-human AI. This work will include: investigating limitations in current proposals such as Responsible Scaling Policies; responding to requests for comments by policy bodies such as the NIST, EU, and UN; researching possible amendments to RSPs and alternative safety standards; and communicating with and consulting for policymakers. Now that the MIRI team is growing again, we also plan to do some fundraising this year, including potentially running an end-of-year fundraiser - our first fundraiser since 2019. We'll have more updates about that later this year. As part of our post-2022 strategy shift, we've been putting far more time into writing up our thoughts and making media appearances. In addition to announcing these in the MIRI Newsletter again going forward, we now have a Media page that will collect our latest writings and appearances in one place. Some highlights since our last newsletter in 2022: MIRI senior researcher Eliezer Yudkowsky kicked off our new wave of public outreach in early 2023 with a very candid TIME magazine op-ed and a follow-up TED Talk, both of which appear to have had a big impact. 
The TIME article was the most viewed page on the TIME website for a week, and prompted some concerned questioning at a White House press briefing. Eliezer and Nate have done a number of podcast appearances since then, attempting to share our concerns and policy recommendations with a variety of audiences. Of these, we think the best appearance on substance was Eliezer's multi-hour conversation with Logan Bartlett. This December, Malo was one of sixteen attendees invited by Leader Schumer and Senators Young, Rounds, and...
In this episode, Nathan sits down with Katja Grace, Co-founder and Lead Researcher at AI Impacts. They discuss the survey Katja and her team conducted of more than 2,700 AI researchers, the methodology for the research, and the results' implications for policymakers, the public, and the industry as a whole. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api LINKS: - Thousands of AI Authors on the Future of AI: https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf - AI Impacts Site: https://aiimpacts.org/about/ - Linus episode: https://www.youtube.com/watch?v=wdmvtVTZDqE&pp=ygUJbGludXMgbGVl X/SOCIAL: @labenz (Nathan) @KatjaGrace (Katja) @AIImpacts SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, instead of...does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer-first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api ODF is where top founders get their start. Apply to join the next cohort and go from idea to conviction, fast. ODF has helped over 1000 companies like Traba, Levels and Finch get their start. Is it your turn? Go to http://beondeck.com/revolution to learn more. This show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We're launching new shows every week, and we're looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co. Producer: Vivian Meng Editor: Graham Bessellieu
5 Ways AI Impacts Supersizing Your Business! Drop in here every day for a dose of different business building perspective: https://facebook.com/supersizebusiness #supersizeyourbusiness #supersizebusinesstopics #impactofAI
Learn more about the guys: J Scott: https://linktr.ee/jscottinvestor Mauricio Rauld: https://www.youtube.com/channel/UCnPedp0WHxpIUWLTVhNN2kQ AJ Osborne: https://www.ajosborne.com/ Kyle Wilson: https://www.bardowninvestments.com/
Betty Jo Rocchio, Senior Vice President and Chief Nurse Executive at Mercy, chats with nursing editor G Hatfield about the impacts of AI in nursing, and how CNOs can implement and integrate AI into nurse workflows to benefit both the nurse and the patient.
How are developers weaving genAI models into their business workflows? Viren is the co-founder and co-CTO at enterprise microservices and application orchestration platform Orkes. He is also one of the creators of Netflix Conductor, an open-source microservices and workflow orchestration engine, used by hundreds of enterprises and Fortune 100 companies including Tesla, Oracle, American Express, Cisco, GitHub, and GE. Before founding Orkes, Viren led and managed Firebase engineering at Google. Prior to that, he was a VP of engineering at Goldman Sachs, where he led the development of distributed systems.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Will Wu is the CTO @ Match Group, the owner and operator of the largest global portfolio of popular online dating services including Tinder, Match.com, OkCupid, and Hinge to name a few. Prior to Match, Will was VP of Product at Snap Inc. As the 35th employee, Will spearheaded the creation of Snapchat's “Discover” content platform. He also led the creation and growth of the “Chat” messaging feature, which today is a primary Snapchat engagement driver that connects hundreds of millions of people each day. In Today's Episode with Will Wu We Discuss: 1. The Journey to Snap CPO: How did Will make his way into the world of product and come to meet Evan Spiegel? What are 1-2 of his biggest takeaways from his time at Snap? What does Will know now that he wishes he had known when he started in product? 2. How to Hire Product Teams: How does Will structure the interview process for new product hires? What are the most telling questions of a candidate's product skills in hiring? What case studies and tests does Will do to assess a candidate? What are 1-2 of Will's biggest hiring mistakes in product? 3. How to Do Product Reviews Effectively: What are Will's biggest lessons on what it takes to do product reviews well? What are the biggest mistakes product leaders make in product reviews? How can teams drive focus in product reviews? What works? What does not? 4. Product: Art or Science? How does Will balance between gut/intuition and data in product decisions? Is simple always better in product design? What is human-centered design? How does it impact how Will approaches product?
Crossposted from the AI Impacts blog. The 2023 Expert Survey on Progress in AI is out, this time with 2778 participants from six top AI venues (up from about 700 participants and two venues in the 2022 ESPAI), making it probably the biggest ever survey of AI researchers. People answered in October, an eventful fourteen months after the 2022 survey, which had mostly identical questions for comparison. Here is the preprint. And here are six interesting bits in pictures (with figure numbers matching the paper, for ease of learning more): 1. Expected time to human-level performance dropped 1-5 decades since the 2022 survey. As always, our questions about 'high level machine intelligence' (HLMI) and 'full automation of labor' (FAOL) got very different answers, and individuals disagreed a lot (shown as thin lines below), but the aggregate forecasts for both sets of questions dropped sharply. For context, between the 2016 and 2022 surveys, the forecast [...] --- First published: January 6th, 2024 Source: https://forum.effectivealtruism.org/posts/M9MSe4KHNv4HNf44f/survey-of-2-778-ai-authors-six-parts-in-pictures --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #45: To Be Determined, published by Zvi on January 5, 2024 on LessWrong. The first half of the week was filled with continued talk about the New York Times lawsuit against OpenAI, which I covered in its own post. Then that talk seemed to mostly die down, and things were relatively quiet. We got a bunch of predictions for 2024, and I experimented with prediction markets for many of them. Note that if you want to help contribute in a fun, free and low-key way, participating in my prediction markets on Manifold is a way to do that. Each new participant in each market, even if small, adds intelligence, adds liquidity and provides me a tiny bonus. Also, of course, it is great to help get the word out to those who would be interested. Paid subscriptions and contributions to Balsa are of course also welcome. I will hopefully be doing both a review of my 2023 predictions (mostly not about AI) once grading is complete, and also a post of 2024 predictions some time in January. I am taking suggestions for things to make additional predictions on in the comments. Table of Contents Copyright Confrontation #1 covered the New York Times lawsuit. AI Impacts did an updated survey for 2023. Link goes to the survey. I plan to do a post summarizing the key results, once I have fully processed them, so I can refer back to it in the future. Introduction. Table of Contents. Language Models Offer Mundane Utility. Google providing less every year? Language Models Don't Offer Mundane Utility. Left-libertarian or bust. GPT-4 Real This Time. It's not getting stupider, the world is changing. Fun With Image Generation. The fun is all with MidJourney 6.0 these days. Deepfaketown and Botpocalypse Soon. Confirm you are buying a real book. They Took Our Jobs. Plans to compensate losers are not realistic. Get Involved. Support Dwarkesh Patel, apply for Emergent Ventures. Introducing. DPO methods? 'On benchmarks' is the new 'in mice.' In Other AI News. Square Enix say they're going in on generative AI. Doom? As many estimates of p(doom) went up in 2023 as went down. Why? Quiet Speculations. Some other predictions. The Week in Audio. Eric Jang on AI girlfriend empowerment. Rhetorical Innovation. Machines and people, very different of course. Politico Problems. Some sort of ongoing slanderous crusade. Cup of Coffee. Just like advanced AI, it proves that you don't love me. Aligning a Smarter Than Human Intelligence is Difficult. What's The Plan? People Are Worried About AI Killing Everyone. Daniel Dennett, Cory Booker. The Lighter Side. Oh, we are doing this. Language Models Offer Mundane Utility Remember that one line from that book about the guy with the thing. Dan Luu tries to get answers, comparing ChatGPT, Google and other options. Columns are queries, rows are sources. Marginalia appears to be a tiny DIY search engine focusing on non-commercial content that I'd never heard of before, that specializes in finding small, old and obscure websites about particular topics. Cool thing to have in one's toolbelt, I will be trying it out over time. Not every cool new toy needs to be AI. While ChatGPT did hallucinate, Dan notes that at this point the major search engines also effectively hallucinate all the time due to recency bias, SEO spam and scam websites. He also notes how much ads now look like real search results on Google and Bing. 
I have mostly learned to avoid this, but not with 100% accuracy, and a lot of people doubtless fall for it. Find out how many prime numbers under one billion have digits that sum to nine, via having code check one by one. I mean, sure, why not? There is an easier way if you already know what it is, but should the right algorithm know to look for it? Language Models Don't Offer Mundane Utility All LLMs tested continue to cluster in the left-libertarian quadrant. Eliezer Yudkow...
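On the prime-digits question above: the "easier way" is presumably the divisibility rule for 9, since any number whose digits sum to nine is a multiple of 9 and therefore composite, so the answer is zero with no search required. A minimal Python sketch (the brute-force check is limited to a small range here, since scanning the full billion is exactly the slow path being described):

```python
# Any number with digit sum 9 is divisible by 9 (digit-sum rule), and a
# multiple of 9 is never prime, so the count is 0 without checking anything.

def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(n))

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Sanity check by brute force over a small range:
count = sum(1 for n in range(2, 100_000) if digit_sum(n) == 9 and is_prime(n))
print(count)  # 0
```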
This week: How jobs could be impacted by AI, employees experience continued burnout, and more.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's new at FAR AI, published by AdamGleave on December 4, 2023 on The AI Alignment Forum. Summary We are FAR AI: an AI safety research incubator and accelerator. Since our inception in July 2022, FAR has grown to a team of 12 full-time staff, produced 13 academic papers, opened the coworking space FAR Labs with 40 active members, and organized field-building events for more than 160 ML researchers. Our organization consists of three main pillars: Research. We rapidly explore a range of potential research directions in AI safety, scaling up those that show the greatest promise. Unlike other AI safety labs that take a bet on a single research direction, FAR pursues a diverse portfolio of projects. Our current focus areas are building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs), finding more effective approaches to value alignment (e.g. training from language feedback), and model evaluation (e.g. inverse scaling and codebook features). Coworking Space. We run FAR Labs, an AI safety coworking space in Berkeley. The space currently hosts FAR, AI Impacts, SERI MATS, and several independent researchers. We are building a collaborative community space that fosters great work through excellent office space, a warm and intellectually generative culture, and tailored programs and training for members. Applications are open to new users of the space (individuals and organizations). Field Building. We run workshops, primarily targeted at ML researchers, to help build the field of AI safety research and governance. We co-organized the International Dialogue for AI Safety bringing together prominent scientists from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We will soon be hosting the New Orleans Alignment Workshop in December for over 140 researchers to learn about AI safety and find collaborators. We want to expand, so if you're excited by the work we do, consider donating or working for us! We're hiring research engineers, research scientists and communications specialists. Incubating & Accelerating AI Safety Research Our main goal is to explore new AI safety research directions, scaling up those that show the greatest promise. We select agendas that are too large to be pursued by individual academic or independent researchers but are not aligned with the interests of for-profit organizations. Our structure allows us to both (1) explore a portfolio of agendas and (2) execute them at scale. Although we conduct the majority of our work in-house, we frequently pursue collaborations with researchers at other organizations with overlapping research interests. Our current research falls into three main categories: Science of Robustness. How does robustness vary with model size? Will superhuman systems be vulnerable to adversarial examples or "jailbreaks" similar to those seen today? And, if so, how can we achieve safety-critical guarantees? Relevant work: Vulnerabilities in superhuman Go AIs, AI Safety in a World of Vulnerable Machine Learning Systems. Value Alignment. How can we learn reliable reward functions from human data? Our research focuses on enabling higher bandwidth, more sample-efficient methods for users to communicate preferences for AI systems; and improved methods to enable training with human feedback. 
Relevant work: VLM-RM: Specifying Rewards with Natural Language, Training Language Models with Language Feedback. Model Evaluation: How can we evaluate and test the safety-relevant properties of state-of-the-art models? Evaluation can be split into black-box approaches that focus only on externally visible behavior ("model testing"), and white-box approaches that seek to interpret the inner workings ("interpretability"). These approaches are complementary, with ...
Cybersecurity and Compliance with Craig Petronella - CMMC, NIST, DFARS, HIPAA, GDPR, ISO27001
Are you prepared for the digital dangers lurking in your computer, or the profound impacts of artificial intelligence on our lives? This episode arms you with knowledge of the latest cybersecurity threats, from a North Korean state-linked hacking group targeting Mac computers, to phishing scams and vulnerabilities in class action lawsuits. We also delve into the importance of staying up-to-date with software and using malware removal tools. Plus, we explore the potential ramifications of a government-created website or portal for class action lawsuits. You won't want to miss our engaging discussion on the future of AI, including its potential to replace jobs, and even the possibility of an AI taking over as the CEO of a company. As we trust more of our lives to digital intelligence, understanding these potential scenarios is more important than ever. Furthermore, we reveal how AI is shaping the future of electric vehicles and the safety considerations that come with self-driving cars. Stay ahead of the curve and join us on this enlightening journey into the future of technology and cybersecurity. Support the show - Call 877-468-2721 or visit https://petronellatech.com. Please visit YouTube and LinkedIn and be sure to like and subscribe! NO INVESTMENT ADVICE - The Content is for informational purposes only; you should not construe any such information or other material as legal, tax, investment, financial, or other advice. Nothing contained on our Site or podcast constitutes a solicitation, recommendation, endorsement, or offer by PTG. Please visit https://compliancearmor.com and https://petronellatech.com for the latest in Cybersecurity and Training, and be sure to like, subscribe and visit all of our properties at: YouTube PetronellaTech, YouTube Craig Petronella Podcasts, Compliance Armor, Blockchain Security, LinkedIn. Call 877-468-2721 or visit https://petronellatech.com
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: State of the East and Southeast Asian EAcosystem, published by Elmerei Cuevas on November 6, 2023 on The Effective Altruism Forum. This write-up is a compilation of organisations and projects aligned / adjacent to the effective altruism movement in East Asia and Southeast Asia and was written around the EAGxPhilippines conference. Some organisations, projects, and contributors also prefer to not be public and have hence been removed from this write-up. While this is not an exhaustive list of projects and organisations per country in the region, it is a good baseline of the progress of the effective altruism movement for this side of the globe. Feel free to click the links to the organisations/projects themselves to dive deeper into their works. Contributors: Saad Siddiqui; Anthony Lau; Anthony Obeyesekere; Masayuki "Moon" Nagai; Yi-Yang Chua; Elmerei Cuevas, Alethea Faye Cedaña, Jaynell Ehren Chang, Brian Tan, Nastassja "Tanya" Quijano; Dion Tan, Jia Yang Li; Saeyoung Kim; Nguyen Tran; Alvin Lau. Forum post graphic credits to Jaynell Ehren Chang; EAGx photos credits to CS Creatives. Mainland China China Global Priorities Group Aims to foster a community of ambitious, careful and committed thinkers and builders focused on effectively tackling some of the world's most pressing problems through a focus on China's role in the world. We currently do this by facilitating action-guiding discussions, identifying talent and community infrastructure gaps and developing new programmes to support impactful China-focused work. Hong Kong City Group: EAHK Started in 2015, based at the University of Hong Kong. 5 core organisers, of whom 2 receive EAIF funding from 2023 to work part-time (Anthony and Kenneth). Organises the Horizon Fellowship Program (in-person EA introductory program); there have been 107 fellows since 2020. Around 200+ on the Slack channel. Bilingual social media account with 350 followers. Bi-weekly socials with 8 to 20 attendees and around 8 speaker meetups a year. Registered as a legal entity (limited company) in July 2023 in order to register as a charity in Hong Kong; aims to facilitate effective giving. Opportunities: High concentration of family offices/corporate funders/philanthropic organisations; to explore fundraising and effective giving potential. Influx of mainland/international university students in coming years due to recent policy change (40% non-local, 60% local); a diverse talent pool. Looking into translating EA materials to the local language (Chinese) to reach out to more locals. University Group: EAHKU A new team formed in June 2023, running independently from EAHK. Organises bi-weekly dinners to connect and introduce EA to students on campus. Planned to run multiple Giving Games from Nov 2023 onwards. Aims to run an introductory program within the 2023-2024 academic year. Academia (AI): A couple of researchers and professors interested in AI x-risk and alignment. AI&Humanity-Lab@University of Hong Kong: Nate Sharadin (CAIS fellow, normative alignment and evaluations), Frank Hong (CAIS fellow, AI extreme risks), Brian Wong (AI x-risk and China-US). In Sep 2023, launched an MA in AI, Ethics and Society covering AI safety, security and governance, with around 90 students in the course. 
Organises public seminars (see events page). The first annual AI Impacts workshop in March 2024 focused on evaluations. Hong Kong Global Catastrophic Risk Center at Lingnan University: see link for research focus and outputs related to AI safety and governance. Hong Kong University of Science and Technology: Dr. Fu Jie is a visiting scholar working on safe and scalable system-2 LLM. Research Centre for Sustainable HK at City University of Hong Kong: published a report on the Ethics and Governance of AI in HK. Academia (Psychology): Dr. Gilad Feldman promotes 'Doing more good, doing good better' through some of his teachi...
This episode is brought to you by Rupa Health, BiOptimizers, Zero Acre, and Pendulum. The rise of social media has revolutionized the way we connect, share information, and interact with one another. While it has undoubtedly brought numerous benefits, there is growing concern about its impact on our mental health. Today on The Doctor's Farmacy, I'm excited to talk to Tobias Rose-Stockwell about how the internet has broken our brains, what we can do to fix it, and how to navigate this complex digital landscape. Tobias Rose-Stockwell is a writer, designer, and media researcher whose work has been featured in major outlets such as The Atlantic, WIRED, NPR, the BBC, CNN, and many others. His research has been cited in the adoption of key interventions to reduce toxicity and polarization within leading tech platforms. He previously led humanitarian projects in Southeast Asia focused on civil war reconstruction efforts, work for which he was honored with an award from the 14th Dalai Lama. He lives in New York with his cat Waffles. This episode is brought to you by Rupa Health, BiOptimizers, Zero Acre, and Pendulum. Access more than 3,000 specialty lab tests with Rupa Health. You can check out a free, live demo with a Q&A or create an account at RupaHealth.com today. During the entire month of November, Bioptimizers is offering their biggest discount you can get AND amazing gifts with purchase. Just go to bioptimizers.com/hyman with code hyman10. Zero Acre Oil is an all-purpose cooking oil. Go to zeroacre.com/MARK or use code MARK to redeem an exclusive offer. Pendulum is offering my listeners 20% off their first month of an Akkermansia subscription with code HYMAN. Head to Pendulumlife.com to check it out. Here are more details from our interview (audio version / Apple Subscriber version): The superpower that social media has provided to us (5:55 / 4:21) How our traditional knowledge systems have been deconstructed (7:39 / 5:15) The challenges of uncovering what is true (12:43 / 10:18) How Tobias's time in Cambodia led him to this work (15:05 / 12:42) The harms of social media (26:57 / 22:36) Historical media disruptions (32:57 / 28:37) The dangers of misinformation (35:27 / 31:06) Challenges and opportunities around AI (42:09 / 37:58) How governments and platforms can reduce the harms of social media (55:10 / 50:59) Individual actions to improve the impact of social media (1:02:30 / 58:09) Get a copy of Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy―And What We Can Do About It. Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Strongest real-world examples supporting AI risk claims?, published by rosehadshar on September 5, 2023 on The Effective Altruism Forum. [Manually cross-posted to LessWrong here.] There are some great collections of examples of things like specification gaming, goal misgeneralization, and AI improving AI. But almost all of the examples are from demos/toy environments, rather than systems which were actually deployed in the world. There are also some databases of AI incidents which include lots of real-world examples, but the examples aren't related to failures in a way that makes it easy to map them onto AI risk claims. (Probably most of them don't in any case, but I'd guess some do.) I think collecting real-world examples (particularly in a nuanced way without claiming too much of the examples) could be pretty valuable: I think it's good practice to have a transparent overview of the current state of evidence For many people I think real-world examples will be most convincing I expect there to be more and more real-world examples, so starting to collect them now seems good What are the strongest real-world examples of AI systems doing things which might scale to AI risk claims? I'm particularly interested in whether there are any good real-world examples of: Goal misgeneralization Deceptive alignment (answer: no, but yes to simple deception?) Specification gaming Power-seeking Self-preservation Self-improvement This feeds into a project I'm working on with AI Impacts, collecting empirical evidence on various AI risk claims. There's a work-in-progress table here with the main things I'm tracking so far - additions and comments very welcome. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long-Term Future Fund Ask Us Anything (September 2023), published by Linch on August 31, 2023 on The Effective Altruism Forum. LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum. I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th). We think that right now could be an unusually good time to donate. If you agree, you can donate to us here. About the Fund The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish. In 2022, we disbursed ~250 grants worth ~$10 million. You can see our public grants database here. Related posts LTFF and EAIF are unusually funding-constrained right now EA Funds organizational update: Open Philanthropy matching and distancing Long-Term Future Fund: April 2023 grant recommendations What Does a Marginal Grant at LTFF Look Like? Asya Bergal's Reflections on my time on the Long-Term Future Fund Linch Zhang's Select examples of adverse selection in longtermist grantmaking About the Team Asya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedge fund. She's also written for the AI alignment newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT. Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy. Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups. Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is Lesswrong. Lesswrong has significantly influenced conversations around rationality and AGI risk, and the LW community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk and crypto much earlier than other comparable communities. You can find a list of our fund managers in our request for funding here. Ask Us Anything We're happy to answer any questions - marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc. There's no real deadline for questions, but let's say we have a soft commitment to focus on questions asked on or before September 8th. Because we're unusually funding-constrained right now, I'm going to shill again for donating to us. 
If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
The evolving space of technology has undeniably opened up more opportunities and tools to be creative. But just how much of this technology can help us as artists, especially for those independent artists? In this episode, Fiona Flyte, from the Profitable Performer Revolution, delves into the impact of Artificial Intelligence on the profitability of indie artists—from the good, the bad, and the ugly. What will the future look like? Will AI replace artists? How much can that human connection hold musicians and listeners together amidst the ever-growing technology? Find out the answers to these questions and more. Let Fiona share how you can leverage AI in your passion and be profitable.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Vinod Khosla is the Founder of Khosla Ventures, one of the leading venture firms of the last decade with investments in OpenAI, Stripe, DoorDash, Commonwealth Fusion Systems and many more. Prior to founding Khosla, Vinod was a co-founder of Daisy Systems and founding CEO of Sun Microsystems. In Today's Episode with Vinod Khosla We Discuss: 1. The State of AI Today: Does Vinod believe we are in a bubble or is the excitement justified based on technological development? What are the single biggest lessons that Vinod has from prior bubbles? What is different about this time? What is Vinod concerned about with this AI bubble? 2. The Future of Healthcare and Music: How does Vinod evaluate the impact AI will have on the future of healthcare? How does Vinod analyse the impact AI will have on the future of music and content creation? Does Vinod believe that humans will resist these advancements? Who will be the laggards, slow to embrace it and who will be the early adopters? 3. Solving Income Inequality: Does Vinod believe AI does more to help or to hurt income inequality? What mechanisms can be put in place to ensure that AI does not further concentrate wealth into the hands of the few? Does Vinod believe in universal basic income? What does everyone get wrong with UBI? 4. The Future of Energy, Climate and Politics: Why is forcing non-economic solutions the wrong approach to climate? What is the right approach? Why is Vinod so bullish on fusion and geothermal? How does fusion bankrupt entire industries? How do the advancements in energy and resource creation change global politics? Does Vinod believe Larry Summers was right: "China is a prison, Japan is a nursing home and Europe is a museum"? 5. Vinod Khosla: AMA: What is Vinod's single biggest investing miss? What does Vinod know now that he wishes he had known when he started investing? Why did the Taylor Swift concert have such a profound impact on him? What was Marc Andreessen like when he backed him with Netscape in 1996?
Margaret Evans reports on Trudeau's surprise Ukraine trip and Catherine Belton discusses NATO's long term goals there, Peter Singer talks about the impact of his book Animal Liberation nearly 50 years on, Dr. Melissa Lem looks at the short and long term risks posed by poor air quality, Peter Mitton explores how AI can compromise our ethics, and Jody Rosen shares the history of the bicycle. Discover more at https://www.cbc.ca/sunday
Two things to know today: From Law Courts to Lab: AI's Pervasive Impact, Benefits and Pitfalls Uncovered AND FCC's Broadband Mapping: Opportunities and Obligations for IT Service Providers Advertiser: https://timezest.com/MSPRadio/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/ Support the show on Patreon: https://patreon.com/mspradio/ Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on: Facebook: https://www.facebook.com/mspradionews/ Twitter: https://twitter.com/mspradionews/ Instagram: https://www.instagram.com/mspradio/ LinkedIn: https://www.linkedin.com/company/28908079/