Podcasts about artificial superintelligence

Hypothetical immensely superhuman agent

  • 87 PODCASTS
  • 124 EPISODES
  • 50m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 30, 2025 LATEST
artificial superintelligence

POPULARITY (2017–2024)


Best podcasts about artificial superintelligence

Latest podcast episodes about artificial superintelligence

Financial Sense® Newshour
Dr Alan D. Thompson: Unlocking the Next Era of Artificial Superintelligence (Preview)

May 30, 2025 · 4:13


May 29, 2025 – What happens when AI systems start inventing new materials, running billion-dollar companies, and making decisions in government? In this insightful conversation, Cris Sheridan interviews Dr. Alan D. Thompson, renowned AI...

CFO 4.0
227. AI in Finance | Preparing for the AI Revolution: Abundance or Catastrophe? with David Wood

May 20, 2025 · 49:36


Send us your thoughts. In this episode of CFO 4.0, host Hannah Munro speaks with David Wood, Chair of London Futurists, about the accelerating pace of AI and its profound implications for business, society, and the future of work. Together, they explore the near-term possibilities and longer-term consequences of artificial intelligence, from transformation to potential turmoil.

In this episode, we cover:
  • Why some companies struggle with AI adoption
  • The rise of neuro-symbolic AI and the combination of logic-based and intuitive systems
  • Breakthroughs in video generation, self-prompting AI, and the future of AI-generated media
  • What Artificial General Intelligence (AGI) could mean for jobs, skills, and human relevance
  • How emotional intelligence and adaptability will be essential skills for leaders in the AI era

Links mentioned:
  • David's LinkedIn
  • Learn more about London Futurists
  • London Futurists Meetups
  • The Coming Wave: AI, Power and Our Future by Mustafa Suleyman
  • Supremacy: AI, ChatGPT and the Race That Changed the World by Parmy Olson
  • Uncontrollable: The Threat of Artificial Superintelligence by Darren McKee

Explore other CFO 4.0 Podcast episodes here. Subscribe to our Podcast!

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

On April 7th, 2025, the AI landscape saw significant advancements and strategic shifts, evidenced by Meta's launch of its powerful Llama 4 AI models, poised to compete with industry leaders. Simultaneously, DeepSeek and Tsinghua University unveiled a novel self-improving AI approach, highlighting China's growing AI prowess, while OpenAI considered a hardware expansion through the potential acquisition of Jony Ive's startup. Microsoft enhanced its Copilot AI assistant with personalisation features and broader application integration, aiming for a more intuitive user experience. Furthermore, a report projected potential existential risks from Artificial Superintelligence by 2027, prompting discussions on AI safety, as Midjourney released its advanced version 7 image generator and NVIDIA optimised performance for Meta's new models.

Bob Enyart Live
A.I. 2025: An Update with Daniel Hedrick

Apr 5, 2025


Listen in as your host Fred Williams and co-host Doug McBurney welcome RSR's resident A.I. expert Daniel Hedrick, of godisnowhere fame, for an update on where we are with Artificial Intelligence (and where A.I. is with us)!
  • Welcome: Daniel Hedrick, discussing Co-Pilot, LM Studio, Deepseek, Perplexity, ChatGPT, Grok 3, Midjourney, Agentic AI, AGI, ASI, and all things Artificial Intelligence.
  • The Gospel & Dan Bongino: Hear how Dan Bongino fundamentally agrees with Doug McBurney that A.I. has the potential, if programmed in an unbiased manner and with access to everything ever written, to be a tool for telling the truth, including confirming the Gospel!
  • Luddites of the World: Relax! AI is not on the verge of replacing programmers and coders. But it has become an essential tool.
  • Motivation, Awareness & Experience: AI lacks all three, but humans don't, so even Artificial Super Intelligence will always need us.
  • Maximum Problems: How do we constrain AI from going off the rails, as in the paperclip maximizer problem? The answer lies in our connection to God's reality.
  • The Energy Question: While the human brain uses at most 30 watts to make over 100 trillion connections, no one's even sure what modern AI platforms are consuming... but it's a lot, and growing!

Real Science Radio
A.I. 2025: An Update with Daniel Hedrick

Apr 5, 2025


Listen in as your host Fred Williams and co-host Doug McBurney welcome RSR's resident A.I. expert Daniel Hedrick, of godisnowhere fame, for an update on where we are with Artificial Intelligence (and where A.I. is with us)!
  • Welcome: Daniel Hedrick, discussing Co-Pilot, LM Studio, Deepseek, Perplexity, ChatGPT, Grok 3, Midjourney, Agentic AI, AGI, ASI, and all things Artificial Intelligence.
  • The Gospel & Dan Bongino: Hear how Dan Bongino fundamentally agrees with Doug McBurney that A.I. has the potential, if programmed in an unbiased manner and with access to everything ever written, to be a tool for telling the truth, including confirming the Gospel!
  • Luddites of the World: Relax! AI is not on the verge of replacing programmers and coders. But it has become an essential tool.
  • Motivation, Awareness & Experience: AI lacks all three, but humans don't, so even Artificial Super Intelligence will always need us.
  • Maximum Problems: How do we constrain AI from going off the rails, as in the paperclip maximizer problem? The answer lies in our connection to God's reality.
  • The Energy Question: While the human brain uses at most 30 watts to make over 100 trillion connections, no one's even sure what modern AI platforms are consuming... but it's a lot, and growing!

Blockchain DXB

Unfiltered chat: Blockchain DXB & Society X – LinkedIn Live: Weekly Crypto, Blockchain & AI Review
Date & Time: January 23rd, 2025, 11:00 AM GST
Hosts: RA George (Blockchain DXB), Markose Chentittha (Oort Foundation + Society X)
Guest: Neil Fitzhugh, Head of Marketing at Trac Systems/TAP Protocol

Contact details:
  • LinkedIn: https://short-link.me/OkAJ
  • Website: https://trac.network/
  • Twitter/X: Neil: https://x.com/fitzyOG, Trac: https://x.com/trac_btc?mx=2
  • Discord: https://discord.com/invite/trac
  • Telegram: TAP Protocol: https://t.me/tap_protocol
  • GitHub: https://github.com/BennyTheDev

Note: This entire episode was created, scripted, and reviewed 100% by AI using Notebook LM by Google, showcasing the power of AI-driven content creation.

This week's LinkedIn Live session explored groundbreaking developments in crypto, blockchain, and AI. Below is a streamlined recap of the AI-generated discussion between the hosts and guest Neil Fitzhugh.
  • Discussion of market significance and the broader implications for investors.
  • Mention of the Trump Meme Coin, which surged to a $13B market cap before dropping to $7.3B, underlining meme coin volatility.
  • Neil Fitzhugh explained how the TAP Protocol is transforming Bitcoin's ecosystem with features like smart contracts, AMMs, swaps, and cross-chain bridges. Key features: tokenomics and validator licenses; security audits, with rigorous measures to ensure platform reliability.
  • Stargate Investment: a $500B initiative led by industry giants to develop Artificial Super Intelligence infrastructure.
  • Dubai's AI Seal: a certification system to regulate trusted AI companies working with UAE authorities.
  • Analysis of the U.S. District Court's decision, its impact on privacy protocols, and Alexey Pertsev's ongoing legal challenges.
  • 2025 is poised to be the "Year of Tokenization," potentially achieving $500B in market value, with real-world asset tokenization leading the charge.
  • Republican-led initiatives in Texas, Pennsylvania, Ohio, and Wyoming highlight growing interest in state-level Bitcoin reserves.
  • CLS Global's admission to wash trading and its repercussions.
  • Solana's rise in transaction volume, signaling stablecoin growth.
  • Ross Ulbricht's campaign: renewed focus on decentralization and privacy.
  • FOMC meeting predictions: interest rate changes and their crypto impact.
  • Larry Fink's $700K Bitcoin prediction: Bitcoin as a hedge against inflation.

Neil addressed questions on aligning Bitcoin maximalist ideals with TAP Protocol's innovations and on Trac Systems' upcoming milestones.

Beginner-level sessions: now live on LinkedIn. Spartan Race discount: use code George20 for the January 25–26 event.

This episode demonstrated the capabilities of AI in analyzing and presenting complex topics. Using Notebook LM by Google, Blockchain DXB and Society X delivered an engaging, AI-driven conversation covering the latest in crypto, blockchain, and AI.

To support this channel:
  • Patreon: https://www.patreon.com/BlockchainDXB
  • Buy me a coffee: https://www.buymeacoffee.com/info36/w/6987
  • Advanced Media: https://www.amt.tv/
  • Spartan Race Trifecta in Dubai: https://race.spartan.com/en/race/detail/8646/overview (for a 20% discount, use code George20)
  • The Race Space Podcast

AI For Humans
OpenAI Plans Super Intelligence, NVIDIA's Tiny Image Model & Reddit Launches GPT & More AI News

Jan 16, 2025 · 52:17


OpenAI is prepping for Artificial Super Intelligence, Sam Altman says an AI fast takeoff is likely, Luma Labs' new Ray 2 AI video model looks good, and Reddit goes all GPT on us. Plus, an amazing new small AI model from NVIDIA, executive AI orders for more power and chips, ChatGPT Tasks kind of blows, and a whole lotta Shrek (more so than you might want). BRB, WE GOTTA PREP FOR THE SINGULARITY Y'ALL!

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// SHOW LINKS //
  • OpenAI's Economic Blueprint: https://openai.com/global-affairs/openais-economic-blueprint/
  • Sam Altman Says Fast Takeoff More Likely: https://x.com/tsarnick/status/1879100390840697191
  • Open-Source $450 o1 Model: https://x.com/LiorOnAI/status/1878876546066506157
  • MiniMax-01 Launch: Lightning Attention: https://x.com/i/trending/1879318547861582090
  • Runway Prompt-To-Character: https://x.com/IXITimmyIXI/status/1878088929330491844
  • Executive Order for Gigawatt Datacenters: https://www.reuters.com/technology/artificial-intelligence/biden-issue-executive-order-ensure-power-ai-data-centers-2025-01-14/
  • New AI Chip Rules: https://www.theinformation.com/articles/why-bidens-final-ai-chip-move-caused-an-uproar?rc=c3oojq&shared=160dd16ac575f520
  • ChatGPT Tasks: https://x.com/OpenAI/status/1879267276291203329
  • Custom Reddit GPT for Answers: https://www.reddit.com/answers/
  • Luma Labs Ray 2: https://lumalabs.ai/ray
  • NVIDIA Launches Sana: https://nvlabs.github.io/Sana/
  • AI Slop Distorting Wildfire News: https://www.fastcompany.com/91260442/ai-slop-has-is-still-distorting-news-about-the-l-a-wildfires
  • French Woman Scammed by AI Brad Pitt: https://www.nbcnews.com/news/world/ai-brad-pitt-woman-romance-scam-france-tf1-rcna187745
  • Fashn Web App: Try-on + Video: https://x.com/ayaboch/status/1878888737603830081
  • New AI or Die: https://youtu.be/cAjUy896SOE?si=BpJhqV_oOwvov01q
  • My Swamp: https://x.com/andr3_ai/status/1878110156887638380

In 20xx Scifi and Futurism
In 2055 A Hacker is Abducted

Dec 14, 2024 · 62:02


A production center makes everything locally. It even has grow towers where the produce needs of nearby citizens can be met. After the World Storm, a production center goes into sentry mode when the Internet breaks. The entire inside of the production center becomes a death trap, as all the robots will attack people who try to get in. Merch, a world-class hacker, is abducted by a gang who make him try to hack the production center so they can get in and get enough food, water, and goods to have all they need for years. In the course of the story, Merch finds an ASI, an Artificial Super Intelligence. This machine is hundreds of times smarter than a human. If he gains access, he could change the course of humanity. As an example, with the ASI in control of the production center, it could build a robot army.

Technology mentioned in this story:
  • "Medusa Net" (peer-to-peer internet system)
  • Links/AR glasses with features like night vision, "Target Conversation" (allows distant conversation between people who can see each other), "Assist" (AI assistant), and multiple AR feeds/displays
  • Lutin Two Bot (subscription-locked)
  • Tri-legged bot with lamp (described as spider-like)
  • Double high bot
  • Robot with tiny arms for microscale work
  • Hologram shell robots (with curved screens for human-like appearance)
  • Autono-cart (autonomous cart)
  • "Follow cart" (presumably autonomous)
  • "G. silk" (advanced fabric that never wrinkles, never stains under normal conditions, can filter water, and lets through only water and air)
  • Cooling tents (double-walled with air inflation)
  • Temperature-controlled shoes (powered to cool soles)
  • Production center with automated security/defense systems
  • Engineered microbe medicine (tooth care chewables)
  • "Rig gloves" (for controlling micro-scale robots)
  • 3D navigation maps in field of view
  • Night vision capabilities
  • Job's Navigator AI
  • Dates Navigator AI
  • AI-based hacking systems
  • Simulation software for mimicking online consumers
  • Home sentry bot
  • Sex bot
  • Solar panel cleaning robot
  • Swarm drone controller
  • Autono-cab (autonomous taxi)
  • VR worlds
  • First-person VR movies
  • Live Movie Creator
  • Giantess Center network
  • Geo-thermal power plant
  • Automated grow rooms/farming systems
  • Delivery tubes
  • Card table computers
  • DNA simulators
  • Right-to-repair software
  • Skills for Lutins (some kind of digital skill system)
  • Automated vending machines (pizza and pasta)
  • Digital door locks
  • Pest trap software
  • Public talk line
  • Group talk
  • E-paper/E-paper screens
  • Mastodon (social network)
  • Security cards with changing QR codes
  • Bio-sampler pens
  • Thrive Navigator (upgraded survival-focused AI)
  • Metis/Matis (ASI, Artificial Super Intelligence)
  • AI hackers (subordinate AIs used for hacking)
  • AI jury systems (to keep ASI in check)
  • Giantess guard bots
  • Builder bots
  • Construction bots
  • Maintenance bots
  • Battle droids
  • Cleaning bots
  • "Half-high bots" (climbing capable)
  • Sentry mode bots
  • Giantess Production towers/center
  • Production equipment
  • Automated facilities
  • Survival bunkers
  • Underground utility tunnels
  • Chemical weapons
  • Sound assault weapons
  • Microwave weapons
  • Smell scanner/smell vision
  • Embedded technology ("embeds")

Many of the characters in this project appear in future episodes. Using storytelling to place you in a time period, this series takes you, year by year, into the future, from 2040 to 2195. If you like emerging tech, eco-tech, futurism, perma-culture, apocalyptic survival scenarios, and disruptive science, sit back and enjoy short stories that showcase my research into how the future may play out. This is Episode 1 of the podcast "In 20xx Scifi and Futurism." The companion site is https://in20xx.com These are works of fiction. Characters and groups are made-up and influenced by current events but not reporting facts about people or groups in the real world.

Copyright © Cy Porter 2024. All rights reserved.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 393: When will we achieve AGI? And one secret aspect holding us back

Nov 1, 2024 · 48:54


Send Everyday AI and Jordan a text message.

It's the trillion dollar AI question. When will we achieve Artificial General Intelligence? (And what the heck is it, anyway?) We'll give you the 101 on what you need to know, and one secret that could be holding the official discovery back.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AGI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Definition of AI, AGI and ASI
2. Evolution of AGI
3. Impacts of Advancements in AGI
4. Future of AGI

Timestamps:
02:00 When is AGI coming?
09:54 Machines performing tasks requiring human intelligence.
13:10 Generative AI brings impressive outputs through language.
14:04 Generative AI democratizes US AI capabilities; narrow compared.
19:00 Correct prompting with PPP course at podpp.com.
20:52 Big tech companies openly working toward AGI now.
25:37 AGI development inevitable, desirable, surpassing human capabilities.
28:16 Pre-2020, experts said AGI was 80 years away.
31:25 Criticism of experts in generative AI misunderstandings.
33:49 Has AI's definition of AGI changed?
37:21 Definition of AGI has evolved over time.
40:45 Partnership between Microsoft and OpenAI pivotal.
45:21 OpenAI benefits from important Microsoft partnership changes.
47:08 Tech companies must focus on AGI development.

Keywords: AGI, Artificial General Intelligence, OpenAI, Microsoft, partnership, AI development, AI startup, Anthropic, copyright infringement, Google's Gemini team, Token offering, AI models, USAID, ChatGPT Enterprise, AI pace estimation, AI evolution, Artificial Superintelligence, AI prediction chart, ARK Invest, GPT 3 technology, Traditional AI, Generative AI, AI democratization, AGI benchmark, ChatGPT course, AI startups, Big tech companies, AGI cost reduction, Future of work, Everyday AI podcast.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Is Artificial Superintelligence Imminent? with Tim Rocktäschel - #706

Oct 21, 2024 · 55:52


Today, we're joined by Tim Rocktäschel, senior staff research scientist at Google DeepMind, professor of Artificial Intelligence at University College London, and author of the recently published popular science book, “Artificial Intelligence: 10 Things You Should Know.” We dig into the attainability of artificial superintelligence and the path to achieving generalized superhuman capabilities across multiple domains. We discuss the importance of open-endedness in developing autonomous and self-improving systems, as well as the role of evolutionary approaches and algorithms. Additionally, we cover Tim's recent research projects such as “Promptbreeder,” “Debating with More Persuasive LLMs Leads to More Truthful Answers,” and more. The complete show notes for this episode can be found at https://twimlai.com/go/706.

Art Throb
No. 40: LEXI BASS - MEANDER

Oct 15, 2024 · 30:06


Lexi Bass has a BA in Arts Administration from the University of Kentucky, an MA in Art from the University of Louisville, and an MFA in Experimental and Documentary Arts from Duke University. She is currently a lecturer in Animation and Digital Art at the University of Kentucky School of Arts and Visual Studies and an experimental filmmaker and artist. Her films have screened widely in London, Amsterdam, and other European cities, as well as Los Angeles, Philadelphia, Minneapolis, and various locations across Kentucky. Her new film Meander will air on Tuesday, 15 Oct, at the Lyric Theatre and Cultural Arts Center at 6:30pm.

Meander (2024), Lexi Bass: As artificial intelligence replaces workers in our increasingly elderly global population, companies engineering AI race robots amidst human inequities and emerging problems of AI sentience. The sum spells disaster for the human race in the dystopian world of Meander, which evokes both ancient Greek mythology and near-future science fiction. Meander finds herself destitute in the Underworld with no way to finance escape other than offering her biological potential for surrogate pregnancy up to dubious experimentation in an underground facility. Meanwhile, filmmaker/narrator Lexi Bass recounts her experiences of pregnancy and motherhood at the precipice of age 40, and the loss of her own mother shortly after, questioning the future of humanity at the precipice of Artificial General Intelligence and Artificial Super-Intelligence.

For more and to connect with us, visit https://www.artsconnectlex.org/art-throb-podcast.html

In 20xx Scifi and Futurism
In 2054 The Colony that Starts UBI

Oct 15, 2024 · 59:33


Safe underground, a colony of 9,000 are cared for by robots and automated services. But people are agitated. Youth run wild. How will the citizens adjust to free necessities but little else? Questions of post-work, UBI, and life purpose arise. Artificial Super Intelligence may save them. A combination of capitalism and needs met for all may save them.

Technology mentioned in this story:
  • T-Line: an underground train system
  • Robots: various types for construction, cleaning, and other tasks
  • Artificial Intelligence (AI): used for various purposes
  • Augmented Reality (AR) glasses
  • Virtual Reality (VR) systems
  • Lutin Bots: advanced robotic assistants (Lutin One and Lutin Two models)
  • Robot baby (Taylor): used for data collection on parenting
  • Hooded tunics with cooling and air filtering systems
  • Gyro clogs: some type of self-balancing footwear
  • Induction stoves
  • Geothermal power plant
  • Aquaponic systems
  • Central cooling systems
  • Local internet
  • Artificial Superintelligence (ASI) system
  • Gene therapy for addiction treatment
  • Builder AI: for planning and constructing flood tunnels
  • Bone-mounted AR glasses
  • All-sensor night vision technology
  • Phage cream: possibly for protection against pathogens
  • Online education services and AI teachers
  • VR classrooms and schools
  • Digital currency systems
  • Advanced medical technology (e.g., growing new skin)
  • VR windows for wall mounting

Many of the characters in this project appear in future episodes. Using storytelling to place you in a time period, this series takes you, year by year, into the future, from 2040 to 2195. If you like emerging tech, eco-tech, futurism, perma-culture, apocalyptic survival scenarios, and disruptive science, sit back and enjoy short stories that showcase my research into how the future may play out. This is Episode 59 of the podcast "In 20xx Scifi and Futurism." The companion site is https://in20xx.com where you can find a timeline of the future, descriptions of future development, and printed fiction. These are works of fiction. Characters and groups are made-up and influenced by current events but not reporting facts about people or groups in the real world.

Copyright © Leon Horn 2024. All rights reserved.

Generative Now | AI Builders on Creating the Future
PART 2: Generative Quarterly with Semil Shah | ASI, AI Agents and The Future of Work

Oct 10, 2024 · 21:10


This week, we are back with part two of Generative Quarterly with Semil Shah and Lightspeed Partner and host Michael Mignano. Semil is a founding General Partner of Haystack and a Venture Partner at Lightspeed. Semil and Mike pick up their conversation on consumer AI technology, starting with innovative consumer tech like Friend AI by Avi Schiffmann. Mike and Semil consider the impact of Artificial Super Intelligence on the future of work, debate the future evolution of software on demand, and ask whether we need AI agents to help us solve our boredom.

Episode Chapters:
(00:00) Introduction
(00:31) Consumer AI Tech
(03:05) Autonomous AI Agents Versus Copilots
(04:19) Matt Levine: Robots Make Good AI Junior Analysts
(05:53) Future of Training Entry Level Consultants
(07:55) Artificial Super Intelligence as a Drop-in Coworker
(09:38) Will We Have Our Own Agentic Consultants?
(11:50) Software On Demand
(16:32) AI Generated Music and Content
(20:31) Conclusion

Stay in touch:
  • www.lsvp.com
  • X: https://twitter.com/lightspeedvp
  • LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
  • Instagram: https://www.instagram.com/lightspeedventurepartners/
  • Subscribe on your favorite podcast app: generativenow.co
  • Email: generativenow@lsvp.com

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 365: Sam Altman's New Take on Superintelligence: What it means and are we that close?

Sep 24, 2024 · 43:31


Send Everyday AI and Jordan a text message.

Are we really "a few thousand" days from Superintelligence? Also, what the heck does Superintelligence even mean? We're breaking down the latest hot takes from Sam Altman and simplifying superintelligence.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Definition and Discussion of AI Levels
2. Defining Superintelligence and Its Implications
3. Major Historical Transitions and Their Relevance to AI
4. Review and Analysis of Sam Altman's Perspective
5. Sam Altman's Influence on AI

Timestamps:
00:00 Are we thousands of days from superintelligence?
04:12 AI advancements amplify human capabilities and innovation.
09:06 Superintelligence theoretical; AGI practical; Altman's rising influence.
10:02 Sam Altman became globally significant in AI.
15:40 Share your predictions about utopian or dystopian futures.
19:55 Generative AI creates multimodal outputs from inputs.
22:06 AI systems excel in narrow, specific tasks.
25:45 I'd choose large language models for problem solving.
29:00 WorkLab podcast: insights for evolving work leaders.
32:03 AI likened to internet's skeptical early reception.
34:10 Is AI progress genuine or just marketing?
38:08 Advancing AI redefines AGI achievement; ASI unclear.
40:57 Superintelligence may be achieved within our lifetime.

Keywords: Jordan Wilson, advancements in AI, superintelligence, safe superintelligence, everyday AI, everydayai.com, generative AI, ChatGPT, Artificial Narrow Intelligence, ANI, Artificial General Intelligence, AGI, Artificial Superintelligence, ASI, WorkLab Podcast, Microsoft, historical periods, technological transitions, Sam Altman, Intelligence Age, AI in business, OpenAI, skepticism about AI, AI definitions, timeline for superintelligence, utopian superintelligence, dystopian superintelligence, audience interaction, superintelligence debate, AI integration

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 340: When Will We Achieve AGI? One secret aspect holding us back.

Aug 20, 2024 · 54:11


Send Everyday AI and Jordan a text message.

Win a free year of ChatGPT or other prizes! Find out how.

It's the trillion dollar AI question. When will we achieve Artificial General Intelligence? (And what the heck is it, anyway?) We'll give you the 101 on what you need to know, and one secret that could be holding the official discovery back.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AGI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Definition of AI, AGI and ASI
2. Evolution of AGI
3. Impacts of Advancements in AGI
4. Future of AGI

Timestamps:
02:00 Daily AI news
06:15 When is AGI coming?
09:54 Machines performing tasks requiring human intelligence.
13:10 Generative AI brings impressive outputs through language.
14:04 Generative AI democratizes US AI capabilities; narrow compared.
19:00 Correct prompting with PPP course at podpp.com.
20:52 Big tech companies openly working toward AGI now.
25:37 AGI development inevitable, desirable, surpassing human capabilities.
28:16 Pre-2020, experts said AGI was 80 years away.
31:25 Criticism of experts in generative AI misunderstandings.
33:49 Has AI's definition of AGI changed?
37:21 Definition of AGI has evolved over time.
40:45 Partnership between Microsoft and OpenAI pivotal.
45:21 OpenAI benefits from important Microsoft partnership changes.
47:08 Tech companies must focus on AGI development.
52:37 Future work, business, career with AI impact.

Keywords: AGI, Artificial General Intelligence, OpenAI, Microsoft, partnership, AI development, AI startup, Anthropic, copyright infringement, Google's Gemini team, Token offering, AI models, USAID, ChatGPT Enterprise, AI pace estimation, AI evolution, Artificial Superintelligence, AI prediction chart, ARK Invest, GPT 3 technology, Traditional AI, Generative AI, AI democratization, AGI benchmark, ChatGPT course, AI startups, Big tech companies, AGI cost reduction, Future of work, Everyday AI podcast.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

Clearer Thinking with Spencer Greenberg
Concrete actions anyone can take to help improve AI safety (with Kat Woods)

Jul 3, 2024 · 60:21


Why should we consider slowing AI development? Could we slow down AI development even if we wanted to? What is a "minimum viable x-risk"? What are some of the more plausible, less Hollywood-esque risks from AI? Even if an AI could destroy us all, why would it want to do so? What are some analogous cases where we slowed the development of a specific technology? And how did they turn out? What are some reasonable, feasible regulations that could be implemented to slow AI development? If an AI becomes smarter than humans, wouldn't it also be wiser than humans and therefore more likely to know what we need and want and less likely to destroy us? Is it easier to control a more intelligent AI or a less intelligent one? Why do we struggle so much to define utopia? What can the average person do to encourage safe and ethical development of AI?

Kat Woods is a serial charity entrepreneur who's founded four effective altruist charities. She runs Nonlinear, an AI safety charity. Prior to starting Nonlinear, she co-founded Charity Entrepreneurship, a charity incubator that has launched dozens of charities in global poverty and animal rights. Prior to that, she co-founded Charity Science Health, which helped vaccinate 200,000+ children in India and, according to GiveWell's estimates at the time, was similarly cost-effective to AMF. You can follow her on Twitter at @kat__woods; you can read her EA writing here and here; and you can read her personal blog here.

Further reading:
  • Robert Miles AI Safety @ YouTube
  • "The AI Revolution: The Road to Superintelligence", by Tim Urban
  • Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World, by Darren McKee
  • The Nonlinear Network
  • PauseAI
  • Dan Hendrycks @ Manifund (AI regrantor)
  • Adam Gleave @ Manifund (AI regrantor)

Staff:
  • Spencer Greenberg: Host / Director
  • Josh Castle: Producer
  • Ryan Kessler: Audio Engineer
  • Uri Bram: Factotum

Music:
  • Broke for Free
  • Josh Woodward
  • Lee Rosevere
  • Quiet Music for Tiny Robots
  • wowamusic
  • zapsplat.com

Affiliates:
  • Clearer Thinking
  • GuidedTrack
  • Mind Ease
  • Positly
  • UpLift

Tech Update | BNR
Artificial Super Intelligence (ASI) is set to be the big bet for SoftBank and Arm

Tech Update | BNR

Play Episode Listen Later Jun 21, 2024 5:32


Artificial Super Intelligence is the new focus of tech investor SoftBank, if CEO Masayoshi Son has his way. Joe van Burik reports in this Tech Update. Is Artificial Super Intelligence already the new buzzword in tech? Earlier this week, departed OpenAI co-founder Ilya Sutskever also invoked the term when he set up Safe Superintelligence Inc. At SoftBank, known for major investments in Alibaba and, more recently, the flopped WeWork, it is now the new focus as well. CEO Masayoshi Son said as much during an event around Arm, the chip designer largely owned by SoftBank, which delivered last year's big IPO at a valuation of over 50 billion dollars. That has clearly inspired Son, who now wants to be the one to make 'A S I' happen. Concretely, this is to be done through a 100-billion-dollar plan centered on chips that enable AI, which would rely on Arm designs, as Bloomberg reported this spring under the project name Izanagi. Also in this Tech Update: TikTok claims the US government is not seriously considering alternatives to the ban (if the app is not acquired by an American company), according to court filings. Games giant Embracer wants to deploy AI extensively in game development, raising serious concerns among game makers themselves. See omnystudio.com/listener for privacy information.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 296: First Ever - AI clone tries to stump human counterpart live on AI

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jun 18, 2024 47:40


Send Everyday AI and Jordan a text message. Enter to win a FREE Custom Avatar from Hour One as part of their #HourOneChallenge - go find out more here. For the first time ever... (as far as our research shows) this will be the first time an AI clone interviews its human counterpart live. Is my AI clone smarter than me? Will I crumble under the pressure?
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Related Episodes: Ep 258: Will AI Take Our Jobs? Our answer might surprise you. Ep 200: 200 Facts, Stats, and Hot Takes About GenAI – Celebrating 200 Episodes
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode: 1. Potential Impact of AI on Careers 2. Leaders in AI Technology 3. Discussion on Global AI Innovation 4. Privacy and Security Concerns with AI 5. Creative Potential of AI 6. AI Integration into Society
Timestamps: 02:25 Daily AI news | 07:55 OpenAI excels as business system, surpasses others | 12:15 Train AI, use tools, future of creativity | 16:00 Traditional web search outdated, replaced by alternatives | 16:40 Internet use becoming unbearable, predicts shift to AI | 21:49 Altman and Huang predict AGI within 5 years | 25:14 NVIDIA's rise to most valuable company predicted | 28:30 AI will automate and oversee business decisions | 32:27 Hour One offers AI avatar video technology | 36:18 Personal data privacy and security concerns with AI | 38:57 Companies need humans to weed out bias | 43:00 Flying cars may flop at first, but future potential | 44:35 AI technology may replace human work gradually
Keywords: Jordan Wilson, AI technology, career disruption, Artificial General Intelligence, Artificial Superintelligence, NVIDIA, Amazon, AI digital avatar, generative AI, McDonald's, IBM, AI drive-through technology, Google DeepMind, v2a model, Runway Gen 3 Alpha, AI video creator, OpenAI, ChatGPT-4o, AI integration into workforce, future of work, AI as co-workers or bosses, US AI innovation, China AI competition, Hour One communication technology, AI companionship industry, data privacy, AI bias and stereotypes, self-replicating AI, humanoid robots, AI in day-to-day life.

21st Talks
#22 - Superintelligence, AI, and Extinction with Darren McKee

21st Talks

Play Episode Listen Later May 3, 2024 86:51


On this episode of the podcast, Coleman sits down with Darren McKee to discuss his book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. The two discuss the central case for concern surrounding AI risk, deepfakes, and Darren's approach to understanding the rapidly changing AI risk landscape. --- Support this podcast: https://podcasters.spotify.com/pod/show/xriskpodcast/support

Thinking Crypto Interviews & News
Humayun Sheikh Interview - Fetch.ai, SingularityNET & Ocean Protocol Merge to Artificial Superintelligence Alliance (ASI Token)

Thinking Crypto Interviews & News

Play Episode Listen Later Apr 24, 2024 53:22


Humayun Sheikh is the CEO & Founder of Fetch AI and the chairman of the Artificial Superintelligence Alliance. We discuss: Fetch AI and its mission; the merging of Fetch AI, SingularityNET & Ocean Protocol to form the Artificial Superintelligence Alliance; how the new ASI token will work; the combination of AI and blockchain; AI's impact on society; and the crypto market outlook.

Daily Crypto Report
Fetch, SingularityNET, Ocean propose "Artificial Superintelligence Alliance" Mar 27, 2024

Daily Crypto Report

Play Episode Listen Later Mar 27, 2024 4:56


Bitcoin is up 0.5% at $70,288. ETH is up 0.5% at $3,588. Binance Coin is up slightly at $579. Those are your leaders by market cap. Top gainers in the last 24 hours: Mantle, up 40%. Coinbase to store more USDC on BASE. Fetch.ai, SingularityNET, and Ocean Protocol propose a merger to create the "Artificial Superintelligence Alliance". HSBC introduces tokenized gold in Hong Kong. KuCoin sees big outflows in response to DOJ action. 0G Labs raises $35M. Learn more about your ad choices. Visit megaphone.fm/adchoices

Voices of VR Podcast – Designing for Virtual Reality
#1359: Landmark Anthropological Field Study of VR with “In the Land of the Unreal” author Lisa Messeri

Voices of VR Podcast – Designing for Virtual Reality

Play Episode Listen Later Mar 7, 2024 103:27


Yale anthropologist Lisa Messeri spent a year in 2018 doing fieldwork in Los Angeles studying the political ecology of the VR community, and will be releasing her landmark book In the Land of the Unreal: Virtual and Other Realities in Los Angeles on Friday, March 8th. It's the best book about the culture of VR that I've read so far, pulling in many insights from Science and Technology Studies (STS), anthropology, the social sciences, sci-fi, pop culture, and philosophy. Making claims about reality is daunting for any working scholar in the 21st century, and Messeri uses the feeling of "unreality" as an analytical tool to analyze not only virtual reality, but also the fracturing nature of our political context, and the unreality of Los Angeles as the factory of dreams, with façade-like architecture that blurs the boundary between what's deeply real and what's surface scaffolding just convincing enough to transport you into another reality. Messeri uses the framing of fantasy to interrogate a number of claims made by the VR community circa 2018. Fantasy, by her definition, can include positive aspirational dreams, but these can also turn out to be deluded illusions. I personally prefer the phrase "potential", since it is a bit more neutral for me and includes both the promising positive potentials and the more perilous negative ones. She splits her book into three parts. The first, the Fantasy of Location, explores the unreality of Los Angeles as well as how VR transports you into another world, per Mel Slater's place illusion. The second part, the Fantasy of Being, deconstructs VR as the ultimate empathy machine, per Chris Milk's infamous 2015 TED Talk. The third part explores the Fantasy of Representation, with the aspirations of the LA VR community to create a more diverse and equitable ecosystem that transcends the bias and power dynamics of Silicon Valley. 
In each of these three sections, Messeri uses case studies and follows specific individuals over time to see whether some of these aspirations and potentials end up becoming grounded in physical reality, or whether they collapse into more deluded illusions. I was inspired to dig into my backlog of 800+ unpublished Voices of VR podcast episodes to publish some interviews I conducted between 2017 and 2019 featuring some of the main characters and protagonists in Messeri's book:
Marci Jastrow is featured in Chapter 3, letting Messeri become a scholar-in-residence at the Technicolor Experience Center.
Carrie Shaw of Embodied Labs is featured in Chapter 5, and radically opens up her business to Messeri to study.
Jackie Morie is featured in Chapter 6, as Messeri deconstructs some of the gender-essentialist claims that VR is a medium that's a natural fit for women.
And Joanna Popper is featured in Chapter 7, as Messeri breaks down the unique pathways into emerging technology that she noted as an interesting trend from an anthropological perspective.
I had a chance to read through an advance copy of In the Land of the Unreal: Virtual and Other Realities in Los Angeles, and it has already started to make a huge impact on the way I think about the many dimensions of unreality in our present-day realities, ranging from the surreal experiences of VR presence to the fractured reality bubbles of our political discourse to the ways in which techno-utopian solutionism can shape the philosophies driving how technologies like AI are developed, aspiring toward speculations of Artificial General Intelligence or Artificial Superintelligence. I even started applying Messeri's unreality analytic to make sense of some of what Alvin Wang Graylin was saying in our discussion about Our Next Reality. 
I said, "I found myself in this kind of unreality of a potential imaginal future of this post-scarcity, post-labor context where all of our problems have been solved,

Voices of VR Podcast – Designing for Virtual Reality
#1353: “Our Next Reality” Book Debates Future of XR + AI, and Speculations of Superintelligence Promises & Perils

Voices of VR Podcast – Designing for Virtual Reality

Play Episode Listen Later Mar 3, 2024 110:00


The book Our Next Reality: How the AI-powered Metaverse Will Reshape the World is structured as a debate between Alvin Wang Graylin and Louis Rosenberg, who each have over 30 years of experience in XR and AI. Graylin embodies the eternal optimist, leaning towards techno-utopian views, while Rosenberg voices the more skeptical perspective, leaning towards cautious optimism while acknowledging the privacy hazards, the control and alignment risks, and the ethical and moral dilemmas. The book is strongest when it speaks about the near-term implications of how AI will impact XR in specific contexts, but starts to go off the rails for me when the authors explore the more distant-future implications of Artificial Superintelligence at the economic and political scales of society. At the same time, both sides acknowledge the positive and negative potential futures, and that neither path is guaranteed, as it will be up to the tech companies, governments, and broader society which path we go down. What I really appreciated about the book is that both Graylin and Rosenberg reference many personal examples and anecdotes around the intersection of XR and AI from their three decades of experience working with emerging technologies. Even though the book is structured as a debate, they both agree on some fundamental premises: that the Metaverse (or rather spatial computing, XR, or mixed reality) is inevitable, and that AI has been and will continue to be a critical catalyst for its growth and evolution. They also wholeheartedly agree that it is only a matter of time before we achieve either Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), but they differ on the implications of these technologies. 
Graylin believes that ASI has the potential to lead humanity into a post-labor, post-scarcity, techno-utopian future in which all of humanity has willingly ceded cultural, political, and economic control to our ASI overlords, who become perfectly rationally-driven philosopher kings yet still see humans as their ancestors, via an uncharacteristically anthropomorphized emotional connection of compassionate affinity. Rosenberg dismisses as wishful thinking the idea that humans would be able to exert any control over ASI, or that ASI would be anything other than cold-hearted, calculating, ruthless, and unpredictably alien. Rosenberg also cautions that humanity could be headed towards cultural stagnation if the production of all art, media, music, and creative endeavors is ceded to ASI, and that unaligned and self-directed ASI could be more dangerous than nuclear weapons. Graylin acknowledges the duality of possible futures within the context of this interview, but tends to be biased towards the more optimistic future within the actual book. There is also a specific undercurrent of ideas and philosophies about AI woven throughout Graylin and Rosenberg's book. Philosopher and historian Dr. Émile P. Torres, in collaboration with AI ethicist Dr. Timnit Gebru, has coined the acronym "TESCREAL", which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Torres wrote an article in Truthdig elaborating on how this interconnected bundle of TESCREAL ideologies underpins many of the debates about ASI and AGI (with links included in the original quote): At the heart of TESCREALism is a "techno-utopian" vision of the future. 
It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling “post-human” civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.

The Jim Rutt Show
EP 222 Trent McConaghy on AI & Brain-Computer Interface Accelerationism (bci/acc)

The Jim Rutt Show

Play Episode Listen Later Feb 7, 2024 68:11


Jim talks with Trent McConaghy about the ideas in his recent essay "bci/acc: A Pragmatic Path to Compete with Artificial Superintelligence." They discuss the meaning of BCI (brain-computer interfaces) and acc (accelerationism), categories of AI, how much room there is for above-human intelligence, whether AI is achieving parallelism, the risks of artificial superintelligence (ASI), problems with deceleration, AI intelligences balancing each other, decentralized approaches to AI, problems with the "pull the plug" idea, humans as the weak security link, the silicon Midas touch, competing with AI using BCIs, the need for super-high bandwidth, the noninvasive road to BCIs, realistic killer apps, eye tracking, pragmatic telepathy, subvocalization, reaching adoption-level quality, the arc between noninvasive and full silicon, near-infrared sensors, issues around mass adoption of implants, maintaining cognitive liberty, the risk of giving malevolent ASIs the keys to the kingdom, whether humans plus ASIs might compete with ASIs, and much more. Episode Transcript JRS EP13 - Blockchain, AI, and DAOs "bci/acc: A Pragmatic Path to Compete with Artificial Superintelligence," by Trent McConaghy Ocean Protocol "Nature 2.0: The Cradle of Civilization Gets an Upgrade," by Trent McConaghy Trent McConaghy on Twitter Trent McConaghy is founder of Ocean Protocol. He has 25 years of deep tech experience with a focus on AI and blockchain. He co-founded Analog Design Automation Inc. in 1999, which built AI-powered tools for creative circuit design. It was acquired by Synopsys in 2004. He co-founded Solido Design Automation in 2004, using AI to mitigate process variation and help drive Moore's Law. Solido was later acquired by Siemens. He then went on to launch ascribe in 2013 for NFTs on Bitcoin, then Ocean Protocol in 2017 for decentralized data markets for AI. He currently focuses on Ocean Predictoor for crowd-sourced AI prediction feeds.

Bob Enyart Live
Artificial Intelligence 2024 & Beyond with Daniel Hedrick

Bob Enyart Live

Play Episode Listen Later Feb 3, 2024


*A.G.I., A.S.I. & Y.O.U.: Hear Daniel's thoughts on the existence of Artificial General Intelligence and Artificial Super Intelligence, and the odds an AI bot will be sitting at your desk one morning anytime soon. *The Power Problem: The power necessary to run artificial intelligence computers is so enormous that it appears it may only be solved by integrating biological systems similar to the ones God made (and He made them using real intelligence). *Father Knows Best? Who is credited as the "father" of Artificial Intelligence? Of course it's Alan Turing. (And A.I. was all he fathered, being a self-professed atheist and convicted pervert.) Turing is best known as a code breaker at Bletchley Park in England who, along with at least 10,000 nearly forgotten others, and aided by the brave capture of code books and Enigma machines from German U-boats in combat, helped the Allies defeat the Axis in WWII. But does either Alan, or A.I., actually know best? *First Principles (& Teachers): The Bible warns those teaching AI how to "think": "...be not many of you teachers, knowing that we shall receive the greater condemnation." *Worm GPT & AI Warfare: From silly poetry to blowing up the world, AI has a little something for everyone... *AI, the FBI, Elections, Deep Fakes, Etc.: Hear all about the potential future of AI systems, democracy, and how we might bring back so much of the past!

Real Science Radio
Artificial Intelligence 2024 & Beyond with Daniel Hedrick

Real Science Radio

Play Episode Listen Later Feb 3, 2024


*A.G.I., A.S.I. & Y.O.U.: Hear Daniel's thoughts on the existence of Artificial General Intelligence and Artificial Super Intelligence, and the odds an AI bot will be sitting at your desk one morning anytime soon. *The Power Problem: The power necessary to run artificial intelligence computers is so enormous that it appears it may only be solved by integrating biological systems similar to the ones God made (and He made them using real intelligence). *Father Knows Best? Who is credited as the "father" of Artificial Intelligence? Of course it's Alan Turing. (And A.I. was all he fathered, being a self-professed atheist and convicted pervert.) Turing is best known as a code breaker at Bletchley Park in England who, along with at least 10,000 nearly forgotten others, and aided by the brave capture of code books and Enigma machines from German U-boats in combat, helped the Allies defeat the Axis in WWII. But does either Alan, or A.I., actually know best? *First Principles (& Teachers): The Bible warns those teaching AI how to "think": "...be not many of you teachers, knowing that we shall receive the greater condemnation." *Worm GPT & AI Warfare: From silly poetry to blowing up the world, AI has a little something for everyone... *AI, the FBI, Elections, Deep Fakes, Etc.: Hear all about the potential future of AI systems, democracy, and how we might bring back so much of the past!

For Humanity: An AI Safety Podcast
"Uncontrollable AI" For Humanity: An AI Safety Podcast, Episode #13 , Darren McKee Interview

For Humanity: An AI Safety Podcast

Play Episode Listen Later Jan 30, 2024 100:08


In Episode #13, “Uncontrollable AI,” John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. Darren starts off on an optimistic note by saying AI safety is winning. You don't often hear it, but Darren says the world has moved on AI safety with greater speed, focus, and real promise than most in the AI community had thought possible. Apologies for the laggy cam on Darren! Darren's book is an excellent resource; like this podcast, it is intended for the general public. This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. Resources: Darren's Book https://www.amazon.com/Uncontrollable... My Dad's Favorite Messiah Recording (3:22-6:55 only lol!!) https://www.youtube.com/watch?v=lFjQ7... Sample letter/email to an elected official: Dear XXXX- I'm a constituent of yours, I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history, nothing else is close. Have you read the 22-word statement from the Future of Life Institute on 5/31/23 that Sam Altman and all the big AI CEOs signed? 
It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them? Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo. It's like a pharma company saying it has a drug that can cure all diseases, but the drug hasn't been through any clinical trials and may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply. Big AI is making tech they openly admit they cannot control, do not understand how it works, and that could kill us all. Their resources are 99:1 toward making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation. I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations above a certain level. I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible? Thanks very much. XXXXXX Address Phone

Contra Radio Network
The Jeffers Brief for 29 Jan 2024

Contra Radio Network

Play Episode Listen Later Jan 29, 2024 75:42


This episode should be of special interest to anyone who is human. There is only one topic for this episode: A.I. and A.S.I., Artificial Intelligence and Artificial Super Intelligence. I am not trying to scare you, only to prepare you for what the future holds for us.

For Humanity: An AI Safety Podcast
"Uncontrollable AI" For Humanity: An AI Safety, Podcast Episode #13, Author Darren McKee Interview

For Humanity: An AI Safety Podcast

Play Episode Listen Later Jan 29, 2024 1:55


In the Episode #13 “Uncontrollable AI” TRAILER, John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. In this trailer, Darren starts off on an optimistic note by saying AI safety is winning. You don't often hear it, but Darren says the world has moved on AI safety with greater speed, focus, and real promise than most in the AI community had thought possible. Darren's book is an excellent resource; like this podcast, it is intended for the general public. This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. Resources: Darren's Book

London Futurists
What is your p(doom)? with Darren McKee

London Futurists

Play Episode Listen Later Jan 18, 2024 42:17


In this episode, our subject is Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. That's a new book on a vitally important subject. The book's front cover carries this endorsement from Professor Max Tegmark of MIT: “A captivating, balanced and remarkably up-to-date book on the most important issue of our time.” There's also high praise from William MacAskill, Professor of Philosophy at the University of Oxford: “The most accessible and engaging introduction to the risks of AI that I've read.” Calum and David had lots of questions ready to put to the book's author, Darren McKee, who joined the recording from Ottawa in Canada. Topics covered included Darren's estimates for when artificial superintelligence is 50% likely to exist, and his p(doom), that is, the likelihood that superintelligence will prove catastrophic for humanity. There are also Darren's recommendations on the principles and actions needed to reduce that likelihood.
Selected follow-ups:
Darren McKee's website
The book Uncontrollable
Darren's podcast The Reality Check
The Lazarus Heist on BBC Sounds
The Chair's Summary of the AI Safety Summit at Bletchley Park
The Statement on AI Risk by the Center for AI Safety
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Minding the Brain
#69 AI & Existential Risk

Minding the Brain

Play Episode Listen Later Jan 1, 2024 35:12


Jim interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. What are the long-term dangers of [...]

The Reality Check
TRC #679: A Chat With Author Darren McKee: Uncontrollable

The Reality Check

Play Episode Listen Later Dec 23, 2023 30:33


The crew gets to interview TRC's very own Darren McKee, author of the critically acclaimed book 'Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World'. We chat about the challenges of writing a first book, some of the key takeaways, and have a few laughs along the way.

AI For Humans
Google & OpenAI Prep For Superintelligence, New AI Tools & Twitch Streamer Gina Darling | Ep36

AI For Humans

Play Episode Listen Later Dec 21, 2023 79:13


This week… OpenAI lays out safety measures for dealing with Artificial Super Intelligence, Google DeepMind solved a previously impossible math problem & then we made Guy Fieri cartoons with Domo's AI animation software. These are all of equal importance!  Plus, Gavin dove into Digi.AI, a new AI “companion” app, ChatGPT turned a Chevy dealership's chatbot into a hilarious nightmare & Google Labs has some incredibly cool new music tools you can play with right now. AND THEN… It's an A4H Interview with Twitch streamer & podcaster Gina Darling, whom Kevin got to know well at G4. We talk about AI companionship, get AI to help her buy gifts for her boyfriend's parents, and introduce her to AI Gina Darling (surprise!) Oh, and don't forget our AI co-host this week: we're actually visited by AI Santa Claus and his lil head elf Max. Santa tells us how they're using AI to automate the North Pole but, unfortunately, he forgot to tell Max and the rest of the elves.  It's an endless cavalcade of ridiculous and informative AI news, AI tools, and AI entertainment cooked up just for you. Follow us for more AI discussions, AI news updates, and AI tool reviews on X @AIForHumansShow Join our vibrant community on TikTok @aiforhumansshow For more info, visit our website at https://www.aiforhumans.show/ /// Show links /// New Preparedness Team at OpenAI https://openai.com/safety/preparedness Google DeepMind Does New Math https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set/ Finals Dev Talks AI https://www.gamedeveloper.com/audio/embark-studios-ai-let-devs-do-more-with-less-when-making-the-finals GPT-4.5? 
Nah https://twitter.com/AiBreakfast/status/1736392167906574634?s=20 https://x.com/rowancheung/status/1736616840510533830?s=20 Sentient Chevy Bot https://twitter.com/ChrisJBakke/status/1736533308849443121 https://www.autoevolution.com/news/chatgpt-powered-customer-support-at-chevrolet-dealership-hilariously-recommended-tesla-226253.html Google Labs Music FX https://aitestkitchen.withgoogle.com/tools/music-fx Domo AI Discord https://discord.com/invite/domoai Digi.AI AI Companion App https://digi.ai/ Gina Darling @GinaDarlingChannel https://www.twitch.tv/missginadarling The Spill It Podcast: https://www.youtube.com/@ShowBobas  

Audio Mises Wire
Can Government Regulate Artificial Super Intelligence?

Audio Mises Wire

Play Episode Listen Later Dec 8, 2023


The Biden administration claims it wants to get out in front of the development of artificial intelligence. However, the likely scenario is that AI will leave government regulators in its wake. Original Article: Can Government Regulate Artificial Super Intelligence?

Mises Media
Can Government Regulate Artificial Super Intelligence? | George Ford Smith

Mises Media

Play Episode Listen Later Dec 8, 2023 4:57


The Biden administration claims it wants to get out in front of the development of artificial intelligence. However, the likely scenario is that AI will leave government regulators in its wake. Narrated by Millian Quinteros.

Mises Media
Can Government Regulate Artificial Super Intelligence?

Mises Media

Play Episode Listen Later Dec 8, 2023


The Biden administration claims it wants to get out in front of the development of artificial intelligence. However, the likely scenario is that AI will leave government regulators in its wake. Original Article: Can Government Regulate Artificial Super Intelligence?

Audio Mises Wire
Can Government Regulate Artificial Super Intelligence?

Audio Mises Wire

Play Episode Listen Later Dec 7, 2023


The Biden administration claims it wants to get out in front of the development of artificial intelligence. However, the likely scenario is that AI will leave government regulators in its wake. Original Article: Can Government Regulate Artificial Super Intelligence?

Clearer Thinking with Spencer Greenberg
We can't mitigate AI risks we've never imagined (with Darren McKee)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Dec 6, 2023 71:54


How can we find and expand the limitations of our imaginations, especially with respect to possible futures for humanity? What sorts of existential threats have we not yet even imagined? Why is there a failure of imagination among the general populace about AI safety? How can we make better decisions under uncertainty and avoid decision paralysis? What kinds of tribes have been forming lately within AI fields? What are the differences between alignment and control in AI safety? What do people most commonly misunderstand about AI safety? Why can't we just turn a rogue AI off? What threats from AI are unique in human history? What can the average person do to help mitigate AI risks? What are the best ways to communicate AI risks to the general populace? Darren McKee (MSc, MPA) is the author of the just-released Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. He is a speaker and sits on the Board of Advisors for AIGS Canada, the leading safety and governance network in the country. McKee also hosts the international award-winning podcast, The Reality Check, a top 0.5% podcast on Listen Notes with over 4.5 million downloads. Learn more about him on his website, darrenmckee.info, or follow him on X / Twitter at @dbcmckee. Staff Spencer Greenberg — Host / Director Josh Castle — Producer Ryan Kessler — Audio Engineer Uri Bram — Factotum WeAmplify — Transcriptionists Miles Kestran — Marketing Music Lee Rosevere Josh Woodward Broke for Free zapsplat.com wowamusic Quiet Music for Tiny Robots Affiliates Clearer Thinking GuidedTrack Mind Ease Positly UpLift

The Reality Check
TRC #678: Darren's AI Book is Out!

The Reality Check

Play Episode Listen Later Nov 26, 2023 6:25


Wondering what the heck is going on with AI? Why are some people so concerned? Darren's new beginner-friendly book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World addresses exactly those questions. In an engaging and easy-to-read style, it explores the promise and peril of advanced AI, why it might be a threat, and what we can do about it. No technical or science background required! Available on Amazon US, Amazon Canada, and many other Amazon marketplaces as well.

The Conspiracy Skeptic
Conspiracy Skeptic Episode 103 - The Risks and Rewards of AI with Darren McKee

The Conspiracy Skeptic

Play Episode Listen Later Nov 25, 2023


If that voice sounds familiar, it is because Darren McKee is the regular intro voice of Canada's long-running and well-regarded Reality Check podcast. When he's not podcasting, he is a policy advisor in Ottawa. Darren has recently published a very timely book on AI called Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World (US Amazon link). We talk about his book and AI's past, present, and possible futures. Is AI like a more efficient tractor that will destroy some jobs but create more? Or is it something... else?

The Nonlinear Library
EA - Announcing New Beginner-friendly Book on AI Safety and Risk by Darren McKee

The Nonlinear Library

Play Episode Listen Later Nov 25, 2023 1:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing New Beginner-friendly Book on AI Safety and Risk, published by Darren McKee on November 25, 2023 on The Effective Altruism Forum. Concisely, I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. It's an engaging introduction to the main issues and arguments about AI safety and risk. Clarity and accessibility were prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy and others. The main argument is that AI capabilities are increasing rapidly and that we may not be able to fully align or control advanced AI systems, which creates risk. There is great uncertainty, so we should be prudent and act now to ensure AI is developed safely. It tries to be hopeful. Why does it exist? There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book entirely dedicated to the AI safety issue that is written for those without any exposure to the issue. (Including those with no science background.) This book is meant to fill that gap and could be useful as outreach or introductory material. If you have already been following the AI safety issue, there likely isn't a lot that is new for you. So, this might be best seen as something useful for friends, relatives, some policy makers, or others just learning about the issue. (although you may still like the framing) It's available on numerous Amazon marketplaces. Audiobook and Hardcover options to follow. It was a hard journey. I hope it is of value to the community. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Reality Check
TRC #673: UFO Whistleblower? + Will We Make Great Pets For An Artificial Super Intelligence?

The Reality Check

Play Episode Listen Later Aug 9, 2023 30:52


Adam digs into recent headlines about UFOs and whistleblower David Grusch. Does this give any promising evidence that aliens may be among us? Next, Darren ponders how our future AI overlords might treat us. Assuming they don't eliminate us, will we make great pets? It's a new TRC!

SuperDataScience
697: The (Short) Path to Artificial General Intelligence, with Dr. Ben Goertzel

SuperDataScience

Play Episode Listen Later Jul 18, 2023 87:12


AI visionary and CEO of SingularityNET Dr. Ben Goertzel provides a deep dive into the possible realization of Artificial General Intelligence (AGI) within 3-7 years. Explore the intriguing connections between self-awareness, consciousness, and the future of Artificial Super Intelligence (ASI) and discover the transformative societal changes that could arise. This episode is brought to you by AWS Inferentia (https://go.aws/3zWS0au), the AWS Insiders Podcast (https://pod.link/1608453414), and by Modelbit (https://modelbit.com), for deploying models in seconds. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information. In this episode you will learn:
• Decentralized and benevolent AGI [03:13]
• The SingularityNET ecosystem [13:10]
• Dr. Goertzel's vision for realizing AGI - combining DL with neuro-symbolic systems, genetic algorithms and knowledge graphs [25:50]
• How reaching AGI will trigger Artificial Super Intelligence [38:51]
• Dr. Goertzel's approach to AGI using OpenCog Hyperon [42:34]
• Why Dr. Goertzel believes AGI will be positive for humankind [53:07]
• How to ensure the AGI is benevolent [1:06:43]
• How AGI or ASI may act ethically [1:13:50]
Additional materials: www.superdatascience.com/697

Today in Focus
How to develop artificial super-intelligence without destroying humanity

Today in Focus

Play Episode Listen Later Jun 7, 2023 33:35


Sam Altman, the CEO of OpenAI, the company behind the revolutionary application ChatGPT, is touring Europe with a message: AI is changing the world and there are big risks, but also big potential rewards. Help support our independent journalism at theguardian.com/infocus

Making Sense with Sam Harris
Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

Making Sense with Sam Harris

Play Episode Listen Later Nov 22, 2022 67:53


Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating. In this episode, we explore the landscape of Artificial Intelligence. We'll listen in on Sam's conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We'll then be introduced to philosopher Nick Bostrom's “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We'll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We'll then touch the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we're building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.