Hypothetical immensely superhuman agent
Last week, several Nobel laureates and high-profile celebrities cautioned that the threat of artificial intelligence is real, particularly regarding what's known as artificial superintelligence. Max Tegmark, head of The Future of Life Institute and a professor doing AI research at MIT, spoke to The World's Host Marco Werman about why experts — including him — are calling for urgent action. The post Nobel laureates sound the alarm over artificial superintelligence appeared first on The World from PRX.
We've been measuring intelligence with a broken ruler for over a century. The IQ test, with its troubled history, is completely unprepared for what's coming: a new kind of mind that operates on a different plane of existence. In Part 1 of my new series, "The Superintelligence Horizon," we're throwing out the old yardsticks to explore the dawn of Artificial Superintelligence. Read the full deep dive here: ➡️ https://santaclaritaartificialintelligence.com/post/part-1-a-new-kind-of-mind-the-dawn-of-superintelligence This isn't just about faster computers. It's about understanding an intellect that could perceive our reality from a "higher dimension," much like we would view a 2D drawing. This first installment lays the crucial foundation for grasping the most transformative technology of our time. This is a critical conversation we're having right here in the Santa Clarita AI community and for the world. What are your thoughts on the limits of human intelligence? Let's discuss in the comments. #ArtificialIntelligence #Superintelligence #FutureOfAI #AISafety #TechExplained #IQtest #PhilosophyofAI #SantaClaritaAI #SantaClarita #AI YouTube Channels: Conner with Honor - real estate; Home Muscle - fat torching. From first responder to real estate expert, Connor with Honor brings honesty and integrity to your Santa Clarita home buying or selling journey. Subscribe to my YouTube channel for valuable tips, local market trends, and a glimpse into the Santa Clarita lifestyle. Dive into Real Estate with Connor with Honor: Santa Clarita's Trusted Realtor & Fitness Enthusiast. Real Estate: Buying or selling in Santa Clarita? Connor with Honor, your local expert with over 2 decades of experience, guides you seamlessly through the process. Subscribe to his YouTube channel for insider market updates, expert advice, and a peek into the vibrant Santa Clarita lifestyle. Fitness: Ready to unlock your fitness potential?
Join Connor's YouTube journey for inspiring workouts, healthy recipes, and motivational tips. Remember, a strong body fuels a strong mind and a successful life! Podcast: Dig deeper with Connor's podcast! Hear insightful interviews with industry experts, inspiring success stories, and targeted real estate advice specific to Santa Clarita.
WarRoom Battleground EP 857: Geoffrey Miller: Artificial Superintelligence Will Evolve to Destroy Us
My guest today is Jonathan Siddharth, co-founder and CEO of Turing. Jonathan incubated Turing in Foundation Capital's Palo Alto office in 2018. Since then, it has grown into a multi-billion-dollar company that powers nearly every frontier AI lab: OpenAI, Anthropic, Google, Meta, Microsoft, and others. If you've seen a breakthrough in how AI reasons or codes, odds are Turing had a hand in it. Jonathan has a provocative thesis: within three years, every white-collar job, including the CEO's, will be automated. In this episode, we talk about what it will take to reach artificial superintelligence, why this goal matters, and how the agentic era will fundamentally reshape work. We also dig into his founder journey: what he learned from his first startup Rover, how he built Turing from day one, and how his leadership style has evolved to emphasize speed, intensity, and staying in the details. Jonathan has been at the edge of AI for years, and he has the rare ability to translate what's happening at the frontier into lessons for builders today. Hope you enjoy the conversation!
Chapters:
00:00 Cold open
00:02:06 Jonathan's backstory: his experience at Stanford
00:06:37 Lessons from Rover
00:08:39 Early Turing: incubation at Foundation Capital and finding PMF
00:13:52 Why Turing took off
00:15:12 Evolving from developer cloud to AGI partner for frontier labs
00:16:49 How coding improved reasoning - and why Turing became essential
00:20:38 Founder lessons: building org speed and intensity
00:23:33 Why work-life balance is a false dichotomy
00:24:17 Daily standups, flat orgs, and Formula One culture
00:25:15 Confrontational energy and Frank Slootman's influence
00:29:50 Positioning Turing as “Switzerland” in the AI arms race
00:34:32 The four pillars of superintelligence: multimodality, reasoning, tool use, coding
00:37:39 From copilots to agents: the 100x improvement
00:40:00 Why enterprise hasn't had its “ChatGPT moment” yet
00:43:09 Jonathan's thoughts on RL gyms, algorithmic techniques, and evals
00:46:32 The blurring line between model providers and AI apps
00:47:35 Why defensibility depends on proprietary data and evals
00:55:20 RL gyms: how enterprises train agents in simulated environments
00:57:39 Underhyped: $30T of white-collar work will be automated
Support the show
Moonshots and Mindsets with Peter Diamandis: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday --------- Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Eric Schmidt is the former CEO of Google. Dave Blundin is the founder of Link Ventures – Offers for my audience: Test what's going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod –- Connect with Eric: X: https://x.com/ericschmidt His latest book: https://a.co/d/fCxDy8P Learn about Dave's fund: https://www.linkventures.com/xpv-fund Connect with Peter: X Instagram Listen to MOONSHOTS: Apple YouTube – *Recorded on June 5th, 2025 *Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Moonshots and Mindsets with Peter Diamandis — Key Takeaways: We will have artificial superintelligence by 2035; "superintelligence" implies intelligence beyond the sum of what humans can do. As important as nuclear fusion and fission may be for the future, they will not arrive soon enough to meet the immediate surge in global power demand driven by AI and data infrastructure. Learning machines accelerate to their natural limit, and the current limit of AI systems is electricity. Greater energy infrastructure is essential to support the intellectual capacity required for a superintelligent abundance. We will have specialized savants in every field within five years; the real question is, once we have all these savants, do they unify? Do they ultimately become superhuman? The emergence of superintelligence comes with huge proliferation issues: competitive issues, China-vs.-US issues, electricity issues; we do not even have the language for the deterrence and proliferation aspects of these powerful models. The "Mutually Assured AI Malfunction" geopolitical competition framework: if one nation races ahead to develop superintelligent AI, rivals may sabotage its progress (through cyberattacks or strikes) to avoid destabilizing power imbalances. Whatever enables faster learning loops is the business moat of the future. "The real risk is not Terminator, it's drift. AI won't destroy humans violently, but might slowly erode human values, autonomy, and judgment if left unregulated or misunderstood." – Eric Schmidt. The tools change, but the structure of humanity will not. When superintelligence emerges, every person will have the sum of Einstein and Leonardo da Vinci in their pocket; how humans choose to use their polymath is the question. "We don't know what artificial general intelligence will deliver, and we don't know what artificial superintelligence will deliver, but we know it's coming." – Eric Schmidt. Read the full notes @ podcastnotes.org
Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time and across multiple independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are outlined, and updates for space governance and big picture cause prioritisation are discussed. Introduction: I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It's a [...]

Outline:
(01:00) Introduction
(03:07) Existential risks to a Galactic Civilisation
(03:58) Threats Limited to a One Planet Civilisation
(04:33) Threats to a small Spacefaring Civilisation
(07:02) Galactic Existential Risks
(07:22) Self-replicating machines
(09:27) Strange matter
(10:36) Vacuum decay
(11:42) Subatomic Particle Decay
(12:32) Time travel
(13:12) Fundamental Physics Alterations
(13:57) Interactions with Other Universes
(15:54) Societal Collapse or Loss of Value
(16:25) Artificial Superintelligence
(18:15) Conflict with alien intelligence
(19:06) Unknowns
(21:04) What is the probability that galactic x-risks I listed are actually possible?
(22:03) What is the probability that an x-risk will occur?
(22:07) What are the factors?
(23:06) Cumulative Chances
(24:49) If aliens exist, there is no long-term future
(26:13) The Way Forward
(31:34) Some key takeaways and hot takes to disagree with me on

The original text contained 76 footnotes which were omitted from this narration.
--- First published: June 18th, 2025. Source: https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/galactic-x-risks-obstacles-to-accessing-the-cosmic-endowment --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
MONEY FM 89.3 - Prime Time with Howie Lim, Bernard Lim & Finance Presenter JP Ong
Singapore shares rose today with the global trade situation back in focus as investors put the Israel-Iran ceasefire behind them. The Straits Times Index was up 0.64% at 3,963.62 points at about 12.37 pm Singapore time, with a value turnover of S$511.88M seen in the broader market. In terms of counters to watch today, we have AEM, after the semiconductor test solutions provider raised its revenue guidance for its first half ending June to between S$185 million and S$195 million, up from an earlier range of S$155 million to S$170 million. Elsewhere, from how the White House said US President Donald Trump could extend his deadline for trade deals beyond the 9th of July, to how SoftBank Group CEO Masayoshi Son wants the Japanese technology investment group to become the biggest platform provider for “artificial super intelligence” within the next 10 years – more international and corporate headlines remain in focus. On Market View, Money Matters’ finance presenter Chua Tian Tian unpacked the developments with Benjamin Goh, Head of Research and Investor Education, SIAS.See omnystudio.com/listener for privacy information.
Recently, discussion of the risks of Artificial Intelligence and the need for 'alignment' has been flooding our cultural discourse – with Artificial Super Intelligence acting as both the most promising goal and most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work? In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls 'algorithmic cancer' – AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies. What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being? (Conversation recorded on May 21st, 2025) About Connor Leahy: Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.
Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH. Show Notes and More Watch this video episode on YouTube Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie. --- Support The Institute for the Study of Energy and Our Future Join our Substack newsletter Join our Discord channel and connect with other listeners
OpenAI's Sam Altman drops o3-Pro & sees “The Gentle Singularity”, Ilya Sutskever prepares for super intelligence & Mark Zuckerberg is spending MEGA bucks on AI talent. WHAT GIVES? All of the major AI companies are not only preparing for AGI but for true “super intelligence” which is on the way, at least according to *them*. What does that mean for us? And how exactly do we prepare for it? Also, Apple's WWDC is a big AI letdown, Eleven Labs' new V3 model is AMAZING, Midjourney got sued and, oh yeah, those weird 1X Robotics androids are back and running through grassy fields. WHAT WILL HAPPEN WHEN AI IS SMARTER THAN US? ACTUALLY, IT PROB ALREADY IS. #ai #ainews #openai Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // SHOW LINKS //
Ilya Sutskever's Commencement Speech About AI https://youtu.be/zuZ2zaotrJs?si=U_vHVpFEyTRMWSNa
Apple's Cringe Genmoji Video https://x.com/altryne/status/1932127782232076560
OpenAI's Sam Altman On Superintelligence: “The Gentle Singularity” https://blog.samaltman.com/the-gentle-singularity
The Secret Mathematicians' Meeting Where They Tried To Outsmart AI https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/
o3-Pro Released https://x.com/sama/status/1932532561080975797
The Most Expensive o3-Pro Hello https://x.com/Yuchenj_UW/status/1932544842405720540
Eleven Labs v3 https://x.com/elevenlabsio/status/1930689774278570003
o3 regular drops in price by 80% - cheaper than GPT-4o https://x.com/edwinarbus/status/1932534578469654552
Open weights model taking a 'little bit more time' https://x.com/sama/status/1932573231199707168
Meta Buys 49% of Scale AI + Alexandr Wang Comes In-House https://www.nytimes.com/2025/06/10/technology/meta-new-ai-lab-superintelligence.html
Apple Underwhelms at WWDC Re AI https://www.cnbc.com/2025/06/09/apple-wwdc-underwhelms-on-ai-software-biggest-facelift-in-decade-.html
BusinessWeek's Mark Gurman on WWDC https://x.com/markgurman/status/1932145561919991843
Joanna Stern Grills Apple https://youtu.be/NTLk53h7u_k?si=AvnxM9wefXl2Nyjn
Midjourney Sued by Disney & Comcast https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
1X Robotics' Redwood https://x.com/1x_tech/status/1932474830840082498 https://www.1x.tech/discover/redwood-ai
Redwood Mobility Video https://youtu.be/Dp6sqx9BGZs?si=UC09VxSx-PK77q--
Amazon Testing Humanoid Robots To Deliver Packages https://www.theinformation.com/articles/amazon-prepares-test-humanoid-robots-delivering-packages?rc=c3oojq&shared=736391f5cd5d0123
Autonomous Drone Beats Pilots For the First Time https://x.com/AISafetyMemes/status/1932465150151270644
Random GPT-4o Image Gen Pic https://www.reddit.com/r/ChatGPT/comments/1l7nnnz/what_do_you_get/?share_id=yWRAFxq3IMm9qBYxf-ZqR&utm_content=4&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1 https://x.com/AIForHumansShow/status/1932441561843093513
Jon Finger's Shoes to Cars With Luma's Modify Video https://x.com/mrjonfinger/status/1932529584442069392
May 29, 2025 – What happens when AI systems start inventing new materials, running billion-dollar companies, and making decisions in government? In this insightful conversation, Cris Sheridan interviews Dr. Alan D. Thompson, renowned AI...
Send us your thoughts. In this episode of CFO 4.0, host Hannah Munro speaks with David Wood, Chair of London Futurists, about the accelerating pace of AI and its profound implications for business, society, and the future of work. Together, they explore the near-term possibilities and longer-term consequences of artificial intelligence—from transformation to potential turmoil.

In this episode, we cover:
Why some companies struggle with AI adoption
The rise of neuro-symbolic AI and the combination of logic-based and intuitive systems
Breakthroughs in video generation, self-prompting AI, and the future of AI-generated media
What Artificial General Intelligence (AGI) could mean for jobs, skills, and human relevance
How emotional intelligence and adaptability will be essential skills for leaders in the AI era

Links mentioned:
David's LinkedIn
Learn more about London Futurists
London Futurists Meetups
The Coming Wave: AI, Power and Our Future by Mustafa Suleyman
Supremacy: AI, ChatGPT and the Race That Changed the World by Parmy Olson
Uncontrollable: The Threat of Artificial Superintelligence by Darren McKee

Explore other CFO 4.0 Podcast episodes here. Subscribe to our Podcast!
Welcome to the Alfalfa Podcast
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
On April 7th, 2025, the AI landscape saw significant advancements and strategic shifts, evidenced by Meta's launch of its powerful Llama 4 AI models, poised to compete with industry leaders. Simultaneously, DeepSeek and Tsinghua University unveiled a novel self-improving AI approach, highlighting China's growing AI prowess, while OpenAI considered a hardware expansion through the potential acquisition of Jony Ive's startup. Microsoft enhanced its Copilot AI assistant with personalisation features and broader application integration, aiming for a more intuitive user experience. Furthermore, a report projected potential existential risks from Artificial Superintelligence by 2027, prompting discussions on AI safety, as Midjourney released its advanced version 7 image generator and NVIDIA optimised performance for Meta's new models.
Listen in as your host Fred Williams and co-host Doug McBurney welcome RSR's resident A.I. expert Daniel Hedrick, of godisnowhere fame, for an update on where we are with Artificial Intelligence (and where A.I. is with us)! *Welcome: Daniel Hedrick, discussing Co-Pilot, LM Studio, Deepseek, Perplexity, Chat GPT, Grok 3, Midjourney, Agentic AI, AGI, ASI, and all things Artificial Intelligence. *The Gospel & Dan Bongino: Hear how Dan Bongino fundamentally agrees with Doug McBurney that A.I. has the potential, if programmed in an unbiased manner, and with access to everything ever written, to be a tool for telling the truth, including confirming the Gospel! *Luddites of the World: Relax! AI is not on the verge of replacing programmers and coders. But it has become an essential tool. *Motivation, Awareness & Experience: AI lacks all 3, but humans don't, so even Artificial Super Intelligence will always need us. *Maximum Problems: How do we constrain AI from going off the rails, like in the paperclip maximizer problem? The answer lies in our connection to God's reality. *The Energy Question: While the human brain uses at most 30 Watts to make over 100 trillion connections, no one's even sure what modern AI platforms are consuming... but it's a lot and growing!
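The brain-versus-data-center comparison in "The Energy Question" above can be made concrete with some back-of-envelope arithmetic. A minimal sketch, assuming an illustrative 100 MW draw for a large AI training cluster (the episode itself notes nobody is sure of the real figure; only the 30 W and 100 trillion numbers come from the description):

```python
# Back-of-envelope energy comparison (all figures rough; cluster size is an assumption).
BRAIN_WATTS = 30            # upper estimate for the human brain, per the episode
BRAIN_SYNAPSES = 100e12     # "over 100 trillion connections"
CLUSTER_MEGAWATTS = 100     # assumed scale of a large AI data center

# How many brains' worth of power one such cluster draws
brains_equivalent = (CLUSTER_MEGAWATTS * 1e6) / BRAIN_WATTS

# The brain's power budget per connection
watts_per_synapse = BRAIN_WATTS / BRAIN_SYNAPSES

print(f"A {CLUSTER_MEGAWATTS} MW cluster draws as much power as "
      f"{brains_equivalent:,.0f} human brains")
print(f"Brain power budget per connection: {watts_per_synapse:.1e} W")
```

On these assumptions the cluster draws the power of several million brains, while the brain spends only a few hundred femtowatts per connection, which is the gap the episode is gesturing at.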
Unfiltered chat Blockchain DXB & Society X - LinkedIn Live: Weekly Crypto, Blockchain & AI Review
Date & Time: January 23rd, 2025, 11:00 AM GST
Hosts: RA George (Blockchain DXB), Markose Chentittha (Oort Foundation + Society X)
Guest: Neil Fitzhugh, Head of Marketing at Trac Systems/TAP Protocol
Contact details:
LinkedIn: https://short-link.me/OkAJ
Website: https://trac.network/
Twitter/X (Neil): https://x.com/fitzyOG
Twitter/X (Trac): https://x.com/trac_btc?mx=2
Discord: https://discord.com/invite/trac
Telegram: TAP Protocol - https://t.me/tap_protocol
GitHub: https://github.com/BennyTheDev
Note: This entire episode was created, scripted, and reviewed 100% by AI using Notebook LM by Google, showcasing the power of AI-driven content creation. This week's LinkedIn Live session explored groundbreaking developments in crypto, blockchain, and AI. Below is a streamlined recap of the AI-generated discussion between the hosts and guest Neil Fitzhugh. Discussion of the market significance and broader implications for investors. Mention of the Trump Meme Coin, which surged to a $13B market cap before dropping to $7.3B, underlining meme coin volatility. Neil Fitzhugh explained how the TAP Protocol is transforming Bitcoin's ecosystem with features like smart contracts, AMMs, swaps, and cross-chain bridges. Key features: tokenomics and validator licenses. Security audits: rigorous measures ensure platform reliability. Stargate Investment: a $500B initiative led by industry giants to develop Artificial Super Intelligence infrastructure. Dubai's AI Seal: a certification system to regulate trusted AI companies working with UAE authorities. Analysis of the U.S. District Court's decision and its impact on privacy protocols and Alexey Pertsev's ongoing legal challenges. 2025 is poised to be the "Year of Tokenization," potentially achieving $500B in market value, with real-world asset tokenization leading the charge.
Republican-led initiatives in Texas, Pennsylvania, Ohio, and Wyoming highlight growing interest in state-level Bitcoin reserves. CLS Global's admission to wash trading and its repercussions. Solana's rise in transaction volume, signaling stablecoin growth. Ross Ulbricht's Campaign: Renewed focus on decentralization and privacy. FOMC Meeting Predictions: Interest rate changes and their crypto impact. Larry Fink's $700K Bitcoin Prediction: Bitcoin as a hedge against inflation. Neil addressed questions on: Aligning Bitcoin maximalist ideals with TAP Protocol's innovations. Trac Systems' upcoming milestones. Beginner-Level Sessions: Now live on LinkedIn. Spartan Race Discount: Use code George20 for the January 25–26 event. This episode demonstrated the capabilities of AI in analyzing and presenting complex topics. Using Notebook LM by Google, Blockchain DXB and Society X delivered an engaging, AI-driven conversation covering the latest in crypto, blockchain, and AI. To support this channel: https://www.patreon.com/BlockchainDXB ⚡ Buy me Coffee ☕ https://www.buymeacoffee.com/info36/w/6987 ⚡ Advanced Media https://www.amt.tv/ ⚡Spartan Race Trifecta in Dubai https://race.spartan.com/en/race/detail/8646/overview For 20% Discount use code: George20 ⚡ The Race Space Podcast
OpenAI is prepping for Artificial Super Intelligence, Sam Altman says AI fast takeoff is likely, Luma Labs' new Ray 2 AI video model looks good and Reddit goes all GPT on us. Plus, an amazing new small AI model from NVIDIA, executive AI orders for more power and chips, ChatGPT Tasks kind of blows and a whole lotta Shrek (more so than you might want). BRB, WE GOTTA PREP FOR THE SINGULARITY Y'ALL! Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // SHOW LINKS // OpenAI's Economic Blueprint https://openai.com/global-affairs/openais-economic-blueprint/ Sam Altman Says Fast Take Off More Likely https://x.com/tsarnick/status/1879100390840697191 OpenSource $450 Dollar o1 Model https://x.com/LiorOnAI/status/1878876546066506157 MiniMax-01 Launch: Lightning Attention https://x.com/i/trending/1879318547861582090 Runway Prompt-To-Character https://x.com/IXITimmyIXI/status/1878088929330491844 Executive Order For Gigawatt Datacenters https://www.reuters.com/technology/artificial-intelligence/biden-issue-executive-order-ensure-power-ai-data-centers-2025-01-14/ New AI Chip Rules https://www.theinformation.com/articles/why-bidens-final-ai-chip-move-caused-an-uproar?rc=c3oojq&shared=160dd16ac575f520 ChatGPT Tasks https://x.com/OpenAI/status/1879267276291203329 Custom Reddit GPT For Answers https://www.reddit.com/answers/ LumaLabs Ray 2: https://lumalabs.ai/ray Nvidia Lauches Sana https://nvlabs.github.io/Sana/ AI Slop Distorting Wildfire News https://www.fastcompany.com/91260442/ai-slop-has-is-still-distorting-news-about-the-l-a-wildfires French Woman Scammed By AI Brad Pitt https://www.nbcnews.com/news/world/ai-brad-pitt-woman-romance-scam-france-tf1-rcna187745 Fashn Web App: Try-on + Video 
https://x.com/ayaboch/status/1878888737603830081
New AI or Die https://youtu.be/cAjUy896SOE?si=BpJhqV_oOwvov01q
My Swamp https://x.com/andr3_ai/status/1878110156887638380
A production center makes everything locally. It even has grow towers that can meet the produce needs of nearby citizens. After the World Storm, a production center goes into sentry mode when the Internet breaks. The entire inside of the production center becomes a death trap, as the robots attack anyone who tries to get in. Merch, a world-class hacker, is abducted by a gang who force him to try to hack the production center so they can get in and take enough food, water, and goods to last for years. In the course of the story, Merch finds an ASI, an Artificial Super Intelligence. This machine is hundreds of times smarter than a human. If he gains access, he could change the course of humanity; with the ASI in control of the production center, for example, it could build a robot army.
Technology mentioned in this story:
"Medusa Net" (peer-to-peer internet system)
Links/AR glasses with features like: night vision, "Target Conversation" (allows distant conversation between people who can see each other), "Assist" (AI assistant), multiple AR feeds/displays
Lutin Two Bot (subscription-locked)
Tri-legged bot with lamp (described as spider-like)
Double high bot
Robot with tiny arms for microscale work
Hologram shell robots (with curved screens for human-like appearance)
Autono-cart (autonomous cart)
"Follow cart" (presumably autonomous)
"G. silk" (advanced fabric that never wrinkles, never stains under normal conditions, can filter water, and lets through only water and air)
Cooling tents (double-walled with air inflation)
Temperature-controlled shoes (powered to cool soles)
Production center with automated security/defense systems
Engineered microbe medicine (tooth care chewables)
"Rig gloves" (for controlling micro-scale robots)
3D navigation maps in field of view
Night vision capabilities
Job's Navigator AI
Dates Navigator AI
AI-based hacking systems
Simulation software for mimicking online consumers
Home sentry bot
Sex bot
Solar panel cleaning robot
Swarm drone controller
Autono-cab (autonomous taxi)
VR worlds
First-person VR movies
Live Movie Creator
Giantess Center network
Geo-thermal power plant
Automated grow rooms/farming systems
Delivery tubes
Card table computers
DNA simulators
Right-to-repair software
Skills for Lutins (some kind of digital skill system)
Automated vending machines (pizza and pasta)
Digital door locks
Pest trap software
Public talk line
Group talk
E-paper/E-paper screens
Mastodon (social network)
Security cards with changing QR codes
Bio-sampler pens
Thrive Navigator (upgraded survival-focused AI)
Metis/Matis (ASI, Artificial Super Intelligence)
AI hackers (subordinate AIs used for hacking)
AI jury systems (to keep the ASI in check)
Giantess guard bots
Builder bots
Construction bots
Maintenance bots
Battle droids
Cleaning bots
"Half-high bots" (climbing capable)
Sentry mode bots
Giantess Production towers/center
Production equipment
Automated facilities
Survival bunkers
Underground utility tunnels
Chemical weapons
Sound assault weapons
Microwave weapons
Smell scanner/smell vision
Embedded technology ("embeds")
Many of the characters in this project appear in future episodes.
Using storytelling to place you in a time period, this series takes you, year by year, into the future, from 2040 to 2195. If you like emerging tech, eco-tech, futurism, permaculture, apocalyptic survival scenarios, and disruptive science, sit back and enjoy short stories that showcase my research into how the future may play out. This is Episode 1 of the podcast "In 20xx Scifi and Futurism." The companion site is https://in20xx.com. These are works of fiction: characters and groups are made up and, while influenced by current events, do not report facts about real people or groups. Copyright © Cy Porter 2024. All rights reserved.
Send Everyday AI and Jordan a text message.
It's the trillion dollar AI question: when will we achieve Artificial General Intelligence? (And what the heck is it, anyway?) We'll give you the 101 on what you need to know, and one secret that could be holding the official discovery back.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AGI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
1. Definition of AI, AGI and ASI
2. Evolution of AGI
3. Impacts of Advancements in AGI
4. Future of AGI
Timestamps:
02:00 When is AGI coming?
09:54 Machines performing tasks requiring human intelligence.
13:10 Generative AI brings impressive outputs through language.
14:04 Generative AI democratizes US AI capabilities; narrow compared.
19:00 Correct prompting with PPP course at podpp.com.
20:52 Big tech companies openly working toward AGI now.
25:37 AGI development inevitable, desirable, surpassing human capabilities.
28:16 Pre-2020, experts said AGI was 80 years away.
31:25 Criticism of experts in generative AI misunderstandings.
33:49 Has AI's definition of AGI changed?
37:21 Definition of AGI has evolved over time.
40:45 Partnership between Microsoft and OpenAI pivotal.
45:21 OpenAI benefits from important Microsoft partnership changes.
47:08 Tech companies must focus on AGI development.
Keywords: AGI, Artificial General Intelligence, OpenAI, Microsoft, partnership, AI development, AI startup, Anthropic, copyright infringement, Google's Gemini team, Token offering, AI models, USAID, ChatGPT Enterprise, AI pace estimation, AI evolution, Artificial Superintelligence, AI prediction chart, ARK Invest, GPT 3 technology, Traditional AI, Generative AI, AI democratization, AGI benchmark, ChatGPT course, AI startups, Big tech companies, AGI cost reduction, Future of work, Everyday AI podcast.
Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Tim Rocktäschel, senior staff research scientist at Google DeepMind, professor of Artificial Intelligence at University College London, and author of the recently published popular science book, “Artificial Intelligence: 10 Things You Should Know.” We dig into the attainability of artificial superintelligence and the path to achieving generalized superhuman capabilities across multiple domains. We discuss the importance of open-endedness in developing autonomous and self-improving systems, as well as the role of evolutionary approaches and algorithms. Additionally, we cover Tim's recent research projects such as “Promptbreeder,” “Debating with More Persuasive LLMs Leads to More Truthful Answers,” and more. The complete show notes for this episode can be found at https://twimlai.com/go/706.
Lexi Bass has a BA in Arts Administration from the University of Kentucky, an MA in Art from the University of Louisville, and an MFA in Experimental and Documentary Arts from Duke University. She is currently a lecturer in Animation and Digital Art at the University of Kentucky School of Arts and Visual Studies, as well as an experimental filmmaker and artist. Her films have screened widely in London, Amsterdam, and other European cities, as well as in Los Angeles, Philadelphia, Minneapolis, and various locations across Kentucky. Her new film Meander will air on Tuesday, 15 Oct at the Lyric Theatre and Cultural Arts Center at 6:30pm.
Meander (2024), Lexi Bass: As artificial intelligence replaces workers in our increasingly elderly global population, companies engineering AI race robots amidst human inequities and emerging problems of AI sentience. The sum spells disaster for the human race in the dystopian world of Meander, which evokes both ancient Greek mythology and near-future science fiction. Meander finds herself destitute in the Underworld with no way to finance escape other than offering up her biological potential for surrogate pregnancy to dubious experimentation in an underground facility. Meanwhile, filmmaker and narrator Lexi Bass recounts her experiences of pregnancy and motherhood at the precipice of age 40, and the loss of her own mother shortly after, questioning the future of humanity on the cusp of Artificial General Intelligence and Artificial Super-Intelligence. For more and to connect with us, visit https://www.artsconnectlex.org/art-throb-podcast.html
Safe underground, a colony of 9,000 is cared for by robots and automated services. But people are agitated. Youth run wild. How will the citizens adjust to free necessities but little else? Questions of post-work life, UBI, and life purpose arise. Artificial Super Intelligence may save them. A combination of capitalism and needs met for all may save them.
T-Line: an underground train system
Robots: various types for construction, cleaning, and other tasks
Artificial Intelligence (AI): used for various purposes
Augmented Reality (AR) glasses
Virtual Reality (VR) systems
Lutin Bots: advanced robotic assistants (Lutin One and Lutin Two models)
Robot baby (Taylor): used for data collection on parenting
Hooded tunics with cooling and air filtering systems
Gyro clogs: some type of self-balancing footwear
Induction stoves
Geothermal power plant
Aquaponic systems
Central cooling systems
Local internet
Artificial Superintelligence (ASI) system
Gene therapy for addiction treatment
Builder AI: for planning and constructing flood tunnels
Bone-mounted AR glasses
All-sensor night vision technology
Phage cream: possibly for protection against pathogens
Online education services and AI teachers
VR classrooms and schools
Digital currency systems
Advanced medical technology (e.g., growing new skin)
VR windows for wall mounting
Many of the characters in this project appear in future episodes. Using storytelling to place you in a time period, this series takes you, year by year, into the future, from 2040 to 2195. If you like emerging tech, eco-tech, futurism, permaculture, apocalyptic survival scenarios, and disruptive science, sit back and enjoy short stories that showcase my research into how the future may play out. This is Episode 59 of the podcast "In 20xx Scifi and Futurism." The companion site is https://in20xx.com, where you can find a timeline of the future, descriptions of future developments, and printed fiction. These are works of fiction: characters and groups are made up and, while influenced by current events, do not report facts about real people or groups. Copyright © Leon Horn 2024. All rights reserved.
This week, we are back with part two of Generative Quarterly with Semil Shah and Lightspeed Partner and host Michael Mignano. Semil is a founding General Partner of Haystack and a Venture Partner at Lightspeed. Semil and Mike pick up their conversation on consumer AI technology, starting with innovative consumer tech like Friend AI by Avi Schiffmann. Mike and Semil consider the impact of Artificial Super Intelligence on the future of work, debate the future evolution of software on demand, and ask whether we need AI agents to help us solve our boredom.
Episode Chapters
(00:00) Introduction
(00:31) Consumer AI Tech
(03:05) Autonomous AI Agents Versus Copilots
(04:19) Matt Levine: Robots Make Good AI Junior Analysts
(05:53) Future of Training Entry Level Consultants
(07:55) Artificial Super Intelligence as a Drop-in Coworker
(09:38) Will We Have Our Own Agentic Consultants?
(11:50) Software On Demand
(16:32) AI Generated Music and Content
(20:31) Conclusion
Stay in touch: www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com
The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.
Send Everyday AI and Jordan a text message.
Are we really 'a few thousand' days from superintelligence? Also, what the heck does superintelligence even mean? We're breaking down the latest hot takes from Sam Altman and simplifying superintelligence.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
1. Definition and Discussion of AI Levels
2. Defining superintelligence and its implications
3. Major Historical Transitions and Their Relevance to AI
4. Review and Analysis of Sam Altman's Perspective
5. Sam Altman's Influence on AI
Timestamps:
00:00 Are we thousands of days from superintelligence?
04:12 AI advancements amplify human capabilities and innovation.
09:06 Superintelligence theoretical; AGI practical; Altman's rising influence.
10:02 Sam Altman became globally significant in AI.
15:40 Share your predictions about utopian or dystopian futures.
19:55 Generative AI creates multimodal outputs from inputs.
22:06 AI systems excel in narrow, specific tasks.
25:45 I'd choose large language models for problem solving.
29:00 WorkLab podcast: insights for evolving work leaders.
32:03 AI likened to internet's skeptical early reception.
34:10 Is AI progress genuine or just marketing?
38:08 Advancing AI redefines AGI achievement; ASI unclear.
40:57 Superintelligence may be achieved within our lifetime.
Keywords: Jordan Wilson, advancements in AI, superintelligence, safe superintelligence, everyday AI, everydayai.com, generative AI, chat GPT, Artificial Narrow Intelligence, ANI, Artificial General Intelligence, AGI, Artificial Superintelligence, ASI, WorkLab Podcast, Microsoft, historical periods, technological transitions, Sam Altman, Intelligence Age, AI in business, OpenAI, skepticism about AI, AI definitions, timeline for superintelligence, utopian superintelligence, dystopian superintelligence, audience interaction, superintelligence debate, AI integration
Send Everyday AI and Jordan a text message.
Win a free year of ChatGPT or other prizes! Find out how.
It's the trillion dollar AI question: when will we achieve Artificial General Intelligence? (And what the heck is it, anyway?) We'll give you the 101 on what you need to know, and one secret that could be holding the official discovery back.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AGI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
1. Definition of AI, AGI and ASI
2. Evolution of AGI
3. Impacts of Advancements in AGI
4. Future of AGI
Timestamps:
02:00 Daily AI news
06:15 When is AGI coming?
09:54 Machines performing tasks requiring human intelligence.
13:10 Generative AI brings impressive outputs through language.
14:04 Generative AI democratizes US AI capabilities; narrow compared.
19:00 Correct prompting with PPP course at podpp.com.
20:52 Big tech companies openly working toward AGI now.
25:37 AGI development inevitable, desirable, surpassing human capabilities.
28:16 Pre-2020, experts said AGI was 80 years away.
31:25 Criticism of experts in generative AI misunderstandings.
33:49 Has AI's definition of AGI changed?
37:21 Definition of AGI has evolved over time.
40:45 Partnership between Microsoft and OpenAI pivotal.
45:21 OpenAI benefits from important Microsoft partnership changes.
47:08 Tech companies must focus on AGI development.
52:37 Future work, business, career with AI impact.
Keywords: AGI, Artificial General Intelligence, OpenAI, Microsoft, partnership, AI development, AI startup, Anthropic, copyright infringement, Google's Gemini team, Token offering, AI models, USAID, ChatGPT Enterprise, AI pace estimation, AI evolution, Artificial Superintelligence, AI prediction chart, ARK Invest, GPT 3 technology, Traditional AI, Generative AI, AI democratization, AGI benchmark, ChatGPT course, AI startups, Big tech companies, AGI cost reduction, Future of work, Everyday AI podcast.
Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Why should we consider slowing AI development? Could we slow down AI development even if we wanted to? What is a "minimum viable x-risk"? What are some of the more plausible, less Hollywood-esque risks from AI? Even if an AI could destroy us all, why would it want to do so? What are some analogous cases where we slowed the development of a specific technology? And how did they turn out? What are some reasonable, feasible regulations that could be implemented to slow AI development? If an AI becomes smarter than humans, wouldn't it also be wiser than humans and therefore more likely to know what we need and want and less likely to destroy us? Is it easier to control a more intelligent AI or a less intelligent one? Why do we struggle so much to define utopia? What can the average person do to encourage safe and ethical development of AI?Kat Woods is a serial charity entrepreneur who's founded four effective altruist charities. She runs Nonlinear, an AI safety charity. Prior to starting Nonlinear, she co-founded Charity Entrepreneurship, a charity incubator that has launched dozens of charities in global poverty and animal rights. Prior to that, she co-founded Charity Science Health, which helped vaccinate 200,000+ children in India, and, according to GiveWell's estimates at the time, was similarly cost-effective to AMF. 
You can follow her on Twitter at @kat__woods; you can read her EA writing here and here; and you can read her personal blog here.
Further reading:
Robert Miles AI Safety @ YouTube
"The AI Revolution: The Road to Superintelligence", by Tim Urban
Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World, by Darren McKee
The Nonlinear Network
PauseAI
Dan Hendrycks @ Manifund (AI regrantor)
Adam Gleave @ Manifund (AI regrantor)
Staff:
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Music:
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com
Affiliates:
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Send Everyday AI and Jordan a text message.
Enter to win a FREE Custom Avatar from Hour One as part of their #HourOneChallenge - go find out more here.
For the first time ever... (as far as our research shows) an AI clone interviews its human counterpart live. Is my AI clone smarter than me? Will I crumble under the pressure?
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Related Episodes:
Ep 258: Will AI Take Our Jobs? Our answer might surprise you.
Ep 200: 200 Facts, Stats, and Hot Takes About GenAI – Celebrating 200 Episodes
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
1. Potential Impact of AI on Careers
2. Leaders in AI Technology
3. Discussion on Global AI Innovation
4. Privacy and Security Concerns with AI
5. Creative Potential of AI
6. AI Integration into Society
Timestamps:
02:25 Daily AI news
07:55 OpenAI excels as business system, surpasses others.
12:15 Train AI, use tools, future of creativity.
16:00 Traditional web search outdated, replaced by alternatives.
16:40 Internet use becoming unbearable, predicts shift to AI.
21:49 Altman and Huang predict AGI within 5 years.
25:14 NVIDIA's rise to most valuable company predicted.
28:30 AI will automate and oversee business decisions.
32:27 Hour One offers AI avatar video technology.
36:18 Personal data privacy and security concerns with AI.
38:57 Companies need humans to weed out bias.
43:00 Flying cars may flop at first, but future potential.
44:35 AI technology may replace human work gradually.
Keywords: Jordan Wilson, AI technology, career disruption, Artificial General Intelligence, Artificial Superintelligence, NVIDIA, Amazon, AI digital avatar, generative AI, McDonald's, IBM, AI drive-through technology, Google DeepMind, v2a model, Runway Gen 3 Alpha, AI video creator, OpenAI, ChatGPT-4o, AI integration into workforce, future of work, AI as co-workers or bosses, US AI innovation, China AI competition, Hour One communication technology, AI companionship industry, data privacy, AI bias and stereotypes, self-replicating AI, humanoid robots, AI in day-to-day life.
On this episode of the podcast, Coleman sits down with Darren McKee to discuss his book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. The two discuss the central case for concern surrounding AI risk, deepfakes, and Darren's approach to understanding the rapidly changing AI risk landscape. --- Support this podcast: https://podcasters.spotify.com/pod/show/xriskpodcast/support
Humayun Sheikh is the CEO & Founder of Fetch AI and the chairman of the Artificial Superintelligence Alliance. We discuss:
Fetch AI and its mission
The merging of Fetch AI, SingularityNET & Ocean Protocol to form the Artificial Superintelligence Alliance
How the new ASI token will work
The combination of AI and blockchain
AI's impact on society
Crypto market outlook
Bitcoin is up 0.5% at $70,288. ETH is up 0.5% at $3,588. Binance Coin is up slightly at $579. Those are your leaders by market cap. Top gainers in the last 24 hours: Mantle, up 40%.
Coinbase to store more USDC on BASE.
Fetch.ai, SingularityNET, and Ocean Protocol propose merger to create "Artificial Superintelligence Alliance."
HSBC introduces tokenized gold in Hong Kong.
KuCoin sees big outflows in response to DOJ action.
0G Labs raises $35M.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Yale anthropologist Lisa Messeri spent a year doing fieldwork in Los Angeles in 2018 studying the political ecology of the VR community, and will be releasing her landmark book, In the Land of the Unreal: Virtual and Other Realities in Los Angeles, on Friday, March 8th. It's the best book about the culture of VR that I've read so far, as it pulls in many insights from Science and Technology Studies (STS), anthropology, the social sciences, sci-fi, pop culture, and philosophy. Making claims about reality is daunting for any working scholar in the 21st century, and Messeri uses the feeling of "unreality" as an analytical tool to analyze not only virtual reality, but also the fracturing nature of our political context and the unreality of Los Angeles as the factory of dreams, with façade-like architecture that blurs the boundary between what's deeply real and what's surface scaffolding enough to transport you into another reality. Messeri uses the framing of fantasy to interrogate a number of claims made by the VR community circa 2018. Fantasy, by her definition, can include positive aspirational dreams, but these can also turn out to be deluded illusions. I personally prefer the phrase "potential," since it is a bit more neutral for me and includes both the promising positive potentials as well as the more perilous negative ones. She splits her book into three parts. The first, the Fantasy of Location, explores the unreality of Los Angeles as well as how VR transports you into another world, per Mel Slater's place illusion. The second, the Fantasy of Being, deconstructs VR as the ultimate empathy machine, per Chris Milk's infamous 2015 TED Talk. The third explores the Fantasy of Representation, with the aspirations of the LA VR community to create a more diverse and equitable ecosystem that transcends the bias and power dynamics of Silicon Valley.
In each of these three sections, Messeri uses case studies and follows specific individuals over time to see whether some of these aspirations and potentials end up becoming grounded in physical reality, or whether they collapse into more deluded illusions. I was inspired to dig into my backlog of 800+ unpublished Voices of VR podcast episodes to publish some interviews that I conducted between 2017-2019 featuring some of the main characters and protagonists in Messeri's book:
Marci Jastrow is featured in Chapter 3, letting Messeri become a scholar-in-residence at the Technicolor Experience Center.
Carrie Shaw of Embodied Labs is featured in Chapter 5, and radically opens up her business to Messeri to study.
Jackie Morie is featured in Chapter 6, as Messeri deconstructs some of the gender-essentialist claims that VR is a medium that's a natural fit for women.
And Joanna Popper is featured in Chapter 7, as Messeri breaks down the unique pathways into emerging technology that she was noting as an interesting trend from an anthropological perspective.
I had a chance to read through an advance copy of In the Land of the Unreal: Virtual and Other Realities in Los Angeles, and it has already started to make a huge impact on the way I think about the many dimensions of unreality in our present-day realities, ranging from the surreal experiences of VR presence, to the fractured reality bubbles of our political discourse, to the ways in which techno-utopian solutionism can shape the philosophies driving how technologies like AI are developed, aspiring towards speculations of Artificial General Intelligence or Artificial Superintelligence. I even started applying Messeri's unreality analytic to make sense of some of what Alvin Wang Graylin was saying in our discussion about Our Next Reality.
I said, "I found myself in this kind of unreality of a potential imaginal future of this post-scarcity, post-labor context where all of our problems have been solved,
The book Our Next Reality: How the AI-powered Metaverse Will Reshape the World is structured as a debate between Alvin Wang Graylin and Louis Rosenberg, who each have over 30 years of experience in XR and AI. Graylin embodies the eternal optimist and leans towards techno-utopian views, while Rosenberg voices the more skeptical perspectives, leaning towards cautious optimism and acknowledging the privacy hazards, the control and alignment risks, as well as the ethical and moral dilemmas. The book is strongest when it speaks about the near-term implications of how AI will impact XR in specific contexts, but starts to go off the rails for me when they explore the more distant-future implications of Artificial Superintelligence at the economic and political scales of society. At the same time, both sides acknowledge the positive and negative potential futures, and that neither path is guaranteed, as it will be up to the tech companies, governments, and broader society which path we go down. What I really appreciated about the book is that both Graylin and Rosenberg reference many personal examples and anecdotes around the intersection of XR and AI throughout their three decades of experience working with emerging technologies. Even though the book is structured as a debate, they both agree on some fundamental premises: that the Metaverse is inevitable (or rather spatial computing, XR, or mixed reality), and that AI has been and will continue to be a critical catalyst for its growth and evolution. They both also wholeheartedly agree that it is a matter of time before we achieve either Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), but they differ on the implications of these technologies.
Graylin believes that ASI has the potential to lead humanity into a post-labor, post-scarcity, techno-utopian future in which humanity has willingly given up all cultural, political, and economic control to our ASI overlords, who become perfectly rational philosopher kings yet still see humans as their ancestors, via an uncharacteristically anthropomorphized emotional connection of compassionate affinity. Rosenberg dismisses this as wishful thinking that humans would be able to exert any control over ASI, or that ASI would be anything other than cold-hearted, calculating, ruthless, and unpredictably alien. Rosenberg also cautions that humanity could be headed towards cultural stagnation if the production of all art, media, music, and creative endeavors is ceded to ASI, and that unaligned and self-directed ASI could be more dangerous than nuclear weapons. Graylin acknowledges the duality of possible futures within the context of this interview, but tends to be biased towards the more optimistic future within the book itself. There is also a specific undercurrent of ideas and philosophies about AI woven throughout Graylin's and Rosenberg's book. Philosopher and historian Dr. Émile P. Torres has coined the acronym "TESCREAL," in collaboration with AI ethicist Dr. Timnit Gebru, standing for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Torres wrote an article in Truthdig elaborating on how this interconnected bundle of TESCREAL ideologies underpins many of the debates about ASI and AGI (with links included in the original quote): At the heart of TESCREALism is a "techno-utopian" vision of the future.
It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling “post-human” civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.
Jim talks with Trent McConaghy about the ideas in his recent essay "bci/acc: A Pragmatic Path to Compete with Artificial Superintelligence." They discuss the meaning of BCI (brain-computer interfaces) and acc (accelerationism), categories of AI, how much room there is for above-human intelligence, whether AI is achieving parallelism, the risks of artificial superintelligence (ASI), problems with deceleration, AI intelligences balancing each other, decentralized approaches to AI, problems with the "pull the plug" idea, humans as the weak security link, the silicon Midas touch, competing with AI using BCIs, the need for super-high bandwidth, the noninvasive road to BCIs, realistic killer apps, eye tracking, pragmatic telepathy, subvocalization, reaching adoption-level quality, the arc between noninvasive and full silicon, near-infrared sensors, issues around mass adoption of implants, maintaining cognitive liberty, the risk of giving malevolent ASIs the keys to the kingdom, whether humans plus ASIs might compete with ASIs, and much more.
Episode Transcript
JRS EP13 - Blockchain, AI, and DAOs
"bci/acc: A Pragmatic Path to Compete with Artificial Superintelligence," by Trent McConaghy
Ocean Protocol
"Nature 2.0: The Cradle of Civilization Gets an Upgrade," by Trent McConaghy
Trent McConaghy on Twitter
Trent McConaghy is founder of Ocean Protocol. He has 25 years of deep tech experience with a focus on AI and blockchain. He co-founded Analog Design Automation Inc. in 1999, which built AI-powered tools for creative circuit design; it was acquired by Synopsys in 2004. He co-founded Solido Design Automation in 2004, using AI to mitigate process variation and help drive Moore's Law; Solido was later acquired by Siemens. He then launched ascribe in 2013 for NFTs on Bitcoin, then Ocean Protocol in 2017 for decentralized data markets for AI. He currently focuses on Ocean Predictoor for crowd-sourced AI prediction feeds.
*A.G.I., A.S.I. & Y.O.U.: Hear Daniel's thoughts on the existence of Artificial General Intelligence, Artificial Super Intelligence, and the odds an AI bot will be sitting at your desk one morning anytime soon. *The Power Problem: The power necessary to run artificial intelligence computers is so enormous that it appears it may only be solved by integrating biological systems similar to the ones God made (and He made them using real intelligence). *Father Knows Best? Who is credited as the "father" of Artificial Intelligence? Of course it's Alan Turing. (And A.I. was all he fathered, being a self-professed atheist and convicted pervert.) Turing is best known as a code breaker at Bletchley Park in England who, along with at least 10,000 nearly forgotten others, and aided by the brave capture of code books and Enigma machines from German U-boats in combat, helped the Allies defeat the Axis in WWII. But does either Alan, or A.I., actually know best? *First Principles (& Teachers): The Bible warns those teaching AI how to "think": "...be not many of you teachers, knowing that we shall receive the greater condemnation." *Worm GPT & AI Warfare: From silly poetry to blowing up the world, AI has a little something for everyone... *AI, the FBI, Elections, Deep Fakes Etc.: Hear all about the potential future of AI systems, democracy, and how we might bring back so much of the past!
This episode should be of special interest to anyone who is human. There is only one topic for this episode. That topic is A.I. and A.S.I.: Artificial Intelligence and Artificial Super Intelligence. I am not trying to scare you, only to prepare you for what our future holds.
The crew gets to interview TRC's very own Darren McKee, author of the critically acclaimed book, ‘Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World'. We chat about the challenges of writing a first book, some of the key takeaways, and have a few laughs along the way.
This week… OpenAI lays out safety measures for dealing with Artificial Super Intelligence, Google's DeepMind solved a previously impossible math problem & then we made Guy Fieri cartoons with Domo's AI animation software. These are all of equal importance! Plus, Gavin dove into Digi.AI, a new AI “companion” app, ChatGPT turned a Chevy dealership's chatbot into a hilarious nightmare & Google Labs has some incredibly cool new music tools you can play with right now. AND THEN… It's an A4H Interview with Twitch Streamer & Podcaster Gina Darling, whom Kevin got to know well at G4. We talk about AI companionship, get AI to help her buy gifts for her boyfriend's parents and introduce her to AI Gina Darling (surprise!). Oh, and don't forget our AI co-host this week: we're actually visited by AI Santa Claus and his lil head elf Max. Santa tells us about how they're using AI to automate the North Pole but, unfortunately, he forgot to tell Max and the rest of the elves. It's an endless cavalcade of ridiculous and informative AI news, AI tools, and AI entertainment cooked up just for you. Follow us for more AI discussions, AI news updates, and AI tool reviews on X @AIForHumansShow Join our vibrant community on TikTok @aiforhumansshow For more info, visit our website at https://www.aiforhumans.show/ /// Show links /// New Preparedness Team at OpenAI https://openai.com/safety/preparedness Google Deepmind Does New Math https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set/ Finals Dev Talks AI https://www.gamedeveloper.com/audio/embark-studios-ai-let-devs-do-more-with-less-when-making-the-finals GPT-4.5? 
Nah https://twitter.com/AiBreakfast/status/1736392167906574634?s=20 https://x.com/rowancheung/status/1736616840510533830?s=20 Sentient Chevy Bot https://twitter.com/ChrisJBakke/status/1736533308849443121 https://www.autoevolution.com/news/chatgpt-powered-customer-support-at-chevrolet-dealership-hilariously-recommended-tesla-226253.html Google Labs Music FX https://aitestkitchen.withgoogle.com/tools/music-fx Domo AI Discord https://discord.com/invite/domoai Digi.AI AI Companion App https://digi.ai/ Gina Darling @GinaDarlingChannel https://www.twitch.tv/missginadarling The Spill It Podcast: https://www.youtube.com/@ShowBobas
The Biden administration claims it wants to get out in front of the development of artificial intelligence. However, the likely scenario is that AI will leave government regulators in its wake. Original Article: Can Government Regulate Artificial Super Intelligence?
Read the full transcript here. How can we find and expand the limitations of our imaginations, especially with respect to possible futures for humanity? What sorts of existential threats have we not yet even imagined? Why is there a failure of imagination among the general populace about AI safety? How can we make better decisions under uncertainty and avoid decision paralysis? What kinds of tribes have been forming lately within AI fields? What are the differences between alignment and control in AI safety? What do people most commonly misunderstand about AI safety? Why can't we just turn a rogue AI off? What threats from AI are unique in human history? What can the average person do to help mitigate AI risks? What are the best ways to communicate AI risks to the general populace? Darren McKee (MSc, MPA) is the author of the just-released Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. He is a speaker and sits on the Board of Advisors for AIGS Canada, the leading AI safety and governance network in the country. McKee also hosts the international award-winning podcast The Reality Check, a top 0.5% podcast on Listen Notes with over 4.5 million downloads. Learn more about him on his website, darrenmckee.info, or follow him on X / Twitter at @dbcmckee. Staff Spencer Greenberg — Host / Director Josh Castle — Producer Ryan Kessler — Audio Engineer Uri Bram — Factotum WeAmplify — Transcriptionists Miles Kestran — Marketing Music Lee Rosevere Josh Woodward Broke for Free zapsplat.com wowamusic Quiet Music for Tiny Robots Affiliates Clearer Thinking GuidedTrack Mind Ease Positly UpLift [Read more]
Wondering what the heck is going on with AI? Why are some people so concerned? Darren's new beginner-friendly book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World addresses exactly those questions. In an engaging and easy-to-read style, it explores the promise and peril of advanced AI, why it might be a threat, and what we can do about it. No technical or science background required! Available on: Amazon US Amazon Canada and many other Amazon marketplaces as well.
Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating. In this episode, we explore the landscape of Artificial Intelligence. We'll listen in on Sam's conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We'll then be introduced to philosopher Nick Bostrom's “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We'll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We'll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder whether the kinds of systems we're building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.