Podcasts about Artificial general intelligence

Hypothetical human-level or stronger AI

  • 545 podcasts
  • 827 episodes
  • 48m average duration
  • 5 new episodes weekly
  • Latest episode: May 3, 2025

POPULARITY

(Trend chart covering 2017–2024)


Best podcasts about Artificial general intelligence

Show all podcasts related to artificial general intelligence

Latest podcast episodes about Artificial general intelligence

The Ranveer Show हिंदी
2025's Most Important Career Podcast - AI Skills For All Ages | Masters' Union Dr. Nandini Seth, TRS

The Ranveer Show हिंदी

Play Episode Listen Later May 3, 2025 89:51


Scholarship Form Link - https://bit.ly/tallyform_scholarship
Website - https://bit.ly/data_science_ai
Check out BeerBiceps SkillHouse's Designing For Clicks Course - https://bbsh.co.in/ra-yt-vid-dfc
Share your guest suggestions here - https://shorturlbe.lvl.fit/install/3vcgss3v
Follow BeerBiceps SkillHouse on social media:
YouTube: https://www.youtube.com/channel/UC2-Y36TqZ5MH6N1cWpmsBRQ
Instagram: https://www.instagram.com/beerbiceps_skillhouse
Website: https://beerbicepsskillhouse.in
For any other queries, email support@beerbicepsskillhouse.com
In case of any payment-related issues, kindly write to support@tagmango.com
Download the Level Supermind - Mind Performance App here

The Best of the Money Show
Business unusual: Rethinking intelligence: The AGI conundrum

The Best of the Money Show

Play Episode Listen Later Apr 30, 2025 10:56


Motheo Khoaripe speaks to Richard Mulholland, founder of AI agency Too Many Robots and author of Relentless Relevance, about the potential implications of artificial general intelligence and whether humanity's understanding of intelligence is overly narrow.

The Money Show is a podcast hosted by well-known journalist and radio presenter Stephen Grootes. He explores the latest economic trends, business developments, investment opportunities, and personal finance strategies. Each episode features engaging conversations with top newsmakers, industry experts, financial advisors, entrepreneurs, and politicians, offering you thought-provoking insights to navigate the ever-changing financial landscape.

Thank you for listening to The Money Show podcast.
Listen live: The Money Show with Stephen Grootes is broadcast weekdays between 18:00 and 20:00 (SA Time) on 702 and CapeTalk.
There's more from the show at www.themoneyshow.co.za
Subscribe to the Money Show daily and weekly newsletters.
The Money Show is brought to you by Absa.

Follow us on:
702 on Facebook: www.facebook.com/TalkRadio702
702 on TikTok: www.tiktok.com/@talkradio702
702 on Instagram: www.instagram.com/talkradio702
702 on X: www.x.com/Radio702
702 on YouTube: www.youtube.com/@radio702
CapeTalk on Facebook: www.facebook.com/CapeTalk
CapeTalk on TikTok: www.tiktok.com/@capetalk
CapeTalk on Instagram: www.instagram.com/capetalkza
CapeTalk on YouTube: www.youtube.com/@CapeTalk567
CapeTalk on X: www.x.com/CapeTalk

See omnystudio.com/listener for privacy information.

Sad Francisco
End AGI Before It Ends Us with Stop AI

Sad Francisco

Play Episode Listen Later Apr 28, 2025 41:10


AI is coming for our jobs and the environment, and is even starting to stand in for human creativity. Derek Allen, Sam Kirchner, and Varvara Pavlova are part of the newly formed direct action group Stop AI, which is particularly concerned about the existential threat of Artificial General Intelligence and the potential for robots to outsmart humans, which Sam Altman, CEO of OpenAI, says is coming this year.

Stop AI: https://www.stopai.info/
"Lavender: The AI machine directing Israel's bombing spree in Gaza" (Yuval Abraham, +972 Magazine): https://www.972mag.com/lavender-ai-israeli-army-gaza/
Brian Merchant's book "Blood in the Machine: The Origins of the Rebellion Against Big Tech" covers the history of the Luddite movement: https://sfpl.bibliocommons.com/v2/record/S93C5986948
Support us and find links to our past episodes: patreon.com/sadfrancisco

Shepard Ambellas Show
⚠️ Extermination: AGI War Against Humanity Begins in 2026?

Shepard Ambellas Show

Play Episode Listen Later Apr 25, 2025 41:06


Support Grassroots Journalism and Trends Analysis. All links here: https://linktr.ee/shepardambellas

In this explosive episode of the Shepard Ambellas Show, investigative journalist Shepard Ambellas dives deep into the looming threat of Artificial General Intelligence (AGI) and its potential to spark a war against humanity as early as 2026. With experts like Anthropic's CEO Dario Amodei predicting AGI's arrival by 2026-2027, and Google DeepMind's research warning of existential risks, the clock is ticking. Shepard explores how AGI could surpass human intelligence, the military implications of autonomous AI systems, and the chilling possibility of a global conflict driven by misaligned AI goals. Are governments and tech giants prepared, or are we on the brink of a doomsday scenario? Tune in for hard-hitting analysis, exclusive insights, and what you need to know to survive the coming storm. Don't forget to like, subscribe, and hit the notification bell for daily updates! Share your thoughts in the comments: how are the tariffs affecting you?

About the Show: The Shepard Ambellas Show is an electrifying and fast-paced program that features a blend of news and comedy. It has been ranked as high as #66 on US podcast charts, making it one of the most popular shows. You can catch the live broadcast daily at 7 pm Eastern/6 pm Central on the Shepard Ambellas YouTube channel, where Shep and other listeners are waiting to engage with you. If you miss the live show, don't worry: you can always catch up on Apple Podcasts or Spotify. So what are you waiting for? Tune in now and experience the excitement for yourself!

Shepard Ambellas is a renowned investigative journalist, trends analyst, filmmaker, and founder of the famed Intellihub news website. With over 6,000 published reports and appearances on platforms like the Travel Channel's America Declassified, Coast to Coast AM with George Noory, and The Alex Jones Show, Ambellas has established himself as a fearless truth-seeker. His critically acclaimed documentary Shackled to Silence exposed hidden agendas behind the global pandemic, cementing his reputation as a bold voice against the status quo.

Eye On A.I.
#250 Pedro Domingos on the Real Path to AGI

Eye On A.I.

Play Episode Listen Later Apr 24, 2025 68:12


This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

Can AI ever reach AGI? Pedro Domingos explains the missing link. In this episode of Eye on AI, renowned computer scientist and author of The Master Algorithm, Pedro Domingos, breaks down what's still missing in our race toward Artificial General Intelligence (AGI), and why the path forward requires a radical unification of AI's five foundational paradigms: Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers.

Topics covered:
  • Why deep learning alone won't achieve AGI
  • How reasoning by analogy could unlock true machine creativity
  • The role of evolutionary algorithms in building intelligent systems
  • Why transformers like GPT-4 are impressive but incomplete
  • The danger of hype from tech leaders vs. the real science behind AGI
  • What the Master Algorithm truly means, and why we haven't found it yet

Pedro argues that creativity is easy, reliability is hard, and that reasoning by analogy, not just scaling LLMs, may be the key to Einstein-level breakthroughs in AI. Whether you're an AI researcher, machine learning engineer, or just curious about the future of artificial intelligence, this is one of the most important conversations on how to actually reach AGI.

In Clear Focus
In Clear Focus: Simulating AI Futures with Scott Smith and Susan Cox-Smith

In Clear Focus

Play Episode Listen Later Apr 22, 2025 28:17


IN CLEAR FOCUS: Strategic foresight consultants Scott Smith and Susan Cox-Smith discuss Foom, an immersive strategic simulation for exploring AI futures. Unlike static scenario planning, Foom creates environments where teams experience real-time consequences of decisions. As participants navigate progress toward Artificial General Intelligence, this "escape room for strategy" reveals insights about decision-making, coalition-building, and managing uncertainty in emerging technology landscapes.

Drunk Real Estate
92. What AI, China, and Tariffs Mean for Real Estate in 2025 w Guest Robb Almy

Drunk Real Estate

Play Episode Listen Later Apr 10, 2025 98:19 Transcription Available


AI Is Changing Real Estate—But Who Wins? In Episode 92 of Drunk Real Estate, we're joined by AI expert and investor Robb Almy to break down how artificial intelligence is already reshaping real estate. From automated investing to property analysis, AI is creating massive opportunities—and big risks. Are small investors being left behind? Is AI deflationary or the cure for labor shortages? And how close are we really to Artificial General Intelligence (AGI)? We cover it all—plus a wild economic ride that includes tariffs, rare earth minerals, China, and a shaky market reaction.

Science Friday
What Artificial General Intelligence Could Mean For Our Future

Science Friday

Play Episode Listen Later Apr 9, 2025 29:14


What happens when AI moves beyond convincing chatbots and custom image generators to something that matches—or outperforms—humans?

Each week, tech companies trumpet yet another advance in artificial intelligence, from better chat services to image and video generators that spend less time in the uncanny valley. But the holy grail for AI companies is known as AGI, or artificial general intelligence: a technology that can meet or outperform human capabilities on any number of tasks, not just chat or images.

The roadmap and schedule for getting to AGI depend on who you talk to and their precise definition of AGI. Some say it's just around the corner, while other experts point a few years down the road. In fact, it's not entirely clear whether current approaches to AI tech will be the ones that yield a true artificial general intelligence.

Hosts Ira Flatow and Flora Lichtman talk with Will Douglas Heaven, who reports on AI for MIT Technology Review, and Dr. Rumman Chowdhury, who specializes in ethical, explainable, and transparent AI, about the path to AGI and its potential impacts on society.

Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

IT Privacy and Security Weekly update.
EP 237.5 Deep Dive: Artificial General Intelligence and The IT Privacy and Security Weekly Update for the Week Ending April 8th, 2025

IT Privacy and Security Weekly update.

Play Episode Listen Later Apr 9, 2025 15:39


1. Concerns About AGI Development
DeepMind's 108-page report outlines four major risks of Artificial General Intelligence (AGI):
  • Misuse: AGI used maliciously (e.g., creating viruses).
  • Misalignment: AGI acting contrary to intended goals.
  • Mistakes: Errors causing unintended harm, especially in high-stakes sectors like defense.
  • Structural Risks: Long-term impacts on trust, power, and truth in society.
While safety measures are urged, full control of AGI remains uncertain.

2. Improving Machine Learning Security
The open-source community is adopting model signing (via Sigstore), applying digital signatures to AI models. This ensures a model's authenticity and integrity, helping prevent the use of tampered or untrusted code in AI systems.

3. Risks from AI Coding Assistants
A newly identified threat, the "Rules File Backdoor," allows attackers to embed malicious instructions in configuration files used by AI coding assistants (like GitHub Copilot or Cursor). This can lead to AI-generated code with hidden vulnerabilities, increasing risk through shared or open-source repos.

4. Italy's Controversial Piracy Shield
Piracy Shield, Italy's system for blocking pirated content, has mistakenly blacklisted legitimate services like Google Drive. Critics highlight the lack of transparency, violations of net neutrality and digital rights, and risks of censorship. Despite the backlash, the system is being expanded, raising further concerns.

5. EU's Push on Data Access and Encryption
The EU's ProtectEU strategy includes strengthening Europol into a more FBI-like agency and proposing roadmaps for law enforcement access to encrypted data. This indicates a potential push toward backdoor access, reigniting debates on privacy vs. security.

6. Cyberattacks on Australian Pension Funds
Coordinated cyberattacks have compromised over 20,000 accounts across Australian retirement funds, with some user savings stolen. The incidents expose vulnerabilities in financial infrastructure, prompting a government initiative to bolster sector-wide cybersecurity.

7. Lessons from Oracle's Security Breaches
Oracle reported two separate breaches in a short span; the latest involved theft of outdated login credentials. These incidents reveal persistent challenges in securing large tech platforms and highlight the need for ongoing security improvements and scrutiny of legacy systems.

8. Closure of OpenSNP Genetic Database
OpenSNP is shutting down after 14 years and deleting all user data, due to rising concerns over misuse of genetic data, especially amid growing political threats from authoritarian regimes. The founder emphasized protecting vulnerable populations and reweighed the risks of continued data availability against its research value.

IT Privacy and Security Weekly update.
Artificial General Intelligence and The IT Privacy and Security Weekly Update for the Week Ending April 8th, 2025

IT Privacy and Security Weekly update.

Play Episode Listen Later Apr 8, 2025 18:30


EP 237. DeepMind just released a 108-page manual on not getting wiped out by our own invention, highlighting the fact that planning for an AI apocalypse could now be a core business line function.

Sigstore machine learning model signing: AI models are finally getting digital signatures, because "mystery code from the internet" just wasn't a scalable trust strategy.

Turns out your AI programmer can be tricked into writing malware, helping us understand that "copilot" isn't necessarily synonymous with "competent".

Italy's anti-piracy tool is blocking legit services like it's playing "whack-a-mole" blindfolded, but in this case the moles are cloud storage, like your Google Drive.

The EU wants Europol to act like the FBI, because privacy for our citizens is important, except when we want to read their encrypted messages.

Hackers hit Aussie retirement funds, proving the only thing scarier than blowing through all your retirement money is someone else blowing through it all for you.

Oracle's been hacked again, because who doesn't love a sequel with worse security and a bigger cleanup bill?

OpenSNP just quit the internet after realizing DNA + authoritarian vibes = one dystopia too many.

This week is a wild ride, so saddle up and hold on tight!

BlockHash: Exploring the Blockchain
Ep. 504 Leslie | Decentralized Network Scaling AGI with Qubic

BlockHash: Exploring the Blockchain

Play Episode Listen Later Apr 7, 2025 34:39


For episode 504, CEO Leslie joins Brandon Zemp to talk about Qubic, a decentralized network where scalability meets AGI, built from the ground up to surpass traditional blockchains. Qubic enables instant finality, feeless transactions, and the fastest smart contracts. Built on Useful Proof of Work (UPoW), it's the first to integrate artificial neural networks for the future of Artificial General Intelligence. Qubic is fully open source from day one: every development, every improvement, and every innovation is transparent, accessible, and community-driven.

How to Pronounce - VOA Learning English
How to Pronounce: Difficult terms - Artificial General Intelligence - April 06, 2025

How to Pronounce - VOA Learning English

Play Episode Listen Later Apr 6, 2025 2:00


The Marketing AI Show
#141: Road to AGI (and Beyond) #1 — The AI Timeline is Accelerating

The Marketing AI Show

Play Episode Listen Later Mar 27, 2025 106:27


The future of AI is arriving faster than most are ready for. In this kickoff episode of the Road to AGI (and Beyond) series, Paul Roetzer shares why Artificial General Intelligence (AGI) may be only a few years away, why the definition of AGI itself is a moving target, and how leaders can prepare for profound disruption sooner than they think.

Access the show notes and show links here.

Timestamps:
00:01:08 — Origins of the Series
00:11:17 — The Pursuit of AGI
00:14:51 — What is AGI?
00:22:15 — What's Beyond AGI? Artificial Superintelligence
00:32:20 — Setting the Stage for AGI and Beyond
00:40:54 — The AI Timeline v2
00:51:25 — LLM Advancements (2025)
00:59:26 — Multimodal AI Explosion (2025 - 2026)
01:03:53 — AI Agents Explosion (2025 - 2027)
01:10:46 — Robotics Explosion (2026 - 2030)
01:14:50 — AGI Emergence (2027 - 2030)
01:17:56 — What's Changed?
01:21:10 — What Accelerates AI Progress?
01:24:53 — What Slows AI Progress?
01:31:06 — How Can You Prepare?
01:38:49 — What's Next for the Series?
01:40:17 — Closing Thoughts

Visit our website. Receive our weekly newsletter. Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook. Looking for content and resources? Register for a free webinar, come to our next Marketing AI Conference, or enroll in AI Academy.

Retirement Road Map®
076: How The AI Revolution Is Transforming Business, Investing and Daily Life with BlackRock's Jay Jacobs

Retirement Road Map®

Play Episode Listen Later Mar 26, 2025 42:10


Artificial Intelligence (AI) is no longer a futuristic concept—it's here, and it's transforming industries, investments, and daily life. Despite the advantages in processing and productivity, many still have concerns about using AI in their everyday lives at home and in business. This begs the question: should AI be something we fear, or is it just new technology that we should embrace?

Here to help answer those questions is Jay Jacobs. Jay is BlackRock's Head of Thematic and Active ETFs, where he oversees the overall product strategy, thought leadership, and client engagement for the firm's thematic and active ETF businesses. We're thrilled to tap into his expertise to break down the evolution of AI and LLMs (Large Language Models), how it's impacting the investment landscape, and what the future looks like in the AI and digital world.

In our conversation, we discussed the rapid development of artificial intelligence and its potential to revolutionize sectors like finance, healthcare, and even customer service. You'll also hear Jay describe how AI has evolved into a race toward Artificial General Intelligence (AGI), its ability to increase our productivity on a personal level, and whether the fears surrounding AI's risks are warranted.

In this podcast interview, you'll learn:
  • How AI has evolved from Clippy in Microsoft Word to ChatGPT and other LLMs.
  • Why research and investing in AI is accelerating and what's fueling its rapid growth.
  • Why access to data, computing power, and infrastructure are the new competitive advantages in the AI arms race.
  • How businesses are leveraging AI to boost efficiency and customer service.
  • The race to AGI (Artificial General Intelligence): what it means and how close we really are.
  • How synthetic data and virtual environments are shaping the next frontier of AI development.

Want the Full Show Notes?
To get access to the full show notes, including audio, transcripts, and links to all the resources mentioned, visit SHPfinancial.com/podcast

Connect With Us on Social: Facebook, LinkedIn, YouTube

The Great Simplification with Nate Hagens
The Mad Scramble for Power: Global Superpowers' Strategies for Energy, Economics, and War | Reality Roundtable #16

The Great Simplification with Nate Hagens

Play Episode Listen Later Mar 23, 2025 89:51


The rapidly evolving geopolitical landscape of recent years can be hard to follow. With economic conflicts between global superpowers and violent clashes across multiple continents, today's events can seem starkly different from the trajectory of past decades. So, how can a deeper understanding of energy and resource security help us make sense of these chaotic trends?

In this discussion, Nate is joined by Art Berman, Michael Every, and Izabella Kaminska for a broad exploration of the complex relationship between energy, geopolitics, and economic strategy. Together, they provide valuable insights into the consequences of deindustrialization, the impact of military spending, and the urgent need to reassess strategies as resources dwindle and geopolitical tensions rise. How is the use of fear as a political tool intertwined with the challenges of trust and disinformation in navigating turbulent international conflicts? What role is the race for Artificial General Intelligence and Quantum Computing playing in these rapidly changing situations? And ultimately, what should we, as citizens, be expecting from our leaders on the global stage as the struggle for power in the 21st century continues to intensify?

(Conversation recorded on March 10th, 2025)

About the Guests:

Arthur E. Berman is a petroleum geologist with over 40 years of oil and gas industry experience. He is an expert on U.S. shale plays and is currently consulting for several E&P companies and capital groups in the energy sector.

Michael Every is Global Strategist at Rabobank Singapore, analyzing major developments and key thematic trends, especially the intersection of geopolitics, economics, and markets. He is frequently published and quoted in financial media, is a regular conference keynote speaker, and was invited to present to the 2022 G-20 on the current global crisis.

Izabella Kaminska is the founding editor of The Blind Spot, a finance and business news website. She is also senior finance editor at POLITICO. Izabella was previously editor of the Financial Times' Alphaville blog and an associate producer at CNBC.

Show Notes and More
Watch this video episode on YouTube.
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.
Support The Institute for the Study of Energy and Our Future.
Join our Substack newsletter.
Join our Discord channel and connect with other listeners.

ApartmentHacker Podcast
1,966 - AI, Multifamily, and the Future of Intelligence—A Must-Read Book!

ApartmentHacker Podcast

Play Episode Listen Later Mar 16, 2025 5:16


Today, we're talking artificial intelligence, multifamily efficiency, and the future of thinking machines. Mike Brewer shares insights from A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins—a book that explores how AI today is far from true human intelligence. Right now, AI solves specific, pointy problems—automating tasks, streamlining workflows—but we're still miles away from Artificial General Intelligence (AGI) that truly thinks like a human.

The City Club of Cleveland Podcast
2025 High School Debate Championship

The City Club of Cleveland Podcast

Play Episode Listen Later Mar 14, 2025 60:00


For more than two decades, The City Club of Cleveland has hosted the annual High School Debate Championship.

Every year, the top two area high school debaters square off in a classic "Lincoln-Douglas" style debate at a Friday forum. This allows the debaters to compete not only for the judges and audience in the room, but also for our radio and television audiences.

The finalists will debate the topic Resolved: The development of Artificial General Intelligence is immoral.

On behalf of BakerHostetler, we are honored to support this annual tradition in memory of Patrick Jordan, a lawyer, fierce protector of democracy and free speech, and a championship debater himself. You can learn more about the life and legacy of Pat Jordan at the 2022 High School Debate Championship here, or read the transcript here.

AI DAILY: Breaking News in AI
IS YOUR AI CHATBOT A SNITCH?

AI DAILY: Breaking News in AI

Play Episode Listen Later Mar 14, 2025 3:20


Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

Is Your AI Chatbot a Snitch? The Risk of 'Thought Crimes'
Sharing your innermost thoughts with AI chatbots might seem harmless, but what if they report your "thought crimes"? Experts warn that AI could be programmed to monitor and flag users' private musings, blurring the line between privacy and surveillance. It's a sci-fi scenario edging closer to reality, raising concerns about freedom of thought in the digital age.

What Happens When AIs Talk to Each Other? It Might Not Be Good
AI talking to AI sounds cool—until it isn't. Experts warn that when AIs communicate without human oversight, they could develop their own "language," make weird decisions, or even go off track from what we intended. Keeping AI in check means setting limits before machines start making choices we don't understand.

Why Everyone's Talking About AGI – And Should We Be Worried?
Artificial General Intelligence (AGI) is AI that thinks and learns like humans, and it's closer than you might think. Some experts say AGI could be the biggest tech leap ever, while others worry it could spiral out of control. Are we on the verge of a sci-fi future, or just hyping up AI too much?

AI Is Taking Over Jobs—But It's Not All Bad
AI is stepping into roles traditionally held by humans, automating tasks like customer service and data analysis. While this shift raises concerns about job loss, it also opens doors for humans to focus on creative and strategic work that AI can't replicate. Embracing AI could lead to more fulfilling careers and innovative opportunities.

Elon Musk Wants AI to Run the U.S. Gov—Experts Say, 'Uh, No'
Elon Musk thinks AI could do a better job running the U.S. government than humans. Sounds wild, right? Experts say it's a terrible idea, warning that AI lacks the judgment, ethics, and accountability needed for leadership. Cool in theory, but democracy isn't a coding problem.

AI Can Now Handle 43% of Jobs—Should We Be Worried?
AI's capabilities have expanded rapidly, with recent studies indicating that 43% of current jobs can be performed by AI systems. Tasks like data analysis, content creation, and even coding are increasingly automated. While this boosts efficiency, it raises concerns about job displacement and the need for humans to adapt by developing skills that complement AI technologies.

Techish
Google's 60-Hour Workweek, Kickstarter Fundraising, Distribution Is The Future

Techish

Play Episode Listen Later Mar 11, 2025 25:50


Michael and Abadesi are back for this week's Techish! Abadesi shares the highs and lows of writing and producing her first play, including crowdfunding on Kickstarter. They also chat about Google's push for 60-hour workweeks in the race to Artificial General Intelligence (AGI), why entrepreneurship is now all about distribution, and how social media is shaping the publishing industry.

Chapters
00:00 Fundraising on Kickstarter
03:48 Google Pushes 60-Hour Workweeks In AI Race
09:51 Corporate Bosses Hate Their Home Lives?
15:48 Vibe Coding And Why Audience Wins in Business
19:53 Social Media Has Changed The Publishing Industry

Extra reading & resources
Google's Sergey Brin Urges Workers to the Office 'at Least' Every Weekday [The New York Times]
This Game Created by AI 'Vibe Coding' Makes $50,000 a Month. Yours Probably Won't [404 Media]
https://fly.pieter.com/
How #BookTok is changing literature [The New Statesman]

Disclaimer: The information provided in this podcast episode represents the personal opinions and experiences of the presenters and is for informational and entertainment purposes only. It should not be considered professional advice. Neither host nor guests can be held responsible for any direct or incidental loss incurred by applying any of the information. Always do your own research or seek independent advice before making any decisions.

Watch us on YouTube: https://www.youtube.com/@techishpod/
Support Techish at https://www.patreon.com/techish
Advertise on Techish: https://goo.gl/forms/MY0F79gkRG6Jp8dJ2

Stay in touch with the hashtag #Techish
https://www.instagram.com/techishpod/
https://www.instagram.com/abadesi/
https://www.instagram.com/michaelberhane_/
https://www.instagram.com/hustlecrewlive/
https://www.instagram.com/pocintech/
Email us at techishpod@gmail.com

Discover Daily by Perplexity
OpenAI's $20,000 AI Agent, Nauru Sells Citizenship for Relocation, and Eric Schmidt Opposes AGI Manhattan Project

Discover Daily by Perplexity

Play Episode Listen Later Mar 10, 2025 8:09 Transcription Available


We're experimenting and would love to hear from you!

In this episode of 'Discover Daily' by Perplexity, we delve into the latest developments in tech and geopolitics. OpenAI is set to revolutionize its business model with the introduction of advanced AI agents, offering monthly subscription plans ranging from $2,000 to $20,000. These agents are designed to perform complex tasks autonomously, leveraging advanced language models and decision-making algorithms. This move is supported by a significant $3 billion investment from SoftBank, highlighting the potential for these agents to contribute significantly to OpenAI's future revenue.

The Pacific island nation of Nauru is also making headlines with its controversial 'golden passport' scheme. For $105,000, individuals can gain citizenship and visa-free access to 89 countries. This initiative aims to fund Nauru's climate change mitigation efforts, as the island faces existential threats from rising sea levels. However, the program raises ethical concerns about criminal exploitation, vetting issues, and the commodification of national identity. As Nauru navigates these challenges, it will be crucial to monitor the program's effectiveness in providing necessary funds for climate adaptation without compromising national security or ethical standards.

Our main story focuses on former Google CEO Eric Schmidt's opposition to a U.S. government-led 'Manhattan Project' for developing Artificial General Intelligence (AGI). Schmidt argues that such a project could escalate international tensions and trigger a dangerous AI arms race, particularly with China. Instead, he advocates for a more cautious approach, emphasizing defensive strategies and international cooperation in AI advancement. This stance reflects a growing concern about the risks of unchecked superintelligence development and highlights the need for policymakers and tech leaders to prioritize AI safety and collaboration.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/openai-s-20000-ai-agent-nvz8rzw7TZ.ECGL9usO2YQ
https://www.perplexity.ai/page/nauru-sells-citizenship-for-re-mWT.fYg_Su.C7FVaMGqCfQ
https://www.perplexity.ai/page/eric-schmidt-opposes-agi-manha-pymGB79nR.6rRtLvcqONIA

Introducing Perplexity Deep Research: https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn

Soundside
Can the artificial really be 'intelligent'? This researcher wants us to think bigger

Soundside

Play Episode Listen Later Mar 10, 2025 31:50


Artificial intelligence is starting to underpin everything we do, whether we like it or not. And at the highest levels, companies like Google and OpenAI are saying their AI is on the verge of crossing a humanlike threshold that we've only seen in science fiction. This is prompting all kinds of conversations about sentience and the possible dangers of a superintelligent computer system.

But the definition of "Artificial General Intelligence," or AGI, is controversial. And many researchers aren't even sure today's programs have our common understanding of "intelligence" at all. They argue ChatGPT isn't really thinking; it's just really good at predicting the next sequence in a pattern (and copying someone else along the way).

So what makes something intelligent? Or alive, for that matter? For Google's Blaise Agüera y Arcas, the most interesting piece of examining AI breakthroughs has been how they connect to the evolution of life on Earth. In his new book, What Is Life?, he argues for a broadened definition of "intelligence" that includes things like single-celled organisms and even basic tools. And he says humans' development of technology, most recently AI, is part of a long history of symbiotic relationships that have pushed our evolution forward.

Guests: Blaise Agüera y Arcas, Vice President and CTO of Technology and Society at Google, where he leads a team called "Paradigms of Intelligence" researching the intersection of AI, biology, and philosophy. Author of What Is Life?, the first part of a broader work on intelligence at large.

Related Links: What is Intelligence? | Antikythera

Thank you to the supporters of KUOW; you help make this show possible! If you want to help out, go to kuow.org/donate/soundsidenotes

Soundside is a production of KUOW in Seattle, a proud member of the NPR Network. See omnystudio.com/listener for privacy information.

China Daily Podcast
英语新闻丨高校开设AI课程以满足市场需求 (English News | Universities open AI courses to meet market demand)

China Daily Podcast

Play Episode Listen Later Mar 10, 2025 4:04


Chinese universities are accelerating efforts to integrate education with artificial intelligence, with more AI colleges opening to cultivate interdisciplinary talent and more general AI courses and textbooks introduced.
中国高校正加速推进教育与人工智能融合，通过成立更多的人工智能学院来培养复合型人才，并引入更多的人工智能通识课程和教材。

Tsinghua University, one of China's top schools, recently announced it will increase its undergraduate admissions by about 150 students this year and establish a new undergraduate college for general AI education. The students will enroll in the new program, which aims to integrate AI across multiple disciplines.
近日，清华大学作为中国顶尖学府之一，宣布2025年将增加约150名本科生招生名额，并成立新的本科书院发展人工智能通识教育。新增本科生将进入新成立的书院学习。该项目旨在将人工智能与多学科交叉融合。

The initiative pools academic resources from various fields, seeking to develop students with a solid foundation in AI, high proficiency in AI technologies and strong innovative capabilities, the university said. The move is part of Tsinghua's efforts to advance AI-related professional training and support China's push for high-level scientific and technological self-reliance and self-strengthening, according to Xinhua News Agency.
清华大学表示，这一项目汇聚各领域的学术资源，将培养具有深厚人工智能素养、熟练掌握人工智能技术、具备突出创新能力的学生。据新华社报道，清华正深入推进人工智能相关专业人才培养，以期为中国高水平科技自立自强提供有力支撑，该项目就是其中的一部分。

As AI rapidly evolves, reshaping education and driving socioeconomic development, the need for individuals with comprehensive AI knowledge and skills is becoming increasingly urgent.
人工智能的快速发展正在重塑教育、推动社会经济发展，对具备综合人工智能知识技能的人才的需求越来越迫切。

Wang Xuenan, deputy director at the Digital Education Research Institute of the China National Academy of Educational Sciences, told China Central Television the number of students majoring in AI was estimated at more than 40,000 last year, yet "the number still falls far short of the needs of the industry."
中国教育科学研究院数字教育研究所副所长王学男在接受中央电视台采访时表示，2024年人工智能专业的学生大概是4万多人，但“这一数字仍远远不能满足行业的需求”。

Market consultancy McKinsey & Company estimates that China will need 6 million professionals with proficient AI knowledge by 2030.
市场咨询公司麦肯锡估计，到2030年，中国对人工智能专业人才的需求预计将达到600万。

In November 2023, a talent training initiative on collaborative research in general AI was jointly launched by the Beijing Institute for General Artificial Intelligence, Peking University, Shanghai Jiao Tong University and 13 other leading universities. Zhu Songchun, director of the Beijing institute and dean of the School of Intelligent Science and Technology at Peking University, told Guangming Daily that the plan will leverage the resources of these universities to create a training system that seamlessly connects undergraduate and doctoral education.
2023年11月，北京通用人工智能研究院、北京大学、上海交通大学及其他13所顶尖高校共同启动“通用人工智能协同攻关合作体人才培养计划”。北京通用人工智能研究院、北京大学智能学院院长朱松纯告诉《光明日报》，该计划将利用这些高校的资源，打造通用人工智能本博贯通的培养体系。

In September last year, Nankai University and Tianjin University introduced a general AI course through a massive open online course, or MOOC, targeting more than 100,000 undergraduates in Tianjin. The course covers AI's basic principles and history while exploring cutting-edge generative AI models and their applications in healthcare, intelligent manufacturing and autonomous driving, according to Xu Zhen, director of the department of higher education at the Tianjin Municipal Education Commission.
2024年9月，南开大学和天津大学通过大型开放在线课程平台慕课，推出了一门人工智能通识课程，面向天津10万余名本科生。天津市教育委员会高等教育处处长徐震表示，该课程涵盖人工智能的基本原理和发展历程，同时探讨生成式人工智能模型等前沿技术及其在医疗、智能制造、自动驾驶等领域的应用。

Zhejiang University announced in March that it will lead an upgrade of the "AI plus X" micro program in collaboration with Fudan University, Nanjing University, Shanghai Jiao Tong University and the University of Science and Technology of China. The country's first micro program integrating AI with other disciplines, it aims to bridge technology with fields such as humanities, social sciences, agriculture, medicine and engineering.
3月，浙江大学宣布将联合复旦大学、南京大学、上海交通大学、中国科学技术大学，牵头升级“AI+X”微专业。这是全国首个将人工智能与其他学科相结合的微专业，旨在搭建技术与人文、社科、农业、医学、工程等领域的桥梁。

interdisciplinary adj. 学科间的，跨学科的
enroll v. (使)加入；招(生)
seamlessly adv. 顺利地；连续地
collaboration n. 合作；协作

Leveraging AI
170 | AGI and ASI are coming and we are NOT READY, Is GPT 4.5 better than 4o? And more AI news for the week ending on March 8th, 2025

Leveraging AI

Play Episode Listen Later Mar 8, 2025 41:13 Transcription Available


Are we truly on the brink of Artificial General Intelligence (AGI)? Or are we underestimating how unprepared we are for what's coming? In this episode of Leveraging AI, we dive into the latest AI breakthroughs, the geopolitical arms race for AI supremacy, and the government's struggle to keep up. With insights from top AI policy advisors and newly released research from Eric Schmidt, Dan Hendrycks, and Alexandr Wang, we break down what's at stake and why business leaders must pay attention now.

Key Takeaways from This Episode:
- The government knows AGI is coming—but is it doing enough to prepare?
- China vs. the U.S.: The high-stakes battle for AI dominance and what it means for global power.
- Superintelligence strategy: How AI safety experts see the future (and why it sounds like a sci-fi thriller).
- AI's impact on business & jobs: Will automation lead to mass displacement or massive opportunities?
- AI agents, humanoid robots & the changing internet: Why businesses need to rethink their digital strategies NOW.

Links & Resources Mentioned:

Eye On A.I.
#241 Patrick M. Pilarski: The Alberta Plan's Roadmap to AI and AGI

Eye On A.I.

Play Episode Listen Later Mar 7, 2025 61:44


This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.

Can AI learn like humans? In this episode, Patrick Pilarski, Canada CIFAR AI Chair and professor at the University of Alberta, breaks down The Alberta Plan—a bold roadmap for achieving Artificial General Intelligence (AGI) through reinforcement learning and real-time experience-based AI. Unlike large pre-trained models that rely on massive datasets, The Alberta Plan champions continual learning, where AI evolves from raw sensory experience, much like a child learning through trial and error. Could this be the key to unlocking true intelligence? Pilarski also shares insights from his groundbreaking work in bionic medicine, where AI-powered prosthetics are transforming human-machine interaction. From neuroprostheses to reinforcement learning-driven robotics, this conversation explores how AI can enhance—not just replace—human intelligence.

What You'll Learn in This Episode:
- Why reinforcement learning is a better path to AGI than pre-trained models
- The four core principles of The Alberta Plan and why they matter
- How AI-driven bionic prosthetics are revolutionizing human-machine integration
- The battle between reinforcement learning and traditional control systems in robotics
- Why continual learning is critical for AI to avoid catastrophic forgetting
- How reinforcement learning is already powering real-world breakthroughs in plasma control, industrial automation, and beyond

The future of AI isn't just about more data—it's about AI that thinks, adapts, and learns from experience. If you're curious about the next frontier of AI, the rise of reinforcement learning, and the quest for true intelligence, this episode is a must-watch. Subscribe for more AI deep dives!
(00:00) The Alberta Plan: A Roadmap to AGI
(02:22) Introducing Patrick Pilarski
(05:49) Breaking Down The Alberta Plan's Core Principles
(07:46) The Role of Experience-Based Learning in AI
(08:40) Reinforcement Learning vs. Pre-Trained Models
(12:45) The Relationship Between AI, the Environment, and Learning
(16:23) The Power of Reward in AI Decision-Making
(18:26) Continual Learning & Avoiding Catastrophic Forgetting
(21:57) AI in the Real World: Applications in Fusion, Data Centers & Robotics
(27:56) AI Learning Like Humans: The Role of Predictive Models
(31:24) Can AI Learn Without Massive Pre-Trained Models?
(35:19) Control Theory vs. Reinforcement Learning in Robotics
(40:16) The Future of Continual Learning in AI
(44:33) Reinforcement Learning in Prosthetics: AI & Human Interaction
(50:47) The End Goal of The Alberta Plan
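The agent-environment loop this episode describes — an agent learning from raw experience via reward, rather than from a pre-trained dataset — can be sketched with a minimal tabular Q-learning example. This is a generic reinforcement-learning illustration, not the Alberta Plan's actual architecture; all constants and the toy environment are invented for the demo:

```python
import random

# Minimal tabular Q-learning sketch: an agent on a 5-cell line learns,
# purely from trial-and-error experience, to walk right toward a reward
# at the far end. Illustrative only.
N_STATES, ACTIONS = 5, [-1, +1]   # actions: move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: bounded move, reward 1 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                  # 500 episodes of experience
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(greedy)  # learned policy: move right from every non-goal state
```

The contrast with large pre-trained models is the point: nothing here is learned from a dataset; every value estimate comes from the agent's own interaction with its environment.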

TRUNEWS with Rick Wiles
MWC2025 Prediction: Your AI-Phone Will be Inside You by 2030

TRUNEWS with Rick Wiles

Play Episode Listen Later Mar 4, 2025 63:35


Join us from Mobile World Congress 2025 in Barcelona, where the TruNews team dives into AI, human integration, and the rapid shift in global technology. Ray Kurzweil predicts Artificial General Intelligence (AGI) by 2029 and full AI-human integration by the 2030s—but what does this mean for you?

Rick Wiles, Doc Burkhart, Paul Benson, Erick Rodriguez. Airdate 3/4/25

Join the leading community for Conservative Christians! https://www.FaithandValues.com
You can partner with us by visiting TruNews.com, calling 1-800-576-2116, or by mail at PO Box 399 Vero Beach, FL 32961.
Get high-quality emergency preparedness food today from American Reserves! https://www.AmericanReserves.com
It's the Final Day! The day Jesus Christ bursts into our dimension of time, space, and matter. Now available in eBook and audio formats! Order Final Day from Amazon today! https://www.amazon.com/Final-Day-Characteristics-Second-Coming/dp/0578260816/
Apple users, you can download the audio version on Apple Books! https://books.apple.com/us/audiobook/final-day-10-characteristics-of-the-second-coming/id1687129858
Purchase the 4-part DVD set or start streaming Sacrificing Liberty today. https://www.sacrificingliberty.com/watch
The Fauci Elf is a hilarious gift guaranteed to make your friends laugh! Order yours today! https://tru.news/faucielf

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Tariffs In Effect, Tesla Tops Depreciation List, Google Pushes AI Workforce

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier

Play Episode Listen Later Mar 4, 2025 14:21 Transcription Available


Shoot us a Text.

Today is our fearless leader Paul J Daly's birthday! So we gave him the morning off and tapped in producer Nathan Southwick. We're talking all about the new Canada and Mexico tariffs that put pressure on the automotive supply chains, plus the top depreciating cars and how Google is pushing to achieve artificial general intelligence.

Show Notes with links:

The U.S. has enacted 25% tariffs on imports from Canada and Mexico, throwing the highly integrated North American production network into turmoil.
- The tariffs, effective today, March 4, apply to all imports except Canadian energy products, which face a lower 10% duty. Canada and Mexico both responded with their own tariffs.
- Industry experts predict vehicle prices could rise between $4,000 and $10,000, with Ford CEO Jim Farley cautioning that prolonged tariffs could "blow a hole in the U.S. industry that we have never seen."
- Flavio Volpe, president of the Automotive Parts Manufacturers' Association, said that there is potential for U.S. and Canadian auto production to revert to "2020 pandemic-level idling and temporary layoffs within the week."
- Key auto models at risk include the Toyota RAV4, Ford Mustang Mach-E, Chevrolet Equinox and Blazer, and the Honda Civic and CR-V, while European automakers with manufacturing in Mexico, including Volkswagen, Stellantis, and BMW, saw their stocks drop sharply.
- The STOXX Europe 600 Automobiles and Parts index fell 3.8%, and Continental AG, a major supplier, saw an 8.4% drop in shares.

Used Tesla Model 3 and Model Y vehicles saw the steepest depreciation of any cars in 2024, according to Fast Company's analysis of CarGurus data.
- Model Y prices dropped 25.5%, while Model 3 prices fell 25% from January 2024 to January 2025.
- Comparatively, the Nissan Maxima only dropped 5.2%, and the Ford Mustang declined 5%.
- Full Top 10: Tesla Model Y, Tesla Model 3, Land Rover Range Rover, Jeep Wrangler 4xe, Chevrolet Express Cargo, Ford Transit Connect, RAM ProMaster, Land Rover Range Rover Sport, Chevrolet Bolt EV, and Ford Expedition, all with over 19% depreciation.

Google co-founder Sergey Brin is back and pushing Google DeepMind (GDM) teams to accelerate their progress toward Artificial General Intelligence (AGI). In a newly released memo, Brin outlines the urgency and expectations for Google's AI teams.
- Brin emphasizes the need for 60-hour work weeks, daily office attendance, and faster execution by prioritizing simple solutions, code efficiency, and small-scale experiments for faster iteration.
- He calls for a shift away from "nanny products" and urges teams to "trust our users" more.
- Brin, who has no formal role at Google beyond a board seat, stepped in over the head of Google DeepMind, Demis Hassabis, signaling the urgency of the AGI race.
- "I think we have all the ingredients to w…"

Hosts: Paul J Daly and Kyle Mountsier
Get the Daily Push Back email at https://www.asotu.com/
JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/
Read our most recent email at: https://www.asotu.com/media/push-back-email
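The depreciation figures quoted in this entry are just relative price changes over the one-year window. A quick sketch of the arithmetic, where the prices are illustrative placeholders rather than CarGurus data:

```python
def depreciation_pct(old_price, new_price):
    """Percent drop from old_price to new_price, rounded to one decimal."""
    return round(100 * (old_price - new_price) / old_price, 1)

# Hypothetical Model Y prices chosen to reproduce the reported 25.5% drop.
print(depreciation_pct(40000, 29800))  # 25.5
```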

The Impossible Network
The AI FlyWheel – Are You In or Out?

The Impossible Network

Play Episode Listen Later Mar 3, 2025 5:32


In this episode, I explore the concept of the "AI flywheel" - the accelerating momentum of artificial intelligence development that's either carrying us forward or leaving us behind. As we approach the critical juncture of Artificial General Intelligence, we need to understand what makes us uniquely human, something I discussed with recent guest Dom Heinrich.

Key Points
The Accelerating Pace: Recent releases from Anthropic (Claude 3.7 Sonnet), OpenAI (GPT-4.5), Google (Gemini), and Grok demonstrate how each breakthrough contributes to faster development.
Choosing Your Tools: Mark uses different AI models for specific purposes - Claude for creative work, Gemini for Google's ecosystem, and Grok for Twitter-based insights.
Human Uniqueness: As AI handles analytical tasks with ease, our distinct value may lie in emotional intelligence, ethical reasoning, and wisdom from lived experience.
Finding Balance: Technology should be used thoughtfully. AI might help us become more human by handling routine tasks, freeing us to focus on relationships, creativity, and empathy.

Notable Quote
"As controllers of technology, we must balance technological connection with disconnection, have the discipline to lose ourselves in our unconscious minds, and have the focus to listen to our souls."

Join the Conversation
I invite listeners to share how they're navigating the AI flywheel and what human qualities they believe will become more valuable as AI advances.

Timestamps
00:00 Introduction
00:08 The AI Flywheel Concept
00:38 Recent AI Developments
01:49 Choosing the Right AI Tool
02:49 Human Qualities in the Age of AI
03:43 Balancing Technology and Humanity
04:33 Engaging with AI Technology
05:12 Conclusion and Call to Action

Links
LinkedIn

Hosted on Acast. See acast.com/privacy for more information.

Impact Theory with Tom Bilyeu
Trump Accuses Zelensky of 'Gambling w/ World War III' in Explosive White House Showdown | The Tom Bilyeu Show

Impact Theory with Tom Bilyeu

Play Episode Listen Later Mar 1, 2025 75:51


Prepare to have your perspective shifted on artificial intelligence, economics, and the global landscape. If you thought AI was just a technological marvel of the future, think again. The debate on whether Artificial General Intelligence (AGI) is already here is underway, as ChatGPT 4.5 makes waves by matching the cognitive might of a thoughtful human. And in a world that's ever-evolving, how is the advancement of Amazon's quantum chip going to reshape benchmarks? But that's not all. Political drama takes center stage with bombshell stories from El Salvador's President Bukele to insights on the delicate balance of democracy. Hold onto your seats as we dive into these transformative topics, exploring the implications with none other than one of the titans of innovation, Elon Musk. Joining us in this conversation are insightful exchanges with co-host Producer Drew. Together, we'll navigate the controversies, breakthroughs, and bold predictions that are defining today's impact on the world. Get ready, because this episode is loaded with revelations that will leave you questioning everything. 
SHOWNOTES
00:00 AGI, AI Crisis, Quantum Leap
03:36 Rethinking AI Validation Methods
09:28 AI Boosts Efficiency in Game Design
10:56 Prioritize Taste, Elevate Efficiency
14:19 Competition-Driven Innovation Benefits Consumers
17:10 Bukele on Impeaching Corrupt Judges
21:00 Disagreement on Judicial Authority
25:02 "Demand Accountability in Technology Contracts"
26:55 Unprecedented Government Transparency
32:09 "Urgency for Peace and Stability"
35:27 Zelensky's Provocative Strategy in Russia
39:10 Call for De-escalation and Diplomacy
42:23 People's Union Launches Economic Blackout
44:36 Clear Aims Drive Successful Outcomes
47:41 "Elite Leadership for Effective Change"
50:03 Recording Process with Ray Dalio

CHECK OUT OUR SPONSORS
Range Rover: Explore the Range Rover Sport at https://rangerover.com/us/sport
Audible: Sign up for a free 30 day trial at https://audible.com/IMPACTTHEORY
Vital Proteins: Get 20% off by going to https://www.vitalproteins.com and entering promo code IMPACT at check out
ITU: Ready to breakthrough your biggest business bottleneck? Apply to work with me 1:1 - https://impacttheory.co/SCALE
Tax Network USA: Stop looking over your shoulder and put your IRS troubles behind you. Call 1-800-958-1000 or visit https://tnusa.com/impact
MUD/WTR: Start your new morning ritual & get up to 43% off your @MUDWTR with code IMPACT at https://mudwtr.com/IMPACT ! #mudwtrpod
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
American Alternative Assets: If you're ready to explore gold as part of your investment strategy, call 1-888-615-8047 or go to https://TomGetsGold.com
Ridge Wallet: Upgrade your wallet today! Get 10% Off @Ridge with code IMPACT at https://www.Ridge.com/IMPACT #Ridgepod

**********************************************************************
Do you need my help?
STARTING a business: Join me inside ZERO TO FOUNDER here SCALING a business: Click here to see if you qualify Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here. ********************************************************************** If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. ********************************************************************** Join me live on my Twitch stream. I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu ********************************************************************** FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu Learn more about your ad choices. Visit megaphone.fm/adchoices

The Generative AI Meetup Podcast
The Path to AGI: Grok 3, Quantum Breakthroughs, and Humanoid Robots

The Generative AI Meetup Podcast

Play Episode Listen Later Feb 27, 2025 103:26


Youtube: https://www.youtube.com/@GenerativeAIMeetup

https://www.anthropic.com/news/the-anthropic-economic-index
https://x.com/xai/status/1891699715298730482 - Grok 3
http://x.com/karpathy/status/1891720635363254772
https://news.microsoft.com/source/features/innovation/microsofts-majorana-1-chip-carves-new-path-for-quantum-computing/
https://en.wikipedia.org/wiki/Majorana_fermion
https://www.youtube.com/watch?v=88vEsL5tgDI

Grok 3
- Elon Musk's new model
- Seems to be a very brute-force approach (which is working)
- Definitely a state-of-the-art model
- 40 dollars a month now - https://techcrunch.com/2025/02/19/x-doubles-its-premium-plan-prices-after-xai-releases-grok-3/
- 100,000 GPUs

Majorana
- Uses a particle called the Majorana - https://www.howtopronounce.com/ettore-majorana - https://en.wikipedia.org/wiki/Ettore_Majorana
- Ettore Majorana disappeared in 1938, soon after theorizing this particle in 1937, though some believed he was still alive
- A Majorana is a fermion that is its own antiparticle - this was originally theorized in 1937
- The particle has its own charge... and own antiparticle
- It has a new topological state... which is a new state of matter
- Has 8 qubits
- Is designed to scale to a million qubits
- To break Bitcoin you might need ~1,500 to 5,000 logical qubits
- A qubit can be used to represent potentially infinite states

Figure AI - Helix - https://www.figure.ai/news/helix
- Vision-Language-Action (VLA)
- Full-upper-body control
- Multi-robot collaboration
- Pick up anything
- Single neural network to build this, no specific fine-tuning
- Has a slow System 2 and a fast System 1 neural network
- System 2 -- 7B
- System 1 -- 80M

Get ready to explore the frontiers of technology in this exciting episode!
We unpack the latest breakthroughs driving us toward Artificial General Intelligence (AGI) and beyond, with a mix of mind-blowing advancements and thought-provoking discussions:

- Grok 3 Unveiled: Elon Musk's xAI has dropped Grok 3, a powerhouse AI model dominating benchmarks and redefining what large language models can achieve. We dive into its stellar performance, its bold “no-filter” approach to information, and how it stacks up against heavyweights like Claude 3.5, OpenAI's offerings, and Google's latest.
- Quantum Leap Forward: Microsoft's Majorana-based quantum chip is here, promising to scale quantum computing to a million qubits. We simplify the tech behind it, explore its potential to transform everything from cryptography to drug design, and ponder what it means for simulating reality itself.
- Robots in Action: Figure AI's Helix brings us two humanoid robots teaming up to tackle chores like grocery unpacking. We marvel at their teamwork, laugh at their quirky moves, and discuss the hurdles and possibilities of general-purpose robotics in our everyday lives.

But it's not all tech demos and breakthroughs. We wrestle with the big stuff too: Are we inching closer to AGI? How do quantum computing, AI, and robotics fuel each other's progress? And what happens when unrestricted AI or quantum tech shakes up ethics—like privacy or security? Perfect for AI buffs, tech pros, or anyone curious about tomorrow, this episode blends sharp insights, lively debates, and a dash of humor. Jump in to stay ahead in the fast-moving world of generative AI! Listen now and join the discussion!
Chapters:
0:00 - Introduction: The AGI Revolution. Opening remarks about how close we are to AGI and the recent breakthroughs
2:15 - Grok 3: The New AI Benchmark. Discussion of Grok 3's capabilities and performance compared to other models
7:30 - Elon's 122-Day Data Center. How Elon Musk converted an old Electrolux factory into a massive AI data center
11:45 - The Computing Power Behind Grok. Details about the 100,000+ GPUs and diesel generators powering Grok's training
15:20 - Why Grok "Just Feels Better". Analysis of what makes Grok 3 outperform other models in real-world use cases
19:40 - AI Guardrails: Freedom vs. Safety. The ethical debate around AI content restrictions and guardrails
25:15 - The Cat and Mouse Game of AI Safety. How users bypass AI safety measures and what it means for regulation
30:00 - Anthropic's Economic Index. Surprising data about how people actually use AI tools in practice
36:30 - Programming: AI's Killer App. Why coding dominates AI usage and how models excel at verifiable tasks
42:15 - The $40 AI Revolution. Grok's pricing strategy and why it's disrupting the AI market
47:00 - Microsoft's Quantum Computing Breakthrough. The Majorana One and the mysterious physicist behind it
53:45 - The Path to One Million Qubits. How Microsoft plans to scale quantum computing in the next few years
58:30 - The Future of AI: Multiple Paths to AGI. Concluding thoughts on how different technologies are converging toward AGI
1:02:15 - Final Thoughts: Tuesday Will Be Interesting. Closing remarks on the rapid pace of AI development
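The qubit note above ("a qubit can be used to represent potentially infinite states") refers to the continuum of amplitude pairs in a qubit's state vector. The sketch below, with a made-up helper name, shows the standard normalization and measurement probabilities; a single measurement still yields only 0 or 1:

```python
# A qubit's state is a unit vector of two complex amplitudes (alpha, beta).
# The continuum of possible (alpha, beta) pairs is what "potentially
# infinite states" means -- but measuring collapses it to 0 or 1, with
# probabilities |alpha|^2 and |beta|^2.
def measure_probabilities(alpha, beta):
    norm = (abs(alpha) ** 2 + abs(beta) ** 2) ** 0.5
    alpha, beta = alpha / norm, beta / norm   # enforce unit norm
    return abs(alpha) ** 2, abs(beta) ** 2

# Equal superposition (e.g. Hadamard applied to |0>): ~50/50 outcomes.
p0, p1 = measure_probabilities(1, 1)
print(p0, p1)  # ~0.5 each
```

The helper also works with complex amplitudes, e.g. `measure_probabilities(1, 1j)` gives the same 50/50 split, which is why n qubits need 2**n complex amplitudes to describe classically.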

Over The Edge
Human Learning Versus Artificial General Intelligence with Ananta Nair, Artificial Intelligence Engineer at Dell Technologies

Over The Edge

Play Episode Listen Later Feb 19, 2025 50:17


This episode features an interview between Bill Pfeifer and Ananta Nair, Artificial Intelligence Engineer at Dell Technologies, where she leads AI and ML software initiatives for large enterprises. Ananta discusses differences between human learning and AI models, highlighting the complexities and limitations of current AI technologies. She also touches on the potential and challenges of AI in edge computing, emphasizing the importance of building efficient, scalable, and business-focused models.

--------

Key Quotes:

“It's very hard to take these AI structures and say that they can do all of these very complex things that humans can do, when they're architecturally designed very differently. I'm a big fan of biologically inspired, not biologically accurate.”

“Do you really need AGI for a lot of real world applications? No, you don't want that. Do you want some really complex system where you have no idea what you're doing, where you're pouring all this money in and, you know, you're not really getting the results that you want? No. You want something very simple. Occam's razor approach: make it as simple as possible, scalable, can adapt, can measure for all of your business metrics.”

“We have reached a point where you can do the most with AI models with minimal compute than ever. And so I think that is very exciting. I think we have reached a point where you have very capable models that you can deploy at the edge and I think there's a lot of stuff happening in that realm.”

--------

Timestamps:
(01:20) How Ananta got started in tech and neuroscience
(04:59) Human learning vs AI learning
(15:11) Explaining dynamical systems
(26:57) Exploring AI agents and human behavior
(30:43) Edge computing and AI models
(32:58) Advancements in AI model efficiency

--------

Sponsor:
Edge solutions are unlocking data-driven insights for leading organizations.
With Dell Technologies, you can capitalize on your edge by leveraging the broadest portfolio of purpose-built edge hardware, software and services. Leverage AI where you need it; simplify your edge; and protect your edge to generate competitive advantage within your industry. Capitalize on your edge today with Dell Technologies.--------Credits:Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.--------Links:Follow Ananta on LinkedInFollow Bill on LinkedIn

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you're in SF, join us tomorrow for a fun meetup at CodeGen Night! If you're in NYC, join us for AI Engineer Summit! The Agent Engineering track is now sold out, but 25 tickets remain for AI Leadership and 5 tickets for the workshops. You can see the full schedule of speakers and workshops at https://ai.engineer!

It's exceedingly hard to introduce someone like Bret Taylor. We could recite his Wikipedia page, or his extensive work history through Silicon Valley's greatest companies, but everyone else already does that.

As a podcast by AI engineers for AI engineers, we had the opportunity to do something a little different. We wanted to dig into what Bret sees from his vantage point at the top of our industry for the last 2 decades, and how that explains the rise of the AI Architect at Sierra, the leading conversational AI/CX platform.

“Across our customer base, we are seeing a new role emerge - the role of the AI architect. These leaders are responsible for helping define, manage and evolve their company's AI agent over time. They come from a variety of both technical and business backgrounds, and we think that every company will have one or many AI architects managing their AI agent and related experience.”

In our conversation, Bret Taylor confirms the Paul Buchheit legend that he rewrote Google Maps in a weekend, armed with only the help of a then-nascent Google Closure Compiler and no other modern tooling. But what we find remarkable is that he was the PM of Maps, not an engineer, though of course he still identifies as one. We find this theme recurring throughout Bret's career and worldview.
We think it is plain as day that AI leadership will have to be hands-on and technical, especially when the ground is shifting as quickly as it is today:

“There's a lot of power in combining product and engineering into as few people as possible… few great things have been created by committee.”

“If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes.”

“And I think the reason why is if you look at like software as a service five years ago, maybe you can have a separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of technological breakthroughs required for most business applications. And if you're making expense reporting software or whatever, it's useful… You kind of know how databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem.

“When you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it and the capabilities of the technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself.”
We left a lot of nuggets in the conversation, so we hope you'll just dive in with us (and thank Bret for joining the pod!)

Timestamps
* 00:00:02 Introductions and Bret Taylor's background
* 00:01:23 Bret's experience at Stanford and the dot-com era
* 00:04:04 The story of rewriting Google Maps backend
* 00:11:06 Early days of interactive web applications at Google
* 00:15:26 Discussion on product management and engineering roles
* 00:21:00 AI and the future of software development
* 00:26:42 Bret's approach to identifying customer needs and building AI companies
* 00:32:09 The evolution of business models in the AI era
* 00:41:00 The future of programming languages and software development
* 00:49:38 Challenges in precisely communicating human intent to machines
* 00:56:44 Discussion on Artificial General Intelligence (AGI) and its impact
* 01:08:51 The future of agent-to-agent communication
* 01:14:03 Bret's involvement in the OpenAI leadership crisis
* 01:22:11 OpenAI's relationship with Microsoft
* 01:23:23 OpenAI's mission and priorities
* 01:27:40 Bret's guiding principles for career choices
* 01:29:12 Brief discussion on pasta-making
* 01:30:47 How Bret keeps up with AI developments
* 01:32:15 Exciting research directions in AI
* 01:35:19 Closing remarks and hiring at Sierra

Transcript

[00:02:05] Introduction and Guest Welcome

[00:02:05] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host swyx, founder of smol.ai.

[00:02:17] swyx: Hey, and today we're super excited to have Bret Taylor join us. Welcome. Thanks for having me. It's a little unreal to have you in the studio.

[00:02:25] swyx: I've read about you so much over the years, like even before OpenAI effectively. I mean, I use Google Maps to get here. So like, thank you for everything that you've done.
Like, like your story, your history, like, you know... I think people can find out what your greatest hits have been.

[00:02:40] Bret Taylor's Early Career and Education

[00:02:40] swyx: How do you usually like to introduce yourself when, you know, you talk about, you summarize your career? Like, how do you look at yourself?

[00:02:47] Bret: Yeah, it's a great question. You know, before we went on the mics here, we were talking about the audience for this podcast being more engineering. And I do think depending on the audience, I'll introduce myself differently, because I've had a lot of [00:03:00] corporate and board roles. I probably self-identify as an engineer more than anything else, though.

[00:03:04] Bret: So even when I was at Salesforce, I was coding on the weekends. So I think of myself as an engineer, and then all the roles that I do in my career sort of start with that, just because I do feel like engineering is sort of a mindset and how I approach most of my life. So I'm an engineer first, and that's how I describe myself.

[00:03:24] swyx: You majored in computer science, like 1998. And, and I was in high school...

[00:03:28] Bret: Actually my, my college degree was '02 undergrad, '03 masters. Right. That old.

[00:03:33] swyx: Yeah. I mean, no, I was going, I was going like 1998 to 2003, but like engineering wasn't, wasn't a thing back then. Like we didn't have the title of senior engineer, you know, kind of like, it was just...

[00:03:44] swyx: You were a programmer, you were a developer, maybe. What was it like at Stanford? Like, what was that feeling like? You know, was it, were you feeling like on the cusp of a great computer revolution? Or was it just like a niche, you know, interest at the time?

[00:03:57] Stanford and the Dot-Com Bubble

[00:03:57] Bret: Well, I was at Stanford, as you said, from 1998 to [00:04:00] 2002.

[00:04:02] Bret: 1998 was near the peak of the dot-com bubble. So...
This is back in the day where most people were coding in the computer lab, just because there were these Sun Microsystems Unix boxes there that most of us had to do our assignments on. And every single day there was a .com, like, buying pizza for everybody.

[00:04:20] Bret: I didn't have to, like... I got free food, like, my first two years of university, and then the dot-com bubble burst in the middle of my college career. And so by the end there was like tumbleweed going through the job fair, you know. It was hard to describe unless you were there at the time, the, like, level of hype. And being a computer science major at Stanford was like a thousand opportunities.

[00:04:45] Bret: And then, and then when I left, it was like Microsoft, IBM.

[00:04:49] Joining Google and Early Projects

[00:04:49] Bret: And then the two startups that I applied to were VMware and Google. And I ended up going to Google in large part because a woman named Marissa Mayer, who had been a teaching [00:05:00] assistant when I was what was called a section leader, which was like a junior teaching assistant, kind of, for one of the big intro...

[00:05:05] Bret: Yes. Classes. She had gone there. And she was recruiting me, and I knew her, and it sort of felt safe, you know. Like, I don't know that I thought about it much, but it turned out to be a real blessing. I realized, like, you know, you always want to think you'd pick Google if given the option, but no one knew at the time.

[00:05:20] Bret: And I wonder, if I'd graduated in like 1999, would I have been like, Mom, I just got a job at pets.com? It's good. But you know, at the end I just didn't have many options. So I was like, do I want to go, like, make kernel software at VMware? Do I want to go build search at Google? And I chose Google. 50-50 ball.

[00:05:36] Bret: I'm not really a 50-50 ball.
So I feel very fortunate in retrospect that the economy collapsed, because in some ways it forced me into, like, one of the greatest companies of all time. But I kind of lucked into it, I think.

[00:05:47] The Google Maps Rewrite Story

[00:05:47] Alessio: So the famous story about Google is that you rewrote the Google Maps backend in one week after the MapQuest maps acquisition. What was the story there?

[00:05:57] Alessio: Is it actually true? Is it [00:06:00] being glorified? Like how, how did that come to be? And is there any detail that maybe Paul hasn't shared before?

[00:06:06] Bret: It's largely true, but I'll give the color commentary. So it was actually the front end, not the back end, but it turns out for Google Maps, the front end was sort of the hard part, just because Google Maps was...

[00:06:17] Bret: Largely the first-ish kind of really interactive web application. I say first-ish, I think Gmail certainly was, though Gmail, probably a lot of people then who weren't engineers probably didn't appreciate its level of interactivity. It was just fast. But Google Maps, because you could drag the map and it was sort of graphical...

[00:06:38] Bret: It really, in the mainstream, I think...

[00:06:41] swyx: Was it MapQuest back then that was... you had the arrows up and down?

[00:06:44] Bret: It was up and down arrows. Each map was a single image, and you just clicked left and then waited a few seconds for the new map to load. It was really small, too, because generating a big image was kind of expensive on computers of that day.

[00:06:57] Bret: So Google Maps was truly innovative in that [00:07:00] regard. The story on it: there was a small company called Where2 Technologies, started by two Danish brothers, Lars and Jens Rasmussen, who are two of my closest friends now. They had made a Windows app called Expedition, which had beautiful maps.
Even in 2000.

[00:07:18] Bret: For whenever we acquired, or sort of acquired, their company, Windows software was not particularly fashionable, but they were really passionate about mapping. And we had made a local search product that was kind of middling in terms of popularity, sort of like a yellow pages kind of search product. So we wanted to really go into mapping.

[00:07:36] Bret: We'd started working on it. Their small team seemed passionate about it. So we're like, come join us. We can build this together.

[00:07:42] Technical Challenges and Innovations

[00:07:42] Bret: It turned out to be a great blessing that they had built a Windows app, because you're less technically constrained when you're doing native code than you are building in a web browser, particularly back then, when there weren't really interactive web apps. And it ended up changing the level of quality that we [00:08:00] wanted to hit with the app, because we were shooting for something that felt like a native Windows application. So it was really good fortune that, you know, their unusual technical choices turned out to be the greatest blessing. So we spent a lot of time basically saying, how can you make an interactive, draggable map in a web browser?

[00:08:18] Bret: How do you progressively load, you know, new map tiles as you're dragging? Even things, like, down in the weeds of the browser at the time: most browsers, like Internet Explorer, which was dominant at the time, would only load two images at a time from the same domain. So we ended up making our map tile servers have, like...

[00:08:37] Bret: Forty different subdomains so we could load maps in parallel. Like, lots of hacks. I'm happy to go into as much as, like...

[00:08:44] swyx: HTTP connections and stuff.

[00:08:46] Bret: They just, like, there was just maximum parallelism of two.
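As an aside, the tile-sharding hack Bret describes can be sketched in a few lines. This is an illustrative reconstruction, not Google's actual code: the hostname pattern, shard count, and URL layout here are all hypothetical. The idea is simply that, since browsers of the era allowed only about two parallel connections per hostname, hashing each tile to one of many subdomains restores parallel loading while keeping each tile's URL stable for caching.

```javascript
// Hypothetical sketch of domain sharding for map tiles.
// `shardCount` and the `mtN.example.com` hostname scheme are illustrative.
function tileUrl(x, y, zoom, shardCount = 4) {
  // Derive a stable shard from the tile coordinates, so the same tile
  // always resolves to the same subdomain (cache-friendly), while
  // neighboring tiles spread across shards and load in parallel.
  const shard = (x + y) % shardCount;
  return `https://mt${shard}.example.com/tiles/${zoom}/${x}/${y}.png`;
}
```

With, say, four shards and a browser limit of two connections per host, a screenful of tiles could load eight at a time instead of two.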
And so if you had a set of map tiles, like eight of them... So we just, we were down in the weeds of the browser anyway.

[00:08:56] Bret: So it was lots of plumbing. I know a lot more about browsers than [00:09:00] most people. But then by the end of it, it was fairly... there was a lot of duct tape on that code. If you've ever done an engineering project where you're not really sure of the path from point A to point B, it's almost like building a house by building one room at a time.

[00:09:14] Bret: There's not a lot of architectural cohesion at the end. And then we acquired a company called Keyhole, which became Google Earth, which was like that 3D one. It was a native Windows app as well, separate app, great app, but with that we got licenses to all this satellite imagery. And so in August of 2005, we added...

[00:09:33] Bret: Satellite imagery to Google Maps, which added even more complexity to the code base. And then we decided we wanted to support Safari. There were no mobile phones yet. So Safari was this, like, nascent browser on, on the Mac. And it turns out there were a lot of decisions behind the scenes sort of inspired by this Windows app, like heavy use of XML and XSLT and all these...

[00:09:54] Bret: Technologies that were, like, briefly fashionable in the early two thousands and everyone hates now for good [00:10:00] reason. And it turns out that all of the XML functionality in Internet Explorer wasn't supported in Safari. So people were, like, re-implementing, like, XML parsers. And it was just, like, this, like, pile of s**t.

[00:10:11] Bret: And I had to say a s**t on your pod. Yeah, of...

[00:10:12] Alessio: Course.

[00:10:13] Bret: So it went from this, like, beautifully elegant application that everyone was proud of to something that probably had hundreds of K of JavaScript, which sounds like nothing now. We're talking, like, people have modems, you know, not all modems, but it was a big deal.

[00:10:29] Bret: So it was like slow.
It took a while to load, and just, it wasn't a great code base. Like, everything was fragile. So I just got super frustrated by it. And then one weekend I did rewrite all of it. And at the time the word JSON hadn't been coined yet, too, just to give you a sense. So it's all XML.

[00:10:47] swyx: Yeah.

[00:10:47] Bret: So we used what you would now call JSON, but I just said, like, let's use eval so that we can parse the data fast. And, and again, it was literally JSON, but at the time there was no name for it. So we [00:11:00] just said, let's pass down JavaScript from the server and eval it. And then somebody just refactored the whole thing.

[00:11:05] Bret: And, and it wasn't like I was some genius. It was just, like, you know, if you knew everything you wished you had known at the beginning... and I knew all the functionality, 'cause I was the primary, one of the primary authors of the JavaScript. And I just, like, I just drank a lot of coffee and just stayed up all weekend.

[00:11:22] Bret: And then I, I guess I developed a bit of a reputation, and no one knew about this for a long time. And then Paul, who created Gmail, and I ended up starting a company with him too after all of this, told this on a podcast, and now it's lore, but it's largely true. I did rewrite it, and it's my proudest thing.

[00:11:38] Bret: And I think JavaScript people will appreciate this: like, the gzipped bundle size for all of Google Maps, when I rewrote it, was 20K gzipped. It was, like, much smaller for the entire application. It went down by like 10x. So what happened? Google is a pretty mainstream company, and so, like, our usage shot up, because it turns out, like, it's faster.

[00:11:57] Bret: Just being faster is worth a lot of [00:12:00] percentage points of growth at the scale of Google.

[00:12:03] swyx: So how much modern tooling did you have? Like test suites? No compilers?

[00:12:07] Bret: Actually, that's not true. We did have one thing.
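The pre-JSON "just eval the response" trick Bret described a moment ago can be sketched like this. This is an illustrative reconstruction, not Google's actual code: the function name and response shape are hypothetical. The server emits a JavaScript literal, and the client evaluates it directly instead of parsing XML; today you would reach for `JSON.parse` instead, since eval'ing untrusted input is unsafe.

```javascript
// Hypothetical sketch of eval-based parsing of a server response,
// as was common before JSON.parse existed. Unsafe for untrusted input.
function parseResponse(text) {
  // Wrap in parentheses so an object literal is parsed as an expression
  // rather than being mistaken for a block statement.
  return eval("(" + text + ")");
}
```

A response like `{"lat": 37.4, "lng": -122.1}` comes back as a live JavaScript object with no parser code shipped to the client at all, which is exactly why it was fast.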
So I actually think Google... you can download it. Google has a closure compiler, the Closure Compiler.

[00:12:15] Bret: I don't know if anyone still uses it. It's gone. Yeah. Yeah. It's sort of gone out of favor. Yeah. Well, even until recently it was better than most JavaScript minifiers, because it did a lot more renaming of variables and things. Most people use esbuild now, just 'cause it's fast, and the Closure Compiler's built on Java and super slow and stuff like that.

[00:12:37] Bret: But, so we did have that. That was it. Okay.

[00:12:39] The Evolution of Web Applications

[00:12:39] Bret: So, and that was created internally, you know. It was a really interesting time at Google, because there were a lot of teams working on fairly advanced JavaScript when no one else was. So Google Suggest, which Kevin Gibbs was the tech lead for, was the first kind of type-ahead autocomplete, I believe, in a web browser. And now it's just pervasive in search boxes, that you sort of [00:13:00] see a type-ahead there.

[00:13:01] swyx: I mean, ChatGPT just added it. It's kind of like a round trip.

[00:13:03] Bret: Totally. No, it's now pervasive as a UI affordance, but that was, like, Kevin's 20 percent project. And then Gmail... Paul, you know, he tells the story better than anyone, but he, you know, basically was scratching his own itch. But what was really neat about it is email, because it's such a productivity tool, just needed to be faster.

[00:13:21] Bret: So, you know, he was scratching his own itch of just making more stuff work on the client side. And then we, because of Lars and Jens sort of, like, setting the bar with this Windows app, or, like, we need our maps to be draggable... so we ended up...
Not only innovating in terms of having a big thing, what would be called a single page application today, but also all the graphical stuff, you know. We were crashing Firefox like it was going out of style, because, you know, when you make a document object model with the idea that it's a document, and then you layer on some JavaScript, and then we're essentially abusing all of this... it just was running into code paths that were not well trodden, you know, at this time.

[00:13:56] Bret: And so it was [00:14:00] super fun. And, and, you know, in the building you had... so you had compiler people helping minify JavaScript, just practically, but there was a great engineering team. That's why the Closure Compiler is so good: it was, like, a person who actually knew about programming languages doing it, not just, you know, writing regular expressions.

[00:14:17] Bret: And then the team that is now the Chrome team... I believe, and I, I don't know this for a fact, but I'm pretty sure, Google was the main contributor to Firefox for a long time in terms of code. And a lot of browser people were there. So every time we would crash Firefox, we'd, like, walk up two floors and say, like, what the hell is going on here?

[00:14:35] Bret: And they would load their browser, like, in a debugger. And we could, like, figure out exactly what was breaking. And you can't change the code, right? 'Cause it's the browser. It's, like, slow, right? I mean, slow to update. So, but we could figure out exactly where the bug was and then work around it in our JavaScript.

[00:14:52] Bret: So it was just, like, new territory. Like, so super, super fun time, just, like, a lot of, a lot of great engineers figuring out [00:15:00] new things.
And now, you know, this term is no longer in fashion, but the word Ajax, which was Asynchronous JavaScript and XML... 'cause, I'm telling you, XML. But you see the word XML there. To be fair, the way you made HTTP requests from a client to a server was this...

[00:15:18] Bret: Object called XMLHttpRequest, because Microsoft, in making Outlook Web Access back in the day, made this. And it turns out to have nothing to do with XML. It's just a way of making HTTP requests, because XML was, like, the fashionable thing. It was like, that was the way you, you know, you did it. But then JSON came out of that, you know, and then a lot of the best practices around building JavaScript applications are pre-React.

[00:15:44] Bret: I think React was probably the big conceptual step forward that we needed. Even in my first social network after Google, we used a lot of, like, HTML injection, and making real-time updates was still very hand coded. And it's really neat when you [00:16:00] see conceptual breakthroughs like React, because I just love those things where it's, like, obvious once you see it, but it's so not obvious until you do.

[00:16:07] Bret: And actually, well, I'm sure we'll get into AI, but I sort of feel like we'll go through that evolution with AI agents as well, in that I feel like we're missing a lot of the core abstractions that, I think, in 10 years we'll be like, gosh, how'd you make agents before that? You know. But it was kind of that early days of web applications.
You just covered so much.[00:16:32] Product Management and Engineering Synergy[00:16:32] swyx: One thing I just, I just observe is that I think the early Google days had this interesting mix of PM and engineer, which I think you are, you didn't, you didn't wait for PM to tell you these are my, this is my PRD.[00:16:42] swyx: This is my requirements.[00:16:44] mix: Oh,[00:16:44] Bret: okay.[00:16:45] swyx: I wasn't technically a software engineer. I mean,[00:16:48] Bret: by title, obviously. Right, right, right.[00:16:51] swyx: It's like a blend. And I feel like these days, product is its own discipline and its own lore and own industry and engineering is its own thing. And there's this process [00:17:00] that happens and they're kind of separated, but you don't produce as good of a product as if they were the same person.[00:17:06] swyx: And I'm curious, you know, if, if that, if that sort of resonates in, in, in terms of like comparing early Google versus modern startups that you see out there,[00:17:16] Bret: I certainly like wear a lot of hats. So, you know, sort of biased in this, but I really agree that there's a lot of power and combining product design engineering into as few people as possible because, you know few great things have been created by committee, you know, and so.[00:17:33] Bret: If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a. Maniacal focus on outcomes.[00:17:53] Bret: And I think the reason why it's, I think for some areas, if you look at like software as a service five years ago, maybe you can have a [00:18:00] separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of like. 
Technological breakthroughs required for most, you know, business applications.[00:18:11] Bret: And if you're making expense reporting software or whatever, it's useful. I don't mean to be dismissive of expense reporting software, but you probably just want to understand like, what are the requirements of the finance department? What are the requirements of an individual file expense report? Okay.[00:18:25] Bret: Go implement that. And you kind of know how web applications are implemented. You kind of know how to. How databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem when you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it.[00:18:58] Bret: And the capabilities of the [00:19:00] technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself. And that's why I use the word conversation. It's not literal. That's sort of funny to use that word in the age of conversational AI.[00:19:15] Bret: You're constantly sort of saying, like, ideally, you could sprinkle some magic AI pixie dust and solve all the world's problems, but it's not the way it works. And it turns out that actually, I'll just give an interesting example.[00:19:26] AI Agents and Modern Tooling[00:19:26] Bret: I think most people listening probably use co pilots to code like Cursor or Devon or Microsoft Copilot or whatever.[00:19:34] Bret: Most of those tools are, they're remarkable. I'm, I couldn't, you know, imagine development without them now, but they're not autonomous yet. 
Like, I wouldn't let it just write most code without my interactively inspecting it. We're just somewhere between it's an amazing copilot and it's an autonomous software engineer.

[00:19:53] Bret: As a product manager, like, your aspirations for what the product is are, like, kind of meaningful. But [00:20:00] if you're a product person, yeah, of course you'd say it should be autonomous. You should click a button and a program should come out the other side. The requirements are meaningless. Like, what matters is, based on the very nuanced limitations of the technology...

[00:20:14] Bret: What is it capable of? And then how do you maximize the leverage it gives a software engineering team, given those very nuanced trade-offs? Coupled with the fact that those nuanced trade-offs are changing more rapidly than any technology in my memory, meaning every few months you'll have new models with new capabilities.

[00:20:34] Bret: So how do you construct a product that can absorb those new capabilities as rapidly as possible as well? That requires such a combination of technical depth and understanding the customer that you really need more integration of product, design, and engineering. And so I think it's why, with these big technology waves, I think startups have a bit of a leg up relative to incumbents, because they [00:21:00] tend to be sort of more self-actualized in terms of just, like, bringing those disciplines closer together.

[00:21:06] Bret: And in particular, I think entrepreneurs, the proverbial full stack engineers, you know, have a leg up as well, because I think most breakthroughs happen when you have someone who can understand those extremely nuanced technical trade-offs, have a vision for a product, and then, in the process of building it, have that, as I said, like, metaphorical conversation with the technology, right?

[00:21:30] Bret: Gosh, I ran into a technical limit that I didn't expect. It's not just, like, changing that feature.
You might need to refactor the whole product based on that. And I think that's particularly important right now. So I don't... you know, if you're building a big ERP system, probably there's a great reason to have product and engineering.

[00:21:51] Bret: I think in general the disciplines are there for a reason. I think when you're dealing with something as nuanced as technologies like large language models today, there's a ton of [00:22:00] advantage in having individuals or organizations that integrate the disciplines more formally.

[00:22:05] Alessio: That makes a lot of sense.

[00:22:06] Alessio: I've run a lot of engineering teams in the past, and I think the product versus engineering tension has always been more about effort than, like, whether or not the feature is buildable. But I think, yeah, today you see a lot more of, like, models actually cannot do that. And I think the most interesting thing is, on the startup side, people don't yet know where a lot of the AI value is going to accrue. So you have this rush of people building frameworks, building infrastructure, layered things, but we don't really know the shape of the compute. I'm curious how at Sierra you thought about building in-house a lot of the tooling for evals or, like, just, you know, building the agents and all of that.

[00:22:41] Alessio: Versus how you see some of the startup opportunities that are maybe still out there.

[00:22:46] Bret: We build most of our tooling in-house at Sierra, not all. It's, we don't... it's not like not-invented-here syndrome necessarily, though maybe we're slightly guilty of that in some ways, but because we're trying to build a platform [00:23:00] that's enduring, you know, we really want to have control over our own destiny.

[00:23:03] Bret: And you had made a comment earlier that, like, we're still trying to figure out who, like, the React of agents is, and the jury is still out. I would argue it hasn't been created yet.
I don't think the jury is still out, to go use that metaphor. We're sort of in the jQuery era of agents, not the React era.

[00:23:19] Bret: And, and that's, like, a throwback for people listening.

[00:23:22] swyx: We shouldn't rush it. You know?

[00:23:23] Bret: No, yeah, that's my point. And so, because we're trying to create an enduring company at Sierra that outlives us, you know, I'm not sure we want to, like, attach our cart to a horse where it's not clear that, like, we've figured it out. And I actually want, as a company... we're trying to enable, just at a high level, and I'll, I'll quickly go back to tech: at Sierra, we help consumer brands build customer-facing AI agents.

[00:23:48] Bret: So, everyone from Sonos to ADT Home Security to SiriusXM. You know, if you call them on the phone, an AI will pick up. If you, you know, chat with them on the SiriusXM homepage, it's an AI agent called Harmony [00:24:00] that they've built on our platform. What are the contours of what it means for someone to build an end-to-end, complete customer experience with AI, with conversational AI?

[00:24:09] Bret: You know, we really want to dive into the deep end of, of all the trade-offs to do it. You know, where do you use fine-tuning? Where do you string models together? You know, where do you use reasoning? Where do you use generation? How do you use reasoning? How do you express the guardrails of an agentic process?

[00:24:25] Bret: How do you impose determinism on a fundamentally non-deterministic technology? It's just a really important design space. And I could sit here and tell you we have the best approach, every entrepreneur will, you know. But I hope that in two years we look back at our platform and laugh at how naive we were, because that's the pace of change broadly.

[00:24:45] Bret: If you talk about, like, the startup opportunities, I'm not wholly skeptical of tools companies, but I'm fairly skeptical.
There's always an exception to every rule, but I believe that certainly there's a big market for [00:25:00] frontier models, but largely for companies with huge CapEx budgets. So, OpenAI and Microsoft, Anthropic and Amazon Web Services, Google Cloud, xAI, which is very well capitalized now. But I think the idea that a company can make money sort of pre-training a foundation model is probably not true.

[00:25:20] Bret: It's hard to... you're competing with just, you know, unreasonably large CapEx budgets. And I just think, like the cloud infrastructure market, it will largely be there. I also really believe in the applications of AI. And I define that not as, like, building agents or things like that. I define it much more as, like, you're actually solving a problem for a business.

[00:25:40] Bret: So it's what Harvey is doing in the legal profession, or what Cursor is doing for software engineering, or what we're doing for customer experience and customer service. The reason I believe in that is I do think that in the age of AI, what's really interesting about software is it can actually complete a task.

[00:25:56] Bret: It can actually do a job, which is very different than what the value proposition of [00:26:00] software was in ancient history, two years ago. And as a consequence, I think the way you build a solution for a domain is very different than you would have before, which means that it's not obvious that the incumbents have, like, a leg up, you know, necessarily. They certainly have some advantages, but there's just such a different form factor, you know, for providing a solution, and it's just really valuable.

[00:26:23] Bret: You know, it's, like, just think of how much money Cursor is saving software engineering teams, or, the alternative, how much revenue it can produce. Tool making is really challenging. If you look at the cloud market just as an analog, there are a lot of, like, interesting tools companies, you know. Confluent monetized Kafka, Snowflake, Hortonworks, you know, there's a, there's a bunch of them.
If you look at the cloud market, just as a analog, there are a lot of like interesting tools, companies, you know, Confluent, Monetized Kafka, Snowflake, Hortonworks, you know, there's a, there's a bunch of them.[00:26:48] Bret: A lot of them, you know, have that mix of sort of like like confluence or have the open source or open core or whatever you call it. I, I, I'm not an expert in this area. You know, I do think [00:27:00] that developers are fickle. I think that in the tool space, I probably like. Default towards open source being like the area that will win.[00:27:09] Bret: It's hard to build a company around this and then you end up with companies sort of built around open source to that can work. Don't get me wrong, but I just think that it's nowadays the tools are changing so rapidly that I'm like, not totally skeptical of tool makers, but I just think that open source will broadly win, but I think that the CapEx required for building frontier models is such that it will go to a handful of big companies.[00:27:33] Bret: And then I really believe in agents for specific domains which I think will, it's sort of the analog to software as a service in this new era. You know, it's like, if you just think of the cloud. You can lease a server. It's just a low level primitive, or you can buy an app like you know, Shopify or whatever.[00:27:51] Bret: And most people building a storefront would prefer Shopify over hand rolling their e commerce storefront. I think the same thing will be true of AI. So [00:28:00] I've. 
I tend to, like... if an entrepreneur asks me for advice, I'm like, you know, move up the stack as far as you can towards a customer need.

[00:28:09] Bret: Broadly. But it doesn't reduce my excitement about the what-is-the-React-of-building-agents kind of thing, just because it is, it is the right question to ask. But I think that will probably play out in the open source space more than anything else.

[00:28:21] swyx: Yeah, and it's not a priority for you. There's a lot in there.

[00:28:24] swyx: I'm kind of curious about your idea maze towards... there are many customer needs. You happen to identify customer experience as yours, but it could equally have been coding assistance or whatever. I'm just kind of curious, at the top down, how do you look at the world in terms of the potential problem space?

[00:28:44] swyx: Because there are many people out there who are very smart and pick the wrong problem.

[00:28:47] Bret: Yeah, that's a great question.

[00:28:48] Future of Software Development

[00:28:48] Bret: By the way, I would love to talk about the future of software, too, because despite the fact I didn't pick coding, I have a lot of thoughts on that. But I can answer your question, though. You know, I think when a technology is as [00:29:00] cool as large language models...

[00:29:02] Bret: You just see a lot of people starting from the technology and searching for a problem to solve. And I think it's why you see a lot of tools companies, because as a software engineer, you start building an app or a demo and you, you encounter some pain points. You're like, [00:29:17] a lot of people are experiencing the same pain point.
You took coffee beans and you roasted them and roasted coffee beans largely, you know, are priced relative to the cost of the beans.[00:29:39] Bret: Or you can sell a latte and a latte. Is rarely priced directly like as a percentage of coffee bean prices. In fact, if you buy a latte at the airport, it's a captive audience. So it's a really expensive latte. And there's just a lot that goes into like. How much does a latte cost? And I bring it up because there's a supply chain from growing [00:30:00] coffee beans to roasting coffee beans to like, you know, you could make one at home or you could be in the airport and buy one and the margins of the company selling lattes in the airport is a lot higher than the, you know, people roasting the coffee beans and it's because you've actually solved a much more acute human problem in the airport.[00:30:19] Bret: And, and it's just worth a lot more to that person in that moment. It's kind of the way I think about technology too. It sounds funny to liken it to coffee beans, but you're selling tools on top of a large language model yet in some ways your market is big, but you're probably going to like be price compressed just because you're sort of a piece of infrastructure and then you have open source and all these other things competing with you naturally.[00:30:43] Bret: If you go and solve a really big business problem for somebody, that's actually like a meaningful business problem that AI facilitates, they will value it according to the value of that business problem. And so I actually feel like people should just stop. You're like, no, that's, that's [00:31:00] unfair. If you're searching for an idea of people, I, I love people trying things, even if, I mean, most of the, a lot of the greatest ideas have been things no one believed in.[00:31:07] Bret: So I like, if you're passionate about something, go do it. Like who am I to say, yeah, a hundred percent. 
Or Gmail, like Paul Buchheit's, I mean, some of it's lore at this point, but like Gmail was Paul's own email for a long time, and then, amusingly, and Paul can't correct me, I'm pretty sure he sent around a link and like the first comment was like, this is really neat.[00:31:26] Bret: It would be great if it was not your email, but my own. I don't know if it's a true story. I'm pretty sure it's, yeah, I've read that before. So scratch your own itch. Fine. Like it depends on what your goal is. If you wanna do like a venture backed company, if it's a passion project, f*****g passion project, do it, like don't listen to anybody.[00:31:41] Bret: In fact, but if you're trying to start, you know, an enduring company, solve an important business problem. And I, and I do think that in the world of agents, the software industry has shifted where you're not just helping people be more productive, but you're actually accomplishing tasks autonomously.[00:31:58] Bret: And as a consequence, I think the [00:32:00] addressable market has just greatly expanded, just because software can actually do things now and actually accomplish tasks. And how much is coding autocomplete worth? A fair amount. How much is the eventual, I'm certain we'll have it, the software agent that actually writes the code and delivers it to you? That's worth a lot.[00:32:20] Bret: And so, you know, I would just maybe look up from the large language models and start thinking about the economy and, you know, think from first principles. I don't wanna get too far afield, but just think about which parts of the economy will benefit most from this intelligence and which parts can absorb it most easily.[00:32:38] Bret: And what would an agent in this space look like? Who's the customer of it? Is the technology feasible? And I would just start with these business problems more. And I think, you know, the best companies tend to have great engineers who happen to have great insight into a market.
And it's that last part that I think some people,[00:32:56] Bret: whether or not they have it. It's like people start so much in the technology, they [00:33:00] lose the forest for the trees a little bit.[00:33:02] Alessio: How do you think about the model of still selling some sort of software versus selling more packaged labor? I feel like when people are selling the packaged labor, it's almost more stateless, you know, like it's easier to swap out if you're just putting in an input and getting an output.[00:33:16] Alessio: If you think about coding, if there's no IDE, you're just putting in a prompt and getting back an app. It doesn't really matter who generates the app, you know, you have less of a buy in versus the platform you're building. I'm sure on the backend customers have to, like, put in their documentation, and they have, you know, different workflows that they can tie in. What's kind of like the line to draw there, versus, like, going full managed customer support team as a service outsource, versus[00:33:40] Alessio: this is the Sierra platform that you can build on. What was that decision?[00:33:44] Bret: I'll sort of like decouple the question in some ways, which is, when you have something that's an agent, who is the person using it and what do they want to do with it? So let's just take your coding agent for a second. I will talk about Sierra as well.[00:33:59] Bret: Who's the [00:34:00] customer of an agent that actually produces software? Is it a software engineering manager? Is it a software engineer? And is it their, you know, intern, so to speak? I don't know. I mean, we'll figure this out over the next few years. Like what is that? And is it generating code that you then review?[00:34:16] Bret: Is it generating code with a set of unit tests that pass? What is the actual, for lack of a better word, contract? Like, how do you know that it did what you wanted it to do?
And then I would say, like, the product and the pricing, the packaging model sort of emerged from that. And I don't think the world's figured it out.[00:34:33] Bret: I think it'll be different for every agent. You know, in our customer base, we do what's called outcome based pricing. So essentially every time the AI agent solves the problem or saves a customer or whatever it might be, there's a pre negotiated rate for that. We do that cause we think that that's sort of the correct way agents, you know, should be packaged.[00:34:53] Bret: I look back at the history of, like, cloud software, and notably the introduction of the browser, which led to [00:35:00] software being delivered in a browser. Like, Salesforce famously invented sort of software as a service, which is both a technical delivery model through the browser, but also a business model, which is you subscribe to it rather than pay for a perpetual license.[00:35:13] Bret: Those two things are somewhat orthogonal, but not really. If you think about the idea of software running in a browser, hosted in a data center that you don't own, you sort of needed to change the business model, because you can't really buy a perpetual license or something. Otherwise, like, how do you afford making changes to it?[00:35:31] Bret: It only worked when you were buying, like, a new version every year or whatever. So to some degree the business model shift actually changed business as we know it, because now, like, things like Adobe Photoshop you subscribe to rather than purchase. So it ended up where you had a technical shift and a business model shift that were very logically intertwined, and the business model shift turned out to be as significant as the technical shift.[00:35:59] Bret: And I think with [00:36:00] agents, because they actually accomplish a job, it doesn't make sense to me that you'd pay for the privilege of, like,
Using the software. Like that coding agent, like, if it writes really bad code, like, fire it, you know? I don't know what the right metaphor is, but you should pay for a job[00:36:17] Bret: well done, in my opinion. I mean, that's how you pay your software engineers, right?[00:36:20] swyx: Well, not really. We pay to put them on salary and give them options, and they vest over time. That's fair.[00:36:26] Bret: But my point is that you don't pay them for how many characters they write, which is sort of the token based, you know, whatever. Like, there's that famous Apple story where they were, like, asking for a report of how many lines of code you wrote.[00:36:40] Bret: And one of the engineers showed up with, like, a negative number, cause he had just, like, done a big refactoring. It was like a big F you to management who didn't understand how software is written. You know, my sense is, like, the traditional usage based or seat based thing, it's just going to look really antiquated.[00:36:55] Bret: Cause it's like asking your software engineer, how many lines of code did you write today? Like, who cares? Like, cause [00:37:00] there's absolutely no correlation. So my whole view is, I do think it'll be different in every category, but I do think that if an agent is doing a job, it properly incentivizes the maker of that agent and the customer if you pay for the job well done.[00:37:16] Bret: It's not always perfect to measure. It's hard to measure engineering productivity, but you should do something other than how many keys you typed, you know. Talk about perverse incentives for AI, right? Like, I can write really long functions to do the same thing, right? So broadly speaking, you know, I do think that we're going to see a change in business models of software towards outcomes.[00:37:36] Bret: And I think you'll see a change in delivery models too.
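The outcome-based pricing Bret describes can be sketched in a few lines of Python. This is a toy illustration, not Sierra's actual billing logic; the rate, the record shape, and the `monthly_bill` function are all hypothetical:

```python
# Toy sketch of outcome-based billing: the customer pays a pre-negotiated
# rate only for conversations the agent actually resolves, rather than
# per seat or per token.

RATE_PER_RESOLUTION = 2.50  # hypothetical pre-negotiated rate, in dollars

def monthly_bill(conversations: list[dict]) -> float:
    """Sum the fee over outcomes, ignoring unresolved conversations."""
    resolved = [c for c in conversations if c["resolved"]]
    return len(resolved) * RATE_PER_RESOLUTION

month = [
    {"id": 1, "resolved": True},
    {"id": 2, "resolved": False},  # escalated to a human: no charge
    {"id": 3, "resolved": True},
]
print(monthly_bill(month))  # 5.0
```

The contrast with seat- or usage-based pricing is that an unresolved conversation, however many tokens it consumed, contributes nothing to the bill.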
And, and, you know, in our customer base, you know, we empower our customers to really have their hands on the steering wheel of what the agent does. They, they want and need that. But the role is different. You know, at a lot of our customers, the customer experience operations folks have renamed themselves the AI architects, which I think is really cool.[00:37:55] Bret: And, you know, it's like in the early days of the Internet, there's the role of the webmaster. [00:38:00] And now, you know, webmaster is not a fashionable, you know, term, nor is it a job anymore. I just, I don't know, will the AI architect stand the test of time? Maybe, maybe not. But I do think that, again, you know, because everyone listening right now is a software engineer,[00:38:14] Bret: like, what is the form factor of a coding agent? And actually I'll, I'll take a breath, cause actually I have a bunch of opinions on this. Like, I wrote a blog post right before Christmas, just on the future of software development. And one of the things that's interesting is, like, if you look at the way I use Cursor today, as an example, it's inside of[00:38:31] Bret: a repackaged Visual Studio Code environment. I sometimes use the sort of agentic parts of it, but it's largely, you know, I've sort of gotten a good routine of making it autocomplete code in the way I want through tuning it properly. When it actually can write, I do wonder what, like, the future of development environments will look like.[00:38:55] Bret: And to your point on what is a software product, I think it's going to change a lot in [00:39:00] ways that will surprise us. But I always use, I use the metaphor in my blog post of, have you all driven around in a Waymo around here? Yeah, everyone has.
And there are these Jaguars, the really nice cars, but it's funny because it still has a steering wheel, even though there's no one sitting there and the steering wheel's like turning and stuff. Clearly in the future,[00:39:16] Bret: once we get to that being more ubiquitous, like, why have the steering wheel? And also, why have all the seats facing forward? Maybe just for car sickness, I don't know, but you could totally rearrange the car. I mean, so much of the car is oriented around the driver. So it stands to reason to me that, like, well, autonomous agents for software engineering running through Visual Studio Code, that seems a little bit silly, because having a single source code file open one at a time is kind of a goofy form factor for when, like, the code isn't being written primarily by you. But it begs the question of what's your relationship with that agent. And I think the same is true in our industry of customer experience, which is, like,[00:39:55] Bret: who are the people managing this agent? What tools do they need? And they definitely need [00:40:00] tools, but it's probably pretty different than the tools we had before. It's certainly different than training a contact center team. And as software engineers, I think that I would like to see, particularly, like, on the passion project side or research side,[00:40:14] Bret: more innovation in programming languages. I think that we're bringing the cost of writing code down to zero. So the fact that we're still writing Python with AI cracks me up, just cause it, like, literally was designed to be ergonomic to write, not safe to run or fast to run. I would love to see more innovation in how we verify program correctness.[00:40:37] Bret: I studied formal verification in college a little bit, and it's not very fashionable because it's really, like, tedious and slow and doesn't work very well.
If a lot of code is being written by a machine, you know, one of the primary values we can provide is verifying that it actually does what we intend that it does.[00:40:56] Bret: I think there should be lots of interesting things in the software development life cycle, like how [00:41:00] we think of testing and everything else, because if you think about it, if we have to manually read every line of code that's coming out of machines, it will just rate limit how much the machines can do. The alternative is totally unsafe.[00:41:13] Bret: So I wouldn't want to put code in production that didn't go through proper code review and inspection. So my whole view is, like, I actually think there's, like, an AI native version of this. I don't think the coding agents work well enough to do this yet, but once they do, what is sort of an AI native software development life cycle, and how do you actually[00:41:31] Bret: enable the creators of software to produce the highest quality, most robust, fastest software and know that it's correct? And I think that's an incredible opportunity. I mean, how much C code can we rewrite in Rust and make it safe so that there's fewer security vulnerabilities? Can we, like, have more efficient, safer code than ever before?[00:41:53] Bret: And can you have someone who's like that guy in the Matrix, you know, like staring at the little green things, like, where could you have an operator [00:42:00] of a code generating machine be, like, superhuman? I think that's a cool vision. And I think too many people are focused on, like, autocomplete, you know, right now. I'm not, I'm not even, I'm guilty as charged,[00:42:10] Bret: I guess, in some ways, but I just, like, I'd like to see some bolder ideas. And that's why when you were joking, you know, talking about what's the React of whatever, I think we're clearly in a local maximum, you know, metaphor, like sort of conceptual local maximum. Obviously it's moving really fast.
I think we're moving out of it.[00:42:26] Alessio: Yeah. At the end of '23, I read this blog post, from syntax to semantics. Like, if you think about Python, it's taking C and making it more semantic, and LLMs are like the ultimate semantic program, right? You can just talk to them and they can generate any type of syntax from your language. But again, the languages that they have to use were made for us, not for them.[00:42:46] Alessio: But the problem is, like, as long as you will ever need a human to intervene, you cannot change the language under it. You know what I mean? So I'm curious at what point of automation we'll need to get to before we're going to be okay making changes to the underlying languages, [00:43:00] like the programming languages, versus just saying, hey, you just got to write Python, because I understand Python and I'm more important at the end of the day than the model.[00:43:08] Alessio: But I think that will change, but I don't know if it's like two years or five years. I think it's more nuanced, actually.[00:43:13] Bret: So I think some of the more interesting programming languages bring semantics into syntax. So let me, that's a little reductive, but like Rust as an example. Rust is memory safe[00:43:25] Bret: statically, and that was a really interesting conceptual choice, but it's why it's hard to write Rust. It's why most people write Python instead of Rust. I think Rust programs are safer and faster than Python, probably slower to compile. But broadly speaking, like, given the option, if you didn't have to care about the labor that went into it,[00:43:45] Bret: you should prefer a program written in Rust over a program written in Python, just because it will run more efficiently. It's almost certainly safer, et cetera, et cetera, depending on how you define safe, but most people don't write Rust because it's kind of a pain in the ass.
And [00:44:00] the audience of people who can is smaller, but it's sort of better in most, most ways.[00:44:05] Bret: And again, let's say you're making a web service and you didn't have to care about how hard it was to write. If you just got the output of the web service, the Rust one would be cheaper to operate. It's certainly cheaper and probably more correct, just because there's so much in the static analysis implied by the Rust programming language that it probably will have fewer runtime errors and things like that as well.[00:44:25] Bret: So I just give that as an example, because, so Rust, at least my understanding is that it came out of the Mozilla team because there's lots of security vulnerabilities in the browser and it needs to be really fast. They said, okay, we want to put more of a burden at the authorship time to have fewer issues at runtime.[00:44:43] Bret: And we need the constraint that it has to be done statically because browsers need to be really fast. My sense is, if you just think about, like, the needs of a programming language today, where the role of a software engineer is [00:45:00] to use an AI to generate functionality and audit that it does in fact work as intended, maybe functionally, maybe from, like, a correctness standpoint, some combination thereof, how would you create a programming system that facilitated that?[00:45:15] Bret: And, you know, I bring up Rust because I think it's a good example of, like, I think given a choice of writing in C or Rust, you should choose Rust today. I think most people would say that, even C aficionados, just because.
C is largely less safe for very similar, you know, trade offs, you know, for the, the system. And now with AI, it's like, okay, well, that just changes the game on writing these things.[00:45:36] Bret: And so, like, I just wonder about a combination of programming languages that are more structurally oriented towards the values that we need from an AI generated program, verifiable correctness and all of that. If it's tedious to produce for a person, that maybe doesn't matter. But one thing, like, if I asked you, is this Rust program memory safe?[00:45:58] Bret: You wouldn't have to read it, you just have [00:46:00] to compile it. So that's interesting. I mean, that's one example of a very modest form of formal verification. So I bring that up because I do think you can have AI inspect AI, you can have AI review code, do AI code reviews. It would disappoint me if the best we could get was AI reviewing Python. And having scaled a few very large[00:46:21] Bret: websites that were written in Python, it's just, like, you know, expensive, and, like, trust me, every team who's written a big web service in Python has experimented with PyPy and all these things just to make it slightly more efficient than it naturally is. You don't really have true multi threading anyway.[00:46:36] Bret: It's just clear that you do it just because it's convenient to write. And I just feel like, I don't want to say it's insane, I just mean, I do think we're at a local maximum. And I would hope that we create a programming system, a combination of programming languages, formal verification, testing, automated code reviews, where you can use AI to generate software in a high scale way and trust it. And you're [00:47:00] not limited by your ability to read it necessarily. I don't know exactly what form that would take, but I feel like that would be a pretty cool world to live in.[00:47:08] Alessio: Yeah. We had Chris Lattner on the podcast.
He's doing great work with Modular. I mean, I love LLVM. Yeah. Basically merging Rust and Python.[00:47:15] Alessio: That's kind of the idea. Should be, but I'm curious, like, for them a big use case was making it compatible with Python, same APIs, so that Python developers could use it. Yeah. And so I, I wonder at what point, well, yeah.[00:47:26] Bret: At least my understanding is they're targeting the data science, yeah, machine learning crowd, which is all written in Python, so it still feels like a local maximum.[00:47:34] Bret: Yeah.[00:47:34] swyx: Yeah, exactly. I'll force you to make a prediction. You know, Python's roughly 30 years old. 30 years from now, is Rust going to be bigger than Python?[00:47:42] Bret: I don't know this, but just, I don't even know if this is a prediction. I just am sort of saying stuff I hope is true. I would like to see an AI native programming language and programming system, and I say language, but I'm not sure language is even the right thing. But I hope in 30 years there's an AI native way we make [00:48:00] software that is wholly uncorrelated with the current set of programming languages,[00:48:04] Bret: or not uncorrelated, but I think most programming languages today were designed to be efficiently authored by people, and some have different trade offs.[00:48:15] Evolution of Programming Languages[00:48:15] Bret: You know, you have Haskell and others that were designed for abstractions for parallelism and things like that. You have programming languages like Python, which are designed to be very easily written, the sort of Perl and Python lineage, which is why data scientists use it.[00:48:31] Bret: It has an interactive mode, things like that. And I love, I'm a huge Python fan.
So despite all my Python trash talk, I'm a huge Python fan. At least two of my three companies were exclusively written in Python. And then C came out of the birth of Unix, and it wasn't the first, but certainly the most prominent first step after assembly language, right?[00:48:54] Bret: Where you had higher level abstractions, going beyond goto to abstractions [00:49:00] like the for loop and the while loop.[00:49:01] The Future of Software Engineering[00:49:01] Bret: So I just think that if the act of writing code is no longer a meaningful human exercise, maybe it will be, I don't know, I'm just saying it sort of feels like maybe it's one of those parts of history that will just sort of go away. But there's still the role of the software engineer, like the person actually building the system,[00:49:20] Bret: right? And what does a programming system for that form factor look like?[00:49:25] React and Front-End Development[00:49:25] Bret: And I, I just have a, I hope to, just like I mentioned, I remember I was at Facebook in the very early days when, when what is now React was being created. And I remember when it was released open source, I had left by that time, and I was just like, this is so f*****g cool.[00:49:42] Bret: Like, you know, to basically model your app independent of the data flowing through it just made everything easier. And now, you know, a lot of the front end software ecosystem is a little chaotic for me, to be honest with you. It's sort of like [00:50:00] abstraction soup right now for me, but some of those core ideas felt really ergonomic. I just wanna, I'm just looking forward to the day when someone comes up with a programming system that feels both really like an aha moment, but completely foreign to me at the same time, because they created it from first principles, recognizing that
Authoring code in an editor is maybe not, like, the primary reason why a programming system exists anymore.[00:50:26] Bret: And I think that's, like, that would be a very exciting day for me.[00:50:28] The Role of AI in Programming[00:50:28] swyx: Yeah, I would say, like, the various versions of this discussion have happened. At the end of the day, you still need to precisely communicate what you want. As a manager of people, as someone who has done many, many legal contracts, you know how hard that is.[00:50:42] swyx: And then now we have to talk to machines doing that, and AIs interpreting what we mean and reading our minds effectively. I don't know how to get across that barrier of translating human intent to instructions. And yes, it can be more declarative, but I don't know if it'll ever cross over from being [00:51:00] a programming language to something more than that.[00:51:02] Bret: I agree with you. And I actually do think, if you look at, like, a legal contract, you know, the imprecision of the English language, it's like a flaw in the system. How many[00:51:12] swyx: holes there are.[00:51:13] Bret: And I do think that when you're making a mission critical software system, I don't think it should be English language prompts.[00:51:19] Bret: I think that is silly, because you want the precision of a programming language. My point was less about that and more about the actual act of authoring it. Like, if you[00:51:32] Formal Verification in Software[00:51:32] Bret: think of it, some embedded systems do use formal verification. I know it's very common in, like, security protocols now, because the importance of correctness is so great.[00:51:41] Bret: My intellectual exercise is, like, why not do that for all software? I mean, probably that's silly, just literally to do what we literally do for these low level security protocols, but the only reason we don't is because it's hard and tedious, and hard and tedious are no longer factors.
So, like, if I could, I mean, [00:52:00] just think of, like, the silliest app on your phone right now. The idea that that app should be, like, formally verified for its correctness feels laughable right now, because, like, God, why would you spend the time on it?[00:52:10] Bret: But if it's zero cost, like, yeah, I guess so. I mean, it never crashes. That's probably good. You know, why not? I just want to, like, set our bars really high. Like, software has been amazing. There's a Marc Andreessen blog post, software is eating the world. And you know, our whole life is, is mediated digitally.[00:52:26] Bret: And that's just increasing with AI. And now we'll have our personal agents talking to the agents on the CRM platform, and it's agents all the way down, you know, our core infrastructure is running on these digital systems. And we've had a shortage of software developers for my entire life.[00:52:45] Bret: And as a consequence, you know, if you look, remember, like, health care got healthcare.gov, that fiasco, security vulnerabilities leading to state actors getting access to critical infrastructure. I'm like, we now have created this amazing system that can [00:53:00] fix this, you know, and I, I just want to, I'm both excited about the productivity gains in the economy, but I just think as software engineers, we should be bolder.[00:53:08] Bret: Like, we should have aspirations to fix these systems, so that in general, as you said, as precise as we want to be in the specification of the system, we can make it work correctly. Now, I'm being a little bit hand wavy, and I think we need some systems. I think that's where we should set the bar, especially when so much of our life depends on this critical digital infrastructure.[00:53:28] Bret: So I'm, I'm just, like, super optimistic about it. But actually, let's go to w

Tech Won't Save Us
Patreon Preview: Sam Altman's Self-Serving AGI Future w/ Julia Black

Tech Won't Save Us

Play Episode Listen Later Feb 10, 2025 6:12


Our Data Vampires series may be over, but Paris interviewed a bunch of experts on data centers and AI whose insights shouldn't go to waste. We're releasing those interviews as bonus episodes for Patreon supporters. Here's a preview of this week's premium episode with Julia Black, a reporter on The Information's Weekend Team. For the full interview, support the show on Patreon. Support the show

According To The Scripture
S2E46 The Adamic Singularity

According To The Scripture

Play Episode Listen Later Feb 7, 2025 61:38


Humanity May Reach Singularity By 2030. Darren Orf, 11/30/2024. By one unique metric, we could approach technological singularity by the end of this decade, if not sooner. A translation company developed a metric, Time to Edit (TTE), to calculate the time it takes for professional human editors to fix AI-generated translations compared to human ones. This may help quantify the speed toward singularity. An AI that can translate speech as well as a human could change society. In the world of artificial intelligence, the idea of “singularity” looms large. This slippery concept describes the moment AI advances beyond human control and rapidly transforms society. The tricky thing about AI singularity (and why it borrows terminology from black hole physics) is that it's enormously difficult to predict where it begins and nearly impossible to know what's beyond this technological “event horizon.” However, some AI researchers are on the hunt for signs of reaching singularity, measured by AI progress approaching skills and abilities comparable to a human's. One such metric, defined by Translated, a Rome-based translation company, is an AI's ability to translate speech at the accuracy of a human. Language is one of the most difficult AI challenges, but a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI). “That's because language is the most natural thing for humans,” Translated CEO Marco Trombetti said at a conference in Orlando, Florida, in December 2022. “Nonetheless, the data Translated collected clearly shows that machines are not that far from closing the gap.” The company tracked its AI's performance from 2014 to 2022 using a metric called “Time to Edit,” or TTE, which calculates the time it takes for professional human editors to fix AI-generated translations compared to human ones.
Over that 8-year period and analyzing over 2 billion post-edits, Translated's AI showed a slow, but undeniable improvement as it slowly closed the gap toward human-level translation quality. On average, it takes a human translator roughly one second to edit each word of another human translator, according to Translated. In 2015, it took professional editors approximately 3.5 seconds per word to check a machine-translated (MT) suggestion—today, that number is just 2 seconds. If the trend continues, Translated's AI will be as good as human-produced translation by the end of the decade (or even sooner). “The change is so small that every single day you don't perceive it, but when you see progress … across 10 years, that is impressive,” Trombetti said on a podcast. “This is the first time ever that someone in the field of artificial intelligence did a prediction of the speed to singularity.” Although this is a novel approach to quantifying how close humanity is to approaching singularity, this definition of singularity runs into similar problems of identifying AGI more broadly. And while perfecting human speech is certainly a frontier in AI research, the impressive skill doesn't necessarily make a machine intelligent (not to mention how many researchers don't even agree on what “intelligence” is). Whether these hyper-accurate translators are harbingers of our technological doom or not, that doesn't lessen Translated's AI accomplishment. An AI capable of translating speech as well as a human could very well change society, even if the true “technological singularity” remains ever elusive. Darren lives in Portland, has a cat, and writes/edits about sci-fi and how our world works. You can find his previous stuff at Gizmodo and Paste if you look hard enough.
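The figures in the article support a quick back-of-the-envelope extrapolation. This sketch assumes the trend is linear (the article only says the gap is closing steadily); the function names are mine, and the data points are the ones quoted above:

```python
# Toy linear extrapolation of Translated's Time to Edit (TTE) metric.
# Data points from the article: ~3.5 s/word in 2015, ~2.0 s/word today (2022).
# Human-to-human baseline: ~1.0 s/word. Assumes the linear trend continues.

def tte_projection(year: float) -> float:
    """Projected seconds of editing needed per machine-translated word."""
    y1, t1 = 2015, 3.5
    y2, t2 = 2022, 2.0
    slope = (t2 - t1) / (y2 - y1)  # roughly -0.21 s/word per year
    return t1 + slope * (year - y1)

def parity_year(human_baseline: float = 1.0) -> float:
    """Year at which projected TTE reaches the human baseline."""
    slope = (2.0 - 3.5) / (2022 - 2015)
    return 2015 + (human_baseline - 3.5) / slope

print(round(tte_projection(2025), 2))  # 1.36
print(round(parity_year(), 1))         # 2026.7
```

Under that (admittedly strong) linearity assumption, TTE crosses the one-second human baseline around 2026-2027, consistent with the article's "by the end of the decade (or even sooner)."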

Rocketship.fm
OpenAI's 2025 Roadmap: Smarter AI, Agents, and… Humanoid Robots?

Rocketship.fm

Play Episode Listen Later Feb 6, 2025 33:19


OpenAI is pushing the boundaries of artificial intelligence yet again. In this episode of Rocketship.FM, we break down what Chief Product Officer Kevin Weil revealed about OpenAI's roadmap for 2025 and beyond—including the latest AI model, O1, which is already outperforming previous versions in coding, math, and reasoning. But that's just the beginning. We also explore OpenAI's move into AI-powered agents designed to streamline everyday tasks, and the company's rumored return to humanoid robotics. And what about Artificial General Intelligence (AGI) and even Artificial Superintelligence (ASI)? OpenAI CEO Sam Altman has hinted that these once-distant milestones could be closer than we think. What happens when AI surpasses human intelligence? Will it be a utopia of limitless innovation, or are we opening a Pandora's box we can't close? Join us as we unpack OpenAI's vision for the future—and what it could mean for the world.

Artificial Intelligence in Industry with Daniel Faggella
AI Risk Management and Governance Strategies for the Future - with Duncan Cass-Beggs of Center for International Governance Innovation

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Feb 1, 2025 77:40


Today's guest is Duncan Cass-Beggs, Executive Director of the Global AI Risks Initiative at the Center for International Governance Innovation (CIGI). He joins Emerj CEO and Head of Research Daniel Faggella to explore the pressing challenges and opportunities surrounding Artificial General Intelligence (AGI) governance on a global scale. This is a special episode in our AI futures series that ties right into our overlapping series on AGI governance on the Trajectory podcast, where we've had luminaries like Eliezer Yudkowsky, Connor Leahy, and other globally recognized AGI governance thinkers. We hope you enjoy this episode. If you're interested in these topics, make sure to dive deeper into where AI is affecting the bigger picture by visiting emerj.com/tj2.

Faster, Please! — The Podcast

The 2020s have so far been marked by pandemic, war, and startling technological breakthroughs. Conversations around climate disaster, great-power conflict, and malicious AI are seemingly everywhere. It's enough to make anyone feel like the end might be near. Toby Ord has made it his mission to figure out just how close we are to catastrophe — and maybe not close at all!

Ord is the author of the 2020 book, The Precipice: Existential Risk and the Future of Humanity. Back then, I interviewed Ord on the American Enterprise Institute's Political Economy podcast, and you can listen to that episode here. In 2024, he delivered his talk, The Precipice Revisited, in which he reassessed his outlook on the biggest threats facing humanity.

Today on Faster, Please — The Podcast, Ord and I address the lessons of Covid, our risk of nuclear war, potential pathways for AI, and much more.

Ord is a senior researcher at Oxford University. He has previously advised the UN, World Health Organization, World Economic Forum, and the office of the UK Prime Minister.

In This Episode

* Climate change (1:30)
* Nuclear energy (6:14)
* Nuclear war (8:00)
* Pandemic (10:19)
* Killer AI (15:07)
* Artificial General Intelligence (21:01)

Below is a lightly edited transcript of our conversation.

Climate change (1:30)

. . . the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

Pethokoukis: Let's just start out by taking a brief tour through the existential landscape and how you see it now versus when you first wrote the book The Precipice, which I've mentioned frequently in my writings. I love that book, love to see a sequel at some point, maybe one's in the works . . . 
but let's start with the existential risk that has dominated many people's thinking for the past quarter-century: climate change.

My sense is, not just you, but many people are somewhat less worried than they were five years ago, 10 years ago. Perhaps they see at least the most extreme outcomes as less likely. How do you see it?

Ord: I would agree with that. I'm not sure that everyone sees it that way, but there were two really big and good pieces of news on climate that were rarely reported in the media. One of them is the question about how many emissions there'll be. We don't know how much carbon humanity will emit into the atmosphere before we get it under control, and there are these different emissions pathways, these RCP 4.5 and things like this you'll have heard of. And often, when people would give a sketch of how bad things could be, they would talk about RCP 8.5, which is the worst of these pathways, and we're very clearly not on that, and we're also, I think pretty clearly now, not on RCP 6, either. So the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

What are we doing right?

Ultimately, some of those pathways were based on business-as-usual ideas that there wouldn't be climate change as one of the biggest issues in the international political sphere over decades. So ultimately, nations have been switching over to renewables and low-carbon forms of power, which is good news. They could be doing much more of it, but it's still good news. Back when we initially created these things, I think we would've been surprised and happy to find out that we were going to end up among the better two pathways instead of the worst ones.

The other big one is that, as well as how much we'll emit, there's the question of how bad is it to have a certain amount of carbon in the atmosphere? 
In particular, how much warming does it produce? And this is something of which there's been massive uncertainty. The general idea is that we're trying to predict, if we were to double the amount of carbon in the atmosphere compared to pre-industrial times, how many degrees of warming would there be? The best guess since the year I was born, 1979, has been three degrees of warming, but the uncertainty has been somewhere between one and a half degrees and four and a half.

Is that Celsius or Fahrenheit, by the way?

This is all Celsius. The climate community has kept the same uncertainty from 1979 all the way up to 2020, and it's a wild level of uncertainty: Four and a half degrees of warming is three times one and a half degrees of warming, so the range is up to triple these levels of degrees of warming based on this amount of carbon. So massive uncertainty that hadn't changed over many decades.

Now they've actually revised that and have brought in the range of uncertainty. Now they're pretty sure that it's somewhere between two and a half and four degrees, and this is based on better understanding of climate feedbacks. This is good news if you're concerned about worst-case climate change. It's saying it's closer to the central estimate than we'd previously thought, whereas previously we thought that there was a pretty high chance that it could even be higher than four and a half degrees of warming.

When you hear these targets of one and a half degrees of warming or two degrees of warming, they sound quite precise, but in reality, we were just so uncertain of how much warming would follow from any particular amount of emissions that it was very hard to know. And that could mean that things are better than we'd thought, but it could also mean things could be much worse. 
And if you are concerned about existential risks from climate change, it's those kinds of tail events, where warming is much worse than we would've thought, in which things would really get bad. We're now pretty sure that we're not on one of those extreme emissions pathways, and also that we're not in a world where the temperature is extremely sensitive to those emissions.

Nuclear energy (6:14)

Ultimately, when it comes to the deaths caused by different power sources, coal . . . killed many more people than nuclear does — much, much more . . .

What do you make of this emerging nuclear power revival you're seeing across Europe, Asia, and the United States? In the United States, at least, it's partially being driven by the need for more power for these AI data centers. How does it change your perception of risk in a world where many rich countries, or maybe even not-so-rich countries, start re-embracing nuclear energy?

In terms of the local risks with the power plants, so risks of meltdown or other types of harmful radiation leak, I'm not too concerned about that. Ultimately, when it comes to the deaths caused by different power sources, coal, even setting aside global warming, just through particulates being produced in the soot, killed many more people than nuclear does — much, much more, and so nuclear is a pretty safe form of energy production as it happens, contrary to popular perception. So I'm in favor of that. But the proliferation concerns, if it is countries that didn't already have nuclear power, then the possibility that they would be able to use that to start a weapons program would be concerning.

And as sort of a mechanism for more clean energy. Do you view nuclear as clean energy?

Yes, I think so. It's certainly not carbon-producing energy. I think that it has various downsides, including the difficulty of knowing exactly what to do with the fuel; that will be a very long-lasting problem. 
But I think it's become clear that the problems caused by other forms of energy are much larger, and we should switch to the thing that has fewer problems, rather than more problems.

Nuclear war (8:00)

I do think that the Ukraine war, in particular, has created a lot of possible flashpoints.

I recently finished a book called Nuclear War: A Scenario, which is kind of a minute-by-minute look at how a nuclear war could break out. The book is terrifying because it really goes into a lot of detail — and I live near Washington DC, so when it gives its various scenarios, certainly my house is included in the blast zone, so really a frightening book. But when it tried to explain how a war would start, I didn't find it particularly compelling. The scenarios for actually starting a conflict, I didn't think sounded particularly realistic.

Do you feel — and obviously we have Russia invading Ukraine and loose talk by Vladimir Putin about nuclear weapons — do you feel more or less confident that we'll avoid a nuclear war than you did when you wrote the book?

Much less confident, actually. I guess I should say, when I wrote the book, it came out in 2020, I finished the writing in 2019, and ultimately we were in a time of relatively low nuclear risk, and I feel that the risk has risen. That said, I was trying to provide estimates for the risk over the next hundred years, and so I wasn't assuming that the low-risk period would continue indefinitely, but it was quite a shock to end up so quickly back in this period of heightened tensions and threats of nuclear escalation, the type of thing I thought was really from my parents' generation. So yes, I do think that the Ukraine war, in particular, has created a lot of possible flashpoints. That said, the temperature has come down on the conversation in the last year, so that's something.

Of course, the conversation might heat right back up if we see a Chinese invasion of Taiwan. 
I've been very bullish about the US economy and world economy over the rest of this decade, but the exception is as long as we don't have a war with China, from an economic point of view, but certainly also a nuclear point of view. Two nuclear-armed powers in conflict? That would not be an insignificant event from the existential-risk perspective.

It is good that China has a smaller nuclear arsenal than the US or Russia, but there could easily be a great tragedy.

Pandemic (10:19)

Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either.

The book comes out during the pandemic. Did our response to the pandemic make you more or less confident in our ability and willingness to confront that kind of outbreak? The worst one we'd seen in a hundred years?

Yeah, overall, it made me much less confident. There'd been general thought by those who look at these large catastrophic risks that when the chips are down and the threat is imminent, people will see it and will band together and put a lot of effort into it; that once you see the asteroid in your telescope and it's headed for you, then things will really get together — a bit like in the action movies or what have you.

That's where I take my cue from, exactly.

And with Covid, it was kind of staring us in the face. Those of us who followed these things closely were quite alarmed a long time before the national authorities were. Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either. That said, scientists, particularly developing RNA vaccines, did better than I expected.

In the years leading up to the pandemic, certainly we'd seen other outbreaks, they'd had the avian flu outbreak, and you know as well as I do, there were . . . 
how many white papers or scenario-planning exercises for just this sort of event. I think I recall a story where, in 2018, Bill Gates had a conversation with President Trump during his first term about the risk of just such an outbreak. So it's not as if this thing came out of the blue. In many ways we saw the asteroid, it was just pretty far away. But to me, that says something again about our ability, as humans, to deal with severe but infrequent risks.

And obviously, not having had a true global, nasty outbreak in a hundred years, where should we focus our efforts? On preparation? Making sure we have enough ventilators? Or our ability to respond? Because it seems like the preparation route will only go so far, and the reason it wasn't a much worse outbreak is because we have a really strong ability to respond.

I'm not sure if it's the same across all risks as to how preparation versus ability to respond, which one is better. In some risks, there's also other possibilities, like avoiding an outbreak, say, an accidental outbreak, happening at all, or avoiding a nuclear war starting and not needing to actually respond at all. I'm not sure if there's an overall rule as to which one is better.

Do you have an opinion on the origins of the Covid outbreak?

I don't know whether it was a lab leak. I think it's a very plausible hypothesis, but plausible doesn't mean it's proven.

And does the post-Covid reaction, at least in the United States, to vaccines, does that make you more or less confident in our ability to deal with . . . the kind of societal cohesion and confidence to tackle a big problem, to have enough trust? 
Maybe our leaders don't deserve that trust, but what do you make of this kind of pushback against vaccines and — at least in the United States — our medical authorities?

When Covid was first really striking Europe and America, it was generally thought that, while China was locking down the Wuhan area, Western countries wouldn't be able to lock down, that it wasn't something that we could really do, but then various governments did order lockdowns. That said, if you look at the data on movement of citizens, it turns out that citizens stopped moving around prior to the lockdowns, so the lockdown announcements were more kind of like the tail, rather than the dog.

But over time, citizens wanted to kind of get back out and interact more, and the rules were preventing them, and if a large fraction of the citizens were under something like house arrest for the better part of a year, would that lead to some fairly extreme resentment and some backlash, some of which was fairly irrational? Yeah, that is actually exactly the kind of thing that you would expect. It was very difficult to get a whole lot of people to row together and take the kind of coordinated response that we needed to prevent the spread, and pushing for that had some of these bad consequences, which are also going to make it harder for next time. We haven't exactly learned the right lessons.

Killer AI (15:07)

If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

We're more than halfway through our chat and now we're going to get to the topic probably most people would like to hear about: After the robots take our jobs, are they going to kill us? What do you think? What is your concern about AI risk?

I'm quite concerned about it. 
Ultimately, when I wrote my book, I put AI risk as the biggest existential risk, albeit the most uncertain, as well, and I would still say that. That said, some things have gotten better since then.

I would assume what makes you less confident is, one, what seems to be the rapid advance — not just the rapid advance of the technology, but you have the two leading countries in a geopolitical rivalry also being the leaders in the technology and not wanting to slow it down. I would imagine that would make you more worried that we will move too quickly. What would make you more confident that we would avoid any serious existential downsides?

I agree with your supposition that the attempts by the US and China to turn this into some kind of arms race are quite concerning. But here are a few things: Back when I was writing the book, the leading AI systems were things like AlphaGo, if you remember that, or the Atari-playing systems.

Quaint. Quite quaint.

It was very zero-sum, reinforcement-learning-based game playing, where these systems were learning directly to behave adversarially to other systems, and they could only understand a limited aspect of the world: struggle, and overcoming your adversary. That was really all they could do, and the idea of teaching them about ethics, or how to treat people, and the diversity of human values seemed almost impossible: How do you tell a chess program about that?

But then what we've ended up with is systems that are not inherently agents; they're not inherently trying to maximize something. Rather, you ask them questions and they blurt out some answers. These systems have read more books on ethics and moral philosophy than I have, and they've read all kinds of books about the human condition. 
Almost all novels that have ever been published, and pretty much every page of every novel, involve people judging the actions of other people and having some kind of opinions about them, and so there's a huge amount of data about human values, and how we think about each other, and what's inappropriate behavior. And if you ask the systems about these things, they're pretty good at judging whether something's inappropriate behavior, if you describe it.

The real challenge remaining is to get them to care about that, but at least the knowledge is in the system, and that's something that previously seemed extremely difficult to do. Also, there are versions of these systems that do reasoning and that spend longer with a private text stream where they think — it's kind of like sub-vocalizing thoughts to themselves before they answer. When they do that, these systems are thinking in plain English, and that's something that we really didn't expect. If you look at all of the weights of a neural network, it's quite inscrutable, famously difficult to know what it's doing, but somehow we've ended up with systems that are actually thinking in English and where that could be inspected by some oversight process. There are a number of ways in which things are better than I'd feared.

So what does your actual existential-risk scenario look like? What are you most concerned about happening with AI?

I think it's quite hard to be all that concrete on it at the moment, partly because things change so quickly. I don't think that there's going to be some kind of existential catastrophe from AI in the next couple of years, partly because the current systems require so much compute in order to run them that they can only be run at very specialized and large places, of which there's only a few in the world. 
So that means the possibility that they break out and copy themselves into other systems is not really there, in which case the possibility of turning them off is still there as well.

Also, they're not yet intelligent enough to be able to execute a lengthy plan. If you have some kind of complex task for them that requires, say, 10 steps — for example, booking a flight on the internet by clicking through all of the appropriate pages, and finding out when the times are, and managing to book your ticket, and fill in the special codes they sent to your email, and things like that — that's a somewhat laborious task, and the systems can't do things like that yet. It's still the case that, even if they've got a, say, 90 percent chance of completing any particular step, the 10 percent chances of failure add up, and eventually it's likely to fail somewhere along the line and not be able to recover. They'll probably get better at that, but at the moment, the inability to actually execute any complex plans does provide some safety.

Ultimately, the concern is that, at a more abstract level, we're building systems which are smarter than us at many things, and we're attempting to make them much more general and to be smarter than us across the board. If you know that one player is a better chess player than another, suppose Magnus Carlsen's playing me at chess, I can't predict exactly how he's going to beat me, but I can know with quite high likelihood that he will end up beating me. I'll end up in checkmate, even though I don't know what moves will happen in between here and there, and I think that it's similar with AI systems. 
If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

Artificial General Intelligence (21:01)

Ultimately, existential risks are global public goods problems.

I frequently check out the Metaculus online prediction platform, and I think currently on that platform it's 2027 for what they would call "weak AGI," artificial general intelligence — a date which has moved up two months in the past week as we're recording this, and I think 2031 has also accelerated for "strong AGI." So this is quite soon, 2027 or 2031. Is that kind of what you're assuming is going to happen, that we're going to have to deal with very powerful technologies quite quickly?

Yeah, I think that those are good numbers for the typical case, what you should be expecting. I think that a lot of people wouldn't be shocked if it turns out that there is some kind of obstacle that slows down progress and takes longer before it gets overcome, but it also wouldn't be surprising at this point if there are no more big obstacles and it's just a matter of scaling things up and doing fairly simple processes to get it to work.

It's now a multi-billion-dollar industry, so there's a lot of money focused on ironing out any kinks or overcoming any obstacles on the way. So I expect it to move pretty quickly, and those timelines sound very realistic. Maybe even sooner.

When you wrote the book, what did you put as the risk to human existence over the next hundred years, and what is it now?

When I wrote the book, I thought it was about one in six.

So it's still one in six . . . ?

Yeah, I think that's still about right, and I would say that most of that is coming from AI.

This isn't, I guess, a specific risk, but, to the extent that being positive about our future means also being positive on our ability to work together, countries working together, what do you make of society going in the other direction, where we seem more suspicious of other countries, or even — in the United States — more suspicious of our allies, more suspicious of international agreements, whether they're trade or military alliances? To me, I would think that the Age of Globalization would've, on net, lowered that risk to one in six, and if we're going to have less globalization, to me, that would tend to increase that risk.

That could be right. Certainly increased suspicion, to the point of paranoia or cynicism about other nations and their ability to form deals on these things, is not going to be helpful at all. Ultimately, existential risks are global public goods problems. The continued functioning of human civilization is this global public good, and existential risk is the opposite. And so these are things where, one way to look at it is that the US has about four percent of the world's people, so one in 25 people live in the US, and so an existential risk is hitting 25 times as many people as the US alone. So if every country is just interested in itself, it'll undervalue the risk by a factor of 25 or so, and countries need to work together in order to overcome that kind of problem. Ultimately, if one of us falls victim to these risks, then we all do, and so it definitely does call out for international cooperation. And I think that it has a strong basis for international cooperation. It is in all of our interests. 
There are also verification possibilities and so on, and I'm actually quite optimistic about treaties and other ways to move forward.

The Ten Podcast
258: Benefits of Cross-Training, America's Golden Age, AI, and the Fall of DEI

The Ten Podcast

Play Episode Listen Later Jan 30, 2025 80:20


In this episode of The TEN, we dive into the rapid changes shaping America's future. We discuss the Trump administration's breakneck pace, the new White House Press Secretary and key confirmations, the fall of DEI, and the looming arrival of Artificial General Intelligence. Plus, AI's growing influence, why cross-training is a must to level up your skills and much more! Follow us on Instagram @10podcast You can find us on Instagram (@10thplanetmelbourne) // (@Mannyzen) If you would like to support us, please share the show and/or leave us a review. Keep it 10!

World Economic Forum
What just happened in Davos, and how is the world different now?

World Economic Forum

Play Episode Listen Later Jan 30, 2025 73:55


What happened at the World Economic Forum's Annual Meeting 2025, where the world met to discuss 'Collaboration for the Intelligent Age'? On Day 1, Donald Trump was inaugurated for his second term as US president, announced he was withdrawing from the Paris climate deal as well as the World Health Organisation, and vowed to use trade tariffs to re-shore jobs. On Day 4 he addressed the meeting in a link-up from Washington. We hear some of that and talk to the people who lead the Forum's work throughout the year, reflecting on the impact of the meeting, held at a pivotal moment for world affairs. Catch up on all the action from the Annual Meeting 2025 at and across social media using the hashtag #WEF25. Davos 2025 sessions mentioned in this episode: Special address by Donald J. Trump, President of the United States of America; All Hands on Deck for the Energy Transition; The Dawn of Artificial General Intelligence?; Debating Tariffs. Forum reports and initiatives mentioned in this episode: Chief Economists Outlook: January 2025; Global Risks Report 2025; The Future of Jobs Report 2025; Global Cybersecurity Outlook 2025; First Movers Coalition; 1t.org; AI Governance Alliance; AI Competitiveness through Regional Collaboration; Global Lighthouse Network; Yes/Cities.

Cables2Clouds
The Stargate Project, Brought To You by the Underpants Gnomes?

Cables2Clouds

Play Episode Listen Later Jan 29, 2025 32:47 Transcription Available


What if Artificial General Intelligence (AGI) could be the job creator of the century? Buckle up for a hilarious yet thought-provoking exploration of this bold idea as we dissect the potential economic impact of AGI development alongside Chris, who aspires to up his Blue Sky game inspired by his brother Tim. We dive into compelling articles like the one from CRN, spotlighting Palo Alto Networks' maneuver to streamline their product offerings into a singular platform akin to the Apple ecosystem. This opens up the age-old debate about vendor lock-in, and we can't help but chuckle at the similarities with Cisco's approach. We'll also navigate through the labyrinth of product names, specifically Palo Alto's Prisma, and the challenges of achieving true platform integration.Cloud security is a jungle of acronyms and complexity, but fear not—we've got our machetes ready! Join us as we untangle the web of CSPM, CNAP, CIEM, and CASB, piecing together the puzzle of multi-cloud environments highlighted by a Fortinet report. While we question some of the report's methodologies, it undeniably underscores a trend towards centralized security dashboards. With businesses of all sizes grappling with diverse cloud security challenges, we set the stage for an upcoming segment about our own company's stance in this arena. Expect a mix of skepticism, humor, and serious conversation as we navigate this intricate landscape.Finally, we journey into the realm of AGI and job creation, challenging the narrative of inevitable AI-driven job losses. We speculate on the logistics behind such job creation, pondering the international AI race, and throwing in some humor about genetically modified apples for good measure. We wrap up with some playful banter about Tim's personal details and offer heartfelt thanks to our listeners. We hope you subscribe, follow us on social media, and visit our website for the full scoop. 
Our discussion is as juicy as a genetically modified apple, and you won't want to miss a bite!Wake up babe, a new apple just dropped:https://www.kissabel.com/Check out the Fortnightly Cloud Networking Newshttps://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/Visit our website and subscribe: https://www.cables2clouds.com/Follow us on Twitter: https://twitter.com/cables2cloudsFollow us on YouTube: https://www.youtube.com/@cables2clouds/Follow us on TikTok: https://www.tiktok.com/@cables2cloudsMerch Store: https://store.cables2clouds.com/Join the Discord Study group: https://artofneteng.com/iaatj

Everyone Is Right
The Big Picture Mind: What Every Elite is Missing

Everyone Is Right

Play Episode Listen Later Jan 26, 2025 110:45


Welcome to the Transformation Age

We are living in one of the most extraordinary moments in human history. The world is shifting beneath our feet — politically, economically, technologically, ecologically, and spiritually. This new era is characterized by rapid, self-reinforcing transformations across all aspects of life. Unlike previous historical shifts, change itself has become the dominant force, creating a world that is increasingly difficult to navigate with traditional ways of thinking. This is the mission of The Big Picture Mind — to cultivate a way of thinking that can navigate these vast changes, helping us make sense of complexity rather than being overwhelmed by it.

Why Big Picture Thinking?

Too often, our world is shaped by small ideologies masquerading as big pictures—fragmented views that fail to address the depth and interconnectedness of our crises. "Big picture" minds are those that can rise above these limitations, synthesizing knowledge across disciplines, paradigms, and perspectives. Robb introduces the idea that knowledge has evolved through four key stages:

Disciplinary – Specialized fields of study (economics, psychology, physics, etc.).
Interdisciplinary – The blending of fields to generate new insights (e.g., behavioral economics).
Transdisciplinary – Actual big pictures in the 21st century, identifying patterns that connect across all knowledge.
Arch-Disciplinary – An emerging, speculative level that distills the core onto-epistemic primitives of the universe common to all big pictures.

To meet the demands of the Transformation Age, we must think more holographically, learning to see the interwoven nature of reality with greater clarity and wisdom.

The Five Crises Defining Our Time

Robb outlines five seismic shifts reshaping our world:

Ecological Transformation
We are transitioning from the Holocene to the Anthropocene, where human activity is the dominant force shaping the planet. Climate change, biodiversity loss, and ecological degradation are no longer distant threats—they are shaping our societies now.

The Rise of Hyperreality
Borrowing from philosopher Jean Baudrillard, Robb describes how we increasingly live in a world of symbols detached from reality—a world where a meme coin can represent political power, and narratives are engineered rather than discovered. This disconnect is creating a profound crisis of discernment.

The Meaning Crisis
Across the world, people are struggling with existential confusion, depression, and a loss of purpose. Without a credible story of wholeness, individuals feel unmoored, caught between outdated mythologies and an arid, reductionist modernism.

The Technological Singularity
AI is accelerating toward Artificial General Intelligence (AGI) and beyond. If left unchecked, this could reify neo-feudal social structures, concentrating power among a small elite while diminishing social mobility. Governance systems are woefully unprepared for the scale of these disruptions.

The Breakdown of Global Governance
The world order that has existed since World War II—often referred to as Pax Americana—is fracturing. In its place, we see the return of realist imperialism, economic volatility, and social instability. Populism and reactionary authoritarianism are symptoms of this deeper structural unraveling.

The Metacrisis and the Integral Response

These crises do not exist in isolation — they form a "metacrisis", an interlocking systemic breakdown of coherence at all levels of human life. This calls for a new kind of intelligence — one that is capable of integrating perspectives rather than getting lost in fragmentation.

Crazy Wisdom
Episode #429: Breaking Free from BS Jobs: AI's Role in a More Creative Future

Crazy Wisdom

Play Episode Listen Later Jan 24, 2025 51:37


On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Reuben Bailon, an expert in AI training and technology innovation. Together, they explore the rapidly evolving field of AI, touching on topics like large language models, the promise and limits of general artificial intelligence, the integration of AI into industries, and the future of work in a world increasingly shaped by intelligent systems. They also discuss decentralization, the potential for personalized AI tools, and the societal shifts likely to emerge from these transformations. For more insights and to connect with Reuben, check out his LinkedIn.

Check out this GPT we trained on the conversation!

Timestamps:
00:00 Introduction to the Crazy Wisdom Podcast
00:12 Exploring AI Training Methods
00:54 Evaluating AI Intelligence
02:04 The Future of Large Action Models
02:37 AI in Financial Decisions and Crypto
07:03 AI's Role in Eliminating Monotonous Work
09:42 Impact of AI on Bureaucracies and Businesses
16:56 AI in Management and Individual Contribution
23:11 The Future of Work with AI
25:22 Exploring Equity in Startups
26:00 AI's Role in Equity and Investment
28:22 The Future of Data Ownership
29:28 Decentralized Web and Blockchain
34:22 AI's Impact on Industries
41:12 Personal AI and Customization
46:59 Concluding Thoughts on AI and AGI

Key Insights

The Current State of AI Training and Intelligence: Reuben Bailon emphasized that while large language models are a breakthrough in AI technology, they do not represent general artificial intelligence (AGI). AGI will require the convergence of various types of intelligence, such as vision, sensory input, and probabilistic reasoning, which are still under development. Current AI efforts focus more on building domain-specific competencies rather than generalized intelligence.

AI as an Augmentative Tool: The discussion highlighted that AI is primarily being developed to augment human intelligence rather than replace it. Whether through improving productivity in monotonous tasks or enabling greater precision in areas like medical imaging, AI's role is to empower individuals and organizations by enhancing existing processes and uncovering new efficiencies.

The Role of Large Action Models: Large action models represent an exciting frontier in AI, moving beyond planning and recommendations to executing tasks autonomously, with human authorization. This capability holds potential to revolutionize industries by handling complex workflows end-to-end, drastically reducing manual intervention.

The Future of Personal AI Assistants: Personal AI tools have the potential to act as highly capable assistants by leveraging vast amounts of contextual and personal data. However, the technology is in its early stages, and significant progress is needed to make these assistants truly seamless and impactful in day-to-day tasks like managing schedules, filling out forms, or making informed recommendations.

Decentralization and Data Ownership: Reuben highlighted the importance of a decentralized web where individuals retain ownership of their data, as opposed to the centralized platforms that dominate today. This shift could empower users, reduce reliance on large tech companies, and unlock new opportunities for personalized and secure interactions online.

Impact on Work and Productivity: AI is set to reshape the workforce by automating repetitive tasks, freeing up time for more creative and fulfilling work. The rise of AI-augmented roles could lead to smaller, more efficient teams in businesses, while creating new opportunities for freelancers and independent contractors to thrive in a liquid labor market.

Challenges and Opportunities in Industry Disruption: Certain industries, like software, which are less regulated, are likely to experience rapid transformation due to AI. However, heavily regulated sectors, such as legal and finance, may take longer to adapt. The discussion also touched on how startups and agile companies can pressure larger organizations to adopt AI-driven solutions, ultimately redefining competitive landscapes.

Hebrew Nation Online
“Come out of her, My people” Show ~ Mark Call weekly

Hebrew Nation Online

Play Episode Listen Later Jan 24, 2025 49:46


Why would newly-inaugurated President Trump take time out from arguably the busiest week in US presidential history to hold a big PR event announcing "Project Stargate" (what a suggestive name, too!) with the intent to spend a half-TRILLION dollars to develop an AGI, or "Artificial General Intelligence"? People like the late physicist Stephen Hawking described AGI as the greatest existential threat ever faced by the human race. And maybe the last. Why now? Why at all? And what does it mean to those of us who arguably can't stop it, but certainly have been warned, repeatedly, by Scripture? "Danger, DANGER, Will Robinson!"

AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic

In this conversation, Jaeden and Conor discuss Sam Altman's recent blog post that outlines significant advancements in AI, particularly focusing on the concepts of Artificial General Intelligence (AGI) and Superintelligence. They explore the implications of these advancements, the economic divide in access to AI technologies, and the competitive landscape of AI companies. The discussion highlights the potential benefits and risks associated with the rapid development of AI, emphasizing the need for ethical considerations and equitable access.

Chapters:
00:00 Sam Altman's Bold Predictions on AI
02:59 Understanding AGI and Superintelligence
05:46 The Economic Divide in AI Access
09:00 The Role of Competition in AI Development
11:59 Reflections on the Future of AI

AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Get on the AI Box Waitlist: https://AIBox.ai/
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about

Science Friday
‘Artificial General Intelligence' Is Apparently Coming. What Is It?

Science Friday

Play Episode Listen Later Jan 16, 2025 17:44


For years, artificial intelligence companies have heralded the coming of artificial general intelligence, or AGI. OpenAI, which makes the chatbot ChatGPT, has said that their founding goal was to build AGI that “benefits all of humanity” and “gives everyone incredible new capabilities.” Google DeepMind cofounder Dr. Demis Hassabis has described AGI as a system that “should be able to do pretty much any cognitive task that humans can do.” Last year, OpenAI CEO Sam Altman said AGI will arrive sooner than expected, but that it would matter much less than people think. And earlier this week, Altman said in a blog post that the company knows how to build AGI as we've “traditionally understood it.”

But what is artificial general intelligence supposed to be, anyway?

Ira Flatow is joined by Dr. Melanie Mitchell, a professor at the Santa Fe Institute who studies cognition in artificial intelligence and machine systems. They talk about the history of AGI, how biologists study animal intelligence, and what could come next in the field.

Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 438: AI News That Matters - January 13th, 2025

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jan 13, 2025 43:13


Is Microsoft laying off thousands because of AI? How did one small box from NVIDIA change the future of work? What are Google's big AI shakeups? And why is OpenAI getting into humanoid robots? So many AI questions. We've got the AI answers with the AI news that matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. OpenAI Robotics Department
2. NVIDIA's AI Projects and Tech
3. Google's AI Updates and AGI shift
4. Microsoft's Open Source Model
5. Meta speaks on AI and Software Engineering

Timestamps:
00:00 Daily AI news, podcast recaps, expert episodes.
04:34 NVIDIA's CES keynote: Major AI GPU announcements.
06:52 NVIDIA uses generative AI to enhance GPUs.
12:19 Local powerful AI models enhance data security.
16:10 NVIDIA forks Meta's Llama for enterprise AI.
20:44 Google aims for AGI using advanced world models.
22:34 Phi 4: Efficient, powerful, open-source AI model.
25:35 Microsoft prioritizes retaining AI talent with bonuses.
31:34 OpenAI revives robotics department for versatile robots.
35:18 OpenAI urges US to secure AI investments.
39:21 Observations connect over time; predictions often accurate.
40:01 Prediction on AI agent numbers was impactful.

Keywords: OpenAI, robotics, humanoid robots, adaptive robots, AI models, AI supercomputer, NVIDIA GPUs, DeepMind AI, Microsoft's open-source model, AI automation, Meta software engineering, US AI leadership, AI Predictions, AI industry news, RTX 50 series GPUs, Project Digits, NVIDIA's Grace Blackwell superchip, local AI computing, Cosmos, Isaac Robot Simulation, Nemotron Models, Enterprise AI, DeepMind's World Models, Google's Artificial General Intelligence, Google AI projects, Microsoft layoffs, Microsoft Phi-4 model, Hugging Face, Coding automation, Meta's AI advancement.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

Leveraging AI
154 | AGI is here, ASI and the singularity are around the corner, NVIDIA is taking over the world, and agents will be everywhere in 2025, and more AI news for the week ending on Jan 10th 2025

Leveraging AI

Play Episode Listen Later Jan 11, 2025 46:03 Transcription Available


Are we on the brink of a technological revolution—or chaos?

In this week's episode of Leveraging AI, host Isar Meitis breaks down the fast-paced developments in artificial intelligence that unfolded during the final weeks of the year. This episode unpacks key concepts like Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), exploring their potential to transform industries, solve global challenges, and... maybe even outsmart us. If you're a business leader looking to leverage AI while staying ahead of the curve, this episode is your AI survival guide.

Wondering how to train your team or yourself to adopt AI successfully? I share a proven AI business transformation framework and an exclusive opportunity to join a live course designed for leaders. Check out with $100 off using code LEVERAGINGAI100 at https://multiplai.ai/ai-course/

In this episode, you'll discover:
The surprising milestones in AGI and ASI, including OpenAI's O3 model outperforming humans in key tests.
How Sam Altman and Dario Amodei envision AI solving global challenges—while acknowledging the risks.
Why “thinking models” are reshaping AI's role in business, and how they might transform the market.
The growing influence of AI agents in companies like Google, eBay, and Moody's—and how they're reshaping industries.
Why leaders like Sundar Pichai are pushing for AI to become as ubiquitous as Google itself.
A behind-the-scenes look at AI-driven innovations from NVIDIA, Meta, and emerging players like DeepSeek.
A step-by-step plan to enhance AI literacy and adoption in your business for maximum ROI.

BONUS:
Sam Altman's Blog Post: "Reflections" - https://blog.samaltman.com/reflections
Dario Amodei's Essay: "Machines of Loving Grace" - https://darioamodei.com/machines-of-loving-grace

About Leveraging AI
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Free AI Consultation: https://multiplai.ai/book-a-call/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Daily Tech Headlines
OpenAI Moves Toward Artificial General Intelligence – DTH

Daily Tech Headlines

Play Episode Listen Later Jan 6, 2025


Samsung intros Vision AI for TVs at CES, Disney and Fubo join forces, Intel unveils new AI chips, Halliday Glasses have a near-eye “DigiWindow” display. MP3 Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters. Without you, none of this would be possible.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 429: AI News That Matters - December 30th, 2024

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Dec 30, 2024 47:27


Google is using Claude to improve Gemini? Why is OpenAI looking at building humanoids? What does a $100 billion price tag have to do with AGI? AI news and big tech didn't take a holiday break. Get caught up with Everyday AI's AI News That Matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Impact of Large Language Models
2. Google's AI Strategy
3. OpenAI's Restructuring and Robotics Research
4. AI Manipulation and Concerns
5. AGI and its Valuation
6. DeepSeek's Open-Source Model
7. Meta's AI Plan for Social Media

Timestamps:
00:00 Open-source AI competes with proprietary models.
04:21 DeepSeek v3: Affordable, open-source model for innovators.
07:39 Meta expands AI characters, faces safety risks.
10:42 OpenAI restructuring as Public Benefit Corporation (PBC).
17:04 Google compares models; Gemini flagged for safety.
19:40 Models often use other models for evaluation.
21:51 Google prioritizes Gemini AI for 2025 growth.
26:29 Google's Gemini lagged behind in updates, ineffective.
31:17 AI's intention economy forecasts, manipulates, sells intentions.
35:13 Hinton warns AI could outsmart humans, urges regulation.
39:24 Microsoft invested in OpenAI; AGI limits tech use.
40:36 Microsoft revised AGI use agreement with OpenAI.

Keywords: Large Language Models, Google's AI Focus, Gemini language models, AI evaluation, OpenAI Robotics, AI Manipulation Study, Anticipatory AI, Artificial General Intelligence, DeepSeek, Open-source AI, v3 model, Meta, AI Characters, Social Media AI, OpenAI corporate restructuring, Public Benefit Corporation, AI investment, Anthropic's Claude AI, AI Compliance, AI safety, Synthetic Data, AI User Manipulation, Geoffrey Hinton, AI risks, AI regulation, AGI Valuation, Microsoft-OpenAI partnership, Intellectual property in AI, AGI Potential, Sam Altman.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

Real Coffee with Scott Adams
Episode 2703 CWSA 12/28/24

Real Coffee with Scott Adams

Play Episode Listen Later Dec 28, 2024 88:22


Find my Dilbert 2025 Calendar at: https://dilbert.com/ God's Debris: The Complete Works, Amazon https://tinyurl.com/GodsDebrisCompleteWorks Find my "extra" content on Locals: https://ScottAdams.Locals.com Content: Politics, US Infrastructure Compromised, US Infrastructure Cyber Warfare, 2FA Compromised, Deportee App, CNN Abby Phillip, Biden Crime Family Evidence, Fake News Ratings Crash, J6 Political Prisoners, J6 Class Action Lawsuit, Imminent AI AGI, Artificial General Intelligence, Elon Musk, H-1B System Abuse, Professional Organized X Trolls, Troll Wave Pattern Recognition, H-1B Muddy Thinking, Demographic Hiring, H-1B Reform, Climate Model Assumptions, Scott Adams ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure. --- Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 419: Ask me anything AI: Grilling Jordan on everything AI

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Dec 10, 2024 41:47


❓ Will AI take my job?
❓ What AI skills are most important?
❓ How should my company be planning our LLM strategy?
❓ Are you an AI bot?

I've talked to hundreds of the smartest people in the world on AI and asked them all of the burning questions. We're flipping the tables with this episode -- you can put ME on the hot seat. Get your questions in now, and I'll be answering them live on this special edition of Everyday AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
AI News Rapid-Fire Audience Questions
AI Fears and Apprehensions
Email Management with AI
Everyday AI's Future Plans
AI Use in Podcast Planning
AI Effects on Human Interactions
AI in Email Management
AI tools Usage and Preferences
Everyday AI Team Dynamics
AI Use on the Web
AI Free Market
OpenAI Future Announcements
AI Trends and Models
AI Model Testing and Preferences
Sora Processing Plan

Timestamps:
00:00 Sign up for daily newsletter, Meta's nuclear AI.
06:12 Technical difficulties prevented planned live Q&A.
08:22 Transitioning to Windows to use AI for emails.
12:05 Relying on sponsors, podcast not profitable.
16:25 Prefers MeetGeek and Otter over Google Gemini.
19:19 Ensure guests fill form; use narrowed GPT.
24:02 AI content ubiquitous; human connection remains irreplaceable.
25:55 AI replacing human interaction may become normal.
27:43 Expect more value soon from $200 plan.
33:14 Primarily uses 3.5 SONNET; plans testing Nova.
34:28 Haven't tried Sora $200 plan yet due to busyness.
40:13 Everyday AI: Learn, collaborate, and explore together.
40:47 Join live stream, newsletter for AI updates.

Keywords: Everyday AI, Jordan Wilson, OpenAI, Google, Microsoft, Meta, nuclear energy, AI projects, quantum computing, Willow chip, Sora video tool, technical difficulties, live stream, audience questions, AI in email management, LinkedIn, AI concerns, AI future, AI accessibility, AI in communication, AI in human experience, AI tools, Chat GPT, CoPilot, AI-free services, Artificial General Intelligence (AGI), AI in entertainment, AI in gaming, custom GPTs, notebook LM.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/