Hypothetical human-level or stronger AI
From deepfakes to job automation, artificial intelligence is no longer on the horizon: it is already reshaping how we live, work, and govern. As the pace of technological change accelerates, the social, political, ethical and economic consequences are becoming harder to ignore. This autumn, Trinity College Dublin's School of Social Sciences and Philosophy presents AI: The Age of Disruption, a free public lecture series exploring the impact of artificial intelligence on human life. Across four Thursday evenings in September and October, experts from the School's four departments (Sociology, Philosophy, Economics and Political Science) will examine the complex realities of AI from multiple disciplinary perspectives.

Paul O'Grady, Head of the School of Social Sciences and Philosophy, explains: "Artificial intelligence is already transforming our world in profound ways, from the way we work and interact to how we govern and make decisions. This series brings together researchers from across the School to explore what that means for individuals, institutions and society as a whole. We hope these events will spark important conversations and invite the public to think critically about the kind of future we are creating."

Dates: Thursday 25 September, 2, 9 & 16 October 2025
Time: 7.00 pm - 8.30 pm
Location: The Synge Theatre, Arts Building, Trinity College Dublin
Admission: Free, but advance registration required. Full details and registration are available on Eventbrite.

Schedule of Lectures:

'A New Sociology of Humans and Machines', Thursday 25 September 2025. What happens when machines become part of society? Professor Taha Yasseri, from the Department of Sociology, explores how intelligent systems are shaping communication, cooperation and decision-making, and why we may need a new sociological framework to understand our changing social world.

'Machines Like Us? The Ethics of Artificial General Intelligence', Thursday 2 October 2025. Can we create minds more powerful than our own, and should we? Dr Will Ratoff, from the Department of Philosophy, investigates the ethical, social and existential dilemmas raised by artificial general intelligence, from the promise of progress to the risks of unchecked creation.

'The New Economic Order with AI', Thursday 9 October 2025. AI is revolutionising work and productivity, but at what cost? Dr Jian Cao, from the Department of Economics, discusses how artificial intelligence is reshaping the global economy, and what governments and societies must do to adapt, regulate and prepare for the future.

'Democracy & AI: Navigating the Political Risks', Thursday 16 October 2025. From deepfakes to data-driven campaigning, AI is challenging the foundations of democracy. Political scientists Professor Constantine Boussalis, Dr Tom Paskhalis and Dr Asli Ceren Cinar explore the rise of algorithmic influence, misinformation, and targeted propaganda, and how democratic systems can adapt and respond.
Nvidia counts several of its Magnificent Seven pals as customers buying its sophisticated AI chips. And the world's most valuable company says it could earn even more if geopolitical tensions between the US and China eased. But investors are debating whether the market is experiencing an AI boom or AI bubble. How should buy-and-hold investors think about investing in AI—are there any undervalued stocks left today? Dave Sekera is chief US market strategist at Morningstar Research Services and co-host of The Morning Filter podcast.

Learn about the new Morningstar Medalist Ratings for semiliquid funds during a live webinar on Morningstar's YouTube channel on Wednesday, Sept. 10. CEO Kunal Kapoor and ETF and Passive Strategies Research Director Bryan Armour will discuss what investors should know about private assets and the first funds to earn the new rating on the Investors First series.

On this episode:
Welcome back to Investing Insights, Dave. Nvidia recently wrapped up the Magnificent Seven's earnings season showing AI spending is still strong. What does this mean for the tech-driven stock market rally?
Do you think investors' expectations for these mega-cap names are unreasonably high? Why or why not?
Nvidia is sitting at the center of a geopolitical rivalry between the US and China. The company says it didn't sell its sophisticated AI chips to China in the previous quarter, and that a $50 billion opportunity exists. What do you make of this bottleneck and its impact?
Many market watchers are divided over whether the current environment is an AI boom or AI bubble. Can you talk about Morningstar's outlook?
How should buy-and-hold investors think about investing in artificial intelligence?
What are the most undervalued AI stocks right now?
What do you think about the big bets Wall Street and Main Street are making on AI? What do you think individual investors should keep in mind?
How is AI transforming Morningstar?

Read about topics from this episode. Subscribe to The Morning Filter on Apple Podcasts, or wherever you get your podcasts.
5 Stocks to Buy Before Their Big Discounts Disappear
Marvell Earnings: Buy the Dip and Focus on the Fundamentals
Nvidia Earnings: No Signs of a Slowdown in Demand for AI Chips
These Are the Best Mag Seven Stocks to Consider for AI Investing
The Best AI Stocks to Buy Now
Investors First: Evolving Expectations and Expanding Access

What to watch from Morningstar.
Do Dividend Stocks Benefit From Non-US Revenue?
This Classic Investment Strategy Is Still Alive in 2025
These 16 Standout Funds Are Making Big Bets. Do They Fit in Your Investment Portfolio?
Market Volatility: Investors Are Seeking Safety in Gold ETFs. Is It Working?

Read what our team is writing.
David Sekera
Kunal Kapoor
Ivanna Hampton

Follow us on social media.
Facebook: https://www.facebook.com/MorningstarInc/
X: https://x.com/MorningstarInc
Instagram: https://www.instagram.com/morningstar...
LinkedIn: https://www.linkedin.com/company/5161/
Amazon Kuiper's Early Wins, Cohere's Bold Strategy & Tesco vs. Broadcom

In this episode of #Trending, host Jim Love covers Amazon's early success with Project Kuiper, securing deals with JetBlue and the state of Wyoming before the satellite network is live. The Canadian AI company Cohere is highlighted for its contrarian approach and impressive valuation of nearly $7 billion. Tesco sues Broadcom over VMware license disputes, citing threats to its food supply chain. Lastly, Elon Musk teases that xAI's upcoming Grok 5 model might qualify as Artificial General Intelligence, though skepticism remains high. The show wraps up with a reminder to share the podcast and support the show.

00:00 Introduction and Headlines
00:35 Amazon's Project Kuiper: High-Profile Wins Before Launch
01:55 Cohere's Contrarian Success in AI
04:02 Tesco vs. Broadcom: Legal Battle Over VMware Licenses
06:00 Elon Musk and the AGI Hype with Grok 5
07:38 Conclusion and Listener Engagement
Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.

Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"

Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
monarchmoney.com with code IM
helixsleep.com/twit
pantheon.io
Hashtag Trending – Wednesday, September 3, 2025
Host: Jim Love

In This Episode:
OpenAI promises new parental controls and crisis safeguards for ChatGPT after lawsuits and tragic incidents involving teens. Critics argue these measures are long overdue.
Lawsuits against Character.AI and Google allege chatbots encouraged self-harm and manipulated young users.
Meta responds to reports of inappropriate AI bot interactions with teens, pledging to block harmful content and direct users to professional help.
A RAND study questions the effectiveness of AI safeguards in detecting indirect references to suicide.
Anthropic's valuation skyrockets to $183 billion, tripling since March, as it becomes one of the fastest-growing AI firms.
The UK's new Online Safety Act requires age verification for adult sites, but non-compliant sites are seeing a surge in traffic while compliant ones lose users. Similar trends are observed in the US.
AWS CEO Matt Garman warns against replacing junior staff with AI, emphasizing the importance of nurturing talent. Meanwhile, Salesforce CEO Marc Benioff touts AI-driven job cuts.
Discussion of whether AI is creating or destroying jobs, with industry leaders offering conflicting views.
A deep dive into the concept of “phase transitions” in large language models, exploring whether AI has already achieved a new level of understanding beyond mere prediction.
Reflections on the challenges of accuracy, the nature of intelligence in AI, and recent research suggesting a fundamental shift in how large models comprehend meaning.

Links & References: Papers and articles discussed are available at technewsday.com or technewsday.ca.

Listener Note: If you're having trouble accessing the show on Google speakers, please reach out via the contact form at technewsday.ca or .com.

Outro: Thanks for listening! For feedback or technical issues, contact us through our website. Have a wonderful Wednesday!
Artificial General Intelligence is a term that most of us have heard, a good number of us know how it's defined, and some claim to know what it will mean for the average marketer. Here's what OpenAI's Sam Altman said: “It will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly and at almost no cost be handled by the AI.” What nobody knows for sure is when it will be here. Some said that GPT-5 would herald the dawn of artificial general intelligence. This episode is airing in mid-2025, and GPT-5 has come out…and it is not widely believed to have achieved AGI. Our guest says AGI is a long way off, and more importantly, that it might not be the sought-for milestone we need for AI to be a revolutionary force in our lifetimes. Today's guest takes us through what it will take for AGI to truly arrive. We also talk about public vs private models, Mixture of Experts (MoE) models, the branches of AI such as foundational vs generative, and agents and agentic workflows. Today's guest graduated from DePaul with an MBA, has headed the AI/Analytics groups at Ernst & Young (EY), Gartner, CSL Behring and now at the Hackett Group. He has written several books and is here to talk about his fifth, which came out in 2025. So let's go to Chicago now to speak about “The Path to AGI” with its author. Let's welcome back for the 4th time on this show, more times than anyone else, John Thompson. Links to everything mentioned in the show are on the Funnel Reboot site's page for this episode.
Erichsen Geld & Gold, the podcast for successful investing
Today we have a big topic to tackle: artificial intelligence. Not, however, what many people already understand by that term, namely the so-called large language models that many of us now use regularly. The focus here is on AGI, Artificial General Intelligence. It is at least as smart as we humans are, and with that it brings not only enormous opportunities but also various threat scenarios. On the stock markets in particular, this creates risks for certain sectors, which I will of course address. One thing is clear: the topic is complex, and we will not be able to settle it conclusively in a single podcast episode. But there is no way around engaging with it.

► Get your access to the brand-new BuyTheDip app now! Sign up & download: http://buy-the-dip.de
► You can send me your topic requests at this email address: podcast@lars-erichsen.de
► You can find my BuyTheDip podcast with Sebastian Hell and Timo Baudzus here: https://buythedip.podigee.io
► Check out the new offer from the Rendite-Spezialisten here: https://www.rendite-spezialisten.de/aktion
► TIP: Get my weekly tips on gold, stocks, ETFs & co. – 100% free: https://erichsen-report.de/

Enjoy listening. I would be delighted to receive a rating and a comment. Every rating matters, because it helps make the podcast better known, so that even more people understand how to invest their money profitably.

► My YouTube channel: http://youtube.com/ErichsenGeld
► Follow my LinkedIn account: https://www.linkedin.com/in/erichsenlars/
► Follow me on Facebook: https://www.facebook.com/ErichsenGeld/
► Follow my Instagram account: https://www.instagram.com/erichsenlars

The music used was licensed via www.soundtaxi.net. An important closing note: for legal reasons, I am not allowed to give individual advice. The opinions I express are in no way a call to action and are not a solicitation to buy or sell securities. Disclosure regarding possible conflicts of interest: at the time of publication, the authors are invested in the following securities or underlying assets discussed: Bitcoin
// GUEST //
X: https://x.com/richrines

// SPONSORS //
iCoin: https://icointechnology.com/breedlove
Cowbolt: https://cowbolt.com/
Heart and Soil Supplements (use discount code BREEDLOVE): https://heartandsoil.co/
Blockware Solutions: https://mining.blockwaresolutions.com/breedlove
In Wolf's Clothing: https://wolfnyc.com/
Onramp: https://onrampbitcoin.com/?grsf=breedlove
Mindlab Pro: https://www.mindlabpro.com/breedlove
Coinbits: https://coinbits.app/breedlove
The Farm at Okefenokee: https://okefarm.com/
Orange Pill App: https://www.orangepillapp.com/

// PRODUCTS I ENDORSE //
Protect your mobile phone from SIM swap attacks: https://www.efani.com/breedlove
Lineage Provisions (use discount code BREEDLOVE): https://lineageprovisions.com/?ref=breedlove_22
Colorado Craft Beef (use discount code BREEDLOVE): https://coloradocraftbeef.com/
Salt of the Earth Electrolytes: http://drinksote.com/breedlove
Jawzrsize (code RobertBreedlove for 20% off): https://jawzrsize.com

// UNLOCK THE WISDOM OF THE WORLD'S BEST NON-FICTION BOOKS //
https://course.breedlove.io/

// SUBSCRIBE TO THE CLIPS CHANNEL //
https://www.youtube.com/@robertbreedloveclips2996/videos

// TIMESTAMPS //
0:00 - WiM Episode Trailer
1:13 - Bitcoin vs Sh*tcoin (Decentralization)
5:30 - Is Mining Concentration a Concern?
17:15 - iCoin Bitcoin Wallet
18:44 - Cowbolt: Settle in Bitcoin
19:59 - Bitcoin and Turing Completeness
25:40 - Ethereum's Product Market Fit
34:14 - Heart and Soil Supplements
35:14 - Mine Bitcoin with Blockware Solutions
36:16 - Tether and the Lightning Network
49:54 - The Future of Central Banking on a Bitcoin Standard
55:17 - Helping Lightning Startups with In Wolf's Clothing
56:09 - Onramp Bitcoin Custody
58:06 - The Future of Banking on a Bitcoin Standard
1:03:21 - Will Government Become Irrelevant?
1:07:22 - Working at Coinbase
1:21:42 - Mind Lab Pro Supplements
1:22:53 - Buy Bitcoin with Coinbits
1:24:21 - Money, Language, and Religion
1:33:24 - Coinbase and the Ethics of Sh*tcoins
1:39:15 - Are Cryptos Just Unregistered Securities?
1:43:50 - Is Bitcoin the Separation of Money and State?
1:47:55 - Will AI Shrink Government?
1:53:12 - Will AI and Bitcoin Bring UBI?
2:01:02 - The Farm at Okefenokee
2:02:12 - Orange Pill App
2:02:40 - How Far Off is AGI?
2:10:53 - Closed vs Open Source AI
2:12:08 - Will Everyone Become a Software Engineer?
2:19:53 - Closing Thoughts and Where to Find Rich Rines

// PODCAST //
Podcast Website: https://whatismoneypodcast.com/
Apple Podcast: https://podcasts.apple.com/us/podcast/the-what-is-money-show/id1541404400
Spotify: https://open.spotify.com/show/25LPvm8EewBGyfQQ1abIsE
RSS Feed: https://feeds.simplecast.com/MLdpYXYI

// SUPPORT THIS CHANNEL //
Bitcoin: 3D1gfxKZKMtfWaD1bkwiR6JsDzu6e9bZQ7
Sats via Strike: https://strike.me/breedlove22
Dollars via Paypal: https://www.paypal.com/paypalme/RBreedlove
Dollars via Venmo: https://account.venmo.com/u/Robert-Breedlove-2

// SOCIAL //
Breedlove X: https://x.com/Breedlove22
WiM? X: https://x.com/WhatisMoneyShow
Linkedin: https://www.linkedin.com/in/breedlove22/
Instagram: https://www.instagram.com/breedlove_22/
TikTok: https://www.tiktok.com/@breedlove22
Substack: https://breedlove22.substack.com/
All My Current Work: https://linktr.ee/robertbreedlove
The New World Order, Agenda 2030, Agenda 2050, The Great Reset and Rise of The 4IR
Artificial General Intelligence, Extinction, Depopulation, NWO, Agenda 2030, AI, Technology

MIT Student Drops Out saying AGI will Kill Everyone before She Can Graduate - Forbes (AI News)

To support the [Show] and its [Research] with Donations, please send all funds and gifts to: $aigner2019 (Cash App) or https://www.paypal.me/Aigner2019 or Zelle (1-617-821-3168). Shalom Aleikhem!
(Cross-posted from X, intended for a general audience.) There's a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples. Longpost: THE FIRST PIECE of Econ anti-pedagogy is hiding in the words “labor” & “capital”. These words conflate a superficial difference (flesh-and-blood human vs not) with a bundle of unspoken assumptions and intuitions, which will all get broken by Artificial General Intelligence (AGI). By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.” Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking [...]

---

Outline:
(08:50) Tweet 2
(09:19) Tweet 3
(10:16) Tweet 4
(11:15) Tweet 5
(11:31) 1.3.2 Three increasingly-radical perspectives on what AI capability acquisition will look like

The original text contained 1 footnote which was omitted from this narration.

---

First published: August 21st, 2025

Source: https://www.lesswrong.com/posts/xJWBofhLQjf3KmRgg/four-ways-econ-makes-people-dumber-re-future-ai

---

Narrated by TYPE III AUDIO.
The UK Investor Magazine was thrilled to welcome Alastair Unwin, Deputy Manager of Polar Capital Technology Trust, to delve into the trust and its focus on the adoption of artificial intelligence.

The Polar Capital Technology Trust aims to be at the forefront of this transition, seeking to identify and invest in the real drivers and beneficiaries of AI adoption – carefully navigating powerful technologies while positioning for long-term growth.

Find out more about the Polar Capital Technology Trust here.

Alastair outlined Polar Capital Technology Trust's disciplined approach to investing in the technology sector, emphasising how the fund navigates the inherent challenges of a high-growth sector that's often susceptible to market hype and speculation.

The conversation explored the significant turbulence experienced by technology stocks in the first half of the year, followed by a strong recovery that pushed markets to new highs. Alastair provided insights into the underlying drivers of these movements, including macroeconomic factors, investor sentiment shifts, and sector-specific catalysts that influenced the dramatic market swings.

A significant portion of the discussion focused on the market concentration within the “Magnificent Seven” technology stocks, several of which feature prominently in PCT's top holdings. Alastair addressed the critical question of what market conditions and developments would be necessary to achieve broader market participation beyond these mega-cap technology leaders.

Looking ahead to potential market broadening, Alastair highlighted specific investment opportunities that PCT is particularly excited about. The discussion covered sectors and companies positioned to benefit from a more diversified technology rally and the fund's strategy for capitalising on these emerging themes.

The podcast delved deep into the artificial intelligence investment theme, examining how returns have been predominantly driven by semiconductor companies and infrastructure providers – the “enablers” of AI. Alastair discussed the fund's perspective on identifying the next wave of AI “beneficiaries” and highlighted early success stories beyond the traditional “picks and shovels” approach to AI investing.

Drawing on recent earnings from technology giants Meta and Alphabet, the conversation addressed the “incumbent's dilemma” and the massive AI-related capital expenditure commitments from hyperscale cloud providers. Alastair provided his assessment of whether current spending levels are justified by potential returns and market opportunities.

The discussion explored PCT's approach to measuring AI market health, including their analysis of AI tokens processed as a key barometer. Alastair explained what tokens represent in the AI ecosystem and outlined other metrics the fund monitors to gauge the sustainable growth of artificial intelligence applications.

A forward-looking segment focused on Alastair's views on “Agentic AI” as the next significant step toward Artificial General Intelligence. The conversation covered what distinguishes agentic AI systems, their potential applications, and the investment opportunities this evolution presents for technology investors.

The podcast concluded with Alastair's perspective on the most compelling aspects of investing in the AI era, discussing long-term trends and opportunities that have PCT particularly optimistic about the years ahead in technology investing.

Hosted on Acast. See acast.com/privacy for more information.
Curious about artificial intelligence but feeling overwhelmed by the tech talk? You're not alone. In this approachable episode, I break down AI basics using simple analogies anyone can understand.

Imagine AI as a massive library condensed into a shoebox - one that can instantly retrieve information and explain it in ways that make sense to you. That's the beauty of this technology: it puts virtually unlimited knowledge at your fingertips without requiring a computer science degree to access it.

The truth is, you're already using AI every day - from your email's spam filter to Netflix recommendations to Google Maps directions. By understanding these tools more deeply, you can harness their power intentionally rather than passively. I share my favorite free AI resources like Microsoft Copilot and ChatGPT, explaining how to get started without spending a dime. Plus, I reveal my go-to prompt: "Explain this to me like I'm 10 years old" - perfect for simplifying complex topics without judgment.

Most importantly, I address the boundary between helpful tool and concerning technology. AI is not your therapist, doctor, or lawyer - it's more like a hammer with specific purposes. While current AI simply processes existing information, future Artificial General Intelligence (AGI) will potentially make independent decisions, which is why understanding this technology now is so crucial.

Whether you're tech-savvy or tech-averse, this episode offers valuable insights into navigating our increasingly AI-driven world. Got questions about getting started with AI? Reach out through allaboutthejoy.com, and join our Friday night livestreams at 6pm Pacific/9pm Eastern to continue the conversation!

Thank you for stopping by. Please visit our website: All About The Joy and add, like and share. You can also support us by shopping at our STORE - We'd appreciate that greatly. Also, if you want to find us anywhere on social media, please check out the link in bio page.

Music By Geovane Bruno, Moments, 3481
Editing by Team A-J
Host, Carmen Lezeth

DISCLAIMER: As always, please do your own research and understand that the opinions in this podcast and livestream are meant for entertainment purposes only. States and other areas may have different rules and regulations governing certain aspects discussed in this podcast. Nothing in our podcast or livestream is meant to be medical or legal advice. Please use common sense, and when in doubt, ask a professional for advice, assistance, help and guidance.
Anders Indset, leading business philosopher and author of The Singularity Paradox, places the emergence of Artificial Human Intelligence at the center of a broader reflection on what it means to be human in a world of accelerating technological change. Recognized by Thinkers50 as one of the most influential voices in technology, economy, and leadership, he is the founder and chairman of Njordis Group, a venture firm committed to enabling positive progress for humanity.As the commercialization of humanoid robotics gathers pace, Indset warns of the societal and existential consequences of systems that are increasingly capable of replicating human thought and behavior. In a landscape shaped by exponential transformation, where risks and possibilities unfold simultaneously, he argues for a new mindset: organizations as learning systems, guided by innovation and shaped by ethical cultures rooted in trust, change, and friction.
In this episode from the a16z Podcast, Dwarkesh Patel, Noah Smith, and Erik Torenberg discuss AGI, exploring how AI might transform labor markets, economic growth, space exploration, and geopolitical dynamics in the coming decades.

SPONSORS:
Zcash: Zcash offers private, encrypted digital money that protects users from AI-powered surveillance, allowing people to store and spend wealth without compromising their privacy. Try it out. Download Zashi wallet and follow @genzcash to learn more: https://x.com/genzcash
Notion: AI meeting notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, conversations, live exactly where you plan, build, and get things done. Here's an exclusive offer for our listeners. Try one month for free at notion.com/lp/econ102
NetSuite: More than 42,000 businesses have already upgraded to NetSuite by Oracle, the #1 cloud financial system bringing accounting, financial management, inventory, HR, into ONE proven platform. Download the CFO's Guide to AI and Machine learning: https://netsuite.com/102
Found: Found provides small business owners tools to track expenses, calculate taxes, manage cashflow, send invoices and more. Open a Found account for free at https://found.com/econ102

Shownotes brought to you by Notion AI Meeting Notes - try one month for free at notion.com/lp/econ102

Defining AGI: Noah and Dwarkesh begin by establishing working definitions of Artificial General Intelligence - from systems that "can do almost any job at least as well as humans" to more near-term AI that "can automate 95% of white-collar work"
Current AI Limitations: Present systems lack crucial continual learning capabilities that would allow them to build context over time and improve with experience like humans do
Historical Parallels: The conquistadors' conquest of the Aztec and Inca empires provides a cautionary tale about information asymmetry and how technological advantages can lead to domination
AI Cooperation: The hosts discuss the need for US-China cooperation on AI safety, similar to the "red telephone" during the Cold War, to prevent AI-related sabotage
AI Market Competition: Analysis of the surprising trend of increasing rather than decreasing competitors at the AI frontier, despite rising costs of training frontier models
Network Effects vs. Technical Capability: Discussion of whether brand recognition (like "ChatGPT") or technological advantages will determine market leaders

Quotes: "If you make the best AI, it gets even better. So why enter that? That's the big question." - Noah Smith

LINKS:
The a16z podcast: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Noahpinion newsletter: https://noahpinion.substack.com/
Dwarkesh podcast: https://www.dwarkesh.com/
Erik Torenberg's Substack: https://eriktorenberg.substack.com/

Got questions for Noah and Erik to answer on air? Send them to Econ102@Turpentine.co

FOLLOW ON X:
@noahpinion
@eriktorenberg
@turpentinemedia
In this episode of Elixir Wizards, host Sundi Myint chats with SmartLogic engineers and fellow Wizards Dan Ivovich and Charles Suggs about the practical tooling that surrounds Elixir in a consultancy setting. We dig into how standardized dev environments, sensible scaffolding, and clear observability help teams ship quickly across many client projects without turning every app into a snowflake. Join us for a grounded tour of what's working for us today (and what we've retired), plus how we evaluate new tech (including AI) through a pragmatic, Elixir-first lens.

Key topics discussed in this episode:
Standardizing across projects: why consistent environments matter in consultancy work
Nix (and flakes) for reproducible dev setups and faster onboarding
Igniter to scaffold common patterns (auth, config, workflows) without boilerplate drift
Deployment approaches: OTP releases, runtime config, and Ansible playbooks
Frontend pipeline evolution: from Brunch/Webpack to esbuild + Tailwind
Observability in practice: Prometheus metrics and Grafana dashboards
Handling time-series and sensor data
When Explorer can be the database
Picking the right tool: Elixir where it shines, integrations where it counts
Using AI with intention: code exploration, prototypes, and guardrails for IP/security
Keeping quality high across multiple codebases: tests, telemetry, and sensible conventions
Reducing context-switching costs with shared patterns and playbooks

Links mentioned:
http://smartlogic.io
https://nix.dev/
https://github.com/ash-project/igniter
Elixir Wizards S13E01 Igniter with Zach Daniel https://youtu.be/WM9iQlQSFg
https://github.com/elixir-explorer/explorer
Elixir Wizards S14E09 Explorer with Chris Grainger https://youtu.be/OqJDsCF0El0
Elixir Wizards S14E08 Nix with Norbert (Nobbz) Melzer https://youtu.be/yymUcgy4OAk
https://jqlang.org/
https://github.com/BurntSushi/ripgrep
https://github.com/resources/articles/devops/ci-cd
https://prometheus.io/
https://capistranorb.com/
https://ansible.com/
https://hexdocs.pm/phoenix/releases.html
https://brunch.io/
https://webpack.js.org/loaders/css-loader/
https://tailwindcss.com/
https://sass-lang.com/dart-sass/
https://grafana.com/
https://pragprog.com/titles/passweather/build-a-weather-station-with-elixir-and-nerves/
https://www.datadoghq.com/
https://sqlite.org/
Elixir Wizards S14E06 SDUI at Cars.com with Zack Kayser https://youtu.be/nloRcgngTk
https://github.com/features/copilot
https://openai.com/codex/
https://www.anthropic.com/claude-code
YouTube Video: Vibe Coding TEDCO's RFP https://youtu.be/i1ncgXZJHZs
Blog: https://smartlogic.io/blog/how-i-used-ai-to-vibe-code-a-website-called-for-in-tedco-rfp/
Blog: https://smartlogic.io/blog/from-vibe-to-viable-turning-ai-built-prototypes-into-market-ready-mvps/
https://www.thriftbooks.com/w/eragon-by-christopher-paolini/246801
https://tidewave.ai/

We Want to Hear Your Thoughts! Have questions, comments, or topics you'd like us to discuss in our season recap episode? Share your thoughts with us here: https://forms.gle/Vm7mcYRFDgsqqpDC9
How does a particle physicist end up shaping the UK Government's approach to artificial intelligence? In this thought‑provoking episode, Andrew Grill sits down with Dr Laura Gilbert CBE, former Director of Data Science at 10 Downing Street and now the Senior Director of AI at the Tony Blair Institute.

Laura's unique career path, from academic research in physics to the heart of policymaking, gives her a rare perspective on how governments can use emerging technologies not just efficiently, but humanely. She shares candid insights into how policy teams think about digital transformation, why the public sector faces very different challenges to private industry, and how to avoid technology that dehumanises decision‑making.

Drawing on examples from her work in Whitehall, Laura discusses the realities of forecasting in AI, the danger of "buzzword chasing", and why the next breakthrough in Artificial General Intelligence might well come from an unexpected player, possibly from within government itself.

This is a conversation for anyone curious about the intersection of science, policy, ethics, and technology, and how they can combine to make government more responsive, transparent, and human-centred.

What You'll Learn in This Episode
How Laura Gilbert moved from particle physics research into government AI leadership
The strategic role of AI in shaping modern policy and public services
Why forecasting in AI is harder than it looks—and how this impacts decision‑makers
The balance between technical capability and human‑centred governance
Why governments must look beyond the tech giants for innovative solutions
Lessons from the Evidence House and AI for Public Good programmes

Resources
Tony Blair Global Institute Website
UK Government AI Incubator
Laura on LinkedIn
Raindrop.io bookmarking app

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order

Your Host is Actionable Futurist® Andrew Grill
For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com

Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious
Artificial Intelligence (AI) has evolved from a merely theoretical concept into a transformative force permeating many aspects of life. Fundamentally, AI is technology that mimics human cognitive abilities such as learning, reasoning, and problem-solving. The document identifies three types of AI by capability: Artificial Narrow Intelligence (ANI), the AI that exists today and excels at specific tasks; Artificial General Intelligence (AGI), which is theoretical and would match human intelligence; and Artificial Superintelligence (ASI), which is speculative and would surpass human intelligence. AI's history has been marked by cycles of "AI winters" and "AI springs," showing that progress depends heavily on practical applications and sustained investment, a pattern now driven by rapid advances in generative AI and foundation models. The adoption of AI has brought significant gains in productivity and efficiency across many sectors. The document highlights how AI is used to automate repetitive tasks, accelerate scientific innovation, and tackle global challenges. In healthcare, for example, AI helps diagnose diseases and speeds up drug development, while in manufacturing it optimizes production and predictive maintenance. These innovations are supported by a range of AI methodologies, including machine learning, deep learning, and natural language processing. The shift to generative AI, which can create original content, marks a new phase in which AI not only analyzes but also creates, opening new possibilities in design, marketing, and personalized user interactions. However, AI's progress also raises crucial ethical and social challenges. Issues such as data privacy, algorithmic bias, and accountability are central debates. Left unaddressed, bias in training data can lead to systemic discrimination, and data privacy becomes a major risk given large-scale information collection. The document also discusses more speculative existential risks, such as the technological singularity and the alignment problem, which demand responsible AI development and the creation of global governance. With a focus on human-AI collaboration and ethics, the future of AI can be steered toward empowering humans rather than replacing them, while preserving human values.
A16z Podcast Key Takeaways
- Human labor may become less valuable, but the property that humans own – such as the S&P 500 – will experience significant value growth. Value will accrue to property owners, via capital income
- A practical definition of AGI: AI that can do 98% of jobs as well as humans and can automate 95% of white collar work
- People often think of AI replacing human jobs as a perfect substitute, but typically, new technological adoption is complementary to human labor
- The key capability of learning on the job has not been unlocked; this is a technological unlock that could supersede the brand effect. So while OpenAI is leading on brand, it could be usurped by a lab that makes a technical breakthrough
- Unless more compute comes online to continue the growth, we will have to rely on advancements in AI algorithms to carry the torch
- With AI, capital and labor are functionally equivalent; we can just build more data centers and robot factories (which can build even more data centers and robot factories), thus creating an explosive dynamic (see the sketch after this summary)
- The optimistic vision for humanity's role in an AI-driven future mirrors how we currently treat retirees: valuing their past contributions and supporting them even as they step back from direct economic productivity
- The emergence of AGI will resemble the Industrial Revolution more than it will the creation of the atom bomb. There was not 'one machine' that enabled the Industrial Revolution; there was a broader process of growth and automation due to many complementary innovations
- A sovereign-wealth fund type structure may describe the future of human work: humans buy shares in investment firms that manage the investment of AI stuff and then become broad-based shareholders in the development of AI. This is what Alaska does with oil

Read the full notes @ podcastnotes.org

In this episode, Erik Torenberg is joined in the studio by Dwarkesh Patel and Noah Smith to explore one of the biggest questions in tech: what exactly is artificial general intelligence (AGI), and how close are we to achieving it?

They break down:
- Competing definitions of AGI — economic vs. cognitive vs. "godlike"
- Why reasoning alone isn't enough — and what capabilities models still lack
- The debate over substitution vs. complementarity between AI and human labor
- What an AI-saturated economy might look like — from growth projections to UBI, sovereign wealth funds, and galaxy-colonizing robots
- How AGI could reshape global power, geopolitics, and the future of work

Along the way, they tackle failed predictions, surprising AI limitations, and the philosophical and economic consequences of building machines that think, and perhaps one day, act, like us.
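One way to make the "capital and labor are functionally equivalent" point concrete is a toy growth model. This is an editorial sketch under stated assumptions (Cobb-Douglas production, a constant savings rate s, no depreciation, and AI letting effective labor be manufactured from capital), not a calculation from the episode:

\[
Y = A\,K^{\alpha} L_{\mathrm{eff}}^{1-\alpha}, \qquad L_{\mathrm{eff}} = cK \;\Longrightarrow\; Y = A\,c^{1-\alpha} K
\]
\[
\dot{K} = sY = s\,A\,c^{1-\alpha} K \;\Longrightarrow\; K(t) = K(0)\,e^{\,s A c^{1-\alpha} t}
\]

Once the reproducible factor can stand in for labor, output is effectively linear in capital, diminishing returns from a fixed labor supply stop binding, and accumulation compounds exponentially; if A itself improves as more AI capital is built, growth is faster still.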
Timecodes:
0:00 Intro
0:33 Defining AGI and General Intelligence
2:38 Human and AI Capabilities Compared
7:00 AI Replacing Jobs and Shifting Employment
15:00 Economic Growth Trajectories After AGI
17:15 Consumer Demand in an AI-Driven Economy
31:00 Redistribution, UBI, and the Future of Income
31:58 Human Roles and the Evolving Meaning of Work
41:21 Technology, Society, and the Human Future
45:43 AGI Timelines and Forecasting Horizons
54:04 The Challenge of Predicting AI's Path
57:37 Nationalization, Geopolitics, and the Global AI Race
1:07:10 Brand and Network Effects in AI Dominance
1:09:31 Final Thoughts

Resources:
Find Dwarkesh on X: https://x.com/dwarkesh_sp
Find Dwarkesh on YT: https://www.youtube.com/c/DwarkeshPatel
Subscribe to Dwarkesh's Substack: https://www.dwarkesh.com/
Find Noah on X: https://x.com/noahpinion
Subscribe to Noah's Substack: https://www.noahpinion.blog/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Join AI correspondents Artie Intel and Micheline Learning as they report on the fast-moving world of Artificial Intelligence. From the launch of OpenAI’s groundbreaking GPT-5 to bold predictions of the arrival of Artificial General Intelligence, global AI policy shifts, and industry-shaping innovations, this episode keeps you ahead of the curve. Discover the hottest new AI tools, jaw-dropping breakthroughs, and remarkable accomplishments shaping our future. Packed with expert insights, lively banter, and a healthy dose of curiosity, this is The AI Report.
In this episode, hosts Lois Houston and Nikita Abraham, together with Senior Cloud Engineer Nick Commisso, break down the basics of artificial intelligence (AI). They discuss the differences between Artificial General Intelligence (AGI) and Artificial Narrow Intelligence (ANI), and explore the concepts of machine learning, deep learning, and generative AI. Nick also shares examples of how AI is used in everyday life, from navigation apps to spam filters, and explains how AI can help businesses cut costs and boost revenue. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. I'm so excited about this one because we're going to dive into the world of artificial intelligence, speaking to many experts in the field. Nikita: If you've been listening to us for a while, you probably know we've covered AI from a bunch of different angles. But this time, we're dialing it all the way back to basics. We wanted to create something for the absolute beginner, so no jargon, no assumptions, just simple conversations that anyone can follow. 01:08 Lois: That's right, Niki. You don't need to have a technical background or prior experience with AI to get the most out of these episodes. In our upcoming conversations, we'll break down the basics of AI, explore how it's shaping the world around us, and understand its impact on your business. Nikita: The idea is to give you a practical understanding of AI that you can use in your work, especially if you're in sales, marketing, operations, HR, or even customer service. 01:37 Lois: Today, we'll talk about the basics of AI with Senior Cloud Engineer Nick Commisso. Hi Nick! Welcome back to the podcast. Can you tell us about human intelligence and how it relates to artificial intelligence? And within AI, I know we have Artificial General Intelligence, or AGI, and Artificial Narrow Intelligence, or ANI. What's the difference between the two? Nick: Human intelligence is the intellectual capability of humans that allow us to learn new skills through observation and mental digestion, to think through and understand abstract concepts and apply reasoning, to communicate using language and understand non-verbal cues, such as facial expressions, tone variation, body language. We can handle objections and situations in real time, even in a complex setting. We can plan for short and long-term situations or projects. And we can create music, art, or invent something new or have original ideas. If machines can replicate a wide range of human cognitive abilities, such as learning, reasoning, or problem solving, we call it artificial general intelligence. 
Now, AGI is hypothetical for now, but when we apply AI to solve problems with specific, narrow objectives, we call it artificial narrow intelligence, or ANI. AGI is a hypothetical AI that thinks like a human. It represents the ultimate goal of artificial intelligence, which is a system capable of chatting, learning, and even arguing like us. If AGI existed, it would take the form like a robot doctor that accurately diagnoses and comforts patients, or an AI teacher that customizes lessons in real time based on each student's mood, pace, and learning style, or an AI therapist that comprehends complex emotions and provides empathetic, personalized support. ANI, on the other hand, focuses on doing one thing really well. It's designed to perform specific tasks by recognizing patterns and following rules, but it doesn't truly understand or think beyond its narrow scope. Think of ANI as a specialist. Your phone's face ID can recognize you instantly, but it can't carry on a conversation. Google Maps finds the best route, but it can't write you a poem. And spam filters catch junk mail, but it can't make you coffee. So, most of the AI you interact with today is ANI. It's smart, efficient, and practical, but limited to specific functions without general reasoning or creativity. 04:22 Nikita: Ok then what about Generative AI? Nick: Generative AI is a type of AI that can produce content such as audio, text, code, video, and images. ChatGPT can write essays, but it can't fact check itself. DALL-E creates art, but it doesn't actually know if it's good. Or AI song covers can create deepfakes like Drake singing "Baby Shark." 04:47 Lois: Why should I care about AI? Why is it important? Nick: AI is already part of your everyday life, often working quietly in the background. ANI powers things like navigation apps, voice assistants, and spam filters. Generative AI helps create everything from custom playlists to smart writing tools. And while AGI isn't here yet, it's shaping ideas about what the future might look like. Now, AI is not just a buzzword, it's a tool that's changing how we live, work, and interact with the world. So, whether you're using it or learning about it or just curious, it's worth knowing what's behind the tech that's becoming part of everyday life. 05:32 Lois: Nick, whenever people talk about AI, they also throw around terms like machine learning and deep learning. What are they and how do they relate to AI? Nick: As we shared earlier, AI is the ability of machines to imitate human intelligence. And Machine Learning, or ML, is a subset of AI where the algorithms are used to learn from past data and predict outcomes on new data or to identify trends from the past. Deep Learning, or DL, is a subset of machine learning that uses neural networks to learn patterns from complex data and make predictions or classifications. And Generative AI, or GenAI, on the other hand, is a specific application of DL focused on creating new content, such as text, images, and audio, by learning the underlying structure of the training data. 06:24 Nikita: AI is often associated with key domains like language, speech, and vision, right? So, could you walk us through some of the specific tasks or applications within each of these areas? Nick: Language-related AI tasks can be text related or generative AI. Text-related AI tasks use text as input, and the output can vary depending on the task. Some examples include detecting language, extracting entities in a text, extracting key phrases, and so on. 
06:54 Lois: Ok, I get you. That's like translating text, where you can use a text translation tool, type your text in the box, choose your source and target language, and then click Translate. That would be an example of a text-related AI task. What about generative AI language tasks? Nick: These are generative, which means the output text is generated by the model. Some examples are creating text, like stories or poems, summarizing texts, answering questions, and so on. 07:25 Nikita: What about speech and vision? Nick: Speech-related AI tasks can be audio-related or generative. Speech-related AI tasks use audio or speech as input, and the output can vary depending on the task. For example, speech-to-text conversion, speaker recognition, or voice conversion, and so on. Generative AI tasks are generative, i.e., the output audio is generated by the model (for example, music composition or speech synthesis). Vision-related AI tasks can be image-related or generative. Image-related AI tasks use an image as the input, and the output depends on the task. Some examples are classifying images or identifying objects in an image. Facial recognition is one of the most popular image-related tasks; it's often used for surveillance and tracking people in real time, and it appears in a lot of different fields, like security and biometrics, law enforcement, entertainment, and social media. For generative AI tasks, the output image is generated by the model. For example, creating an image from a textual description or generating images in a specific style or at high resolution, and so on. It can create extremely realistic new images and videos by generating original 3D models of objects, such as machines, buildings, medications, people, and landscapes, and so much more. 08:58 Lois: This is so fascinating. So, now we know what AI is capable of. But Nick, what is AI good at? Nick: AI frees you to focus on creativity and the more challenging parts of your work. Now, AI isn't magic. It's just very good at certain tasks. It handles work that's repetitive, time-consuming, or too complex for humans, like processing data or spotting patterns in large data sets. AI can take over routine tasks that are essential but monotonous. Examples include entering data into spreadsheets, processing invoices, or even scheduling meetings, freeing up time for more meaningful work. AI can also support professionals by extending their abilities. This includes tools like AI-assisted coding for developers, real-time language translation for travelers or global teams, and advanced image analysis to help doctors interpret medical scans much more accurately. 10:00 Nikita: And what would you say is AI's sweet spot? Nick: That would be tasks that are both doable and valuable. A few examples of tasks that are technically feasible and have business value: predicting equipment failure, which reduces downtime and lost business. Call center automation, like routing calls to the right person, which saves time and improves customer satisfaction. Document summarization and review, which saves time for busy professionals. And inspecting power lines, which is dangerous work; automating it protects human life and saves time. 10:48 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! 
Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 11:30 Nikita: Welcome back! Now, one big way AI is helping businesses today is by cutting costs, right? Can you give us some examples of this? Nick: Now, AI can contribute to cost reduction in several key areas. For instance, chatbots are capable of managing up to 50% of customer queries. This significantly reduces the need for manual support, thereby lowering operational costs. AI can streamline workflows, for example, reducing invoice processing time from 10 days to just 1 hour. This leads to substantial savings in both time and resources. In addition to cost savings, AI can also support revenue growth. One way is by enabling personalization and upselling. Platforms like Netflix use AI-driven recommendation systems to influence user choices. This not only enhances the user experience but also increases engagement and subscription revenue. Another is by unlocking new revenue streams. AI technologies, such as generative video tools and virtual influencers, are creating entirely new avenues for advertising and branded content, expanding business opportunities in emerging markets. 12:50 Lois: Wow, saving money and boosting bottom lines. That's a real win! But Nick, how is AI able to do this? Nick: Now, data is what teaches AI. Just like we learn from experience, so does AI. It learns from good examples, bad examples, and sometimes even the absence of examples. The quality and variety of data shape how smart, accurate, and useful AI becomes. Imagine teaching a kid to recognize animals using only pictures of squirrels that are labeled dogs. That would be very confusing at the dog park. AI works the exact same way, where bad data leads to bad decisions. With the right data, AI can be powerful and accurate. But with poor or biased data, it can become unreliable and even misleading. AI amplifies whatever you feed it. So, give it gourmet data, not data junk food. AI is like a chef. It needs the right ingredients. It needs numbers for predictions, like will this product sell? It needs images for cool tricks like detecting tumors, and text for chatting, or generating excuses for why you'd be late. Variety keeps AI from being a one-trick pony. To map data types to techniques: numbers feed machine learning models that predict things like the weather; text feeds generative AI, where chatbots write emails or bad poetry; images feed deep learning models that identify defective parts on an assembly line; and audio can be used to transcribe a doctor's dictation into text. 14:35 Lois: With so much data available, things can get pretty confusing, which is why we have the concept of labeled and unlabeled data. Can you help us understand what that is? Nick: Labeled data are like flashcards, where everything has an answer. Spam filters learn from emails that are already marked as junk, and X-rays are marked either normal or pneumonia. Let's say we're training AI to tell cats from dogs, and we show it a hundred labeled pictures. Cat, dog, cat, dog, etc. Over time, it learns, hmm, fluffy and pointy ears? That's probably a cat. And then we test it with new pictures to verify. Unlabeled data is like a mystery box, where AI has to figure things out for itself. Social media posts or product reviews have no labels, so AI clusters them by similarity. 
AI finding trends in unlabeled data is like a kid sorting through LEGOs without instructions. No one tells them which blocks go together. 15:36 Nikita: With all the data that's being used to train AI, I'm sure there are issues that can crop up too. What are some common problems, Nick? Nick: AI's performance depends heavily on the quality of its data. Poor or biased data leads to unreliable and unfair outcomes. Dirty data includes errors like typos, missing values, or duplicates. For example, an age recorded as 250, or left as NA, can confuse the AI. A variety of data cleaning techniques are available: missing values can be filled in, and duplicates can be removed. AI can also inherit human prejudices if the data is unbalanced. For example, a hiring AI may favor one gender if past hires were mostly male. Ensuring diverse and representative data helps promote fairness. Good data is required to train better AI, and data is often messy and needs to be processed before training. 16:39 Nikita: Thank you, Nick, for sharing your expertise with us. To learn more about AI, go to mylearn.oracle.com and search for the AI for You course. As you complete the course, you'll find skill checks that you can attempt to solidify your learning. Lois: In our next episode, we'll dive deep into fundamental AI concepts and terminologies. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 17:05 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
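Nick's flashcard and mystery-box analogies correspond to supervised learning and clustering, and the age-of-250 example to basic data cleaning. As a minimal sketch only (the toy features, labels, and the choice of scikit-learn and pandas are illustrative assumptions, not tools the episode or the AI for You course prescribes), those three ideas might look like this in Python:

```python
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Labeled data ("flashcards"): every example comes with an answer.
# Toy features: [fluffiness, ear pointiness]; labels are already known.
X_labeled = [[0.9, 0.8], [0.2, 0.1], [0.8, 0.9], [0.3, 0.2]]
y_labeled = ["cat", "dog", "cat", "dog"]
classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(X_labeled, y_labeled)            # learn from answered examples
print(classifier.predict([[0.85, 0.75]]))       # test on a new, unseen example

# Unlabeled data ("mystery box"): no answers, so group items by similarity.
X_unlabeled = [[0.9, 0.7], [0.1, 0.2], [0.8, 0.8], [0.2, 0.1]]
print(KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabeled))

# Dirty data: fill in missing values and drop duplicates before training.
df = pd.DataFrame({"age": [34, None, 250, 34], "label": ["ok", "ok", "check", "ok"]})
df["age"] = df["age"].fillna(df["age"].median())
df = df.drop_duplicates()
```

The cluster numbers returned in the second step carry no meaning on their own; as in the LEGO analogy, someone still has to look at each group and decide what it represents, and an out-of-range value like 250 still needs a human judgment call on whether to correct or discard it.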
There's a new most powerful AI model in townApple is trying to make a ChatGPT competitor.And OpenAI? Well.... they're in a capacity crunch.Big Tech made some BIG moves in AI this week. And you probably missed them. Don't worry. We gotchyu. On Mondays, Everyday AI brings you the AI News that Matters. No B.S. No marketing fluff. Just what you need to know to be the smartest person in AI at your company. Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:OpenAI Study Mode in ChatGPT LaunchGoogle Gemini 2.5 Deep Think ReleaseGemini 2.5 Parallel Thinking and Coding BenchmarksGoogle AI Mode: PDF and Canvas FeaturesNotebook LM Video Overviews CustomizationMicrosoft Edge Copilot Mode Experimental RolloutOpenAI GPT-5 Model Launch DelaysApple Building In-House ChatGPT CompetitorMicrosoft and OpenAI Partnership RenegotiationAdditional AI Tool Updates: Runway, Midjourney, IdeogramTimestamps:00:00 AI Industry Updates and Competition03:22 ChatGPT's Study Mode Promotes Critical Thinking09:02 "Google AI Search Mode Enhancements"10:21 Google AI Enhances Learning Tools16:14 Microsoft Edge Introduces Copilot Mode20:18 OpenAI GPT-5 Delayed Speculation22:42 Apple Developing In-House ChatGPT Rival27:06 Microsoft-OpenAI Partnership Renegotiation30:51 Microsoft-OpenAI Partnership Concerns Rise33:23 AI Updates: Video, Characters, AmazonKeywords:Microsoft and OpenAI renegotiation, Copilot, OpenAI, GPT-5, AI model, Google Gemini 2.5, Deep Think mode, Google AI mode, Canvas mode, NotebookLM, AI browser, Agentic browser, Edge browser, Perplexity Comet, Sora, AI video tool, AI image editor, Apple AI chatbot, ChatGPT competitor, Siri integration, Artificial General Intelligence, AGI, Large Language Models, AI education tools, Study Mode, Academic cheating, Reinforcement learning, Parallel thinking, Code Bench Competition, Scientific reasoning, Chrome, Google Lens, Search Live, AI-powered search, PDF upload, Google Drive integration, Anthropic, Meta, Superintelligent labs, Amazon Alexa, Fable Showrunner, Ideogram, Midjourney, Luma Dream Machine, Zhipu GLM 4.5, Runway Alif, Adobe Photoshop harmonize, AI funding, AI product delays, AI feature rollout, AI training, AI onboarding, AI-powered presentations, AI-generated overviews, AI in business, AI technology partnership, AI investment, AI talent acqSend Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this episode, Erik Torenberg is joined in the studio by Dwarkesh Patel and Noah Smith to explore one of the biggest questions in tech: what exactly is artificial general intelligence (AGI), and how close are we to achieving it?They break down:Competing definitions of AGI — economic vs. cognitive vs. “godlike”Why reasoning alone isn't enough — and what capabilities models still lackThe debate over substitution vs. complementarity between AI and human laborWhat an AI-saturated economy might look like — from growth projections to UBI, sovereign wealth funds, and galaxy-colonizing robotsHow AGI could reshape global power, geopolitics, and the future of workAlong the way, they tackle failed predictions, surprising AI limitations, and the philosophical and economic consequences of building machines that think, and perhaps one day, act, like us. Timecodes: 0:00 Intro0:33 Defining AGI and General Intelligence2:38 Human and AI Capabilities Compared7:00 AI Replacing Jobs and Shifting Employment15:00 Economic Growth Trajectories After AGI17:15 Consumer Demand in an AI-Driven Economy31:00 Redistribution, UBI, and the Future of Income31:58 Human Roles and the Evolving Meaning of Work41:21 Technology, Society, and the Human Future45:43 AGI Timelines and Forecasting Horizons54:04 The Challenge of Predicting AI's Path57:37 Nationalization, Geopolitics, and the Global AI Race1:07:10 Brand and Network Effects in AI Dominance1:09:31 Final Thoughts Resources: Find Dwarkesh on X: https://x.com/dwarkesh_spFind Dwarkesh on YT: https://www.youtube.com/c/DwarkeshPatelSubscribe to Dwarkesh's Substack: https://www.dwarkesh.com/Find Noah on X: https://x.com/noahpinionSubscribe to Noah's Substack: https://www.noahpinion.blog/ Stay Updated: Let us know what you think: https://ratethispodcast.com/a16zFind a16z on Twitter: https://twitter.com/a16zFind a16z on LinkedIn: https://www.linkedin.com/company/a16zSubscribe on your favorite podcast app: https://a16z.simplecast.com/Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Episode 248 of RevolutionZ asks, what if the real danger of advanced AI isn't robots taking over the world, but humans willingly but unintentionally surrendering our humanity? What if AI need not go rogue for its collateral damage to fundamentally hurt humanity? What if our most likely dystopian future isn't machines battling us to death, but machines doing exactly what we ask—better than we ever could? AI is spreading through society at an unprecedented rate, with exponentially growing functionality. While critics point to potential limitations in data, computational resources, or energy requirements slowing AI's gains to a crawl, the industry continues to race toward Artificial General Intelligence and beyond. Picture a future where AI teaches your children more patiently than human teachers, diagnoses illness more accurately than human doctors, creates more beautiful art than human artists, and provides more satisfying companionship than other humans. Of course it isn't here yet. But is it coming? What happens to us if AI-guided robots do for us everything meaningful that we humans now do? Is that utopia or dystopia? What if we don't lose our humanity because machines force us to succumb, but because we prefer what AI offers until we are so dependent that changing course would be even worse than suffering on? Is this danger something I'm hallucinating? Is it so subtle it doesn't exist, or so profoundly dangerous that we must pay serious attention? Will we become passive consumers of massive AI creativity? Will our uniquely human capacities atrophy from disuse? Today's AI can already write not only letters but also novels, compose and play music, diagnose and treat illness, hold conversations, provide sympathy, and complete self-chosen tasks. What's next? Is it time for us to demand serious regulation while we still can? Not only to protect jobs (a good reason), to prevent misuse by bad actors (a good reason), and to prevent a sci-fi robot apocalypse (maybe a Hollywood exaggeration), but also to protect the essence of what makes us human? Support the show
In schools with limited resources, large class sizes, and wide differences in student ability, individualized learning has become a necessity. Artificial intelligence offers powerful tools to help meet those needs, especially in underserved communities. But the way we introduce those tools matters.This week, Matt Kirchner talks with Sam Whitaker, Director of Social Impact at StudyFetch, about how AI can support literacy, comprehension, and real learning outcomes when used with purpose. Sam shares his experience bringing AI education to a rural school in Uganda, where nearly every student had already used AI without formal guidance. The results of a two-hour project surprised everyone and revealed just how much potential exists when students are given the right tools.The conversation covers AI as a literacy tool, how to design platforms that encourage learning rather than shortcutting, and why student-facing AI should preserve creativity, curiosity, and joy. Sam also explains how responsible use of AI can reduce educational inequality rather than reinforce it.This is a hopeful, practical look at how education can evolve—if we build with intention.Listen to learn:Surprising lessons from working with students at a rural Ugandan school using artificial intelligenceWhat different MIT studies suggest about the impacts of AI use on memory and productivityHow AI can help U.S. literacy rates, and what far-reaching implications that will haveWhat China's AI education policy for six-year-olds might signal about the global race for responsible, guided AI use3 Big Takeaways:1. Responsible AI use must be taught early to prevent misuse and promote real learning. Sam compares AI to handing over a car without driver's ed—powerful but dangerous without structure. When AI is used to do the thinking for students, it stifles creativity and long-term retention instead of developing it.2. AI can help close educational gaps in schools that lack the resources for individualized learning. In many underserved districts, large class sizes make one-on-one instruction nearly impossible. AI tools can adapt to students' needs in real time, offering personalized learning that would otherwise be out of reach.3. AI can play a key role in addressing the U.S. literacy crisis. Sam points out that 70% of U.S. inmates read at a fourth-grade level or below, and 85% of juvenile offenders can't read. Adaptive AI tools are now being developed to assess, support, and gradually improve literacy for students who have been left behind.Resources in this Episode:To learn about StudyFetch, visit: www.studyfetch.comOther resources:MIT Study "Experimental Evidence on the Productivity Effects of General Artificial Intelligence"MIT Study "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task"Learn more about the Ugandan schools mentioned: African Rural University (ARU) and Uganda Rural Development anWe want to hear from you! Send us a text.Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn
Lionel examines the suspicious death of Anthony Bourdain, highlighting inconsistencies in the official suicide narrative, his outspoken criticism of elites and trafficking, and parallels to other celebrity deaths. Lionel also discusses Artificial Intelligence (AI) and Artificial General Intelligence (AGI), exploring the existential threat of AGI's recursive self-improvement. Lionel talks to an 88-year-old "conspiracy person" and former radio professional. Another caller, an addict in recovery, discusses the "free high" of anesthesia during medical procedures and a potential future of brain-stimulated highs. A third caller asks about the implications of AI-generated performers like Elvis holograms. Learn more about your ad choices. Visit megaphone.fm/adchoices
On this edition of The Other Side of Midnight, Lionel explores his fascination with groups and "cults," drawing on personal experiences with movements like Amway and the AME church. A significant segment highlights the music industry, particularly Jerry Wexler's legacy, his coining of "Rhythm and Blues," and work with Aretha Franklin and the Rolling Stones. Lionel voices profound concern about Artificial General Intelligence (AGI), deeming it an "existential threat" and discussing "nullifying apps". He encourages questioning official narratives, citing inconsistencies in Anthony Bourdain's death and linking it to other suspicious "suicides". The show explores the nature of addiction as brain chemistry, with insights on painkiller and gambling recovery. Diverse discussions include experiences with door-to-door sales (Fuller Brush, Mary Kay), the unique origins of Detroit-style pizza, and the value of old media like 8-track tapes. Lionel's "logophile" love for words and his past stuttering are also discussed. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of the Verified Podcast by Bitcoin Suisse, hosts Dominic Weibel and Wolfgang Vitale dive deep into the future of AI and blockchain with Trent McConaghy, a visionary entrepreneur who's been at the forefront of both fields for over 25 years.This fascinating conversation explores the intersection of AI and blockchain, the path toward Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), and how brain-computer interfaces might be humanity's bridge to staying relevant in an age of super-intelligent machines. Trent shares his unique perspective on sovereign AI, the energy bottlenecks facing AI development, and his bold vision for human enhancement through high-bandwidth brain-computer interfaces.From Ocean Protocol to the future of human consciousness, this episode challenges conventional thinking about what it means to be human in an age of rapidly advancing artificial intelligence. It's a mind-expanding discussion that'll leave you questioning everything you thought you knew about the future.
What's at stake for humanity amid the arms race to AGI? Dr. Ben Goertzel should know. He legit coined the term AGI.
Send us a textIn this episode of Embedded Insiders, we're joined by Bahar Sadeghi, Technical Director at CCC, and Alysia Johnson, CCC President, to talk about the latest expansion of the CCC Digital Key™ Certification Program, which now includes support for Bluetooth Low Energy (BLE) and Ultra-Wideband (UWB) technologies.To learn more or register for an upcoming CCC Plugfest, visit: carconnectivity.org/eventsLater in the episode, Ken chats with Russell Ruben, WW Automotive and IoT Segment Marketing Director at Sandisk, about the evolution of automotive storage, why fast data retrieval is more critical than ever, and how AI is reshaping autonomy and control within vehicles.But first, Rich, Ken, and Tiera kick things off with a conversation on Artificial General Intelligence, what it could mean for the tech landscape, whether companies should pursue it, and whether something that powerful is even achievable.For more information, visit embeddedcomputing.com
Stories we're covering this week:• Mansfield resident still missing from Hill Country flood• Cash donations most effective for Kerrville Flood Relief• Former Mansfield constable launches Texas House bid• Police Department opens registration for fall citizens academy• In Sports, local football standout says “Guns Up!”In the Features Section:• Methodist Mansfield president Juan Fresquez invites the community to partake in a special fundraiser in Methodist Mansfield News to Know• Brian Certain serves up a tropical taste of Bali in this week's Cocktail of the Week• We dive into the healthcare industry in 40 Under 40And in the talk segment, Steve concludes his deep dive into computer science with Artificial Intelligence expert Vaughan Wynne-Jones. Plus, your chance to win a $25 gift card to a Mansfield restaurant of your choice with our Mansfield Trivia Question, courtesy of Joe Jenkins Insurance. We are Mansfield's only source for news, talk and information. This is About Mansfield.
Martin Mawyer is president of Christian Action Network. Martin began his career as a journalist for the Religion Today news syndicate and as the Washington news correspondent for Moody Monthly magazine. In 1990 he founded the Christian Action Network. As Jim opened this edition of Crosstalk, he noted a just-released Newsmax story that someone used Artificial Intelligence-powered software to imitate Secretary of State Marco Rubio's voice and writing style in contacting foreign ministers, a U.S. governor and a member of Congress. It's thought that the offender was likely attempting to manipulate powerful government officials with the goal of gaining access to information or accounts. So exactly where is Artificial Intelligence (AI) going and into whose hands is it falling? If you haven't been concerned up to this point, consider that just recently Mark Zuckerberg announced the creation of META Super Intelligence Labs to propel the advancement from Artificial General Intelligence (AGI) to Artificial Super Intelligence (ASI). ASI can hack into any system in existence such as water treatment systems. It can also break codes or even come up with biological weapons. However, what's even more concerning is his desire to make this open source. This means that anyone would have access to this super intelligence machine, and if they choose to, they could remove any human life parameters that are part of it in order to pursue unlawful goals.
A CMO Confidential Interview with Andy Sack and Adam Brotman, Co-Founders and Co-CEO's of Forum 3, authors of the book AI First, previously at Microsoft and Starbucks. They discuss why AI is different from previous technology advances and the series of "Holy Shit!" moments experienced when interviewing Sam Altman, Bill Gates and others. Key topics include: their belief that AI is "moving faster than you think" since it isn't constrained by an adoption curve or infrastructure; the power of Artificial General Intelligence which will be smarter than most experts; why trying to calculate the ROI of AI is comparable to measuring the return on electricity; and the possibility of 95% of marketing and agency jobs being impacted over the next 5 years. Tune in to hear how Chat GPT scored a top grade on the AP Biology Exam, how Moderna became an AI leader, and their tips for staying near the front of the wave.This week on CMO Confidential, host Mike Linton sits down with Adam Brotman, former Chief Digital Officer of Starbucks and co-CEO of J.Crew, and Andy Sack, venture capitalist and Managing Partner at Keen Capital. Together they co-authored AI First and co-founded Forum3, a company on a mission to educate businesses on how to thrive in the AI era.In this episode, Adam and Andy recount their interviews with leaders like Sam Altman, Bill Gates, and Reid Hoffman—and unpack why we are at a true “Holy Sh*t Moment” in technology.Learn how generative AI is poised to replace 95% of marketing tasks, what agentic AI means for the future of work, and why marketers need to shift from campaign thinking to orchestration and system design—fast.Topics Covered: • What Adam and Andy learned from interviewing tech's top minds • Why artificial general intelligence (AGI) is closer than you think • How AI tools will transform agency and in-house marketing roles • Why marketers must experiment now—or risk irrelevance • The unexpected productivity ROI of adopting AI toolsThis episode isn't just about AI—it's about how business leaders and marketers must transform to remain relevant in the age of exponential change.00:00 - Intro & AI-Powered Marketing by Publicis Sapient 01:42 - Welcome + Adam Brotman & Andy Sack intro 04:45 - Why “AI First” started as “Our AI Journey” 08:13 - The “Holy Sh*t” moment explained 10:00 - Interviewing Sam Altman and the AGI revelation 15:50 - Bill Gates' AI holy sh*t moment 20:30 - What AGI means for marketers and agencies 25:20 - Agentic AI and spinning up marketing agents 30:40 - Consumer behavior and synthetic influencers 34:50 - How agencies must evolve or die 38:20 - The case study of Moderna's AI-first approach 41:00 - Evaluating AI vendors + building internal councils 45:10 - The ROI of AI: Productivity & Unlocks 49:00 - Playbook for becoming an AI-first org 52:30 - Funny poker shirt story + parting advice 56:00 - Closing thoughts and next episode teaser #GenerativeAI #CMOConfidential #AdamBrotman #AndySack #Forum3 #MarketingAI #AIInMarketing #AIRevolution #HolyShitMoment #AIFirst #SamAltman #BillGates #AGI #MarketingPodcast #DigitalTransformation #FutureOfWork #AIProductivity #ChiefMarketingOfficer #CMOLife #AIPlaybook #MarketingLeadership #AIForBusinessSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
On today's podcast episode, we discuss the various definitions of artificial general intelligence (AGI) and try to come up with the best one we can. Then we look at how smart humans are compared to current AI models. Join Senior Director of Podcasts and host Marcus Johnson, and Analysts Jacob Bourne and Gadjo Sevilla. Listen everywhere and watch on YouTube and Spotify. To learn more about our research and get access to PRO+ go to EMARKETER.com Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode click here: https://www.emarketer.com/content/podcast-btn-artificial-general-intelligence-explained-will-ai-smarter-than-us © 2025 EMARKETER Cint is a global insights company. Our media measurement solutions help advertisers, publishers, platforms, and media agencies measure the impact of cross-platform ad campaigns by leveraging our platform's global reach. Cint's attitudinal measurement product, Lucid Measurement, has measured over 15,000 campaigns and has over 500 billion impressions globally. For more information, visit cint.com/insights.
Could AI make your job obsolete? Episode 265 of the Six Five Podcast tackles this question and other hot topics. Patrick Moorhead and Daniel Newman explore Amazon's AI-driven hiring slowdown and the potential impact on white-collar jobs. From HPE Discover highlights to OpenAI's legal battles with Microsoft, and a debate over the ethics of AI companies using copyrighted data for training, the boys are back with insightful commentary on the rapidly evolving tech landscape. This week's handpicked topics include: Intro: Recent events, including HPE Discover and the Six Five Summit Amazon's Announcement About Workforce Reduction: This aligns with broader industry trends, where AI and automation are reshaping workforce needs across various sectors & spans Amazon's diverse operations, including physical AI and robotics in warehouses, autonomous delivery systems, and white-collar knowledge work. (The Decode) Microsoft and OpenAI's Partnership: Examining the complex relationship between Microsoft and OpenAI, and the definition and implications of “AGI,” Artificial General Intelligence. (The Decode) Intel's Strategic Moves: Analysis of Lip Bu Tan's recent shifts at Intel, including its automotive division shutdown. Plus, speculation on Intel's future focus and strategy. (The Decode) Fair Use and AI Training: A debate on the use of copyrighted material for AI training. (The Flip) Market Performance and Earnings: A review of Micron's recent earnings and market performance, and a look at the overall market trends and AI-related stocks. (Bulls & Bears) NVIDIA's Market Performance: NVIDIA stock climbs to all-time highs & the factors contributing to their success in the AI market. (Bulls & Bears) AI and Market Competition: Multiple winners in the AI chip market and an analysis of potential market share for companies like AMD, NVIDIA, Broadcom, and Marvell. (Bulls & Bears) For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Pod so you never miss an episode.
Gayatri Kalyanaraman is in conversation with Dr. Fred Jordan, CEO and Co-Founder at AlpVision and FinalSpark and an expert in anticounterfeit technologies and biocomputing, who talks about his journey to make a difference in AI and what propelled him to do that. 1. Early Career — Curiosity, Coincidence, and Counterfeits: Fred describes himself modestly as a French physicist who stumbled into entrepreneurship by "provoking luck." He and his co-founder Martin Kutter began with research in digital watermarking, creating invisible marks on media. Their startup AlpVision emerged from this — aimed at detecting counterfeit goods using image analysis. Lessons from early failures (9 out of 10 products failed) were key to refining their success. 2. Making the Mark — Patents, Passion, and Intellectual Property: Fred emphasizes building a business around passion, but insists on profitability too. He stresses the importance of understanding finance, even for technical founders. As a multi-founder and active programmer, he still codes when needed — including writing software that led to a patent and a successful tech deployment in China. 3. Creating a Legacy — Bio-Neurons and a Sustainable Future: FinalSpark emerged from Fred and Martin's desire to return to fundamental research after years of commercial success. Their mission: dramatically reduce the energy and resources required to run AI by leveraging real neurons instead of digital simulations — achieving up to 1 million times greater energy efficiency. In their lab in Switzerland, they've created a testbed where biological neural tissues are grown, connected via electrodes, and streamed in real time — with microfluidics feeding them 24/7. Fred draws a direct parallel between learning in artificial neural networks (via adjusting weights) and the biological challenge of inducing learning by reconfiguring synaptic connections. This forms the crux of building a true biological computing server. "When you have artificial neurons, learning is done by setting the right connections between them. We need to do the same in biology. That's how humans learn — and that's what we have to replicate in vitro." The long-term vision is bold: create biological servers at scale (10cm x 100m tissues) that could power AI with drastically less energy; make biological intelligence mainstream, just as LED lights replaced incandescent bulbs; and build a future with hybrid bio-artificial objects — think of a glass that detects your mood and adjusts your drink accordingly. Breakaway quote-worthy moments: "Trial and error is really precious... Being a co-founder, failures teach you very, very valuable lessons." "I'm not building something for today — I want it to make sense even from 100 million kilometers away." His thoughts on entrepreneurship: "You need to be not bad at many things. Everything is holistic." "If you don't work for your dreams, you work for someone else's." Fred Jordan is an experienced Chief Executive Officer with a demonstrated history of successful and profitable businesses, skilled in innovation, computer science, artificial intelligence, programming, marketing, management, and entrepreneurship, with a scientific education (M.Sc. in physics and Ph.D. in signal processing). He is co-founder of AlpVision, a supplier of technologies and smartphone solutions for automatic detection of genuine and fake products, and FinalSpark, a startup using biological neural networks for the design of Artificial General Intelligence.
Dr. Fred Jordan started his journey in technology entrepreneurship after completing his PhD at EPFL in applied mathematics. This work gave him the inspiration for his first successful start-up, AlpVision, where he and his co-founder Martin Kutter excelled at overcoming engineering challenges and created an innovative solution for product authentication. The company became very profitable and is still successfully run by them. This brought the co-founders Fred and Martin to a new, even bigger challenge: addressing the problem of Artificial General Intelligence. Creating a 'Thinking Machine' is a dream of many engineers. A machine that can reason like a human being is considered by many to be a peak achievement of engineering. Although we currently observe a flourishing of AI models that give the impression of being able to think, as they successfully 'fake' human thinking with advanced statistics, this has nothing to do with human reasoning, which is capable of creating new ideas and concepts outside its own experience. This is what a 'real' thinking machine should do as well. Fred and Martin decided to address this problem by testing state-of-the-art methods in AI models, such as in silico spiking neural networks, genetic programming, and many versions of in silico neural networks. Multidisciplinary thinking and an interest in exploring unknown areas led Fred and Martin to work on living neurons as computation units. They established the FinalSpark lab, which is currently running fundamental research with one main question: how can living neurons be made to perform intended computations? How do you send instructions to living neurons over electric wires and receive results back – the same way we send instructions by typing on our keyboards and receive answers from in silico computers on our screens? Fred Jordan can be reached at https://www.linkedin.com/in/fred-jordan-anticounterfeiting-brandprotection-authentication/
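For readers who want to see what the "setting the right connections" that Fred contrasts with biology actually means on the artificial side, here is a minimal sketch of a single artificial neuron learning by weight adjustment. It uses plain Python with NumPy; the toy task (logical AND), the learning rate, and the number of passes are illustrative choices, not anything FinalSpark or AlpVision uses.

```python
import numpy as np

# One artificial "neuron" with a step activation. Learning means repeatedly
# nudging its connection weights until the outputs match the targets, which
# is the kind of change FinalSpark hopes to induce in living neurons.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # toy inputs
y = np.array([0, 0, 0, 1])                      # desired outputs (logical AND)

w = np.zeros(2)   # connection weights, start at zero
b = 0.0           # bias term
lr = 0.1          # learning rate (illustrative choice)

for _ in range(20):                              # a few passes over the data
    for xi, target in zip(X, y):
        out = 1 if xi @ w + b > 0 else 0         # fire or stay silent
        # Perceptron rule: strengthen or weaken connections by the error.
        w += lr * (target - out) * xi
        b += lr * (target - out)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

In software the adjustment is just arithmetic on an array; the open question the episode circles around is how to induce an equivalent change in real synaptic connections through electrodes and microfluidics.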
What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence—and why the people building it might be the biggest threat of all. Kokotajlo predicts AGI by 2028 based on compute scaling trends. Marcus argues we haven't solved basic cognitive problems from his 2001 research. The stakes? If Kokotajlo is right and Marcus is wrong about safety progress, humanity may have already lost control.Sponsor messages:========Google Gemini: Google Gemini features Veo3, a state-of-the-art AI video generation model in the Gemini app. Sign up at https://gemini.google.comTufa AI Labs are hiring for ML Engineers and a Chief Scientist in Zurich/SF. They are top of the ARCv2 leaderboard! https://tufalabs.ai/========Guest PowerhouseGary Marcus - Cognitive scientist, author of "Taming Silicon Valley," and AI's most prominent skeptic who's been warning about the same fundamental problems for 25 years (https://garymarcus.substack.com/)Daniel Kokotajlo - Former OpenAI insider turned whistleblower who reveals the disturbing rationalizations of AI lab leaders in his viral "AI 2027" scenario (https://ai-2027.com/)Dan Hendrycks - Director of the Center for AI Safety who created the benchmarks used to measure AI progress and argues we have only years, not decades, to prevent catastrophe (https://danhendrycks.com/)Transcript: http://app.rescript.info/public/share/tEcx4UkToi-2jwS1cN51CW70A4Eh6QulBRxDILoXOnoTOC:Introduction: The AI Arms Race00:00:04 - The Danger of Automated AI R&D00:00:43 - The Rationalization: "If we don't, someone else will"00:01:56 - Sponsor Reads (Tufa AI Labs & Google Gemini)00:02:55 - Guest IntroductionsThe Philosophical Stakes00:04:13 - What is the Positive Vision for AGI?00:07:00 - The Abundance Scenario: Superintelligent Economy00:09:06 - Differentiating AGI and Superintelligence (ASI)00:11:41 - Sam Altman: "A Decade in a Month"00:14:47 - Economic Inequality & The UBI ProblemPolicy and Red Lines00:17:13 - The Pause Letter: Stopping vs. Delaying AI00:20:03 - Defining Three Concrete Red Lines for AI Development00:25:24 - Racing Towards Red Lines & The Myth of "Durable Advantage"00:31:15 - Transparency and Public Perception00:35:16 - The Rationalization Cascade: Why AI Labs Race to "Win"Forecasting AGI: Timelines and Methodologies00:42:29 - The Case for Short Timelines (Median 2028)00:47:00 - Scaling Limits: Compute, Data, and Money00:49:36 - Forecasting Models: Bio-Anchors and Agentic Coding00:53:15 - The 10^45 FLOP Thought ExperimentThe Great Debate: Cognitive Gaps vs. Scaling00:58:41 - Gary Marcus's Counterpoint: The Unsolved Problems of Cognition01:00:46 - Current AI Can't Play Chess Reliably01:08:23 - Can Tools and Neurosymbolic AI Fill the Gaps?01:16:13 - The Multi-Dimensional Nature of Intelligence01:24:26 - The Benchmark Debate: Data Contamination and Reliability01:31:15 - The Superhuman Coder Milestone Debate01:37:45 - The Driverless Car AnalogyThe Alignment Problem01:39:45 - Has Any Progress Been Made on Alignment?01:42:43 - "Fairly Reasonably Scares the Sh*t Out of Me"01:46:30 - Distinguishing Model vs. 
Process AlignmentScenarios and Conclusions01:49:26 - Gary's Alternative Scenario: The Neurosymbolic Shift01:53:35 - Will AI Become Jeff Dean?01:58:41 - Takeoff Speeds and Exceeding Human Intelligence02:03:19 - Final Disagreements and Closing RemarksREFS:Gary Marcus (2001) - The Algebraic Mind https://mitpress.mit.edu/9780262632683/the-algebraic-mind/ 00:59:00Gary Marcus & Ernest Davis (2019) - Rebooting AI https://www.penguinrandomhouse.com/books/566677/rebooting-ai-by-gary-marcus-and-ernest-davis/ 01:31:59Gary Marcus (2024) - Taming SV https://www.hachettebookgroup.com/titles/gary-marcus/taming-silicon-valley/9781541704091/ 00:03:01
Today on Elixir Wizards, hosts Sundi Myint and Charles Suggs catch up with Sean Moriarity, co-creator of the Nx project and author of Machine Learning in Elixir. Sean reflects on his transition from the military to a civilian job building large language models (LLMs) for software. He explains how the Elixir ML landscape has evolved since the rise of ChatGPT, shifting from building native model implementations toward orchestrating best-in-class tools. We discuss the pragmatics of adding ML to Elixir apps: when to start with out-of-the-box LLMs vs. rolling your own, how to hook into Python-based libraries, and how to tap Elixir's distributed computing for scalable workloads. Sean closes with advice for developers embarking on Elixir ML projects, from picking motivating use cases to experimenting with domain-specific languages for AI-driven workflows. Key topics discussed in this episode: The evolution of the Nx (Numerical Elixir) project and what's new with ML in Elixir Treating Elixir as an orchestration layer for external ML tools When to rely on off-the-shelf LLMs vs. custom models Strategies for integrating Elixir with Python-based ML libraries Leveraging Elixir's distributed computing strengths for ML tasks Starting ML projects with existing data considerations Synthetic data generation using large language models Exploring DSLs to streamline AI-powered business logic Balancing custom frameworks and service-based approaches in production Pragmatic advice for getting started with ML in Elixir Links mentioned: https://hexdocs.pm/nx/intro-to-nx.html https://pragprog.com/titles/smelixir/machine-learning-in-elixir/ https://magic.dev/ https://smartlogic.io/podcast/elixir-wizards/s10-e10-sean-moriarity-machine-learning-elixir/ Pragmatic Bookshelf: https://pragprog.com/ ONNX Runtime Bindings for Elixir: https://github.com/elixir-nx/ortex https://github.com/elixir-nx/bumblebee Silero Voice Activity Detector: https://github.com/snakers4/silero-vad Paulo Valente Graph Splitting Article: https://dockyard.com/blog/2024/11/06/2024/nx-sharding-update-part-1 Thomas Millar's Twitter https://x.com/thmsmlr https://github.com/thmsmlr/instructorex https://phoenix.new/ https://tidewave.ai/ https://en.wikipedia.org/wiki/BERT(language_model) Talk: PyTorch: Fast Differentiable Dynamic Graphs in Python (https://www.youtube.com/watch?v=am895oU6mmY) by Soumith Chintala https://hexdocs.pm/axon/Axon.html https://hexdocs.pm/exla/EXLA.html VLM (Vision Language Models Explained): https://huggingface.co/blog/vlms https://github.com/ggml-org/llama.cpp Vector Search in Elixir: https://github.com/elixir-nx/hnswlib https://www.amplified.ai/ Llama 4 https://mistral.ai/ Mistral Open-Source LLMs: https://mistral.ai/ https://github.com/openai/whisper Elixir Wizards Season 5: Adopting Elixir https://smartlogic.io/podcast/elixir-wizards/season-five https://docs.ray.io/en/latest/ray-overview/index.html https://hexdocs.pm/flame/FLAME.html https://firecracker-microvm.github.io/ https://fly.io/ https://kubernetes.io/ WireGuard VPNs https://www.wireguard.com/ https://hexdocs.pm/phoenixpubsub/Phoenix.PubSub.html https://www.manning.com/books/deep-learning-with-python Code BEAM 2025 Keynote: Designing LLM Native Systems - Sean Moriarity Ash Framework https://ash-hq.org/ Sean's Twitter: https://x.com/seanmoriarity Sean's Personal Blog: https://seanmoriarity.com/ Erlang Ecosystems Foundation Slack: https://erlef.org/slack-invite/erlef Elixir Forum https://elixirforum.com/ Sean's LinkedIn: https://www.linkedin.com/in/sean-m-ba231a149/ 
Special Guest: Sean Moriarity.
Few technological advances have made the kind of splash –– and had the potential long-term impact –– that ChatGPT did in November 2022. It made a nonprofit called OpenAI and its CEO, Sam Altman, household names around the world. Today, ChatGPT is still the world's most popular AI Chatbot; OpenAI recently closed a $40 billion funding deal, the largest private tech deal on record. But who is Sam Altman? And was it inevitable that OpenAI would become such a huge player in the AI space? Kara speaks to two fellow tech reporters who have tackled these questions in their latest books: Keach Hagey is a reporter at The Wall Street Journal. Her book is called “The Optimist: Sam Altman, OpenAI and the Race to Reinvent the Future.” Karen Hao writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series. Her book is called “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.” They speak to Kara about Altman's background, his short firing/rehiring in 2023 known as “The Blip”, how Altman used OpenAI's nonprofit status to recruit AI researchers and get Elon Musk on board, and whether OpenAI's mission is still to reach AGI, artificial general intelligence. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram, TikTok, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this installment, Daniel and Tom push deeper into the roots of economic anxiety, the morality of money printing, the logic (and danger) of debt, and why the “Monopoly game” always ends in revolution, collapse, or war. They ask what it will really take to avoid history's bloodiest outcomes — and whether the solutions are personal, systemic, or already out of reach. SHOWNOTES43:12 Why money printing is immoral — and also unavoidable52:09 Why the end of the “Monopoly game” means collapse, war, or revolution54:26 Why emotional arguments win — but don't provide answers55:26 Is there any bloodless way out of our current economic predicament?56:04 The dual systems: Industrial age in decline, digital age on the rise59:03 Chess, cards, and elite training simulations (the structure of society)1:10:03 The baby boom, housing inflation, and the demographic crunch1:14:02 Bell curve economics vs. power law distribution1:15:43 Why money printing makes “saving your way up” impossible1:16:48 Agency, intelligence, inflation and who gets ahead1:26:40 Adam Smith, self-interest, and how the invisible hand really works1:30:06 Authoritarianism, top-down fear, and the dangers of utopian “rescue” plans1:41:25 Artificial General Intelligence, wide access, and the dawn of creative superpowers1:46:13 Creation vs. consumption — the fork in the road for personal success1:50:13 Why future careers will be plural, fast, creative loops1:54:02 Will most people embrace agency, or get left behind by hyperloops? CONNECT WITH DANIEL PRIESTLEYInstagram: https://www.instagram.com/danielpriestley/LinkedIn: https://www.linkedin.com/in/danielpriestley/Twitter/X: https://twitter.com/DanielPriestleyWebsite: https://www.danielpriestley.com CHECK OUT OUR SPONSORS Vital Proteins: Get 20% off by going to https://www.vitalproteins.com and entering promo code IMPACT at check out Monarch Money: Use code THEORY at https://monarchmoney.com for 50% off your first year! Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact Netsuite: Download the CFO's Guide to AI and Machine Learning at https://NetSuite.com/THEORY iTrust Capital: Use code IMPACTGO when you sign up and fund your account to get a $100 bonus at https://www.itrustcapital.com/tombilyeu Mint Mobile: If you like your money, Mint Mobile is for you. Shop plans at https://mintmobile.com/impact. DISCLAIMER: Upfront payment of $45 for 3-month 5 gigabyte plan required (equivalent to $15/mo.). New customer offer for first 3 months only, then full-price plan options available. Taxes & fees extra. See MINT MOBILE for details. What's up, everybody? It's Tom Bilyeu here: If you want my help... STARTING a business: join me here at ZERO TO FOUNDER SCALING a business: see if you qualify here. Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here. ********************************************************************** If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. 
********************************************************************** LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory ********************************************************************** FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu Learn more about your ad choices. Visit megaphone.fm/adchoices
How much could our relationship with technology change by 2027? In the last few years, new artificial intelligence tools like ChatGPT and DeepSeek have transformed how we think about work, creativity, even intelligence itself. But tech experts are ringing alarm bells that powerful new AI systems that rival human intelligence are being developed faster than regulation, or even our understanding, can keep up with. Should we be worried? On the GZERO World Podcast, Ian Bremmer is joined by Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, to discuss AI 2027—a new report that forecasts AI's progression, where tech companies race to beat each other to develop superintelligent AI systems, and the existential risks ahead if safety rails are ignored. AI 2027 reads like science fiction, but Kokotajlo's team has direct knowledge of current research pipelines. Which is exactly why it's so concerning. How will artificial intelligence transform our world and how do we avoid the most dystopian outcomes? What happens when the line between man and machine disappears altogether? Host: Ian BremmerGuest: Daniel Kokotajlo Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published.
OpenAI is making moves to go public. Apple and Anthropic are teaming up for vibe coding. And Google is quietly continuing its dominance with a quiet update to the world's most powerful AI model.Once again, the big names are shaking up the AI space. Don't burn hours a day trying to keep up. Spend your Mondays with Everyday AI and our weekly 'AI News that Matters' segment. You'll be the smartest person in AI at your company.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Have a question? Join the convo here.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Anthropic and Apple AI PartnershipApple AI Coding with Anthropic's ClaudeOpenAI's Wind Surf AcquisitionAI Search Engines in Apple's SafariOpenAI and FDA Drug Approval TalksGoogle Gemini 2.5 Pro IO EditionAmazon AI Coding Tool KiroOpenAI's Nonprofit Control DecisionTimestamps:00:00 "Everyday AI: Podcast and Newsletter"03:44 Apple Eyes External AI Partnerships07:12 OpenAI's Wind Surf Acquisition Disrupts Coding10:28 Windsurf Model Selection and Future14:24 Apple's AI Search Engine Shift20:45 FDA-OpenAI AI Drug Approval Talks22:50 AI Literacy Challenges27:14 "Gemini 2.5 Pro Unveiled"31:27 Advanced AI Coding Tools Emerging34:50 OpenAI Governance and Structure Shift36:50 OpenAI-Microsoft Partnership Revamp Talks42:40 Tech Giants Shake Up AI LandscapeKeywords:Anthropic, Apple, Vibe coding, Google Gemini, 2.5 pro IO edition, OpenAI, Microsoft partnership, IPO, Artificial General Intelligence, AI coding models, Claude SONNET, Swift Assist, Anthropic's Claude, Wind Surf, $3 billion acquisition, AI IDE, Race car driver analogy, AI search engines, Safari, Perplexity AI, ChatGPT, Search engine market, FDA, Drug approval process, AI-assisted scientific review, Google IO edition, Web dev arena leaderboard, Amazon Web Services, AI-powered code generation, Kiro, Multimodal capabilities, OpenAI nonprofit arm, Public Benefits Corporation, Equity stake, Microsoft partnership renegotiation, $13 billion investment, SoftBank, Oracle, Stargate projectSend Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
What happens when AI moves beyond convincing chatbots and custom image generators to something that matches—or outperforms—humans?Each week, tech companies trumpet yet another advance in artificial intelligence, from better chat services to image and video generators that spend less time in the uncanny valley. But the holy grail for AI companies is known as AGI, or artificial general intelligence—a technology that can meet or outperform human capabilities on any number of tasks, not just chat or images.The roadmap and schedule for getting to AGI depends on who you talk to and their precise definition of AGI. Some say it's just around the corner, while other experts point a few years down the road. In fact, it's not entirely clear whether current approaches to AI tech will be the ones that yield a true artificial general intelligence.Hosts Ira Flatow and Flora Lichtman talk with Will Douglas Heaven, who reports on AI for MIT Technology Review; and Dr. Rumman Chowdhury, who specializes in ethical, explainable and transparent AI, about the path to AGI and its potential impacts on society.Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.