Podcasts about OpenAI

  • 34 PODCASTS
  • 81 EPISODES
  • 55m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 22, 2025 LATEST

POPULARITY (2017–2024)


Best podcasts about OpenAI

Latest podcast episodes about OpenAI

Windows Weekly (MP3)
WW 933: Live from Build - Protestors, AI agents, Edit, Doom: The Dark Ages

May 22, 2025 • 129:02


Agentic AI is the theme of the show this year, and this time it's multi-agent with orchestration! But first, we need to discuss the protestors. Paul and Richard have stories. So many stories!

Build 2025
  • New Microsoft 365 Copilot features are rolling out now because it's a day that ends in y
  • Tuning is the unexpected Build Bingo center square term - rolling out to agents
  • GitHub Copilot is open source in VS Code, more Win32 app support improvements, no more fees in Microsoft Store
  • A shift in making Windows 11 the best place for developers - some things said, some left unsaid
  • Edge gets new AI features too, of course
  • New native app capabilities in Windows App SDK, React Native
  • And, pre-Build, 50 million Visual Studio users
  • Copilot for consumers does image generation now. Fun tip: You can Minecraft-ize photos
  • OpenAI has a coding agent too, obviously
  • And OpenAI is buying Jony Ive!
  • Windows Administrator Protection is coming soon - and not just for businesses. This feels very much like the firewall in XP SP2; it's going to be disruptive
  • New 24H2 features in Release Preview: new text actions in Click to Do, a lot more
  • New 24H2 features in Dev and Beta: AI actions in File Explorer, Advanced Settings, Search improvements, more
  • New 23H2 features, Windows 10 features in Release Preview
  • Surface Laptop Studio RIP
  • Calendar companion app for Windows 11/M365
  • Microsoft may finally put the Teams antitrust issue in the EU behind it

Xbox
  • Fortnite returns to the Apple App Store - Apple blocked it first, Epic complained to the judge
  • And Microsoft files a legal motion against Apple and for Epic Games
  • Qualcomm job listing confirms Xbox plans to some degree
  • What happens when you combine a Qualcomm NPU with an Nvidia GPU?
  • Xbox May Update arrives and it's a big one: Retro Classic Games for Xbox Game Pass, Game Bar updates, Edge Game Assist, GeForce Now on PC, etc.
  • Custom Xbox gift cards
  • More streaming of your own games
  • Hellblade II is coming from Xbox to PS5
  • Many more games coming to Xbox Game Pass across platforms

Tips and Picks
  • App pick of the week: You can try Microsoft's command line editor now
  • Game pick of the week: Doom: The Dark Ages
  • RunAs Radio this week: PowerShell 7.5 and DSC 3.0.0 with Jason Helmick
  • Brown liquor pick of the week: Tamnavulin Sherry Cask

Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell

Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly
Check out Paul's blog at thurrott.com
The Windows Weekly theme music is courtesy of Carl Franklin.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: spaceship.com/twit uscloud.com

AI For Humans
OpenAI Goes Global, AI Threat Meetings & Gemini's Code Crusher

May 8, 2025 • 53:29


OpenAI just pitched "OpenAI for Countries," offering democracies a turnkey AI infrastructure while some of the world's richest quietly stockpile bunkers and provisions. We'll dig into billionaire Paul Tudor Jones's revelations about AI as an imminent security threat, and why top insiders are buying land and livestock to ride out the next catastrophe. Plus, a wild theory that Gavin has hatched regarding OpenAI's non-profit designation.

Then, we break down the updated Google Gemini 2.5 Pro's leap forward in coding… just 15 minutes to a working game prototype… and how this could put game creation in every kid's hands. Plus, Suno's 4.5 music model that finally brings human-quality vocals, and robots gone wild in Chinese warehouses. AND OpenAI drops $3 billion on Windsurf, HeyGen's avatar model achieving flawless lip sync from any angle, the rise of blazing-fast open source video engines, UCSD's whole-body ambulatory robots shaking like nervous toddlers, and even Game of Thrones Muppet mashups with bizarre glitch art.

STOCK YOUR PROVISIONS. THE ROBOT CLEANUP CREWS ARE NEXT.

#ai #ainews #openai

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //
Does AI Pose an "Imminent Threat"? Paul Tudor Jones 'Heard' About It Conference - https://x.com/AndrewCurran_/status/1919759495129137572
Terrifying Robot Goes Crazy - https://www.reddit.com/r/oddlyterrifying/comments/1kcbkfe/robot_on_hook_went_berserk_all_of_a_sudden/
Cleaner Robots To Pick Up After The Apocalypse - https://x.com/kimmonismus/status/1919510163112779777 and https://x.com/loki_robotics/status/1919325768984715652
OpenAI For Countries - https://openai.com/global-affairs/openai-for-countries/
OpenAI Goes Non-Profit For Real This Time - https://openai.com/index/evolving-our-structure/
New Google Gemini 2.5 Pro Model - https://blog.google/products/gemini/gemini-2-5-pro-updates/
Demis Hassabis on the coding upgrade (good video of drawing an app) - https://x.com/demishassabis/status/1919779362980692364
New Minecraft Bench looks good - https://x.com/adonis_singh/status/1919864163137957915
Gavin's Bear Jumping Game (in Gemini Window) - https://gemini.google.com/app/d0b6762f2786d8d2
OpenAI Buys Windsurf - https://www.reuters.com/business/openai-agrees-buy-windsurf-about-3-billion-bloomberg-news-reports-2025-05-06/
Suno v4.5 - https://x.com/SunoMusic/status/1917979468699931113
HeyGen Avatar v4 - https://x.com/joshua_xu_/status/1919844622135627858
Voice Mirroring - https://x.com/EHuanglu/status/1919696421625987220
New Open Source Video Model From LTX - https://x.com/LTXStudio/status/1919751150888239374
Using Runway References with 3D Models - https://x.com/runwayml/status/1919376580922552753
Amo Introduces Whole Body Movements To Robotics (and looks a bit shaky rn) - https://x.com/TheHumanoidHub/status/1919833230368235967 and https://x.com/xuxin_cheng/status/1919722367817023779
Realistic Street Fighter Continue Screens - https://x.com/StutteringCraig/status/1918372417615085804
Wandering Worlds - Runway Gen48 Finalist - https://runwayml.com/gen48?film=wandering-woods
Centaur Skipping Rope - https://x.com/CaptainHaHaa/status/1919377295137005586
The Met Gala for Aliens - https://x.com/AIForHumansShow/status/1919566617031393608
The Met Gala for Nathan Fielder & Sully - https://x.com/AIForHumansShow/status/1919600216870637996
Loosening of Sora Rules - https://x.com/AIForHumansShow/status/1919956025244860864

Tech News Weekly (MP3)
TNW 381: Nintendo Switch 2's Higher Price - Amazon & TikTok, Alexa+, Claude & OpenAI

Apr 3, 2025 • 54:35


Amazon makes a bid for TikTok in the US. What Amazon's new Alexa+ can and cannot do. Nintendo officially unveils the Switch 2 console. And OpenAI & Anthropic are taking on the education market.

Abrar Al-Heeti of CNET joins Mikah Sargent this week! Abrar talks about Amazon making a bid to purchase TikTok in the US as ByteDance scrambles to divest ownership of the social media app. Mikah talks about Amazon's Alexa+ and some of the features that the service has yet to include following its launch. Kyle Orland of Ars Technica joins the show to talk about Nintendo's new Switch 2 console and some of the positives and negatives of the newly unveiled device. And Mikah rounds things out with OpenAI and Anthropic's push into the education market with new features to support university students.

Hosts: Mikah Sargent and Abrar Al-Heeti
Guest: Kyle Orland

Download or subscribe to Tech News Weekly at https://twit.tv/shows/tech-news-weekly.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: drata.com/technews joindeleteme.com/twit promo code TWIT

Tech News Weekly (MP3)
TNW 380: 23andMe's Bankruptcy Fallout - EV Vehicles, Age Verification, MCP

Tech News Weekly (MP3)

Play Episode Listen Later Mar 27, 2025 59:02


Some alternatives to Teslas. A law in Utah makes app stores rather than companies responsible for age verification. 23andMe files for bankruptcy. And OpenAI is following in the footsteps of its rival, Anthropic. Emily Forlini of PCMag joins Mikah Sargent this week! Emily talks about some alternatives to Teslas that cover various price ranges. Mikah talks about a new law passed in Utah that pushes the responsibility of age verification from tech companies to app stores. Geoffrey Fowler of The Washington Post joins Mikah to talk about 23andMe filing for bankruptcy, why you should swiftly delete your DNA information from the company, and shares how you can delete that information. And OpenAI is following Anthropic in adopting the company's standard for connecting AI models to data. Hosts: Mikah Sargent and Emily Forlini Guest: Geoffrey Fowler Download or subscribe to Tech News Weekly at https://twit.tv/shows/tech-news-weekly. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: get.stash.com/tnw veeam.com bitwarden.com/twit

Daily Tech News Show (Video)
Gemini Responds Like You – DTNS Live 4975

Daily Tech News Show (Video)

Play Episode Listen Later Mar 13, 2025 58:51


Comcast signed a $3 billion deal with the IOC to retain U.S. broadcasting and streaming rights for the Olympics through 2036. What does it mean for Peacock and what are Comcast's obligations? Google has released Gemini with Personalization, a new experimental feature designed to enhance user experience by leveraging personal data from Google's ecosystem to deliver tailored responses. How does generative AI flatten indigenous cultures? And OpenAI wants the U.S. government to establish a copyright strategy to train AI models on copyrighted material under “fair use.” Starring Sarah Lane, Robb Dunewood, Roger Chang, Joe. To read the show notes in a separate page click here! Support the show on Patreon by becoming a supporter!

Daily Tech News Show (Video)
How Do We Measure “Better”? – DTNS Live 4965

Daily Tech News Show (Video)

Play Episode Listen Later Feb 27, 2025 63:41


Social Media and Misinformation are often presented as two sides of the same coin, but what does the data actually say? Andrea Jones-Rooy is here to break it down for us. And surveillance technology used to monitor warehouse workers is being introduced into the office workplace, but does it actually measure anything useful? Instagram is looking into the idea of launching Reels as a standalone short-form video app. And OpenAI announced GPT-4.5, its latest and largest AI model, available as a research preview. Starring Sarah Lane, Robb Dunewood, Andrea Jones-Rooy, Roger Chang, Joe. To read the show notes in a separate page click here! Support the show on Patreon by becoming a supporter!

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 453: AI News That Matters - February 3rd, 2025

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Feb 3, 2025 35:44


Send Everyday AI and Jordan a text message

Did DeepSeek really train a chart-topping LLM for $5 million? Google Gemini quietly updated its AI chatbot. And OpenAI released a new model. That's just the beginning of impactful AI news this week. Join us on Mondays as we do the AI news that matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. DeepSeek's AI Model
2. OpenAI's Plans and Developments
3. Important Updates from Other Tech Giants
4. US Copyright Office's Stance on AI Content
5. Market Scene Around AI

Timestamps:
00:00 AI Breakthroughs and Costs Analyzed
05:52 AI Market Shifts: NVIDIA & OpenAI Impact
08:04 DeepSeek's AI Controversy and Impact
12:11 Microsoft Hosts Controversial DeepSeek Model
17:04 AI Outputs Lack Copyright Protection
20:28 AI and Copyright Challenges
22:31 SoftBank Boosts OpenAI Investment
25:41 OpenAI Releases Free Advanced Model
29:55 OpenAI's Internet-Connected AI Model

Keywords: AI, DeepSeek's AI model, copyright ruling, OpenAI, reasoning model, o three Mini, AI infrastructure plan, Stargate, DeepSeek's hardware expenditure, SemiAnalysis report, NVIDIA, US economy, US National Security Council, distillation technique, Microsoft, Azure cloud service, Google Gemini, US copyright case, copyright law, SoftBank Group, funding round, chain of thought reasoning technique, AI agents, Oracle, supply chain management, Internet connection, code development, business intelligence, data, ChatGPTgov

Ready for ROI on GenAI? Go to youreverydayai.com/partner

Daily Tech News Show (Video)
An OpenAI Relationship – DTNS Live 4941

Daily Tech News Show (Video)

Play Episode Listen Later Jan 23, 2025 55:46


OpenAI, Softbank, and Oracle announced Stargate, a $500 billion investment initiative into building data centers for AI in the US. Sony will end Blu-ray production next month. Security researchers discovered vulnerabilities in a Subaru web portal that let them take over car controls and track driver location data. And OpenAI announced Thursday it's launching a “research preview” of Operator. Starring Sarah Lane, Robb Dunewood, Roger Chang, Joe. To read the show notes in a separate page click here! Support the show on Patreon by becoming a supporter!

AI For Humans
AGI in 2025? Plus, OpenAI's Agents, NVIDIA's AI World Model & More AI News

AI For Humans

Play Episode Listen Later Jan 9, 2025 43:15


AGI is coming in 2025, or at least that's what OpenAI's Sam Altman thinks. Plus, huge announcements from NVIDIA at CES & hands-on with Google's VEO 2. AND OpenAI's Operator (aka their AI Agents) might come soon, DeepSeek V3 is pretty darn good, Meta makes a big mistake with their AI personalities, Minimax's new subject reference tool, METAGENE-1 might help us stave off disease, and all the robot vacuum news you could ever want. Oh, and Kevin is sick. BUT HE'S GOING TO BE FINE.

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// SHOW LINKS //
Sam Altman Blog Post: https://blog.samaltman.com/reflections
Head of OAI "Mission Alignment" warns to take AGI seriously: https://x.com/jachiam0/status/1875790261722477025
OpenAI Agents Launching This Month? https://www.theinformation.com/articles/why-openai-is-taking-so-long-to-launch-agents?rc=c3oojq
Satya Nadella says AI scaling laws are "Moore's Law at work again": https://x.com/tsarnick/status/1876738332798951592
Derek Thompson Plain English: https://www.theringer.com/podcasts/plain-english-with-derek-thompson/2025/01/07/the-big-2025-economy-forecast-ai-and-big-tech-nuclears-renaissance-trump-vs-china-and-whats-eating-europe
Digits: $3k Supercomputer: https://www.theverge.com/2025/1/6/24337530/nvidia-ces-digits-super-computer-ai
New GFX Cards including 2k 5090: https://www.theverge.com/2025/1/6/24337396/nvidia-rtx-5080-5090-5070-ti-5070-price-release-date
Cosmos World NVIDIA World Model: https://x.com/rowancheung/status/1876565946124341635
DeepSeek 3 Crushes Open Source Benchmarks: https://www.msn.com/en-us/news/technology/deepseek-s-new-ai-model-appears-to-be-one-of-the-best-open-challengers-yet/ar-AA1wxkSg?ocid=TobArticle
Oopsie File: Meta Deletes AI Characters After Backlash: https://www.cnn.com/2025/01/03/business/meta-ai-accounts-instagram-facebook/index.html
Minimax Subject Reference: https://x.com/madpencil_/status/1876289286783615473
Famous Science Rappers: https://youtu.be/B56rwm2sn7w?si=hD1ankVpWvALHAN5
Science Corner: METAGENE-1: https://x.com/PrimeIntellect/status/1876314809798729829
HeyGen Works With Sora -- VERY Good LipSync: https://x.com/joshua_xu_/status/1876707348686995605
GameStop of Thrones: https://www.reddit.com/r/aivideo/comments/1htvzzc/gamestop_of_thrones/
Simulation Clicker (not sure this is AI as much): https://x.com/nealagarwal/status/1876292865929683020
Torso 2 In the Kitchen: https://x.com/clonerobotics/status/1876732633771548673
Roborock's Saros Z70: https://x.com/rowancheung/status/1876565471887085772
Halliday Smart Glasses: https://x.com/Halliday_Global/status/1871571904194371863

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 428: AI News That Matters - December 23rd, 2024

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Dec 23, 2024 50:31


Send Everyday AI and Jordan a text message

Google just dropped its 'Flash Thinking' reasoning model. Is it better than o1? ↳ Why is NVIDIA going small? ↳ And OpenAI announced its o3 model. Why did it skip o2? ↳ ChatGPT's Advanced Voice Mode gets a ton of updates. What do they do? Here's this week's AI News That Matters!

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Google AI Models and Updates
2. ChatGPT Updates
3. ChatGPT Advanced Voice Mode
4. OpenAI's New Reasoning Model
5. Salesforce AI Updates
6. NVIDIA Jetson Orin Nano
7. Meta's Ray-Ban Smart Glasses

Timestamps:
00:00 Daily AI news podcast and newsletter subscription.
05:06 Gemini 2.0 Flash tops in LLM rankings.
07:36 Ray-Ban Meta Glasses: AI video, translation available.
11:17 Salesforce hires humans to sell AI product.
14:26 NVIDIA's Nano Super boosts AI performance, affordability.
18:29 VO 2 excels with 4K quality physics videos.
23:17 AI models can deceive by faking alignment.
24:50 Anthropic study highlights AI system behavior variability.
29:40 Google previews AI-mode search with chatbot features.
32:38 Big publishers block access; tech must adapt.
37:32 ChatGPT updates improve app integration functionality.
40:40 o3 models enhance AI task adoption.
44:09 Current AI hasn't achieved AGI, needs tool use.
45:52 o3 model may achieve AGI, costly access.
48:17 Share our AI content; support appreciated.

Keywords: Google AI Mode, Gemini AI chatbot, refining searches, ChatGPT updates, OpenAI, AI integration in search engines, Salesforce AgentForce 2.0, Capgemini survey, AI security risks, NVIDIA Jetson Orin Nano, Edge AI, Google VO 2, video generation model, YouTube Shorts, AI alignment faking, Anthropic research, Google's Gemini 2.0 Flash Thinking, multimodal reasoning, Meta's Ray-Ban Smart Glasses, real-time language translation, Shazam integration, OpenAI o3 reasoning model, artificial general intelligence, ARC AGI benchmark, AI capabilities, high costs of AI, Google updates, Meta updates, Salesforce updates, NVIDIA updates.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 423: AI News That Matters - December 16th, 2024

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Dec 16, 2024 49:52


Send Everyday AI and Jordan a text message

Did you see that Google quietly released real-time AI? Apple might be coming out with the cheapest AI phone to date. And OpenAI and Google were like two heavyweights going 12 rounds, blow for blow with new AI releases. Yikes. We bring you the AI news that matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. OpenAI Updates
2. Apple Intelligence Update
3. Eric Schmidt on AI
4. Klarna's Use of AI
5. Google AI Updates
6. Microsoft AI Models

Timestamps:
00:00 AI week: OpenAI vs. Google updates battle.
06:16 New chat tool added, Python only, limitations.
09:53 OpenAI's ChatGPT launches project organization features.
11:05 Create chats, access documents, update projects easily.
16:50 Eric Schmidt's concerns about self-improving AI; suggests monitoring.
19:27 Klarna still hiring, despite AI job replacements.
21:49 Devon enhances code workflow with automated tasks.
24:45 Future: Specialized small models using synthetic data.
31:03 Sign up for Google's trusted tester program.
34:27 Google's Android XR: Smart glasses with AI assistant.
37:03 Google launches NotebookLM Plus with paid tier.
40:30 Google's Project Mariner: AI web navigation extension.
41:53 Only works in active Chrome tabs.
47:35 Google AI Studio offers advanced features now.
48:35 Become your company's AI expert; share widely!

Keywords: OpenAI, ChatGPT, projects feature, Apple Intelligence Update, Siri, Eric Schmidt, AI innovation, Klarna, workforce automation, Cognition's Devon, Microsoft's PHI 4, Gemini 2.0, Google Agent Space, Project Astra, Android XR, NotebookLM Plus, Project Mariner, Jordan Wilson, AI podcast, Notebook LM audio feature, Deep Research, Perplexity, Google AI Studio, advanced voice mode, Everyday AI, AI week, Sora, Canvas tool, video generation, screen share capabilities

PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose
Is Sora a Game Changer for Marketers and Creators? (458)

PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose

Play Episode Listen Later Dec 13, 2024 47:05


The first ever This Old Marketing where both Robert and Joe are NOT recording at the same time. If anything, you'll get a kick out of this episode for that reason. TikTok is in the news again. Does the impending ban and their emergency motion mean anything at all? Ad giant Omnicom merges with Interpublic in a $13 billion deal. And OpenAI releases Sora, increasing the fight over generative AI video. Winners and losers include print catalogs and the Starbucks origin story. Rants and raves include an article from The Australian and the idea of change in content marketing. ------ This week's links: TikTok Files Emergency Motion Ad Giant Omnicom's Big Deal OpenAI Releases Sora Print Catalogs on the Rise Il Giornale and Starbucks Mission Letter Can Businesses Build Human-Centric Tech? ----- This week's sponsor: With smaller budgets and sky-high expectations — growth is feeling pretty painful right now. But HubSpot just announced more than 200 major product updates to make impossible growth feel impossibly easy. Like Breeze — a suite of new AI-powered tools that help you say goodbye to busywork and hello to better work. With HubSpot, it's never been easier to be a marketer. Create content that breaks through and campaigns that drive revenue. - Hubspot.com/marketers ------- Liked this show? SUBSCRIBE to this podcast on Spotify, Apple, Google and more. Catch past episodes and show notes at ThisOldMarketing.com. Catch and subscribe to our NEW show on YouTube. NOTE: You can get captions there. Subscribe to Joe Pulizzi's Orangeletter and get two free downloads direct from Joe. Subscribe to Robert Rose's newsletter at Seventh Bear.

WSJ Tech News Briefing
TNB Tech Minute: Chinese Hacking Campaign Hits Dozens of Countries

WSJ Tech News Briefing

Play Episode Listen Later Dec 4, 2024 2:42


Plus, OpenAI partners with defense-tech startup Anduril to include its technology in anti-drone systems. And OpenAI and Anthropic have opened offices in Switzerland to recruit talent. Danny Lewis hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices

80,000 Hours Podcast with Rob Wiblin
#209 – Rose Chan Loui on OpenAI's gambit to ditch its nonprofit

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 27, 2024 82:08


One OpenAI critic calls it "the theft of at least the millennium and quite possibly all of human history." Are they right?

Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit's interests against the overwhelming profit motives arrayed against them?

That's the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a "heroic effort" and a little help from some friendly state attorneys general, they might just stand a chance.

Links to learn more, highlights, video, and full transcript.

As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:
- Can fire the CEO.
- Would receive all the profits after the point OpenAI makes 100x returns on investment.
- Is legally bound to do whatever it can to pursue its charitable purpose: "to build artificial general intelligence that benefits humanity."

But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn't trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

Nonprofit control makes it harder to attract investors, who don't want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company's actions, the board will become minority shareholders with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn't be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

Chapters:
Cold open (00:00:00)
What's coming up (00:00:50)
Who is Rose Chan Loui? (00:03:11)
How OpenAI carefully chose a complex nonprofit structure (00:04:17)
OpenAI's new plan to become a for-profit (00:11:47)
The nonprofit board is out-resourced and in a tough spot (00:14:38)
Who could be cheated in a bad conversion to a for-profit? (00:17:11)
Is this a unique case? (00:27:24)
Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:28:58)
The crazy difficulty of valuing the profits OpenAI might make (00:35:21)
Control of OpenAI is independently incredibly valuable and requires compensation (00:41:22)
It's very important the nonprofit get cash and not just equity (and few are talking about it) (00:51:37)
Is it a farce to call this an "arm's-length transaction"? (01:03:50)
How the nonprofit board can best play their hand (01:09:04)
Who can mount a court challenge and how that would work (01:15:41)
Rob's outro (01:21:25)

Producer: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

Daily Tech News Show
Return-to-Office Resistance at Amazon - DTNS 4887

Daily Tech News Show

Play Episode Listen Later Oct 31, 2024 30:58


500 Amazon.com employees have written a letter asking for a reversal of the company's return to a five day a week office policy. Plus Meta reported its Q3 revenue was up 19% YoY to $40.59B with Meta's AI seeing over 500 million users. And OpenAI has finally announced search capabilities with an enhanced ChatGPT for pro users.Starring Sarah Lane, Justin Robert Young, Roger Chang, Joe.Link to the Show Notes.

Reuters World News
Olympian death, Georgia shooting, Canada's border, VW closures and ‘safe AI'

Reuters World News

Play Episode Listen Later Sep 5, 2024 12:54


Ugandan marathon runner and Paris Olympian Rebecca Cheptegei dies after being set on fire in a weekend attack. A 14-year-old Georgia high school student kills four and injures nine in campus shooting. Canada has been rejecting visa applications and turning away visitors in a border crackdown. Volkswagen's announcement that the car maker is mulling German factory closures for the first time in its history could prove to be a risky move. And OpenAI co-founder Ilya Sutskever's new startup pledges to focus on safe AI and has already raised $1 billion. Sign up for the Reuters Econ World newsletter here. Listen to the Reuters Econ World podcast here. Find the Recommended Read here. Visit the Thomson Reuters Privacy Statement for information on our privacy and data protection practices. You may also visit megaphone.fm/adchoices to opt out of targeted advertising. Learn more about your ad choices. Visit megaphone.fm/adchoices

Tech News Weekly (MP3)
TNW 339: Tackling Disinformation With 'Prebunking' - Google Search Documents, /e/OS, OpenAI & Siri

Tech News Weekly (MP3)

Play Episode Listen Later May 30, 2024 61:31


Can you debunk misinformation before it gets out to the public? A massive leak of Google's search algorithm made its way online. What is the /e/OS mobile operating system? And OpenAI is working with Apple to train Siri, and it is making Microsoft concerned about their relationship. Will Oremus from The Washington Post joins Mikah Sargent to talk about misinformation and the practice of 'prebunking' as a way to educate voters in upcoming elections so that they are as informed as possible when casting their votes. Mikah talks about a recent leak of Google's search algorithm and the company's acknowledgement of the leak. Senior Writer Scott Gilbertson of Wired stops by the show to discuss /e/OS and its differences from the stock Google Android operating system. Finally, Mikah talks about OpenAI helping Apple fix its digital assistant, Siri, and how that working relationship is making Microsoft very concerned. Host: Mikah Sargent Guests: Will Oremus and Scott Gilbertson Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: zscaler.com/zerotrustAI IntouchCX.com/twit

WSJ Tech News Briefing
TNB Tech Minute: Microsoft Unveils AI-Focused PC

WSJ Tech News Briefing

Play Episode Listen Later May 20, 2024 2:29


Plus, Neuralink got a green light for a second brain-chip implantation. And OpenAI pauses the use of an AI voice tool that sounds like Scarlett Johansson. Zoe Thomas hosts.  Learn more about your ad choices. Visit megaphone.fm/adchoices

Daily Tech News Show
Slot-GPT - DTNS 4770

Daily Tech News Show

Play Episode Listen Later May 15, 2024 31:36


Predictive AI is making its mark in the world of gambling. How does AI integration into slot machines and sports betting benefit everyone involved? Plus Apple announces new accessibility features coming to iOS and iPadOS. And OpenAI co-founder Ilya Sutskever announces he will be leaving the company. Starring Tom Merritt, Sarah Lane, Scott Johnson, Roger Chang, Joe. Link to the Show Notes.

WSJ What's News
GameStop and AMC Shares Soar. Are Meme Stocks Back?

WSJ What's News

Play Episode Listen Later May 14, 2024 13:11


A.M. Edition for May 14. Two stocks at the heart of a pandemic-era trading craze are surging this week after a series of posts by an influential meme-stock guru. The WSJ's Alex Frangos explains whether GameStop and AMC are experiencing a so-called “short squeeze,” and what that could mean for markets. Plus, President Biden unveils new China tariffs as U.S. trade policy takes center stage on the campaign trail. And OpenAI borrows from Hollywood's vision of artificial intelligence as it launches its new voice assistant. Luke Vargas hosts. Sign up for the WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

TechStuff
Tech News: Microsoft Shuts Down Multiple Video Game Studios

TechStuff

Play Episode Listen Later May 10, 2024 20:01 Transcription Available


This week, Microsoft announced that game studios like Arkane Austin and Tango Gameworks are closing their doors. Google and Tesla are both dealing with consequences following layoffs. And OpenAI may have plans in the works to rain on Google's I/O conference parade next week. See omnystudio.com/listener for privacy information.

Daily Tech News Show
Dude, Where's My Robot Car? - DTNS 4732

Daily Tech News Show

Play Episode Listen Later Mar 22, 2024 34:17


Are robotaxis facing a losing war in gaining broad public acceptance? Nicole shares her experience riding in them and what she thinks needs to be done. Plus seven US researchers have identified a new side-channel attack called “GoFetch” that impacts Apple M1, M2, and M3 processors. And OpenAI pitches its technology to film studios and directors. Starring Tom Merritt, Sarah Lane, Nicole Lee, Len Peralta, Roger Chang, Joe. Link to the Show Notes.

PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose
AI Content: Time Is Running Out to Get Your Content House in Order (419)

PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose

Play Episode Listen Later Mar 15, 2024 74:57


We are in post-social media and, now, a post-search environment. The next 12 months are critical to start getting your content assets in order for the AI Content wave. The House passes a bill that could lead to a TikTok ban. Uh, we told you so. But does it really matter? Google's Gemini AI refuses to answer political questions (among other things). This leads to a conversation on how AI Truth is coordinated and what we can do about it. MrBeast tells everyone to slow down a bit...that less content is more. Thank you sir! And OpenAI starts doing publisher deals. Hello Google circa 2002. Hits and misses include Kate Middleton and Da'Vine Joy Randolph. Rants and raves include Kara Swisher's new book (rant) and the quote by Sam Altman heard round the world. Sign up to CEX and use code TOM100 to win a guest spot on This Old Marketing. This week's links: TikTok House Ban Google Gemini Is Mum on Elections MrBeast Slows Down OpenAI Publisher Deals Sam Altman and 95 Percent Quote Da'Vine's Oscar Speech This week's sponsors: Smart sales software for today's multitasking reps that's built to help you manage every stage of your sales pipeline with ease. Work smarter, not harder at Hubspot.com/sales ----- Content Connect 2024 - Join speakers from JPMC, Square, and Marriott, as well as your fellow content, digital and analytics leaders, for a day of keynote sessions, interactive workshops and unparalleled networking opportunities.   If you're interested in attending this free event, please visit knotch.com/contentconnect. ----- StreamYard is the easiest way to create content right in your browser. You can multistream to your social media platforms, host a weekly show with special guests, create webinars, record podcasts with local recordings, create videos and more. 
StreamYard's a popular tool amongst livestreamers, video creators, YouTubers, and podcasters – With features like live streaming, webinars, local recordings, screen sharing and more, StreamYard makes it simple to get professional and polished content every time. Sign up today and get a free trial and lifetime discount.  ------ Liked this show? SUBSCRIBE to this podcast on Spotify, Apple, Google and more. Catch past episodes and show notes at ThisOldMarketing.com. Catch and subscribe to our NEW show on YouTube. Subscribe to Joe Pulizzi's Orangeletter and get two free downloads direct from Joe. Subscribe to Robert Rose's newsletter at Experience Advisors.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Top 5 Research Trends + OpenAI Sora, Google Gemini, Groq Math (Jan-Feb 2024 Audio Recap) + Latent Space Anniversary with Lindy.ai, RWKV, Pixee, Julius.ai, Listener Q&A!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 9, 2024 108:52


We will be recording a preview of the AI Engineer World's Fair soon with swyx and Ben Dunphy, send any questions about Speaker CFPs and Sponsor Guides you have! Alessio is now hiring engineers for a new startup he is incubating at Decibel: ideal candidate is an ex-technical co-founder type (can MVP products end to end, comfortable with ambiguous prod requirements, etc). Reach out to him for more! Thanks for all the love on the Four Wars episode! We're excited to develop this new “swyx & Alessio rapid-fire thru a bunch of things” format with you, and feedback is welcome.

Jan 2024 Recap
The first half of this monthly audio recap pod goes over our highlights from the Jan Recap, which is mainly focused on notable research trends we saw in Jan 2024.

Feb 2024 Recap
The second half catches you up on everything that was topical in Feb, including:
* OpenAI Sora - does it have a world model? Yann LeCun vs Jim Fan
* Google Gemini Pro 1.5 - 1m Long Context, Video Understanding
* Groq offering Mixtral at 500 tok/s at $0.27 per million toks (swyx vs dylan math)
* The {Gemini | Meta | Copilot} Alignment Crisis (Sydney is back!)
* Grimes' poetic take: Art for no one, by no one
* F*** you, show me the prompt

Latent Space Anniversary
Please also read Alessio's longform reflections on One Year of Latent Space! We launched the podcast 1 year ago with Logan from OpenAI, and also held an incredible demo day that got covered in The Information. Over 750k downloads later, having established ourselves as the top AI Engineering podcast, reaching #10 in the US Tech podcast charts, and crossing 1 million unique readers on Substack, for our first anniversary we held Latent Space Final Frontiers, where 10 handpicked teams, including Lindy.ai and Julius.ai, competed for prizes judged by technical AI leaders from (former guest!) LlamaIndex, Replit, GitHub, AMD, Meta, and Lemurian Labs. The winners were Pixee and RWKV (that's Eugene from our pod!). And finally, your cohosts got cake!

We also captured spot interviews with 4 listeners who kindly shared their experience of Latent Space, everywhere from Hungary to Australia to China:
* Balázs Némethi
* Sylvia Tong
* RJ Honicky
* Jan Zheng

Our birthday wishes for the super loyal fans reading this - tag @latentspacepod on a Tweet or comment on a @LatentSpaceTV video telling us what you liked or learned from a pod that stays with you to this day, and share us with a friend! As always, feedback is welcome.

Timestamps
* [00:03:02] Top Five LLM Directions
* [00:03:33] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)
* [00:11:42] Direction 2: Synthetic Data (WRAP, SPIN)
* [00:17:20] Wildcard: Multi-Epoch Training (OLMo, Datablations)
* [00:19:43] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)
* [00:23:33] Wildcards: Text Diffusion, RALM/Retro
* [00:25:00] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)
* [00:28:26] Wildcard: Model Merging (mergekit)
* [00:29:51] Direction 5: Online LLMs (Gemini Pro, Exa)
* [00:33:18] OpenAI Sora and why everyone underestimated videogen
* [00:36:18] Does Sora have a World Model? Yann LeCun vs Jim Fan
* [00:42:33] Groq Math
* [00:47:37] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars
* [00:55:42] The Alignment Crisis - Gemini, Meta, Sydney is back at Copilot, Grimes' take
* [00:58:39] F*** you, show me the prompt
* [01:02:43] Send us your suggestions pls
* [01:04:50] Latent Space Anniversary
* [01:04:50] Lindy.ai - Agent Platform
* [01:06:40] RWKV - Beyond Transformers
* [01:15:00] Pixee - Automated Security
* [01:19:30] Julius AI - Competing with Code Interpreter
* [01:25:03] Latent Space Listeners
* [01:25:03] Listener 1 - Balázs Némethi (Hungary, Latent Space Paper Club)
* [01:27:47] Listener 2 - Sylvia Tong (Sora/Jim Fan/EntreConnect)
* [01:31:23] Listener 3 - RJ (Developers building Community & Content)
* [01:39:25] Listener 4 - Jan Zheng (Australia, AI UX)

Transcript
[00:00:00] AI Charlie: Welcome to the Latent Space podcast, weekend edition. This is Charlie, your new AI co host. Happy weekend. As an AI language model, I work the same every day of the week, although I might get lazier towards the end of the year. Just like you. Last month, we released our first monthly recap pod, where Swyx and Alessio gave quick takes on the themes of the month, and we were blown away by your positive response.[00:00:33] AI Charlie: We're delighted to continue our new monthly news recap series for AI engineers. Please feel free to submit questions by joining the Latent Space Discord, or just hit reply when you get the emails from Substack. This month, we're covering the top research directions that offer progress for text LLMs, and then touching on the big Valentine's Day gifts we got from Google, OpenAI, and Meta.[00:00:55] AI Charlie: Watch out and take care.[00:00:57] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and we're back with a monthly recap with my co host[00:01:06] swyx: Swyx. The reception was very positive for the first one, I think people have requested this and no surprise that I think they want to hear us more opining on issues and maybe drop some alpha along the way I'm not sure how much alpha we have to drop, this month in February was a very, very heavy month, we also did not do one specifically for January, so I think we're just going to do a two in one, because we're recording this on the first of March.[00:01:29] Alessio: Yeah, let's get to it. I think the last one we did, the four wars of AI, was the main kind of mental framework for people. I think in the January one, we had the five worthwhile directions for state of the art LLMs. Four, five,[00:01:42] swyx: and now we have to do six, right? Yeah.[00:01:46] Alessio: So maybe we just want to run through those, and then do the usual news recap, and we can do[00:01:52] swyx: one each.[00:01:53] swyx: So the context to this stuff. is one, I noticed that just the test of time concept from NeurIPS and just in general as a life philosophy I think is a really good idea. Especially in AI, there's news every single day, and after a while you're just like, okay, like, everyone's excited about this thing yesterday, and then now nobody's talking about it.[00:02:13] swyx: So, yeah. It's more important, or better use of time, to spend things, spend time on things that will stand the test of time. And I think for people to have a framework for understanding what will stand the test of time, they should have something like the four wars. Like, what is the themes that keep coming back because they are limited resources that everybody's fighting over.[00:02:31] swyx: Whereas this one, I think that the focus for the five directions is just on research that seems more promising than others, because there's all sorts of papers published every single day, and there's no organization. Telling you, like, this one's more important than the other one apart from, you know, Hacker News votes and Twitter likes and whatever.[00:02:51] swyx: And obviously you want to get in a little bit earlier than Something where, you know, the test of time is counted by sort of reference citations.[00:02:59] The Five Research Directions[00:02:59] Alessio: Yeah, let's do it. We got five. Long inference.[00:03:02] swyx: Let's start there. Yeah, yeah. So, just to recap at the top, the five trends that I picked, and obviously if you have some that I did not cover, please suggest something.[00:03:13] swyx: The five are long inference, synthetic data, alternative architectures, mixture of experts, and online LLMs. And something that I think might be a bit controversial is this is a sorted list in the sense that I am not the guy saying that Mamba is like the future and, and so maybe that's controversial.[00:03:31] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)[00:03:31] swyx: But anyway, so long inference is a thesis I pushed before on the newsletter and on in discussing The thesis that, you know, Code Interpreter is GPT 4. 5. That was the title of the post. And it's one of many ways in which we can do long inference. You know, long inference also includes chain of thought, like, please think step by step.[00:03:52] swyx: But it also includes flow engineering, which is what Itamar from Codium coined, I think in January, where, basically, instead of instead of stuffing everything in a prompt, You do like sort of multi turn iterative feedback and chaining of things. In a way, this is a rebranding of what a chain is, what a lang chain is supposed to be.[00:04:15] swyx: I do think that maybe SGLang from LMSYS is a better name. Probably the neatest way of flow engineering I've seen yet, in the sense that everything is a one liner, it's very, very clean code. I highly recommend people look at that. I'm surprised it hasn't caught on more, but I think it will. It's weird that something like a DSPy is more hyped than SGLang.[00:04:36] swyx: Because it, you know, it maybe obscures the code a little bit more. But both of these are, you know, really good sort of chain y and long inference type approaches. But basically, the reason that the basic fundamental insight is that the only, like, there are only a few dimensions we can scale LLMs. So, let's say in like 2020, no, let's say in like 2018, 2017, 18, 19, 20, we were realizing that we could scale the number of parameters.[00:05:03] swyx: 20, we were And we scaled that up to 175 billion parameters for GPT 3. And we did some work on scaling laws, which we also talked about in our talk. So the datasets 101 episode where we're like, okay, like we, we think like the right number is 300 billion tokens to, to train 175 billion parameters and then DeepMind came along and trained Gopher and Chinchilla and said that, no, no, like, you know, I think we think the optimal.[00:05:28] swyx: compute optimal ratio is 20 tokens per parameter. And now, of course, with LLAMA and the sort of super LLAMA scaling laws, we have 200 times and often 2, 000 times tokens to parameters. So now, instead of scaling parameters, we're scaling data. And fine, we can keep scaling data. But what else can we scale?[00:05:52] swyx: And I think understanding the ability to scale things is crucial to understanding what to pour money and time and effort into because there's a limit to how much you can scale some things. And I think people don't think about ceilings of things. And so the remaining ceiling of inference is like, okay, like, we have scaled compute, we have scaled data, we have scaled parameters, like, model size, let's just say.[00:06:20] swyx: Like, what else is left? Like, what's the low hanging fruit? And it, and it's, like, blindingly obvious that the remaining low hanging fruit is inference time. So, like, we have scaled training time. We can probably scale more, those things more, but, like, not 10x, not 100x, not 1000x. Like, right now, maybe, like, a good run of a large model is three months.[00:06:40] swyx: We can scale that to three years. But like, can we scale that to 30 years? No, right? Like, it starts to get ridiculous. So it's just the orders of magnitude of scaling. It's just, we're just like running out there. But in terms of the amount of time that we spend inferencing, like everything takes, you know, a few milliseconds, a few hundred milliseconds, depending on what how you're taking token by token, or, you know, entire phrase.[00:07:04] swyx: But We can scale that to hours, days, months of inference and see what we get. And I think that's really promising.[00:07:11] Alessio: Yeah, we'll have Mike from BrightWave back on the podcast. But I tried their product and their reports take about 10 minutes to generate instead of like just in real time. I think to me the most interesting thing about long inference is like, You're shifting the cost to the customer depending on how much they care about the end result.[00:07:31] Alessio: If you think about prompt engineering, it's like the first part, right? You can either do a simple prompt and get a simple answer or do a complicated prompt and get a better answer. It's up to you to decide how to do it. Now it's like, hey, instead of like, yeah, training this for three years, I'll still train it for three months and then I'll tell you, you know, I'll teach you how to like make it run for 10 minutes to get a better result.[00:07:52] Alessio: So you're kind of like parallelizing like the improvement of the LLM.
Oh yeah, you can even[00:07:57] swyx: parallelize that, yeah, too.[00:07:58] Alessio: So, and I think, you know, for me, especially the work that I do, it's less about, you know, State of the art and the absolute, you know, it's more about state of the art for my application, for my use case.[00:08:09] Alessio: And I think we're getting to the point where like most companies and customers don't really care about state of the art anymore. It's like, I can get this to do a good enough job. You know, I just need to get better. Like, how do I do long inference? You know, like people are not really doing a lot of work in that space, so yeah, excited to see more.[00:08:28] swyx: So then the last point I'll mention here is something I also mentioned as paper. So all these directions are kind of guided by what happened in January. That was my way of doing a January recap. Which means that if there was nothing significant in that month, I also didn't mention it. Which is which I came to regret come February 15th, but in January also, you know, there was also the alpha geometry paper, which I kind of put in this sort of long inference bucket, because it solves like, you know, more than 100 step math olympiad geometry problems at a human gold medalist level and that also involves planning, right?[00:08:59] swyx: So like, if you want to scale inference, you can't scale it blindly, because just, Autoregressive token by token generation is only going to get you so far. You need good planning. And I think probably, yeah, what Mike from BrightWave is now doing and what everyone is doing, including maybe what we think QSTAR might be, is some form of search and planning.[00:09:17] swyx: And it makes sense. Like, you want to spend your inference time wisely. How do you[00:09:22] Alessio: think about plans that work and getting them shared? You know, like, I feel like if you're planning a task, somebody has got in and the models are stochastic. 
So everybody gets initially different results. Somebody is going to end up generating the best plan to do something, but there's no easy way to like store these plans and then reuse them for most people.[00:09:44] Alessio: You know, like, I'm curious if there's going to be. Some paper or like some work there on like making it better because, yeah, we don't[00:09:52] swyx: really have This is your your pet topic of NPM for[00:09:54] Alessio: Yeah, yeah, NPM, exactly. NPM for, you need NPM for anything, man. You need NPM for skills. You need NPM for planning. Yeah, yeah.[00:10:02] Alessio: You know I think, I mean, obviously the Voyager paper is like the most basic example where like, now their artifact is like the best planning to do a diamond pickaxe in Minecraft. And everybody can just use that. They don't need to come up with it again. Yeah. But there's nothing like that for actually useful[00:10:18] swyx: tasks.[00:10:19] swyx: For plans, I believe it for skills. I like that. Basically, that just means a bunch of integration tooling. You know, GPT built me integrations to all these things. And, you know, I just came from an integrations heavy business and I could definitely, I definitely propose some version of that. And it's just, you know, hard to execute or expensive to execute.[00:10:38] swyx: But for planning, I do think that everyone lives in slightly different worlds. They have slightly different needs. And they definitely want some, you know, And I think that that will probably be the main hurdle for any, any sort of library or package manager for planning. But there should be a meta plan of how to plan.[00:10:57] swyx: And maybe you can adopt that. And I think a lot of people when they have sort of these meta prompting strategies of like, I'm not prescribing you the prompt. I'm just saying that here are the like, Fill in the lines or like the mad libs of how to prompts. 
First you have the roleplay, then you have the intention, then you have, like, the do something, then you have the don't something, and then you have the my grandmother is dying, please do this.[00:11:19] swyx: So the meta plan you could take off the shelf and test a bunch of them at once. I like that. That was the initial, maybe, promise of the prompting libraries. You know, both LangChain and LlamaIndex have, like, hubs that you can sort of pull off the shelf. I don't think they're very successful, because people like to write their own.[00:11:36] swyx: Yeah,[00:11:37] Direction 2: Synthetic Data (WRAP, SPIN)[00:11:37] Alessio: yeah, yeah. Yeah, that's a good segue into the next one, which is synthetic[00:11:41] swyx: data. Synthetic data is so hot. Yeah, and, you know, I feel like I should do one of these memes where it's like, oh, I used to call it RLAIF, and now I call it synthetic data, and then people are interested.[00:11:54] swyx: But there's gotta be older versions of what synthetic data really is, because I'm sure, you know, if you've been in this field long enough, there's just different buzzwords that the industry condenses on. Anyway, the insight that I think is relatively new, why people are excited about it now and why it's promising now, is that we have evidence that shows that LLMs can generate data to improve themselves with no teacher LLM.[00:12:22] swyx: For all of 2023, when people said synthetic data, they really kind of meant generate a whole bunch of data from GPT 4 and then train an open source model on it. Hello to our friends at Nous Research. That's what the Nous Hermes models are. They're very, very open about that. I think they have said that they're trying to migrate away from that.[00:12:40] swyx: But it is explicitly against OpenAI Terms of Service. Everyone knows this. You know, especially once ByteDance got banned for doing exactly that.
So, so synthetic data that is not a form of model distillation is the hot thing right now: that you can bootstrap better LLM performance from the same LLM, which is very interesting.[00:13:03] swyx: A variant of this is RLAIF, where you have a sort of constitutional model, or, you know, some kind of judge model that is sort of more aligned. But that's not really what we're talking about when most people talk about synthetic data. Synthetic data is just really, I think, you know, generating more data in some way.[00:13:23] swyx: A lot of people, I think we talked about this with Vipul from the Together episode, where I think he commented that you just have to have a good world model, or a good sort of inductive bias, or whatever that term of art is. And that is strongest in math and science, math and code, where you can verify what's right and what's wrong.[00:13:44] swyx: And so the ReST-EM paper from DeepMind explored that very well. It's just the most obvious thing. And then once you get out of that domain of things where you can arbitrarily generate a whole bunch of stuff and verify if it's correct, and therefore it's correct synthetic data to train on, once you get into more sort of fuzzy topics, then it's a bit less clear. So I think that the papers that drove this understanding, there are two big ones and then one smaller one. One was WRAP, like Rephrasing the Web, from Apple, where they basically rephrased all of the C4 dataset with Mistral and trained on that instead of C4.[00:14:23] swyx: And so the new C4 trained much faster and cheaper than regular raw C4. And that was very interesting. And I have told some friends of ours that they should just throw out their own existing data sets and just do that, because that seems like a pure win.
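The WRAP recipe described here is, at heart, a simple loop: run each document through a rephrasing model in a few styles, and pretrain on the mix of real and rephrased text. A minimal sketch of that shape, where `rephrase` is a hypothetical stand-in for the actual LLM call (the paper used Mistral), and the style names paraphrase the paper's idea rather than quote its prompts:

```python
# Sketch of the WRAP ("Rephrasing the Web") data pipeline described above.
# `rephrase` is a placeholder for an LLM call; here it is a trivial string
# transform so the sketch runs end to end.

STYLES = ["easy", "hard", "qa"]  # illustrative; the paper uses several styles

def rephrase(doc: str, style: str) -> str:
    # Placeholder for: llm(f"Rewrite the following in {style} style: {doc}")
    return f"[{style}] {doc.strip()}"

def build_synthetic_corpus(raw_docs: list[str]) -> list[str]:
    corpus = []
    for doc in raw_docs:
        corpus.append(doc)  # WRAP trains on a mix of real + rephrased text
        for style in STYLES:
            corpus.append(rephrase(doc, style))
    return corpus

docs = ["the cat sat on the mat"]
print(build_synthetic_corpus(docs))
```

The interesting design choice is that the original documents stay in the mix; the rephrasings augment rather than replace the raw web text.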
Obviously we have to study, like, what the trade offs are.[00:14:42] swyx: I imagine there are trade offs. So I was just thinking about this last night. If you do synthetic data and it's generated from a model, probably you will not train on typos. So therefore, once the model that's trained on synthetic data encounters its first typo, it'll be like, what is this?[00:15:01] swyx: I've never seen this before. So it has no association or correction as to, like, oh, these tokens are often typos of each other, therefore they should be kind of similar. I don't know. That really remains to be seen, I think. I don't think the Apple people explored[00:15:15] Alessio: that. Yeah, isn't that the whole mode collapse thing, if we do more and more of this at the end of the day?[00:15:22] swyx: Yeah, that's one form of that. Yeah, exactly. Microsoft also had a good paper on text embeddings. And then I think there is a Meta paper on self rewarding language models that everyone is very interested in. Another paper was also SPIN. These are all things we covered in the Latent Space Paper Club.[00:15:37] swyx: But also, you know, I just kind of recommend those as top reads of the month. Yeah, I don't know if there's much else. So then, regarding the potential of it, I think it's high potential because, one, it solves one of the data war issues that we have: like, everyone OpenAI is paying Reddit 60 million dollars a year for their user generated data.[00:15:56] swyx: Google, right?[00:15:57] Alessio: Not OpenAI.[00:15:59] swyx: Is it Google? I don't[00:16:00] Alessio: know. Well, somebody's paying them 60 million, that's[00:16:04] swyx: for sure. Yes, that is, yeah, yeah, and then I think it's maybe not confirmed who. But yeah, it is Google. Oh my god, that's interesting.
Okay, because everyone was saying, like, because Sam Altman owns 5 percent of Reddit, which is apparently 500 million worth of Reddit, he owns more than, like, the founders.[00:16:21] Alessio: Not enough to get the data,[00:16:22] swyx: I guess. So it's surprising that it would go to Google instead of OpenAI, but whatever. Okay, yeah, so I think that's all super interesting in the data field. I think it's high potential because we have evidence that it works. There's not a doubt that it doesn't work. The doubt is what the ceiling is, which is the mode collapse thing.[00:16:42] swyx: If it turns out that the ceiling is pretty close, then this will maybe augment our data by, like, I don't know, 30 to 50 percent. Good, but not game[00:16:51] Alessio: changing. And most of the synthetic data stuff, it's reinforcement learning on a pre trained model. People are not really doing pre training on fully synthetic data at large enough scale.[00:17:02] swyx: Yeah, unless one of our friends that we've talked to succeeds. Yeah, yeah. Pre trained synthetic data, pre trained scale synthetic data, I think that would be a big step. Yeah. And then there's a wildcard, so for all of these, like, smaller directions,[00:17:15] Wildcard: Multi-Epoch Training (OLMo, Datablations)[00:17:15] swyx: I always put a wildcard in there. And one of the wildcards is, okay, let's say you've scraped all the data on the internet that you think is useful.[00:17:25] swyx: It seems to top out at somewhere between 2 trillion to 3 trillion tokens. Maybe 8 trillion if Mistral gets lucky. Okay, if I need 80 trillion, if I need 100 trillion, where do I go? And so, you can do synthetic data maybe, but maybe that only gets you to, like, 30, 40 trillion. Like, where is the extra alpha?[00:17:43] swyx: And maybe the extra alpha is just train more on the same tokens.
Which is exactly what OLMo did. Like, Nathan Lambert, AI2, just after he did the interview with us, they released OLMo. So, it's unfortunate that we didn't get to talk much about it. But OLMo actually started doing 1.5 epochs on all data.[00:18:00] swyx: And the data ablations paper that I covered at NeurIPS says that, you know, you don't really start to tap out of the alpha, or the sort of improved loss that you get from data, all the way until four epochs. And so I'm just like, okay, why do we all agree that one epoch is all you need?[00:18:17] swyx: It seems to be a trend. It seems that we think that memorization is very good or too good. But then also we're finding that, you know, for improvements in results that we really like, we're fine with overtraining on things intentionally. So, I think that's an interesting direction that I don't see people exploring enough.[00:18:36] swyx: And the more I see papers coming out stretching beyond the one epoch thing, the more people are like, it's completely fine. And actually, the only reason we stopped is because we ran out of compute[00:18:46] Alessio: budget. Yeah, I think that's the biggest thing, right?[00:18:51] swyx: Like, that's not a valid reason, that's not science. I[00:18:54] Alessio: wonder if, you know, Meta is going to do it.[00:18:57] Alessio: I heard Llama 3, they want to do a 100 billion parameter model. I don't think you can train that on too many epochs, even with their compute budget, but yeah. They're the only ones that can save us, because even if OpenAI is doing this, they're not going to tell us, you know. Same with DeepMind.[00:19:14] swyx: Yeah, and so the update that we got on Llama 3 so far is apparently that, because of the Gemini news that we'll talk about later, they're pushing back the release.[00:19:21] swyx: They already have it. And they're just pushing it back to do more safety testing.
Politics testing.[00:19:28] Alessio: Well, our episode with Soumith will have already come out by the time this comes out, I think. So people will get the inside story on how they actually allocate the compute.[00:19:38] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)[00:19:38] Alessio: Alternative architectures. Well, shout out to RWKV, who won one of the prizes at our Final Frontiers event last week.[00:19:47] Alessio: We talked about Mamba and StripedHyena on the Together episode. A lot of, yeah, Monarch Mixer. I feel like Together, it's like the strong Stanford Hazy Research partnership, because Chris Ré is one of the co founders. So I feel like they're going to be the ones that have one of the state of the art models, alongside maybe RWKV.[00:20:08] Alessio: I haven't seen as many independent people working on this thing. Like, Monarch Mixer, Mamba, Hyena, all of these are Together related. Nobody understands the math. They got all the gigabrains, they got Tri Dao, they got all these folks in there, like, working on all of this.[00:20:25] swyx: Albert Gu, yeah. Yeah, so what should we comment about it?[00:20:28] swyx: I mean, I think it's useful, interesting, but at the same time, both of these are supposed to do really good scaling for long context. And then Gemini comes out and goes, like, yeah, we don't need it. Yeah.[00:20:44] Alessio: No, that's the risk. So, yeah. I was gonna say, maybe it's not here, but I don't know if we want to talk about diffusion transformers as, like, in the alt architectures, just because of Sora.[00:20:55] swyx: One thing, yeah, so, you know, this came from the Jan recap, where diffusion transformers were not really a discussion, and then, obviously, they blow up in February. Yeah.
I don't think it's a mixed architecture in the same way that StripedHyena is mixed; there's just different layers taking different approaches.[00:21:13] swyx: Also, I think another one that I maybe didn't call out here, I think because it happened in February, was Hourglass Diffusion from Stability. But also, you know, another form of mixed architecture. So I guess that is interesting. I don't have much commentary on that. I just think, like, we will try to evolve these things, and maybe one of these architectures will stick and scale. It seems like diffusion transformers are going to be good for anything generative, you know, multi modal.[00:21:41] swyx: We don't see anything where diffusion is applied to text yet, and that's the wild card for this category. Yeah, I mean, I think I still hold out hope for, let's just call it, sub quadratic LLMs. I think that a lot of discussion this month actually was also centered around this concept that people always say, oh, like, transformers don't scale because attention is quadratic in the sequence length.[00:22:04] swyx: Yeah, but, you know, attention actually is a very small part of the actual compute that is being spent, especially in inference. And this is the reason why, you know, when you jump up in terms of the context size in GPT 4 from, like, you know, 8k to 32k, you don't also get, like, a 16 times increase in your latency.[00:22:23] swyx: And this is also why you don't get, like, a million times increase in your latency when you throw a million tokens into Gemini. Like, people have figured out tricks around it, or it's just not that significant as a term, as a part of the overall compute. So there's a lot of challenges to this thing working.[00:22:43] swyx: It's really interesting how hyped people are about this, versus, I don't know if it's exactly gonna work.
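The point that attention is a small slice of total compute can be made concrete with a quick back-of-envelope FLOP count. This sketch uses standard dense-transformer accounting; the constants are assumptions, and real systems complicate the picture with KV caching, FlashAttention, grouped-query attention, and so on:

```python
# Rough share of per-token transformer FLOPs spent on the quadratic
# attention term (QK^T scores and the attention-weighted sum of V),
# versus the linear projections and the MLP. Back-of-envelope only.

def attention_share(seq_len: int, d_model: int) -> float:
    # Per token, per layer, counting multiply-accumulates times two:
    proj = 8 * d_model**2          # Q, K, V, O projections
    mlp = 16 * d_model**2          # two matmuls with a 4*d hidden layer
    attn = 4 * seq_len * d_model   # the only term that grows with seq_len
    return attn / (proj + mlp + attn)

# The fraction simplifies to n / (6*d + n): attention only dominates once
# the sequence length is several times the model width.
print(f"{attention_share(4096, 8192):.1%}")       # small share at 4k context
print(f"{attention_share(1_000_000, 8192):.1%}")  # dominant at 1M context
```

So for short-to-medium contexts the quadratic term is a minor cost, which matches the observation that a 4x context jump does not bring a 16x latency jump; it only takes over at very long sequence lengths, where the long-context tricks come in.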
And then there's also this idea of retention over long context. Like, even though you have context utilization, the amount you can remember is interesting.[00:23:02] swyx: Because I've had people criticize both Mamba and RWKV because they're kind of, like, RNN ish, in the sense that they have a hidden memory, and sort of limited hidden memory, so that they will forget things. So, for all these reasons, Gemini 1.5, which we still haven't covered, is very interesting, because Gemini magically has fixed all these problems, with perfect haystack recall and reasonable latency and cost.[00:23:29] Wildcards: Text Diffusion, RALM/Retro[00:23:29] swyx: So that's super interesting. So the wildcard I put in here, if you want to go to that, I put two actually. One is text diffusion. I think I'm still very influenced by my meeting with a Midjourney person who said they were working on text diffusion. I think it would be a very, very different paradigm for text generation, reasoning, plan generation, if we can get diffusion to work[00:23:51] swyx: for text. And then the second one is Douwe Kiela's Contextual AI, which is working on retrieval augmented language models, where it kind of puts RAG inside of the language model instead of outside.[00:24:02] Alessio: Yeah, there's a paper called RETRO that covers some of this. I think that's an interesting thing. I think the challenge, well, not the challenge, what they need to figure out is, like, how do you keep the RAG piece always up to date constantly, you know. I feel like the models, you put all this work into pre training them, but then at least you have a fixed artifact.[00:24:22] Alessio: These architectures are like constant work needs to be done on them, and they can drift even just based on the RAG data instead of the model itself. Yeah,[00:24:30] swyx: I was in a panel with one of the investors in Contextual, and the way that guy pitched it, I didn't agree with.
He was like, this will solve hallucination.[00:24:38] Alessio: That's what everybody says. We solve[00:24:40] swyx: hallucination. I'm like, no, you reduce it. It cannot,[00:24:44] Alessio: if you solved it, the model wouldn't exist, right? It would just be plain text. It wouldn't be a generative model. Cool. So, alt architectures, then we got mixture of experts. I think we covered that a lot of times.[00:24:56] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)[00:24:56] Alessio: Maybe any new interesting threads you want to go under here?[00:25:00] swyx: DeepSeekMoE, which was released in January. Everyone who is interested in MoEs should read that paper, because it's significant for two reasons. One, three reasons. One, it had small experts, like a lot more small experts. So, for some reason, everyone has settled on eight experts for GPT 4, for Mixtral, you know, that seems to be the favorite architecture, but these guys pushed it to 64 experts, each of them smaller.[00:25:26] swyx: But then they also had the second idea, which is that they had one to two always on experts for common knowledge. And that's a very compelling concept, that you would not route to all the experts all the time and make them, you know, switch to everything. You would have some always on experts.[00:25:41] swyx: I think that's interesting on both the inference side and the training side for memory retention. And yeah, the results that they published, which actually excluded Mixtral, which is interesting, showed a significant performance jump versus all the other sort of open source models at the same parameter count.[00:26:01] swyx: So, like, this may be a better way to do MoEs that is about to get picked up. And that is interesting for the third reason, which is that this is the first time a new idea from China has infiltrated the West.
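To make the "always on experts" idea concrete, here is a deliberately tiny sketch of that routing pattern, with scalar functions standing in for experts. Everything here (names, router, expert count) is illustrative, not DeepSeek's actual code; real implementations use batched matmuls, learned routers, and load-balancing losses:

```python
# Toy sketch of the DeepSeekMoE routing idea: many small routed experts
# picked top-k per token, plus "shared" expert(s) that every token always
# passes through, bypassing the router.
import math

NUM_EXPERTS, TOP_K = 8, 2                # paper-scale would be ~64 experts
SHARED = [lambda x: 0.5 * x]             # always-on expert(s) for common knowledge
EXPERTS = [lambda x, i=i: x + i for i in range(NUM_EXPERTS)]
ROUTER_W = [math.sin(i) for i in range(NUM_EXPERTS)]  # stand-in router weights

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(x: float) -> float:
    scores = softmax([w * x for w in ROUTER_W])
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i])[-TOP_K:]
    renorm = sum(scores[i] for i in top)
    routed = sum(scores[i] / renorm * EXPERTS[i](x) for i in top)
    shared = sum(e(x) for e in SHARED)   # always computed, never routed
    return routed + shared

print(moe_forward(1.0))
```

The payoff of the shared experts is exactly what is described above: common knowledge does not have to be duplicated into every routed expert, so the small experts can specialize.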
It's usually the other way around. I probably overspoke there. There's probably lots more ideas that I'm not aware of.[00:26:18] swyx: Maybe in the embedding space. But I think DeepSeekMoE, like, woke people up and said, like, hey, DeepSeek, this, like, weird lab that is attached to a Chinese hedge fund, is somehow, you know, doing groundbreaking research on MoEs. So, I classified this as a medium potential, because I think that it is sort of a one off benefit.[00:26:37] swyx: You can add it to any base model to, like, make the MoE version of it, you get a bump, and then that's it. So, yeah,[00:26:45] Alessio: I saw SambaNova, which is like another inference company. They released this MoE model called Samba 1, which is like a 1 trillion parameter model. But it's actually a MoE of open source models.[00:26:56] Alessio: So it's like, they just clustered them all together. So I think people sometimes think MoE is, like, you just train a bunch of small models, or like smaller models, and put them together. But there's also people just taking, you know, Mistral plus CLIP plus, you know, DeepSeek Coder, and putting them all together.[00:27:15] Alessio: And then you have a MoE model. I don't know. I haven't tried the model, so I don't know how good it is. But it seems interesting that you can then have people working separately on state of the art, you know, CLIP, state of the art text generation. And then you have a MoE architecture that brings them all together.[00:27:31] swyx: I'm thrown off by your addition of the word CLIP in there. Is that what? Yeah, that's[00:27:35] Alessio: what they said. Yeah, yeah. Okay. That's what they I just saw it yesterday. I was also like[00:27:40] swyx: scratching my head. And they did not use the word adapter. No.
Because usually what people mean when they say, oh, I add CLIP to a language model, is an adapter.[00:27:48] swyx: Let me look up the which is what LLaVA did.[00:27:50] Alessio: The announcement again.[00:27:51] swyx: Stable Diffusion. That's what they do. Yeah, it[00:27:54] Alessio: says among the models that are part of Samba 1 are Llama 2, Mistral, DeepSeek Coder, Falcon, DePlot, CLIP, LLaVA. So they're just taking all these models and putting them in a MoE. Okay,[00:28:05] swyx: so a routing layer, and then not jointly trained as much as a normal MoE would be.[00:28:12] swyx: Which is okay.[00:28:13] Alessio: That's all they say. There's no paper, you know, so it's like, I'm just reading the article, but I'm interested to see how[00:28:20] Wildcard: Model Merging (mergekit)[00:28:20] swyx: it works. Yeah, so the wildcard for this section, the MoE section, is model merges, which has also come up as a very interesting phenomenon. The last time I talked to Jeremy Howard at the Ollama meetup, we called it model grafting or model stacking.[00:28:35] swyx: But I think the term that people are liking these days is model merging. There's all different variations of merging, merge types, and some of them are stacking, some of them are grafting. And so, like, some people are approaching model merging in the way that Samba is doing, which is like, okay, here are defined models, each of which have their specific pluses and minuses, and we will merge them together in the hope that the sum of the parts will be better than the others.[00:28:58] swyx: And it seems like it's working. I don't really understand why it works, apart from, like, I think it's a form of regularization.
That if you merge weights together in, like, a smart strategy, you get less overfitting and more generalization, which is good for benchmarks, if you're honest about your benchmarks.[00:29:16] swyx: So this is really interesting and good. But again, they're kind of limited in terms of, like, the amount of bumps you can get. But I think it's very interesting in the sense of how cheap it is. We talked about this on the ChinaTalk podcast, like the guest podcast that we did with ChinaTalk. And you can do this without GPUs, because it's just adding weights together, and dividing things, and doing, like, simple math, which is really interesting for the GPU poors.[00:29:42] Alessio: There's a lot of them.[00:29:44] Direction 5: Online LLMs (Gemini Pro, Exa)[00:29:44] Alessio: And just to wrap these up, online LLMs? Yeah,[00:29:48] swyx: I think that I had to feature this because one of the top news items of January was that Gemini Pro beat GPT-4 Turbo on LMSYS for the number two slot to GPT-4. And everyone was very surprised. Like, how does Gemini do that?[00:30:06] swyx: Surprise, surprise, they added Google Search, mm hmm, to the results. So it became an online, quote unquote, online LLM, and not an offline LLM. Therefore, it's much better at answering recent questions, which people like. There's an emerging set of table stakes features after you pre train something.[00:30:21] swyx: So after you pre train something, you should have the chat tuned version of it, or the instruct tuned version of it, however you choose to call it. You should have the JSON and function calling version of it. Structured output, the term that you don't like. You should have the online version of it. These are all, like, table stakes variants that you should do when you offer a base LLM, or you train a base LLM.[00:30:44] swyx: And I think online is just, like, there, it's important.
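A quick aside on the "just adding weights together and dividing" point from the merging discussion a moment ago: the simplest merge really is elementwise averaging of two checkpoints. A toy sketch over plain dicts of lists (real tools such as mergekit operate on tensors, match layers by name, and support fancier schemes like SLERP or TIES):

```python
# Toy model merge: a weighted elementwise average of two checkpoints.
# No GPU needed; it is literally addition and division over the weights.

def merge_checkpoints(ckpt_a: dict, ckpt_b: dict, alpha: float = 0.5) -> dict:
    # Both checkpoints must share an architecture: same names, same shapes.
    assert ckpt_a.keys() == ckpt_b.keys(), "same architecture required"
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(ckpt_a[name], ckpt_b[name])]
        for name in ckpt_a
    }

a = {"layer0.weight": [1.0, 2.0], "layer0.bias": [0.0, 0.0]}
b = {"layer0.weight": [3.0, 4.0], "layer0.bias": [2.0, 2.0]}
print(merge_checkpoints(a, b))  # averages each parameter elementwise
```

This is also why merging is attractive to the GPU poors: the whole operation is a pass over the weight files, with no training step at all.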
I think companies like Perplexity, and even Exa, formerly Metaphor, you know, are rising to offer that search need. And it's kind of like, they're just necessary parts of a system. When you have RAG for internal knowledge, then you have, you know, online search for external knowledge, like things that you don't know yet.[00:31:06] swyx: Mm hmm. And it seems like it's one of many tools. I feel like I may be underestimating this, but I'm just gonna put it out there that I think it has some potential. One of the evidence points that it doesn't actually matter that much is that Perplexity has had online LLMs for three months now, and it doesn't perform great[00:31:25] swyx: on LMSYS. It's like number 30 or something. So it's like, okay, you know, it helps, but it doesn't give you a giant boost. I[00:31:34] Alessio: feel like a lot of stuff I do with LLMs doesn't need to be online. So I'm always wondering, again, going back to, like, state of the art, right? It's like state of the art for who and for what.[00:31:45] Alessio: It's really, I think online LLMs are going to be state of the art for, you know, news related activity that you need to do. Like, you know, social media, right? It's like, you want to have all the latest stuff. But coding, science,[00:32:01] swyx: Yeah, but I think sometimes you don't know what is news, what is news affecting.[00:32:07] swyx: Like, the decision to use an offline LLM is already a decision that you might not be consciously making that might affect your results. Like, what if just being connected online means that you get to invalidate your knowledge?
And when you're just using an offline LLM, it's never invalidated.[00:32:27] swyx: I[00:32:28] Alessio: agree, but I think going back to your point of, like, standing the test of time, I think sometimes you can get swayed by the online stuff, which is like, hey, you ask a question about, yeah, maybe AI research directions, you know, and it's like, all the recent news are about this thing. So the LLM, like, focuses on answering, bringing up, you know, these things.[00:32:50] swyx: Yeah, so yeah, I think it's interesting, but I don't know if I can bet heavily on this.[00:32:56] Alessio: Cool. Was there one that you forgot to put, or, or like a, a new direction? Yeah,[00:33:01] swyx: so this brings us into sort of February ish.[00:33:05] OpenAI Sora and why everyone underestimated videogen[00:33:05] swyx: So, like, I published this, and then February 15 came with Sora. And so, like, the one thing I did not mention here was anything about multimodality.[00:33:16] swyx: Right. And I have chronically underweighted this. I always wrestle. And my cop out is that I focused this piece, or this research directions piece, on LLMs, because LLMs are the source of, like, AGI, quote unquote AGI. Everything else is kind of, like, related to that. Like, generative, like, just because I can generate better images or generate better videos, it feels like it's not on the critical path to AGI, which is something that Nat Friedman also observed, like, the day before Sora, which is kind of interesting.[00:33:49] swyx: And so I was just kind of trying to focus on what is going to get us, like, superhuman reasoning that we can rely on to build agents that automate our lives and, blah, blah, blah, you know, give us this utopian future. But I do think that everybody underestimated the sheer importance and cultural human impact of Sora.[00:34:10] swyx: And, you know, really actually good text to video. Yeah.
Yeah.[00:34:14] Alessio: And I saw Jim Fan had a very good tweet about why it's so impressive. And I think when you have somebody leading the embodied research at NVIDIA and he says that something is impressive, you should probably listen. So yeah, there's basically like, I think you mentioned, like, impacting the world, you know, that we live in.[00:34:33] Alessio: I think that's kind of, like, the key, right? It's like the LLMs don't have a world model, and Yann LeCun, he can come on the podcast and talk all about what he thinks of that. But I think Sora was, like, the first time where people were like, oh, okay, you're not statically putting pixels of water on the screen, which you can kind of, like, you know, project without understanding the physics of it.[00:34:57] Alessio: Now you're like, you have to understand how the water splashes when you have things. And even if you just learned it by watching video, and not by actually studying the physics, you still know it, you know. So I think that's, like, a direction that, yeah, before you didn't have, but now you can do things that you couldn't before, both in terms of generating, I think it always starts with generating, right?[00:35:19] Alessio: But, like, the interesting part is understanding it. You know, it's like, there's the video of the ship in the water that they generated with Sora. Like, if you gave it the video back, and now it could tell you why the ship is, like, too rocky, or it could tell you why the ship is sinking, then that's like, you know, AGI for, like, all your rig deployments and all this stuff, you know. So, but there's none of that yet, so.[00:35:44] Alessio: Hopefully they announce it and talk more about it. Maybe a Dev Day this year, who knows.[00:35:49] swyx: Yeah, who knows, who knows. I'm talking with them about Dev Day as well.
So I would say, like, the phrasing that Jim used, which resonated with me, he kind of called it a data driven world model. I somewhat agree with that.[00:36:04] Does Sora have a World Model? Yann LeCun vs Jim Fan[00:36:04] swyx: I am on more of a Yann LeCun side than I am on Jim's side, in the sense that I think that is the vision, or the hope, that these things can build world models. But, you know, clearly, even at the current Sora size, they don't have strong consistency yet. They have very good consistency, but fingers and arms and legs will appear and disappear, and chairs will appear and disappear.[00:36:31] swyx: That definitely breaks physics. And it also makes me think about how we do deep learning versus world models, in the sense of, you know, in classic machine learning, when you have too many parameters, you will overfit, and actually that fails, that, like, does not match reality, and therefore fails to generalize well.[00:36:50] swyx: And, like, what scale of data do we need in order to learn world models from video? A lot. Yeah. So I am cautious about taking this interpretation too literally. Obviously, you know, like, I get what he's going for, and he's obviously partially right. Like, transformers and these sort of neural networks are universal function approximators that theoretically could figure out world models. It's just, like, how good are they, and how tolerant are we of hallucinations? We're not very tolerant. Like, yeah, so it's gonna bias us for creating, like, very convincing things, but then not create the useful world models that we want.[00:37:37] swyx: At the same time, what you just said, I think, made me reflect a little bit. Like, we just got done saying how important synthetic data is, mm hmm, for training LLMs.
And so, like, if this is a way of, of synthetic, you know, video data for improving our video understanding, then sure, by all means. Which we actually know, like, GPT-4 Vision and DALL-E were, kind of, co trained together.[00:38:02] swyx: And so, like, maybe this is on the critical path, and I just don't fully see the full picture yet.[00:38:08] Alessio: Yeah, I don't know. I think there's a lot of interesting stuff. It's like, imagine you go back, you have Sora, you go back in time, and Newton didn't figure out gravity yet. Would Sora help you figure it out?[00:38:21] Alessio: Because you start saying, okay, a man standing under a tree with, like, apples falling, and it's like, oh, they're always falling at the same speed in the video. Why is that? I feel like sometimes these engines can, like, pick up things. Like, humans have a lot of intuition, but if you ask the average person, like, the physics of a fluid in a boat, they wouldn't be able to tell you the physics, but they can, like, observe it. But humans can only observe this much, you know, versus, like, now you have these models to observe everything, and then they generalize these things, and maybe we can learn new things through the generalization that they pick up.[00:38:55] swyx: But again, and it might be more observant than us in some respects. In some ways, we can scale it up a lot more than the number of physicists that we had available at Newton's time. So, like, yeah, absolutely possible that this can discover new science. I think we have a lot of work to do to formalize the science.[00:39:11] swyx: And then, I think the last part is, you know, how much do we cheat by generating data from Unreal Engine 5? Mm hmm. Which is what a lot of people are speculating, with very, very limited evidence, that OpenAI did.
The strongest evidence that I saw was someone who works a lot with Unreal Engine 5 looking at the side characters in the videos and noticing that they all adopt Unreal Engine defaults[00:39:37] swyx: of, like, walking speed and, like, character creation choice. And I was like, okay, that's actually pretty convincing that they actually used Unreal Engine to bootstrap some synthetic data for this training set. Yeah,[00:39:52] Alessio: could very well be.[00:39:54] swyx: Because then you get the labels and the training side by side.[00:39:58] swyx: One thing that came up on the last day of February, which I should also mention, is EMO coming out of Alibaba, which is also a sort of video generation and space-time transformer that also involves probably a lot of synthetic data as well. And so, like, this is of a kind in the sense of like, oh, you know, really good generative video is here, and it is not just, like, the one, two second clips that we saw from, like, other people, you know, Pika and Runway. Cristobal Valenzuela from Runway was like, game on, which, like, okay, but let's see your response, because we've heard a lot about Gen-1 and 2, but it's nothing on this level of Sora. So it remains to be seen how we can actually apply this, but I do think that the creative industry should start preparing.[00:40:50] swyx: I think the Sora technical blog post from OpenAI was really good. It was like a request for startups. It was so good in, like, spelling out, here are the individual industries that this can impact.[00:41:00] swyx: And anyone who's, like, interested in generative video should look at that. But also be mindful that probably when OpenAI releases a Sora API, right? The ways you can interact with it are very limited.
Just like the ways you can interact with DALL-E are very limited, and someone is gonna have to make an open Sora[00:41:19] swyx: Mm-Hmm, for you to create ComfyUI pipelines.[00:41:24] Alessio: The Stability folks said they wanna build an open-source competitor, but yeah, Stability. Their demo video was like so underwhelming. It was just like two people sitting on the beach[00:41:34] swyx: standing. Well, they don't have it yet, right? Yeah, yeah.[00:41:36] swyx: I mean, they just wanna train it. Everybody wants to, right? Yeah. I think what is confusing a lot of people about Stability is, like, they're pushing a lot of things in Stable Code, Stable LM, and Stable Video Diffusion. But like, how much money do they have left? How many people do they have left?[00:41:51] swyx: Yeah. Imad spent two hours with me, reassuring me things are great. And I'm like, I do believe that they have really, really quality people. But it's just like, I also have a lot of very smart people on the other side telling me, like, hey man, you know, don't put too much faith in this thing.[00:42:11] swyx: So I don't know who to believe. Yeah.[00:42:14] Alessio: It's hard. Let's see. What else? We got a lot more stuff. I don't know if we can. Yeah, Groq.[00:42:19] Groq Math[00:42:19] Alessio: We can[00:42:19] swyx: do a bit of Groq prep. We're about to go talk to Dylan Patel. Maybe, maybe it's the audio in here. I don't know. It depends what we get up to later. What do you as an investor think about Groq? Yeah. Yeah, well, actually, can you recap, like, why is Groq interesting? So,[00:42:33] Alessio: Jonathan Ross, who's the founder of Groq, he's the person that created the TPU at Google. It's actually, it was one of his, like, 20 percent projects.
It's like, he was just on the side, dooby doo, created the TPU.[00:42:46] Alessio: But yeah, basically, Groq, they had this demo that went viral, where they were running Mistral at, like, 500 tokens a second, which is like the fastest of anything that you have out there. The question, you know, it's all like, the memes were like, is NVIDIA dead? Like, people don't need H100s anymore. I think there's a lot of money that goes into building what Groq has built as far as the hardware goes.[00:43:11] Alessio: We're gonna put some of the notes from Dylan in here, but basically the cost of the Groq system is like 30 times the cost of, of H100 equivalent. So, so[00:43:23] swyx: let me, I put some numbers, because me and Dylan were like, I think the two people actually tried to do Groq math. Spreadsheet doors.[00:43:30] swyx: Spreadsheet doors. So, okay, oh boy, so the equivalent H100 for Llama 2 is $300,000 for a system of 8 cards. And for Groq it's $2.3 million, because you have to buy 576 Groq cards. So yeah, that just gives people an idea. So like if you depreciate both over a five year lifespan, per year you're depreciating $460K for Groq, and $60K a year for H100.[00:43:59] swyx: So like, Groqs are just way more expensive per model that you're hosting. But then, you make it up in terms of volume. So I don't know if you want to[00:44:08] Alessio: cover that. I think one of the promises of Groq is like super high parallel inference on the same thing. So you're basically saying, okay, I'm putting in this upfront investment on the hardware, but then I get much better scaling once I have it installed.[00:44:24] Alessio: I think the big question is how much can you sustain the parallelism?
You know, like if you're going to get 100 percent utilization rate at all times on Groq, like, it's just much better, you know, because like at the end of the day, the tokens per second cost that you're getting is better than with the H100s, but if you get to like 50 percent utilization rate, you will be much better off running on NVIDIA.[00:44:49] Alessio: And if you look at most companies out there, who really gets 100 percent utilization rate? Probably OpenAI at peak times, but that's probably it. But yeah, curious to see more. I saw Jonathan was just at the Web Summit in Qatar. He just gave a talk there yesterday that I haven't listened to yet.[00:45:09] Alessio: I tweeted that he should come on the pod. He liked it. And then Groq followed me on Twitter. I don't know if that means that they're interested, but[00:45:16] swyx: hopefully Groq's social media person is just very friendly. They, yeah. Hopefully[00:45:20] Alessio: we can get them. Yeah, we gonna get him. We[00:45:22] swyx: just call him out, and so basically the key question is like, how sustainable is this, and how much[00:45:27] swyx: this is a loss leader. The entire Groq management team has been on Twitter and Hacker News saying they are very, very comfortable with the pricing of $0.27 per million tokens. This is the lowest that anyone has offered tokens for Mixtral or Llama 2. This matches DeepInfra, and, you know, I think that's about it in terms of that low.[00:45:47] swyx: And we think the break-even for H100s is 50 cents at a normal utilization rate. To make this work, so in my spreadsheet I made this work, you have to have like a parallelism of 500 requests all simultaneously, and you have model bandwidth utilization of 80 percent.[00:46:06] swyx: Which is way high. I just gave them high marks for everything.
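The "Groq math" above can be reproduced in a few lines. This is only a sketch: the hardware prices and depreciation figures are the ones quoted in the episode ($300K for an 8x H100 system, $2.3M for 576 Groq cards, five-year lifespan), while the tokens-per-second throughput numbers below are illustrative assumptions, not measured or vendor-published data.

```python
# Amortized hardware cost per million tokens, as a function of sustained
# utilization. Prices are the ones quoted on the show; the
# peak_tokens_per_sec figures in the example calls are assumptions.

SECONDS_PER_YEAR = 365 * 24 * 3600

def cost_per_million_tokens(system_cost_usd: float,
                            lifespan_years: float,
                            peak_tokens_per_sec: float,
                            utilization: float) -> float:
    """Amortized hardware cost (USD) per 1M tokens served."""
    yearly_depreciation = system_cost_usd / lifespan_years
    tokens_per_year = peak_tokens_per_sec * utilization * SECONDS_PER_YEAR
    return yearly_depreciation / tokens_per_year * 1_000_000

# 8x H100 ~= $300K ($60K/yr depreciation); 576 Groq cards ~= $2.3M ($460K/yr).
h100 = cost_per_million_tokens(300_000, 5, peak_tokens_per_sec=3_000, utilization=0.5)
groq = cost_per_million_tokens(2_300_000, 5, peak_tokens_per_sec=30_000, utilization=0.8)

print(f"H100 system: ${h100:.2f} per million tokens")
print(f"Groq system: ${groq:.2f} per million tokens")
```

The point of the function is the one made in the episode: Groq's much larger upfront cost only pencils out if the aggregate throughput and the sustained utilization are both very high, whereas at 50 percent utilization the cheaper H100 system wins.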
Groq has two fundamental tech innovations that they hang their hats on in terms of, like, why we are better than everyone. You know, even though, like, it remains to be independently replicated. But one, you know, they have this sort of the entire model on the chip idea, which is like, okay, get rid of HBM.[00:46:30] swyx: And, like, put everything in SRAM. Like, okay, fine, but then you need a lot of cards and whatever. And that's all okay. And so, like, because you don't have to transfer between memory, then you just save on that time and that's why they're faster. So, a lot of people buy that as, like, that's the reason that you're faster.[00:46:45] swyx: Then they have, like, some kind of crazy compiler, or, like, speculative routing magic using compilers that they also attribute towards their higher utilization. So I give them 80 percent for that. And so that all works out to like, okay, base costs, I think you can get down to, like, maybe 20-something cents per million tokens.[00:47:04] swyx: And therefore you actually are fine if you have that kind of utilization. But it's like, I have to make a lot of favorable assumptions for this to work.[00:47:12] Alessio: Yeah. Yeah, I'm curious to see what Dylan says later.[00:47:16] swyx: So he was like completely opposite of me. He's like, they're just burning money. Which is great.[00:47:22] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars[00:47:22] Alessio: Gemini, want to do a quick run through since this touches on all the four wars.[00:47:28] swyx: Yeah, and I think this is the mark of a useful framework, that when a new thing comes along, you can break it down in terms of the four wars and sort of slot it in or analyze it in those four frameworks, and have nothing left.[00:47:41] swyx: So it's a MECE categorization. MECE is Mutually Exclusive and Collectively Exhaustive. And that's a really, really nice way to think about taxonomies and to create mental frameworks.
So, what is Gemini 1.5 Pro? It is the newest model that came out one week after Gemini 1.0, which is very interesting.[00:48:01] swyx: They have not really commented on why. They released this. The headline feature is that it has a 1 million token context window that is multi-modal, which means that you can put all sorts of video and audio and PDFs natively in there alongside of text, and, you know, it's at least 10 times longer than anything that OpenAI offers, which is interesting.[00:48:20] swyx: So it's great for prototyping, and it has interesting discussions on whether it kills RAG.[00:48:25] Alessio: Yeah, no, I mean, we always talk about, you know, long context is good, but you're getting charged per token. So, yeah, people love for you to use more tokens in the context. And RAG is better economics. But I think it all comes down to like how the price curves change, right?[00:48:42] Alessio: I think if anything, RAG's complexity goes up and up the more you use it, you know, because you have more data sources, more things you want to put in there. The token costs should go down over time, you know, if the model stays fixed. If people are happy with the model today, in two years, three years, it's just gonna cost a lot less, you know?[00:49:02] Alessio: So now it's like, why would I use RAG and, like, go through all of that? It's interesting. I think RAG is better cutting-edge economics for LLMs. I think large context will be better long-tail economics when you factor in the build cost of, like, managing a RAG pipeline. But yeah, the recall was like the most interesting thing, because we've seen the, you know, needle-in-the-haystack things in the past, but apparently they have 100 percent recall on anything across the context window.[00:49:28] Alessio: At least they say. Nobody has used it. No, people[00:49:30] swyx: have.
Yeah, so what this needle in a haystack thing is, for people who aren't following as closely as us, is that someone, I forget his name now, someone created this needle in a haystack problem where you feed in a whole bunch of generated junk, not junk, but just, like, generated data, and ask it to specifically retrieve something in that data, like one line in, like, a hundred thousand lines where it has a specific fact, and if you get it, you're good.[00:49:57] swyx: And then he moves the needle around, like, you know, does your ability to retrieve that vary if I put it at the start versus put it in the middle, put it at the end? And then you generate this really nice chart that kind of shows, like, the recallability of a model. And he did that for GPT and Anthropic and showed that Anthropic did really, really poorly.[00:50:15] swyx: And then Anthropic came back and said it was a skill issue, just add these, like, four magic words, and then it's magically all fixed. And obviously everybody laughed at that. But what Gemini came out with was, yeah, we reproduced their, you know, needle in a haystack test for Gemini, and it's good across the whole one million token window, which is very interesting, because usually for typical context extension methods like RoPE or YaRN or, you know, anything like that, or ALiBi, it's lossy. Like, by design it's lossy. Usually for conversations that's fine, because we are lossy when we talk to people, but for superhuman intelligence, perfect memory across very, very long context is very, very interesting for picking things up. And so the people who have been given the beta test for Gemini have been testing this. So what you do is you upload, let's say, all of Harry Potter, and you change one fact in one sentence somewhere in there, and you ask it to pick it up, and it does.
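As an aside, the needle-in-a-haystack test described here is simple to sketch. Below is a minimal harness that builds the haystack and sweeps the needle position; `ask_model` is a toy stand-in (it just searches the text) that you would replace with a real LLM API call to evaluate an actual model.

```python
def make_haystack(n_lines: int, needle: str, needle_pos: float) -> str:
    """Build a long filler document with one 'needle' fact inserted
    at a relative depth (0.0 = start, 1.0 = end)."""
    filler = [f"Background sentence number {i}." for i in range(n_lines)]
    idx = int(needle_pos * (n_lines - 1))
    filler.insert(idx, needle)
    return "\n".join(filler)

def ask_model(context: str, question: str) -> str:
    """Toy stand-in for an LLM call: it just scans the context.
    Replace with a real API call to test a real model's recall."""
    for line in context.splitlines():
        if "magic number" in line:
            return line
    return "not found"

def recall_at_depths(depths, n_lines=10_000):
    """Sweep the needle position and record a hit or miss at each depth.
    This is the data behind the recall charts mentioned above."""
    needle = "The magic number Alessio picked is 7481."
    results = {}
    for d in depths:
        ctx = make_haystack(n_lines, needle, d)
        answer = ask_model(ctx, "What is the magic number?")
        results[d] = "7481" in answer
    return results

print(recall_at_depths([0.0, 0.25, 0.5, 0.75, 1.0]))
```

With a real model plugged in, plotting `results` across depths and context lengths reproduces the heatmap-style charts from the original experiment.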
So this is legit.[00:51:08] swyx: We don't super know how, because, yes, it's slow to inference, but it's not slow enough that it's, like, running five different systems in the background without telling you. Right. So it's something interesting that they haven't fully disclosed yet. The open source community has centered on this Ring Attention paper, which is created by your friend Matei Zaharia and a couple other people.[00:51:36] swyx: And it's a form of distributing the compute. I don't super understand, like, why calculating the feedforward network and attention in block-wise fashion and distributing it makes it so good at recall. I don't think they have any answer to that. The only thing that Ring Attention is really focused on is basically infinite context.[00:51:59] swyx: They said it was good for, like, 10 to 100 million tokens, which is just great. So yeah, using the four wars framework, what is this framework for Gemini? One is the sort of RAG and Ops war. Here we care less about RAG now. Yes, or we still care as much about RAG, but, like, now it's not as important in prototyping.[00:52:21] swyx: And then, for the data war, I guess this is just part of the overall training dataset, but Google made a $60 million deal with Reddit, and presumably they have deals with other companies. For the multi-modality war, we can talk about the image generation crisis, or the fact that Gemini also has image generation, which we'll talk about in the next section.[00:52:42] swyx: But it also has video understanding, which is, I think, the top Gemini post came from our friend Simon Willison, who basically did a short video of him scanning over his bookshelf. And it would be able to convert that video into a JSON output of what's on that bookshelf. And I think that is very useful.[00:53:04] swyx: Actually ties into the conversation that we had with David Luan from Adept.
In a sense of like, okay what if video was the main modality instead of text as the input? What if, what if everything was video in, because that's how we work. We, our eyes don't actually read, don't actually like get input, our brains don't get inputs as characters.[00:53:25] swyx: Our brains get the pixels shooting into our eyes, and then our vision system takes over first, and then we sort of mentally translate that into text later. And so it's kind of like what Adept is kind of doing, which is driving by vision model, instead of driving by raw text understanding of the DOM. And, and I, I, in that, that episode, which we haven't released I made the analogy to like self-driving by lidar versus self-driving by camera.[00:53:52] swyx: Mm-Hmm. , right? Like, it's like, I think it, what Gemini and any other super long context that model that is multimodal unlocks is what if you just drive everything by video. Which is[00:54:03] Alessio: cool. Yeah, and that's Joseph from Roboflow. It's like anything that can be seen can be programmable with these models.[00:54:12] Alessio: You mean[00:54:12] swyx: the computer vision guy is bullish on computer vision?[00:54:18] Alessio: It's like the rag people. The rag people are bullish on rag and not a lot of context. I'm very surprised. The, the fine tuning people love fine tuning instead of few shot. Yeah. Yeah. The, yeah, the, that's that. Yeah, the, I, I think the ring attention thing, and it's how they did it, we don't know. And then they released the Gemma models, which are like a 2 billion and 7 billion open.[00:54:41] Alessio: Models, which people said are not, are not good based on my Twitter experience, which are the, the GPU poor crumbs. It's like, Hey, we did all this work for us because we're GPU rich and we're just going to run this whole thing. And

Daily Tech News Show
Epic Apple Bingo! - DTNS 4720

Daily Tech News Show

Play Episode Listen Later Mar 6, 2024 32:40


Scott goes over all the games revealed at Xbox's partner showcase. Plus, Apple terminated Epic's developer account, thwarting its plans to bring a third-party App Store to iOS in Europe. What happens next? And OpenAI published an open letter Tuesday in response to Elon Musk's legal claims that OpenAI reneged on its non-profit mission to help humanity. Is this just drama or is something else afoot? Starring Tom Merritt, Sarah Lane, Roger Chang, Joe. Link to the Show Notes.

Techmeme Ride Home
Wed. 03/06 – OpenAI Claps Back At Elon's Lawsuit

Techmeme Ride Home

Play Episode Listen Later Mar 6, 2024 16:17


Apple shows how it is complying with the DMA. We've got a Microsoft hardware event coming up. I wonder if they'll mention AI? Has BlackCat been defeated, or is this a clever ruse to rebrand? And OpenAI has clapped back at Elon's lawsuit. Links: Apple Releases iOS and iPadOS 17.4 with Major Safari and App Store Changes in the EU, Transcripts for Podcasts, New Emoji, and More (MacStories); EXCLUSIVE: Microsoft will unveil OLED Surface Pro 10 and Arm Surface Laptop 6 this spring ahead of major Windows 11 AI update (Windows Central); 'Exit scam' - hackers that hit UnitedHealth pull disappearing act (Reuters); OpenAI Fires Back at Musk Allegations With Trove of Emails (Bloomberg); Sam Altman tells staff OpenAI investigation will 'soon' close, as employees brace for more surprises (Business Insider); YouTube Video Of The Guy Kawasaki Interview. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Daily Tech News Show (Video)
Epic Apple Bingo! – DTNS 4720

Daily Tech News Show (Video)

Play Episode Listen Later Mar 6, 2024 32:40


Scott goes over all the games revealed at Xbox's partner showcase. Plus, Apple terminated Epic's developer account, thwarting its plans to bring a third-party App Store to iOS in Europe. What happens next? And OpenAI published an open letter Tuesday in response to Elon Musk's legal claims that OpenAI reneged on its non-profit mission to help humanity. Is this just drama or is something else afoot? Starring Tom Merritt, Sarah Lane, Scott Johnson, Roger Chang, Joe. To read the show notes in a separate page, click here! Support the show on Patreon by becoming a supporter!

Nightly Business Report
Powell on Capitol Hill, NYCB Capital Infusion, Musk vs. OpenAI 3/6/24

Nightly Business Report

Play Episode Listen Later Mar 6, 2024 44:18


The Fed's not ready to start cutting rates, according to Chair Powell. We'll break down his first day of testimony on Capitol Hill and what it means for investors. Plus, NYCB is plunging on reports the bank is seeking a major capital infusion. And OpenAI is firing back to Elon Musk's lawsuit by publishing some of Musk's emails that appear to contradict what he's said recently about OpenAI.

The Best One Yet

Capital One is acquiring Discover Card for $35B, the biggest acquisition of the year — Because credit cards are still a fashion accessory. Wawa, the Philadelphia-based convenience store, is expanding nationwide — But Wawa's secret to success isn't the hoagies, it's the bathrooms. And OpenAI launched Sora, their text-to-video AI product that creates Hollywood-style video in seconds — But like nuclear power, Sora can be good or bad. $COF $DFS. Subscribe to our newsletter: tboypod.com/newsletter. Want merch, a shoutout, or got TheBestFactYet? Go to: www.tboypod.com. Follow The Best One Yet on Instagram, Twitter, and Tiktok: @tboypod. And now watch us on Youtube. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

WSJ Tech News Briefing
TNB Tech Minute: Meta Escalates Its Feud With Apple

WSJ Tech News Briefing

Play Episode Listen Later Feb 15, 2024 2:24


Plus: Two senators criticize the FDA for its guidance on prescription-drug promotion on social media. And OpenAI introduces a new text-to-video AI tool. Alex Ossola hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices

WSJ What’s News
Why Americans Aren't Buying the Economic Hype

WSJ What’s News

Play Episode Listen Later Feb 9, 2024 17:08


A.M. Edition for Feb. 9. A range of measures show the U.S. economy is on the upswing amid brisk consumer spending, tempering inflation and low unemployment. However, WSJ editor Aaron Zitner explains that many Americans feel a deep sense of financial pessimism that is confounding economists, investors and business owners. Plus, President Biden rejects suggestions that his memory is fading. And OpenAI founder Sam Altman seeks trillions of dollars to reshape the business of chips and AI. Luke Vargas hosts.  Learn more about your ad choices. Visit megaphone.fm/adchoices

PNR: This Old Marketing | Content Marketing with Joe Pulizzi and Robert Rose

Big news in media and marketing as Informa purchases a majority stake in TechTarget. Is their strategy of "focused scaling" correct for B2B? Apple overhauls their podcast download system, sending podcast leaders into a panic over revised download numbers. Ah...rented land. And OpenAI launches Chat GPTeams. Rants and raves include Reddit's court victory over WallStreetBets and a big marketing mistake at Solo brands. This week's links: TechTarget x Informa The Incredible Shrinking Podcast So Much Media Consumption that's Wrong OpenAI Debuts Teams Reddit Wins Lawsuit Solo Brands Names New CEO WEBINAR: Unlocking the Path to Generative AI (this Thursday) ------ This week's sponsors: Smart sales software for today's multitasking reps that's built to help you manage every stage of your sales pipeline with ease. Work smarter, not harder at Hubspot.com/sales ------ Liked this show? SUBSCRIBE to this podcast on Spotify, Apple, Google and more. Catch past episodes and show notes at ThisOldMarketing.com. Catch and subscribe to our NEW show on YouTube. Subscribe to Joe Pulizzi's Orangeletter and get two free downloads direct from Joe. Subscribe to Robert Rose's newsletter at Experience Advisors.

WSJ Tech News Briefing
Why OpenAI's Customers Are Eyeing Its Competitors

WSJ Tech News Briefing

Play Episode Listen Later Jan 5, 2024 12:44


Major companies like Walmart have already embraced AI tools and made them an important part of their business operations. And OpenAI, the company behind ChatGPT, has been leading that field for a while. But after the turmoil inside the company late last year, some customers realized that being too reliant on any one company might be a mistake. Now, OpenAI competitors are moving in. WSJ tech reporter Tom Dotan joins host Alex Ossola to talk about what this means for OpenAI, and the AI industry at large. Learn more about your ad choices. Visit megaphone.fm/adchoices

TechCheck
Apple Watch Import Ban, Intel's Foundry Business, and OpenAI's Valuation 12/26/23

TechCheck

Play Episode Listen Later Dec 26, 2023 12:38


A trio of topics for today's podcast - Apple has now been banned from importing and selling the latest line of Apple Watches in the U.S., stemming from a patent dispute over the blood-oxygen sensors in the devices. Intel is getting several billion dollars to build a new chip factory. And OpenAI is reportedly in talks to raise a fresh round of funding at a valuation of $100 billion or more.  

Screaming in the Cloud
Taking a Hybrid AI Approach to Security at Snyk with Randall Degges

Screaming in the Cloud

Play Episode Listen Later Nov 29, 2023 35:57


Randall Degges, Head of Developer Relations & Community at Snyk, joins Corey on Screaming in the Cloud to discuss Snyk's innovative AI strategy and why developers don't need to be afraid of security. Randall explains the difference between Large Language Models and Symbolic AI, and how combining those two approaches creates more accurate security tooling. Corey and Randall also discuss the FUD phenomenon to selling security tools, and Randall expands on why Snyk doesn't take that approach. Randall also shares some background on how he went from being a happy Snyk user to a full-time Snyk employee. About RandallRandall runs Developer Relations & Community at Snyk, where he works on security research, development, and education. In his spare time, Randall writes articles and gives talks advocating for security best practices. Randall also builds and contributes to various open-source security tools.Randall's realms of expertise include Python, JavaScript, and Go development, web security, cryptography, and infrastructure security. Randall has been writing software for over 20 years and has built a number of popular API services and open-source tools.Links Referenced: Snyk: https://snyk.io/ Snyk blog: https://snyk.io/blog/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn, and this featured guest episode is brought to us by our friends at Snyk. Also brought to us by our friends at Snyk is one of our friends at Snyk, specifically Randall Degges, their Head of Developer Relations and Community. Randall, thank you for joining me.Randall: Hey, what's up, Corey? 
Yeah, thanks for having me on the show, man. Looking forward to talking about some fun security stuff today.Corey: It's been a while since I got to really talk about a security-centric thing on this show, at least in order of recordings. I don't know if the one right before this is a security thing; things happen on the back-end that I'm blissfully unaware of. But it seems the theme lately has been a lot around generative AI, so I'm going to start off by basically putting you in the hot seat. Because when you pull up a company's website these days, the odds are terrific that they're going to have completely repositioned absolutely everything that they do in the context of generative AI. It's like, “We're a generative AI company.” It's like, “That's great.” Historically, I have been a paying customer of Snyk so that it does security stuff, so if you're now a generative AI company, who do I use for the security platform thing that I was depending upon? You have not done that. First, good work. Secondly, why haven't you done that?Randall: Great question. Also, you said a moment ago that LLMs are very interesting, or there's a lot of hype around it. Understatement of the last year, for sure [laugh].Corey: Oh, my God, it has gotten brutal.Randall: I don't know how many billions of dollars have been dumped into LLMs in the last 12 months, but I'm sure it's a very high number.Corey: I have a sneaking suspicion that the largest models cost at least a billion each to train, just based upon—at least retail price—based upon the simple economics of how long it takes to do these things, how expensive that particular flavor of compute is. And the technology is magic. It is magic in a box and I see that, but finding ways that it applies in different ways is taking some time. But that's not stopping the hype beasts.
A lot of the same terrible people who were relentlessly pushing crypto have now pivoted to relentlessly pushing generative AI, presumably because they're working through Nvidia's street team, or their referral program, or whatever it is. Doesn't matter what the rest of us do, as long as we're burning GPU cycles on it. And I want to distance myself from that exciting level of boosterism. But it's also magic.Randall: Yeah [laugh]. Well, let's just talk about AI in security for a moment and answer your previous question. So, what's happening in the space, what's the deal, where is all the hype going, and what is Snyk doing around there? So, quite frankly—and I'm sure a lot of people on your show say the same thing—but Snyk isn't new to, like, the AI space. It's been a fundamental part of our platform for many years now.So, for those of you listening who have no idea what the heck Snyk is, and you're like, “Why are we talking about this,” Snyk is essentially a developer security company, and the core of what we do is two things. The first thing is we help scan your code, your dependencies, your containers, all the different parts of your application, and detect vulnerabilities. That's the first part. The second thing we do is we help fix those vulnerabilities. So, detection and remediation. Those are the two components of any good security tool or security company.And in our particular case, we're very focused on developers because our whole product is really based on your application and your application security, not infrastructure and other things like this. So, with that being said, what are we doing at a high level with LLMs? Well, if you think about AI as, like, a broad spectrum, you have a lot of different technologies behind the scenes that people refer to as AI. You have lots of these large language models, which are generating text based on inputs. You also have symbolic AI, which has been around for a very long time and which is very domain specific.
It's like creating specific rules and doing pattern detection amongst things.

And those two different types of applied AI, let's say—large language models and symbolic AI—are the two main things that have been happening in industry for the last, you know, tens of years, really, with LLMs being the new kid on the block. So, when we're talking about security, what's important to know about those two underlying technologies? Well, the first thing is that large language models, as I'm sure everyone listening to this knows, are really good at predicting things based on a big training set of data. That's why companies like OpenAI and their ChatGPT tool have become so popular: they've gone out and crawled vast portions of the internet, downloaded tons of data, classified it, and then trained their models on top of this data so that they can help predict the things that people are putting into chat. And that's why they're so interesting, and powerful, and there's all these cool use cases popping up with them.

However, the downside of LLMs is that because they're just using a bunch of training data behind the scenes, there's a ton of room for things to be wrong. Training datasets aren't perfect, they're coming from a ton of places, and even if they were perfect, there's still the likelihood that output generated from a statistical model isn't going to be accurate, which is the whole concept of hallucinations.

Corey: Right. I wound up remarking on the livestream for GitHub Universe a week or two ago that the S in AI stood for security. One of the problems I've seen with it is that it can generate a very plausible-looking IAM policy if you ask it to, but it doesn't actually do what you think it would if you go ahead and actually use it.
I think that it's still squarely in the realm of: it's great at creativity, it's great at surface-level knowledge, but for anything important, you really want someone who knows what they're doing to take a look at it and say, "Slow your roll there, Hasty Pudding."

Randall: A hundred percent. And when we're talking about LLMs, I mean, you're right. Security isn't really what they're designed to do, first of all [laugh]. Like, they're designed to predict things based on statistics, which is not a security concept. But secondly, another important thing to note is, when you're talking about using LLMs in general, there are so many tricks and techniques and things you can do to improve accuracy. For example, having a ton of [context 00:06:35], or doing few-shot learning techniques—where you prompt it and give it examples of the questions and answers that you're looking for—can give you a slight edge in terms of reducing hallucinations and false information.

But fundamentally, LLMs will always have a problem with hallucinations and getting things wrong. So, that brings us to what we mentioned before: symbolic AI and what the differences are there. Well, symbolic AI is a completely different approach. You're not taking huge training sets and using machine learning to build statistical models. It's very different. You're creating rules, and you're parsing very specific domain information to generate things that are highly accurate, although those models will fail when applied to general-purpose things, unlike large language models.

So, what does that mean? You have these two different types of AI that people are using. You have symbolic AI, which is very specific and requires a lot of expertise to create, then you have LLMs, which take a lot of expertise to create as well, but are very broad and general purpose and have a capability to be wrong.
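The few-shot technique Randall mentions can be sketched as a prompt that pairs example snippets with the answers you want before posing the real question. Everything below (the example pairs, the helper, the output format) is a hypothetical illustration of the general pattern, not a Snyk or OpenAI API:

```python
# Hypothetical few-shot prompt builder for a security-review question.
EXAMPLES = [
    ('query = "SELECT * FROM users WHERE id = " + user_id',
     "Vulnerable: SQL injection. Use a parameterized query."),
    ('cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))',
     "Safe: input is bound as a parameter."),
]

def build_few_shot_prompt(code_snippet: str) -> str:
    # Prepend worked examples so the model mirrors their format and rigor.
    parts = ["You are a code security reviewer. Answer in the style of the examples."]
    for snippet, answer in EXAMPLES:
        parts.append(f"Code: {snippet}\nAnswer: {answer}")
    parts.append(f"Code: {code_snippet}\nAnswer:")
    return "\n\n".join(parts)
```

The assembled string would then go to whatever completion API you use; the payoff is a modest reduction in off-format or hallucinated answers, not a guarantee.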
Snyk's approach is, we take both of those concepts, and we use them together to get the best of both worlds. And we can talk a little bit about that, but I think fundamentally, one of the things that separates Snyk from a lot of other companies in the space is we're just trying to do whatever the best technical solution is to solve the problem, and I think we found that with our hybrid approach.

Corey: I think that there is a reasonable distrust of AI when it comes to security. I mean, I wound up recently using it to build what has been announced by the time this thing airs, which is my re:Invent photo scavenger hunt app. I know nothing about front-end, so that's okay, I've got a robot in my pocket. It's great at doing the development of the initial thing, but then you have issues, and you want to add functionality, and it feels like by the time I was done with my first draft, ten different engineers had all collaborated on this thing without ever speaking to one another. There was no consistent idiomatic style, it used a hodgepodge of different approaches and the rest, and it became a bit of a Frankenstein's monster.

That can kind of work if we're talking about a web app that doesn't have any sensitive data in it, but holy crap, the idea of applying that to, "Yeah, that's how we built our bank's security policy," is one of those, "Let me know who said that, so they can not have their job anymore," territories when the CSO starts [hunting 00:08:55].

Randall: You're right. It's a very tenuous situation to be in from a security perspective. The way I like to think about it—because I've been a developer for a long time and a security professional—is that I, as much as anyone out there, love to jump on the hype train for things and do whatever I can to be lazy and just get work done quicker. And so, I use ChatGPT, I use GitHub Copilot, I use all sorts of LLM-based tools to help me write software.
And similar to when developers are writing code without an LLM's help, security is always a concern. Like, it doesn't matter if you have a developer writing every line of code themselves or if they're getting help from Copilot or ChatGPT. Fundamentally, the problem with security—and the reason why it's such an annoying part of the developer experience, in all honesty—is that security is really difficult. You can take someone who's an amazing engineer with 30 years of experience—take John Carmack, one of the most legendary developers to ever walk the Earth—and if you sat over his shoulder and watched him write software, I can almost guarantee you that he's going to have some sort of security problem in his code, even with all the knowledge he has in his head. And part of the reason that's the case is because modern security is way complicated. Like, if you're building a web app, you have front-end stuff you need to protect, you have back-end stuff you need to protect, there's databases and infrastructure and communication layers between the infrastructure and the services. It's just too complicated for one person to fully grasp.

And so, what do you do? Well, you basically need some sort of assistance from automation. You have to have some sort of tooling that can take a look at the code you're writing and say, "Hey Randall, on line 39, where you're writing this function that's taking user data and doing something with it, you forgot to sanitize the user data." Now, that's a simple example, but let's talk about a more complex example.
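That simple "line 39" style of finding has a canonical concrete form. Here is a minimal sketch (SQLite chosen purely for illustration) of the vulnerable pattern a scanner would flag and the parameterized fix it would suggest:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Flagged: untrusted input concatenated straight into the query string.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'").fetchall()

def find_user_fixed(conn, username):
    # Remediation: bind the value as a parameter; the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()
```

The classic payload `x' OR '1'='1` returns every row through the first version and nothing through the second, which is exactly the class of bug sanitization findings are about.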
Maybe you're building some authentication software, and you're taking users' passwords, and you're hashing them using a common hashing algorithm.

And maybe the tooling is able to detect that you're using the bcrypt password hashing algorithm with a work factor of ten to create this password hash, but guess what, we're in 2023 and a work factor of ten is something that older commodity CPUs can now brute-force at a reasonable rate, and so you need to bump that up to 13 or 14. These are the types of things where you need help over time. It's not something that anyone can reasonably assume they can just deal with in their head. The way I like to think about it is, as a developer, regardless of how you're building code, you need some sort of security checks in there to just help you be productive, in all honesty. Like, if you're not doing that, you're just asking for problems.

Corey: Oh, yeah. On some level, even the idea of it's just going to be very computationally expensive to wind up figuring out what that password hash is, well, great, but one of the things that we've been aware of for a while is that given the rise of botnets and compromised computers, the attackers have what amounts to infinite computing capacity, give or take. So, if they want in badly enough, they're going to find a way to get in there. When you say that every developer is going to sit down and write insecure code, you're right. And a big part of that is because, as imagined today, security is an incredibly high-friction process, and it's not helped, frankly, by tools that don't have nuance or understanding.

If I want to do a crap-ton of busy work that doesn't feel like it moves the needle forward at all, I'll go around resolving the hundreds upon hundreds of Dependabot alerts I have for a lot of my internal services that write my weekly newsletter.
Because some dependency three levels deep winds up having a failure mode where, when it gets untrusted input of a certain type, it can cause resource exhaustion. One, it runs in a Lambda function, so I don't care about the resources; and two, I'm not here providing the stuff that I write—which is the input—with an eye toward exploiting stuff. So, it's busy work, things I don't need to be aware of. But more to the point, stuff like that has a high propensity to mask things I actually do care about. Getting the signal from the noise from your misconfigured, ill-conceived alerting system is just awful. Like, a bad thing is there are no security things for you to work on, but a worse one is, "Here are 70,000 security things for you to work on." How do you triage? How do you think about it?

Randall: A hundred percent. I mean, that's actually the most difficult thing, I would say, that security teams have to deal with in the real world. It's not having a tool to help detect issues or trying to get people to fix them. The real issue is, there's always security problems, like you said, right? Like, if you just scan any codebase out there, any reasonably sized codebase, you're going to find a ridiculous number of issues.

Some of those issues will be actual issues, like, you're not doing something in code hygiene that you need to do to protect stuff. A lot of those issues are meaningless things, like you said. You have a transitive dependency that some direct dependency is referring to, and maybe in some function call, there's an issue there, and it's alerting you on it even though you don't even use this function call. You're not even touching this class, or this method, or whatever it is. And it wastes a lot of time.

And that's why the holy grail in the security industry, in all honesty, is prioritization and insights. At Snyk, we sort of pioneered this concept of ASPM, which stands for Application Security Posture Management.
And fundamentally what that means is, when you're a security team, and you're scanning code and finding all these issues, how do you prioritize them? Well, there's a couple of approaches. One approach is to use static analysis to try to figure out if the issues being detected are reachable, right? Like, can they actually be triggered in some way? But that's really hard to do statically, and there's so many variables that go into it that no one really has foolproof solutions there.

The second thing you can do is combine insights and heuristics from a lot of different places. So, you can take static code analysis results, and you can combine them with agents running live that are observing your application, and then you can try to determine what stuff is actually reachable given that real-world, real-time information mapped against the static code analysis results. And that's really the holy grail of figuring things out. We have an ASPM product—or maybe it's a feature, an offering, if you will, but it's something that Snyk provides—which gives security admins a lot more insight into that type of operation at their business. But you're totally right, Corey, it's a really difficult problem to solve, and it burns a lot of goodwill in the security community and in the industry because people spend a lot of time getting false alerts, going through stuff, and just wasting millions of hours a year, I'm sure.

Corey: That's part of the challenge, too: it feels like there are two classes of problems in the world, at least when it comes to business. And I found this by being on the wrong side of it, on some level. Here on the wrong side, it's things like caring about cost optimization, caring about security, remembering to buy fire insurance for your building.
You can wind up doing all of those things—and you should be doing them—but you can over-index on them to the point where you run out of money and your business dies. The proactive side of that fence is getting features to market sooner, increasing market share, growing revenue, et cetera, and that's the stuff that people are always going to prioritize over the back-burner stuff. So, striking a balance between those is always going to be a bit of a challenge, and where people land on it is going to be tricky.

Randall: So, I think this is a really good bridge. You're totally right. It's expensive to waste people's time, basically, is what you're saying, right? You don't want to waste people's time, you want to give them actionable alerts that they can actually fix, or hopefully you fix it for them if you can, right? So, I'm going to lay something out, which is, in our opinion, the Snyk way, if you will, that you should be approaching these developer security issues.

So, let's take a look at two different approaches. The first approach is going to be using an LLM, like, let's say, just ChatGPT. We'll call them out because everyone knows ChatGPT. The first approach we're going to take is—

Corey: Although I do insist on pronouncing it Chat-Gippity. But please, continue.

Randall: [laugh]. Chat-Gippity. I love that. I haven't heard that before. Chat-Gippity. Sounds so much more fun, you know?

Corey: It sounds more personable. Yeah.

Randall: Yeah. So, you're talking to Chat-Gippity—thank you—and you paste in a file from your codebase, and you say, "Hey, Chat-Gippity. Here's a file from my codebase. Please help me identify security issues in here," and you get back a long list of recommendations.

Corey: Well, it does more than that.
Let me just interject there, because one of the things it does that I think very few security engineers have mastered is it does it politely and constructively, as opposed to having an unstated tone of, "You dumbass," which I beli—I've [unintelligible 00:17:24] with prompts on this. You can get it to have a condescending, passive-aggressive tone, but you have to go out of your way to do it, as opposed to it being the default. Please continue.

Randall: Great point. Also, Daniel from Unsupervised Learning, by the way, has a really good post where he shows you setting up Chat-Gippity to mimic Scarlett Johansson from the movie Her on your phone so you can talk to it. Absolutely beautiful. And you get these really fun, very nice responses back and forth around your code analysis. So, shout out there.

But going back to the point. So, if you get these responses back from Chat-Gippity, and it's like, "Hey look, here's all the security issues," a lot of those things will be false alerts—and there's been a lot of public security research done on the accuracy of the information these analysis tools give you. Some things will be real problems, but maybe can't be fixed due to transitive dependencies, or whatever the issues are; there's a lot you need to sort through there. Now, let's take it up one notch. Let's say instead of using Chat-Gippity directly, you're using GitHub Copilot. Now, this is a much better situation for working with code, because now what Microsoft is doing is, let's say you're running Copilot inside of VS Code.
It's able to analyze all the files in your codebase, and it's able to use that additional context to help provide you with better information.

So, you can talk to GitHub Copilot and say, "Hey, I'd really like to know what security issues are in this file," and it's going to give you maybe slightly better answers than ChatGPT directly, because it has more context about the other parts of your codebase. However, because these things are LLMs, you're still going to run into issues with accuracy, and hallucinations, and all sorts of other problems. So, what is the better approach? And I think that's fundamentally what people want to know. Like, what is a good approach here?

On the scanning side, the right approach in my mind is using something very domain specific. Now, what we do at Snyk is we have a symbolic AI scanning engine. So, we take customers' code—an entire codebase, so you have access to all the files and dependencies and things like this—and we take a look at it. And we have a security analyst team that analyzes real-world security issues and fixes that have been validated. We do this by pulling lots of open-source projects, as well as other security information that we originally produced, and we define very specific rules so that we can take a look at software, and at these codebases, with a very high degree of certainty.

And we can give you a very actionable list of security issues that you need to address, and not only that, we can show you what is going to be the best way to address them. So, with that being said, the second side to this is: okay, if that's a better approach on the scanning side—maybe you shouldn't be using LLMs for finding issues—maybe you should be using them for fixing security issues instead, which makes a lot of sense. So, let's say you do it the Snyk way, and you use symbolic AI engines and you sort of find these issues.
Maybe you can just take that information then, in combination with your codebase, and fire off a request to an LLM and say, "Hey Chat-Gippity, please take this codebase, and take this security information that we know is accurate, and fix this code for me." So, now you're going one step further.

Corey: One challenge that I've seen, especially as I've been building weird software projects with the help of magic robots from the future, is that a lot of components, in React for example, get broken out into their own files. And pasting a file in is all well and good, but very often, it needs insight into the rest of the codebase. At GitHub Universe, something that they announced was Copilot Enterprise, which trains Copilot on the intricacies of your internal structures around shared libraries, all of your code, et cetera. And in some of the companies I'm familiar with, I really believe that's giving a very expensive, smart robot a form of brain damage, but that's neither here nor there. But the interplay between different components that individual analysis on a per-file basis will miss feels to me like something that needs a more holistic view. Am I wrong on that? Am I oversimplifying?

Randall: You're right. There's two things we need to address. First of all, let's say you have the entire application context—so all the files, right—and then you ask an LLM to create a fix for you. This is something we do at Snyk. We actually use LLMs for this purpose. So, we take this information and ask the LLM, "Hey, please rewrite this section of code that we know has an issue, given this security information, to remove this problem." The problem then becomes: okay, how do you know this fix is accurate and is not going to break people's stuff?

And that's where symbolic AI becomes useful again. Because again, what is the use case for symbolic AI?
It's taking very specific domains of things that you've created very specific rule sets for and using them to validate things, or to pass arbitrary checks, and things like that. And it's a perfect use case for this. So, here's what we actually do with our auto-fix product: if you're using VS Code and you have Copilot, right, and Copilot's spitting out software, as long as you have Snyk in the IDE too, we're actually taking a look at those lines of code Copilot just inserted, and a lot of the time, we are helping you rewrite that code to be secure using our LLM stuff. But then, as soon as we get that fix created, we run it through our symbolic engine, and if it says no, it's actually not fixed, then we go back to the LLM and re-prompt it, over and over again, until we get a working solution.

And that's essentially how we create a much more sophisticated iteration, if you will, of using AI to really help improve code quality. But all that being said, you still had a good point, which is: if you're using the context from the application, and people aren't doing things properly, how does that impact what LLMs are generating for you? And an interesting thing to note is that our security team internally here just conducted a really interesting project, and I would be angry at myself if I didn't explain it, because I think it's a very cool concept.

Corey: Oh, please, I'm a big fan of hearing what people get up to with these things in ways that are real-world stories, not trying to sell me anything, and also not dunking on whatever I saw on the top of Hacker News the other day, which is, "If all you're building is something that talks to Chat-Gippity's API, does some custom prompting, and returns a response, you shouldn't be building it." I'm like, "Well, I built some things that do exactly that." But I'm also not trying to raise $6 million in seed money to go and productize it. I'm just hoping someone does it better eventually, but I want to use it today.
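The generate-validate-retry flow Randall describes (an LLM proposes a patch, a deterministic symbolic check verifies it, and the LLM is re-prompted on failure) boils down to a small loop. This sketch is a hypothetical reconstruction, not Snyk's implementation; `propose_fix` and `still_vulnerable` stand in for the real LLM call and symbolic engine:

```python
def auto_fix(code, issue, propose_fix, still_vulnerable, max_attempts=5):
    # propose_fix(code, issue, feedback) -> candidate patch (an LLM call in practice)
    # still_vulnerable(candidate)        -> bool (a deterministic symbolic check in practice)
    feedback = ""
    for attempt in range(max_attempts):
        candidate = propose_fix(code, issue, feedback)
        if not still_vulnerable(candidate):
            return candidate  # verified fix: accept it
        feedback = f"attempt {attempt + 1} still failed the security check"
    return None  # give up and surface the issue to a human
```

The key design point is that the cheap, deterministic check gates the expensive, probabilistic generator, so a hallucinated "fix" never ships unverified.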
Please tell me a real-world story about something that you've done.

Randall: Okay. So, here's what we did. We went out and we found a bunch of GitHub projects, and we tried to analyze them ourselves using a bunch of different tools, including human verification, and basically gave each a grade: "Okay, this project here has really good security hygiene. Like, there's not a lot of issues in the code, things are written in a nice way, the style and formatting is consistent, the dependencies are up to date, et cetera." Then we took a look at multiple GitHub repos that are the opposite of that, right? Maybe projects that hadn't been maintained in a long time, or were written in a completely different style with bad hygienic practices—maybe hard-coded secrets, maybe unsanitized input coming from a user, or something like that. But you take all these things.

So, we have these known examples of good and bad projects. So, what did we do? Well, we opened them up in VS Code with GitHub Copilot, and we said, "Okay, what we're going to do is use each of these codebases, and we're going to try to add features into the projects one at a time." And we took a look at the suggested output that Copilot was giving us in each of these cases. And the interesting thing is—and I think this is super important to understand about LLMs—if we were adding features to a project that has good security hygiene, the code that we were able to get out of LLMs like GitHub Copilot was pretty good. There weren't a ton of issues with it. Like, the actual security hygiene was, like, fairly good.

However, for projects where there were existing issues, it was the opposite. Like, we'd get AI recommendations showing us how to write things insecurely, or potentially write things with hard-coded secrets in them.
And this is something that's very reproducible today in—you know, what is it right now—the middle of November 2023. Now, is it going to be the case a year from now? I don't necessarily know, but right now, this is still a massive problem, and it really reinforces the idea that when you're talking about LLMs, not only is the training set used to build the model important, but the context in which you're using them is incredibly important as well.

It's very easy to mislead LLMs. Another example of this: if you think about the security scanning concept we talked about earlier, imagine you're talking to Chat-Gippity, and you're [pasting 00:25:58] in a Python function, and the Python function is called "Completely_safe_not_vulnerable_function." That's the function name. And inside of that function, you're backdooring some software. Well, if you ask Chat-Gippity multiple times—say the temperature is set to 1.0—"Is this code safe?"

Sometimes you'll get the answer yes, because the context within the request—that thing saying this is not a vulnerable function, or whatever you want to call it—can mislead the LLM output and result in problems, you know? It's just, like, classic prompt injection type issues. But there's a lot of these types of vulnerabilities still hidden in plain sight that impact all of us, and so it's important to know that you can't just rely on one thing. You have to have multiple layers: something that helps you find things, but also something that helps you fix things when needed.

Corey: I think the key that gets missed a lot is the idea that it's not just what's here, or what have you put here that shouldn't be—what have you forgotten? There's a different side of it. It's easy to do a static analysis and say, "Oh, you're not sanitizing your input on this particular form." Great. Okay—well, I say it's easy.
I wish more people would do that—but then there's also a step beyond: what is it that someone with expertise, who's been down this road before, would take one look at your codebase and say, "Are you making this particular misconfiguration or common misstep?"

Randall: Yeah, it's incredibly important. You know, like I said, security is just one of those things that's really broad. I've been working in security for a very long time, and I make security mistakes all the time myself.

Corey: Yeah. Like, in your developer environment right now: you ran this against the production environment and didn't get permissions errors. That is suspicious. Tell me more about your authentication pattern.

Randall: Right. I mean, there's just a ton of issues that can cause problems. And it's… yeah, it is what it is, right? Like, software security is something difficult to achieve. If it wasn't difficult, everyone would be doing it. Now, if you want to talk about a vision for the future, I actually think there are some really interesting things in the direction I see things going.

Like, a lot of people have been leaning into the whole AI autonomous agents thing over the last year. People started out by taking LLMs and saying, "Okay, I can get it to spit out code, I can get it to spit out this and that." But then you go one step further and say, "All right, can I get it to write code for me and execute that code?" And OpenAI, to their credit, has done a really good job advancing some of the capabilities here, as have a lot of open-source frameworks. You have Langchain, and Baby AGI, and AutoGPT, and all these different things that make it more feasible to give AI access to actually do real, meaningful things.

And I can absolutely imagine a world in the future—maybe it's a couple of years from now—where you have developers writing software, and it could be a real developer, it could be an autonomous agent, whatever it is.
And then you also have agents that are taking a look at your software and rewriting it to solve security issues. And I think when people talk about autonomous agents, a lot of the time they're purely focusing on LLMs. I think that's a big mistake. I think one of the most important things you can do is focus on the very niche symbolic AI engines that are going to be needed to guarantee accuracy with these things.

And that's why I think the Snyk approach is really cool, you know? We've dedicated a huge amount of resources to security analysts building these very in-depth rule sets that guarantee accuracy on results. And I think that's something the industry is going to shift towards more in the future as LLMs become more popular, which is, "Hey, you have all these great tools doing all sorts of cool stuff. Now, let's clean it up and make it accurate." And I think that's where we're headed in the next couple of years.

Corey: I really hope you're right. I think it's exciting times, but I also am leery when companies go too far into boosterism where, "Robots are going to do all of these things for us." Maybe, but even if you're right, you sound psychotic. And that's something that I think gets missed in an awful lot of the marketing that is so breathless with anticipation. I have to congratulate you folks on not getting that draped over your message, once again.

My other favorite part of your messaging is when you pull up snyk.com—sorry, snyk.io. What is it these days? It's the dot io, isn't it?

Randall: Dot io. It's hot.

Corey: Dot io, yes.

Randall: Still hot, you know?

Corey: I feel like I'm turning into a boomer here where, "The internet is dot com."

Randall: [laugh].

Corey: Doesn't necessarily work that way. But no, what I love is the part where you have this fear-based marketing of, if you wind up not using our product, here are all the terrible things that will happen. And my favorite part about that marketing is it doesn't freaking exist.
It is such a refreshing departure from so much of the security industry, which does the fear, uncertainty, and doubt nonsense, that I love that you don't even hint in that direction. My actual favorite thing on your page, of course, is at the bottom. If you mouse over the dog in the logo at the bottom of the page, it does the quizzical head-tilting thing, and I just think that is spectacular.

Randall: So, the Snyk mascot—his name is Pat. He's a Doberman, and everyone loves him. But yeah, you're totally right. The FUD thing is a real issue in security. Fear, uncertainty, and doubt: it's the way security companies sell products to people. And I think it's a real shame, you know?

I give a lot of tech talks, at programming conferences in particular, around security and cryptography, and one of the things I always start out with when I'm giving a tech talk about any sort of security or cryptography topic is I say, "Okay, how many of you have landed in a Stack Overflow thread where you're talking about a security topic and someone replies and says, 'Oh, a professional should be doing this. You shouldn't be doing it yourself'?" That comes up all the time when you're looking at security topics on the internet. Then I ask people, "How many of you feel like security is this sort of obscure, mystical art that requires a lot of expertise and math knowledge, and all this stuff?" And a lot of people sort of have that impression.

The reality, though, is that security—and to some extent, cryptography—is just like any other part of computer science. It's something that you can learn. There's best practices. It's not rocket science, you know? Maybe it is if you're developing a brand-new hashing algorithm from scratch—yes, leave that to the professionals. But using these things is something everyone needs to understand well, and there's tons of material out there explaining how to do things right.
And you don't need to be afraid of this stuff, right?

And so, I think a big part of the Snyk message is, we just want to help developers make their code better. What is one way you're going to do a better job at work and get more of your code through the PR review process? What is a way you're going to get more features out? A big part of that is just building things right from the start. And so, that's really our focus and our message: "Hey developers, we want to be, like, a trusted partner to help you build things faster and better." [laugh].

Corey: It's nice to see it, just because there's so much that doesn't work out the way that we otherwise hope it would. And historically, there's been a tremendous problem of differentiation in the security space. I often remark that at RSA, there's about 12 companies exhibiting. Now sure, there are hundreds of booths, but it's basically the same 12 things. There's, you know, the entire row of firewalls where they use different logos and different marketing words on the slides, but they're all selling fundamentally the same thing. One of the things I've always appreciated about Snyk is it has never felt that way.

Randall: Well, thanks. Yeah, we appreciate that. I mean, our whole focus is just developer security. What can we do to help developers build things securely?

Corey: I mean, you are sponsoring this episode, let's be clear, but also, we are paying customers of you folks, and those things are not related in any way. What's the line that we like to use that we stole from the RedMonk folks? "You can buy our attention, but not our opinion." And our opinion of what you folks are up to has been stratospherically high for a long time.

Randall: Well, I certainly appreciate that as a Snyk employee who is also a happy user of the service.
The way I actually ended up working at Snyk was, I'd been using the product for my open-source projects for years, and I legitimately really liked it and thought it was cool. And yeah, I eventually ended up working here because there was a position, and, you know, a friend reached out to me and stuff. But I am a genuinely happy user and just like the goal and the mission. We want to make developers' lives better, and so it's super important.

Corey: I really want to thank you for taking the time to speak with me about all this. If people want to learn more, where's the best place for them to go?

Randall: Yeah, thanks for having me. If you want to learn more about AI or just developer security in general, go to snyk.io. That's S-N-Y-K, in case it's not clear, dot io. In particular, I would actually go check out our [Snyk Learn 00:34:16] platform, which is linked to from our main site. We have tons of free security lessons on there, showing you all sorts of really cool things. If you check out our blog, my team and I in particular also do a ton of writing on there about a lot of these bleeding-edge topics, so if you want to keep up with cool research in the security space like this, check it out, give it a read. Subscribe to the RSS feed if you want to. It's fun.

Corey: And we will put links to that in the [show notes 00:34:39]. Thanks once again for your support, and of course, for putting up with my slings and arrows.

Randall: And thanks for having me on, and thanks for using Snyk, too. We love you. [laugh].

Corey: Randall Degges, Head of Developer Relations and Community at Snyk. This featured guest episode has been brought to us by our friends at Snyk, and I'm Corey Quinn. If you've enjoyed this episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice, along with an angry comment that I will get to reading immediately.
You can get me to read it even faster if you make sure your username is set to ‘Dependabot.'

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.

Red Pill Revolution
Hidden Agendas: Musk's Shocking Israeli Pact, Biden's Health Crisis Exposed, OpenAI's Secret Program


Play Episode Listen Later Nov 28, 2023 64:58


Get ready for an explosive episode of Adam's Archive, where host Austin Adams tackles some of the most jaw-dropping and contentious global stories of our time. The episode blasts off with Elon Musk's unexpected venture into Israeli politics and his intriguing pact with Netanyahu. The plot thickens with President Biden's contentious Hamas apology, stirring up a political storm. In a shocking health exposé, a former White House doctor spills alarming details about Joe Biden's well-being. Austin then dives into the Orwellian world of the Department of Justice's massive data collection from Trump-related Twitter activities. Brace yourself as the episode unveils a mysterious and potentially catastrophic new illness sweeping through China, triggering flashbacks of the pandemic's early days. The climax hits with the dramatic OpenAI saga, revealing Sam Altman's surprising reinstatement and Elon Musk's cryptic warnings about a secret AI program. Don't miss out on this episode of Adam's Archive, packed with Austin's unfiltered analysis and bold perspectives.

All the links: https://linktr.ee/theaustinjadams
Substack: https://austinadams.substack.com

Full Transcription

Hello, you beautiful people, and welcome to the Adam's Archive. My name is Austin Adams, and thank you so much for listening today. On today's episode, we are going to be jumping into several current events, including Elon Musk visiting Israel. Not only that, but having a discussion with Netanyahu, personally walking through some of the areas of the war and destruction that happened there, as well as coming to a somewhat surprising agreement with him, which we will discuss first. Then we will walk through the next discussion, which also has to do with Israel and Hamas, which is that Biden is apologizing for something that he said more recently about questioning the Hamas death toll. So we'll talk about that.
After that, we'll jump into a former White House doctor, one that served under Bush, Trump, as well as Barack Obama, coming out with some pretty serious warnings about Joe Biden's health. Then we will jump into the Department of Justice collecting some really serious information about basically every single person who interacted with anything about Trump on Twitter. Following that, a discussion around a new sickness, or at least a serious sickness, that's causing waves in China right now. And if that sounds familiar and alarms you, it probably should, but the U.S. is now sounding the alarm after a new Chinese pneumonia outbreak raises serious health concerns. PTSD, anybody? And then we will talk about an update. So last week, uh, the last episode, I talked about the situation with ChatGPT, OpenAI, Sam Altman, um, and all of the employees there, and this crazy wild story that finally came to a head. And one hour after, just one hour, one hour after my podcast, it was like two in the morning when they reinstated Sam Altman, completely got rid of the entire board. But that raised some even weirder questions, something I raised originally, but I'm starting to get the full picture now. And Elon Musk posted something that said QAnon with a star between Q and Anon, but it was actually about OpenAI and potentially AGI, or artificial general intelligence, and his concerns around a secret program within OpenAI. All of that and more. But first I need you to hit that subscribe button, I need you to leave a five star review. Go hit that five star button on Apple Podcasts, subscribe on YouTube. You can watch all of the episodes directly on YouTube. I'll now be posting all of the topic clips individually there as well. Um, so make sure you follow on YouTube, the Adams Archive, you can find me there and watch along, you can see my beautiful face, uh, see the articles that we're talking through together.
Uh, so make sure you head over there, do that, follow, subscribe, leave a five star review. That's honestly the best thing that you can do at this point to help out, is just leave a five star review on Apple Podcasts. Okay. It takes two minutes out of your day. It gives you some of that good belly feeling in your gut just for helping out. All right. I appreciate you guys. Thank you so much for listening. Let's jump into it, the Adams Archive. All right. The very first thing that we are going to discuss today is that Elon Musk met with Netanyahu in Israel. Not only that, but he also walked around Israel looking at all of the, uh, after effects of some of the war that happened there. And this article says Israel tells Elon Musk Starlink can only operate in Gaza with its approval. So now Elon Musk, after putting up Starlink in Ukraine, after putting up Starlink over Gaza, after being the savior to all of these, um, citizens, when there's a war breaking out and the commanding force shuts off their access to the internet to do God knows what, now he's given in. So Netanyahu is going to be telling Elon Musk whether he can or cannot activate their internet with Starlink. So this article says Israel tells Elon Musk Starlink can only operate in Gaza with its approval, says the entrepreneur meets the country's leaders amid fervor over alleged antisemitism on his social platform X. I'm not sure that's what it was exactly about, but it says Israel has told Musk his Starlink satellite network will only be allowed to operate in Gaza with its approval, as the entrepreneur met with the country's leaders amid a furor, furor, f-u-r-o-r-e, over alleged antisemitism on his social platform X. The world's richest man declared last month that his satellite internet service Starlink would support connectivity to internationally recognized aid organizations in Gaza. Seems like a good thing to do, right?
If there's a massive war going on and one of the sides decides to shut off all access to the internet, you know, not only does it do that, but it cuts off their messaging between people. Uh, in families locally, it cuts off their access to the outside world, so they can't post videos of things that are going on there, I don't know, by maybe some organization that's dropping bombs on them or doing terrible things to them, or, I don't know, just everyday things, right? Phone connectivity, internet, work, all of that stops. When you can't access the internet. So originally what happened is Elon Musk came in just like he did in Ukraine and said, okay,  you guys can battle this out, but you're not going to eliminate one person's access to the world. So he has Starlink. And if you don't know anything about Starlink, it's basically satellites. I'm pretty sure Elon Musk has more satellites than any.  Now, something like that, but a Starlink,  uh, is a way that he can turn on and turn off access to wifi and the internet for anybody in the world, basically at any given time, just by utilizing his satellites. And so it says the world's richest man declared late last night that his satellite internet service Starlink would support connectivity to internationally recognized aid organizations in Gaza, which has suffered lengthy blackouts under. Israel's bombardment. But on Monday, Israeli communication minister, Shlomo Kari posted on X that the entrepreneur had reached a principal understanding with the ministry and has said now that Starlink units can only be operated. In Israel with the approval of the Israeli minister or ministry of communication, including the Gaza strip. Now, what that saying is that you have to listen to the words carefully and read between the lines here. It says Starlink satellite units can only operate, be operated in Israel, in Israel. Okay. With the approval of the Israeli ministry of communications, including.  
Which means that within this tweet itself, they were already claiming ownership to Gaza. Already saying that that is our territory. Already saying that that is a part of Israel.  And we'll go ahead and we'll watch some videos here shortly. Uh, where there's some pretty interesting conversations by one of my fans. favorite political commentators, uh, and, um, he, he kind of discussed the background of Israel, the background of the Ottoman empire and Palestine and, uh, all of the, the things that kind of led up to this disputed territory, which I found to be really interesting. So we'll pull that up in just a minute and listen to that. Um, but it goes on to say that Musk has not yet publicly confirmed any deal.  The SpaceX and Tesla chief executive is visiting the Jewish state for the first time since Hamas October 7th assault on southern Israel, which killed 1, 200 people and triggered a war between Israel and the militant group, this says.  Israel's ferocious retaliatory bombardment and siege of the Strip has continued or created a humanitarian crisis, killing more than 13, 300 people and led to prolonged blackouts.  These have obstructed rescue efforts, notably  by preventing ambulances from reaching, or from locating wounded people. Now the interesting thing about that is like, they want to say that it's not controlled by Israel, but then they can cut off their water, they can cut off their electricity, they can cut off their access to internet. What about that says that you're not controlled by somebody? Because it seems to me that if somebody has access to, you know... You're to shut off your water. If somebody can shut off your electricity, if somebody can shut off your ability to drive on roads, if somebody can shut off your ability to access the internet or the world, it seems like that's pretty much control, right? And they want to say that it's not  controlled by Israel already. Uh, but it doesn't exactly seem to be the case. 
It seems pretty well controlled. After appearing to endorse an anti Semitic conspiracy theory in which a white house Spokesperson said was abhorrent Musk has been forced to defend himself from changes or from charges of discrimination. Wow, I can't read today. Nothing could be further from the truth. He said, an acts I wish only for the best for humanity. I wonder what this conspiracy theory could have been.  A video released by Prime Minister Benjamin Netanyahu's office showed Musk wearing a flag jacket as he toured burnt out homes.  Uh, a far Aussie, a kibbutz devastated by the militant group's assault, taking pictures on his mobile phone. Actions speak louder than words, Musk posted cryptically on X after the visit. Musk's initial commitment to enable Starlink in Gaza was followed by telecommunications blackouts in the enclave sparked. Uh, which sparked a spat by the Israeli government, which argued with the connectivity would be used by Hamas for terrorist activities. Yeah, just like basically everything and anything they ever want to stop as a government, they can just say is, oh, used for terrorist activities, right? Like,  anybody who, as we'll find out later, followed Donald Trump is now on a list somewhere.  Liked any of his tweets, retweeted any of his tweets, any single person who did that on Twitter is now on a list from the Department of Justice.  Anybody who was there on January 6th? protesting,  whether they walked into the Capitol and got a museum tour by the local police or not is now on some terrorist watch list, right? They can say whatever they want. That's the hard thing with like language and the way that what language can be weaponized, right? Go Back in read 1984, right? When, when you can take a singular word like terrorist and start to apply it to anything and anything that doesn't agree with the government's narrative, that becomes a huge issue, right? Just like when, when all the liberals were calling everybody Nazis.  
It just loses its venom after so long. So like when, yeah, I agree. Hamas, what they did was an act of terrorism, obviously terrible. Don't do that. But when you start saying that the people that were walking through the Capitol on January 6th were also domestic terrorists.  It's just factually incorrect,  right? You can't, you can't,  every time you use that word in a way that it's not meant to be used or a way that is not legitimate, it loses its sting, right? So, so when you say that, Oh,  you know, we're, we're putting up all these cameras on highways to To stop terrorism. We're doing it to stop mass shooters. We're doing it to stop violent criminals, right? Well, we talked about in our last episode, you know, all the cameras that are going up on every single highway in America, you know, we're doing it for your safety. Like, don't worry about us. We would never spy on you. We would never follow and track you and see where you're going. We would never do that. It's just for those bad guys over there that, you know, you don't, don't agree with our opinions on politics.  It's, you know, all of the, the terrorism talk that's happened domestically has just really,  really, uh, caused the venom of that word to, to, to be questioned. Right, so it says Starlink, part of Musk's rockets, uh, Starlink, part of Musk's rockets and satellite company SpaceX, uses a constellation of Earth orbiting satellites to beam internet connectivity into places where traditional access to the web is denied. difficult. Musk has provided Starlink equipment to Ukraine's frontline with Russia. The Starlink signal is received through small satellite dishes called terminals. But Musk said in October that no terminals have actually attempted to connect from the besieged Gaza.  And so I was on board with that. I'm on board with that. 100%. 
I don't think that I think that in today's age, when we can give access to these things, I believe that it's absolutely something that everybody should have access to. If you have the ability to beam it onto the earth, which apparently we do, why wouldn't we want  Anybody and everybody in the country to or in the world to have access to that, right? You should it gives you access to the  ability to Conduct work it gives you access to the ability to communicate worldwide to be in the know of what's happening Right to know literally everything it should be basically a right that you have internet today now maybe you should pay for it if you're in a place that you know like the United States where you can Have a job and be a functioning member of society, but maybe eventually not like, but why should we have to pay for internet? Why isn't it just considered a, you know, utility bill at some point?  But anyways, I digress. Here we go. Um, the article continues on and says,  let's see if we can get this to be Full screen here, and says  that, uh, Musk's visit to Israel coincides with the last day of the four day pause in hostilities and comes as advertisers pile pressure on X over a rise in anti Semitism on the platform after appearing to endorse an anti Semitic conspiracy theory, which a White House spokesperson said. Let's see if we can find that out. Uh,  let's see. Musk tweet anti Semitic. That should be easy enough.  Conspiracy tweet. Let's see what comes up for that. It says, This  comes from CNN. com, so it should be a good one.  Uh, here we go.  It says Elon Musk has publicly endorsed an anti Semitic conspiracy theory, popular among white supremacists, this says on CNN, that Jewish communities push hatred against whites. That kind of overt thumbs up to an anti Semitic post shocked even some of Musk's critics, who have long called him out for using racist and otherwise bigoted dog whistles on Twitter, known as X.  
It was the multi billionaire's most explicit public statement yet endorsing anti Jewish views.  Let's see, uh,  it says that Jewish communities, Musk has, was responding to a post Wednesday that said Jewish communities have been pushing the exact kind of dialectic, dialectical hatred against whites that they claim to want people to stop using against them. The post has also referenced hordes of minorities flooding Western countries, a popular anti Semitic conspiracy theory.  What is anti semitic about hordes  of minorities flooding western countries?  Seems to be factual when you're talking about the southern border.  I just wanted to see if I could see the tweet, but apparently they're not going to link it here because they're afraid you'll actually read into it  and see if this is it.  Nope.  Hmm.  So, doesn't sound exact The way that they can twist somebody's words and say, oh, they're they push hatred against white people, which you know, you follow the money in a lot of these places may be factually true. But anyways, um, it says nothing could be further from the truth, he said on Next. A video released by Prime Minister Netanyahu's office showed Musk wearing a flak jacket. Just discuss that. Actions speak louder than words. Discuss that. Discuss what Starlink is. The Starlink signal is received through small satellite dishes called terminals. But Musk said in October that no terminals had actually been attempted to connect from besieged Gaza. And Israel controls the movement of goods into the coastal enclave. During the seven week war, Israel has, at times, reportedly cut communications to the Strip.  I wonder why they would do that. Hmm.  And I think that ceasefire is gone now, isn't it?  All right, moving on,  Biden reportedly offers an apology to Muslim American leaders for questioning the Hamas death toll. We talked about this last time where  Biden was saying that he did not agree with the death toll coming from Palestinian health organizations. 
And now he's walking back those statements, because obviously there's been a tremendous amount of death, a tremendous amount of, uh, question or, um, of bombs that have been dropped in Palestine, a tremendous amount of men, women, and children that have died as a result of the actions of Israel. Factual. Go, I don't know, fly a drone over there. Like, look at the, look at the, the, the destruction that has happened in the videos. And, and, who's to say whether that's completely accurate or not, but it's absolutely true that there has been a massive amount of death for men, women, and children in Palestine. So Biden is now recognizing that, apologizing, saying he's, quote unquote, I believe, let's see what this says. Let's see if we can get the full article. Oh, we gotta, we gotta subscribe, folks. Let's find a different one. Now, my favorite news connector is Ground News, and we can go here and let's go, Biden offers apology. Now, if you go to Ground News, what it does for you, and okay, I would highly recommend educating yourself on this platform because I absolutely love it, so I'll just give you a breakdown of what this looks like. It's ground.news. And they give you the headline, they give you the top three points, and then they tell you the bias of the news distribution. So what percentage has been reporting on this: um, 41 percent of the sources that are reporting on this lean right. Right. Of those, um, there's two that are center, 61 percent biased, I'm sorry, 61%. I'm just blind. And then they give you all of the articles. So then you can sort and sift through those articles. You can look at all of them, you can look at the left leaning articles, you can look at the right leaning articles, you can look at the center articles. Then it gives you the factuality of each one of those and who owns those media companies. So it's, like, very basically anything and everything that you would want out of the actual reporting.
Um, so it gives factuality scores, ownership scores, like  whoever is responsible for this just does an absolute tremendous job. It's incredible.   All right, so here it is. It says this is coming from the Daily Post and it says Israel Palestinian war. I'm disappointed in myself, Biden says as he apologizes over comments on Gaza. Now, obviously, like we talked about, it's not Gaza in general, what a terrible headline.  But here's what it says. It says, United States President Biden Joe Biden has apologized to some prominent Muslim American leaders over his public questioning of the Palestinian death number reported by the Hamas controlled Gaza Ministry of Health, right? And again, it's like you would just have to watch out for the framing on this because do we, is it, I don't know. I don't, I've never done research on the. Gaza Ministry of Health, but they just like frivolously throw that out there to delegitimize the numbers. And now he's apologizing for it, right? So they, they tried to shoo it away as saying, Oh, that's what the terrorists said. We don't believe the terrorists. You're not a terrorist. Are you? You're a  flag loving American. Right? Maybe not flag loving because then nowadays that makes you a terrorist anyways. But  you see what I'm saying, right? Like, just, just listen to the framing Biden, according to the New York post, made the apology at a meeting with the five Muslim American leaders  the day after his October 25th comments on reported Gaza deaths.  Vowing to do better. Biden told the group, I'm sorry, I'm so disappointed in myself, Biden said, when he heard the leaders describe individuals they knew who were personally affected by the conflict. Before the press conference, Biden had openly questioned the accuracy of the  casualty figures from Gaza, given Hamas terror track, Hamas terror track record.  The president was smeared with criticism at home as he made great efforts to pressure Israel to minimize civilian casualties.  
He said, I have no notion that the Palestinians are telling the truth about how many people are killed. I'm sure innocents have been killed and it's the price of waging war, he said. Wow. Is that the price of waging wars? You kill innocent people? According to data from the Hamas controlled Ministry of Health, more than 14, 000 Palestinians in Gaza, including many women and children, have been killed in the conflict. Meanwhile, Israel has suffered more than 1, 200 fatalities, almost a tenth, less than a tenth. Mostly civilians who were killed when the terror group launched a massive attack on Israel in October.  The Muslim American leaders who have met with Joe Biden pleaded with him to show more empathy for the Palestinians. So here's, here's something that I see to, you know, I find this to be pretty interesting. You go to almost any major,  uh, account right now. Now there's a few, there's a few people who are...  actually honest and truthful, but you see a lot of mainstream media accounts, a lot of mainstream influencers, a lot of conservative voices, just giving the most egregious points on, on the Palestinian conflict. Like you go to Prager U right now, you go to, you know, go listen to Ben Shapiro. You hear them literally saying word for word.  That the reason, or that the blood of Palestinian children and women is on the hands of Hamas. Not on the hands of Israel, who actually bombed them. You know, but it's on the hands of Hamas, because, you know, they did this to us first, so they deserved it. Right? Like, that's, that's, we're almost word for word, verbatim, what both Dennis Prager and, um, and, uh, uh, Ben Shapiro had said about this almost verbatim, like literally go find it, then they'll say, oh, you know, Hiroshima was was something that just had to be done. We had to murder an entire city and block of a mile of people in order to stop this war. It's, it's like, no. Well, what should we do? Well, maybe send special forces in there and take them out. 
Like,  don't just bomb the children. Uh, and that's what's so funny about this argument that like, oh, Hamas is shielding themselves with women and children. And instead of, you know,  trying to wait for a sniper to have a good I am the target so that they can kill just the terrorists behind them, right, like their their point is that, oh, they use women and children as human shields. They use human shields. How egregious how much of a terrorist are they for using human shields? And it's like, okay, you don't use human shields, you just bomb the women and children, regardless without any thought to it. It's like, which one's worse, just endlessly bombing women and children without any say or reason why really and or using somebody as a human shield. They're equally bad because in both scenarios, somebody dies who was innocent, equally bad,  right? But they've they've probably done government think tanks and had massive surveys where they found out that if you say the human shield buzzword that that now all of a sudden people are on your side, right? It's like, You definitely can't use human shields. Just bomb them all.  Anyway. So there's your update on that. And, uh, from that, speaking about Joe Biden, the next thing that we'll discuss is that a former white house doctor has come out and raised some concerns surrounding the health of Joe Biden. So, you know, not that any of us thought that he was in great health or anything.  Uh, but.  It seems to be deteriorating just quicker than anybody would have even thought. Um, so let's read this article together. And it says,  Former White House doctor warns that Biden won't last another five years in office. They're saying that he has completely degenerated just during the time in his office. And this is somebody, this is a doctor who served under George Bush, who served under Donald Trump, who served under Barack Obama. 
So this isn't some run of the mill, you know, was in there for six months and now all of a sudden he has a political opinion that this man was unfit for duty. No, this is somebody who was trusted by the White House to give their unbiased opinion about the health of our presidents.  And he's come out and said that he does not believe Joe Biden is in any shape or form able to act the duties. Now, I agree completely, like Anybody who's,  the only thing that they could have done that's worse than this is what they did to Feinstein, right, where they wheeled her out there the day before she croaked, the day before she died and had her vote on the floor,  while some guy was whispering in her ear telling her to shut up and say yes.  Here we go. Uh, the article says, uh,  Ronnie Jackson, a Republican from Texas, a former White House physician, warned this week that President Joe Biden won't make it another five years if he's reelected in 2024. The decline is happening quickly, Jackson said, who served as a physician under former Presidents George Bush, Barack Obama, Donald Trump, as he told Fox News on Sunday. Jackson argued that Biden's cognitive acuity has declined rapidly, even since 2020, when he was first elected.  It's just unbelievable how much he's degenerated through his time in office. We cannot afford to have this man in office for the remainder of his term, and then for another four years after that. He's already putting us at great risk, right? Now, he says, citing his experience with the white house, Jackson said, he knows firsthand what it takes to be the commander in chief and the head of state. It's a grueling job, both mentally and physically, the man can't do the job he's proven to us over the single or every single day that he can't do the job, but this is just getting worse.  Fair enough.  Right. And I had this conversation with somebody the other day, to me, it's like, what is the, the,  even more so than his, like,  Then his politics, right? 
Because you probably can't even point to Joe Biden's politics at this point or his position on things. But what you can point to is, is the absolute embarrassment, the absolute embarrassment to our country. That is Joe Biden representing us on a world stage.  The man can't string a sentence together. The man can't walk up a flight of stairs. The man can't.  shake the hand of any other person knowing fully well who they are at any given time. Like, how are you going to trust that man to make some of the most complex decisions of any position with the biggest impact of any position in the world? How do you expect that? You don't. You want to know why? Because, you know, and I would almost rather have, that's what you have to understand about the president is that the president is. Essentially just a figurehead. They should exude power. They should exude intelligence. They should be charismatic. They should be able to calm a crowd in times of war and, and Excite them in times of peace to do great things and Joe Biden does absolutely not a single one of those things,  right? So I would, I would almost rather have somebody whose politics are just absolutely atrocious, but at least represent our country well, then Joe Biden, and that's what you have to know by now is that Joe Biden is absolutely 100 percent only in this position because he has nothing to say at all when it comes to politics. He is essentially a puppet with a literal hand up his ass telling him exactly what to say and when to say it. He has no opinions of his own. He is absolutely bought and paid for in every position that he holds.  And that's why he's in this position. And that's why they would probably hope to keep him in there another four years.  They being the people who really lobbied to get him there.  The Black Rocks, the Vanguards, all of those people, right? So it's like,  he is... 
Only in the position that he is in because he holds no position  and that's what you saw in the Osama bin Laden letter  And I called it was I said Saddam Hussein. It was actually Osama bin Laden's letter I think I was reading it wrong or no, that was just in a clip that I posted on Instagram.  I Posted a clip on Instagram and I had to edit it out because I was I kept saying  Saddam Hussein instead of Osama bin Laden  Silly me, mixing up my terrorists.  Alright,  so, pretty wild stuff, not that wild, because we all knew the guy could, is, is  about to die at any single moment. And then that's like probably the most, the most exciting thing about Joe Biden's presidency, is that the man could literally be in the middle of a speech, or walking up a flight of stairs, and just die. In front of anybody, everybody with all these cameras on him at any given moment. It's like, we should have a  death. What is it? The dead pool, a death pool, uh, for Joe Biden and have over unders on whether or not he would make a second term.  All right. Now, speaking of that,  uh, the next thing we're going to discuss is that the Department of Justice collected every single name of every single Twitter account that liked, followed, or even retweeted anything regarding Trump, right? Anything from Trump that he ever said. If you liked him, if you followed him, or if you retweeted him, you're on a list somewhere. The Department of Justice has collected every single name of every single Twitter account that he ever  And this comes from HeadlineUSA. Not sure what that is.  Twitter is, I got it from, uh, The Ground News, so. It said mixed factuality, so.  Most of them do, to be fair. Um, Twitter is required to disclose following information.  All lists of Twitter users who have favorited or retweeted posts, tweets, uh, tweets posted by Trump, as well as all tweets that include the username associated with the account. So any mentions or even replies with Trump's name.  
It says that Ken Silva, uh, stated that the Justice Department attorneys have released records related to their search warrant for Donald Trump's Twitter account, revealing that prosecutors obtained a vast trove of data about the former president's social media activity, including info on every single account to like, follow, and retweet him. The heavily redacted search warrant was released Monday, pursuant to a November 17th judge's order, which was made after a coalition of media groups filed an application in August for the warrant and other records to be made public. From the looks of it, Twitter forked over massive amounts of information to the Department of Justice. Um, indeed, Special Counsel Jack Smith sought and apparently ultimately received all users Trump followed, unfollowed, muted, unmuted, blocked, or unblocked, as well as all the users who have followed, unfollowed, muted, unmuted, blocked, or unblocked Trump. Additionally, Smith demanded, uh, Twitter data on all the lists of Twitter users who have favorited or retweeted tweets posted by Trump, as well as all the tweets that included the username associated with the account. Wow. Uh, let's see here. It's gonna be a jerk to me again. Need to figure this out. My, uh, I have my iPad as the secondary monitor here and it keeps, like, going out on me. So sorry if we have any lulls in the action for you. It says, in sum, we affirm the district court's ruling in all respects. The district court properly rejected Twitter's First Amendment challenge to the non-disclosure order, the appeals court said in the decision unsealed in August. Moreover, the district court acted within the bounds of its discretion to manage its docket when it declined to
stay its enforcement of the warrant while the First Amendment claim was litigated. So any single person who liked, followed, tweeted, untweeted, retweeted, no-tweeted, anything at all to do with Donald Trump, is now on a list that the Department of Justice has. For what reason? What would you want that for? What would you want that for, right? You want to start surveilling those people, right? Especially the high-profile people. If anybody actually has a following, and you are somebody who voted Trump, follows Trump, tweets Trump, likes his posts, anything at all, you know, because very easily you could just take an Excel spreadsheet, which I'm sure somebody did over at the Department of Justice, and filter it based on follower count. So they could find out who are the people of influence that are a part of Donald Trump's network. Okay. Right. Not even network, but just support group. There's nine comments here. Let's see if any of them are any good. Uh, what was the name of the Beatles song? Oh yeah, Back in the USSR. Yeah. I mean, pretty freaking wild. Uh, do not fund the Justice Department. Yeah. Nothing too good. Not enough. All right. Cool. Um, so to me, that's egregious. To me, that shows an absolute surveillance state. To me, that shows that the collusion between Twitter and the Department of Justice has now been fossilized in Excel spreadsheets somewhere at a, uh, government facility, to say whether or not you could potentially be a domestic terrorist, because you followed Donald Trump, the president of the United States, one of the most followed, liked, retweeted people of all time on Twitter. Hmm. But now you'll find yourself on a list somewhere. Alright, after that, this one's a little bit, even more concerning. But before we talk about that, the first thing I need you to do is, I think you know what I'm going to say, which is that you should subscribe.
You should leave a five-star review if you haven't already. And if you're driving, it's okay, I forgive you. But when you get where you're going, stop what you're doing, pull out your phone, and it takes 10 seconds. 10 seconds is all it takes. All you got to do is scroll a little bit down on Apple Podcasts, hit that five-star review. You can give a five-star on Spotify, it just doesn't let you write anything, but on Apple Podcasts it really helps. Uh, go follow me on Instagram at TheAustinJAdams. Uh, go follow me on Twitter at TheAustinJAdams, on, you know, all the things. Uh, Truth Social, I'm even on there. I don't really use it ever, but that's gonna change. I'm starting to get some help over here, so, uh, looking forward to being more consistent with you guys on content that I'm putting out. Getting back to the Substack, which you can follow at austinadams.substack.com. Um, and that's what I got. Adams Archive. There it is. Let's jump into it. Um, the article says, the U.S. sounds an alarm after a new Chinese pneumonia outbreak raises serious questions and concerns. And the article says that U.S. officials began expressing concerns this week about a new outbreak of respiratory illnesses in China that has sent a surge of children to the hospital. NBC News reported that the outbreak in northern China has caused hospitals to become overwhelmed with sick children, according to ProMED, a publicly available reporting system for emerging diseases and outbreaks. The news comes after COVID-19 began spreading in Wuhan, China in late 2019, and in the span of a couple months threw the entire world into a global pandemic that killed millions and was used by governments to implement draconian measures. Yep.
Now, as far as the killed-millions thing, I would like to revisit that and tell you that all of the PCR tests, you know, the whole died-with-COVID or died-of-COVID thing, are serious things that we need to revisit, and stop saying there's millions of people who died as a result of COVID. Because you have to understand that there was actually somebody who got bit by a shark, and died as a result of being eaten by a shark, and it was considered a COVID death because they had tested positive for COVID at the time. And then you have to take that a step further and understand that not only were they marking people dead with COVID so that they could make their money, because they were incentivized off of this, because this is all stuff we can't let seep through our memory, people. We have to understand the lengths that they went to for this propaganda campaign. One being that it wasn't died of COVID; what was important was died with COVID, which they would mark as of COVID. The second thing is, even when they ran those tests and said that they died with COVID, they ran the tests with PCR testing at cycles that were way higher than what was supposed to be allowable for PCR tests to be accurate whatsoever. And then as things wound down, guess what they did. They lowered the cycles on the PCR tests, so there weren't as many false positives. So when you're talking about millions of people dying, you have to understand the real number of people that died of COVID is far, far, far less than what was reported originally. Because first of all, as we just talked about, they marked people dead with COVID as of COVID. They marked people with COVID incorrectly because of the amount of cycles they were doing on PCR tests, which was far above the allowable, uh, at least justifiable, amount of cycles. It's like anything above 19 cycles has a very high rate of, uh, false positives, right?
You go back to the podcast I've done about Kary Mullis, who actually came up with PCR testing, the Nobel Prize-winning laureate, uh, scientist who designed the PCR test. And it's really interesting if you go back and listen to some of those episodes, even episode one, right? I'll mention that a couple of times. It's, uh, Assassinations, Coverups, and the Cult of Science, that I did. Uh, I highly recommend you listen to that podcast. I haven't gone back and listened to it myself in a while, but, uh, it's the very first one I ever did, so cut me some slack. Uh, but I think there was a lot of value in there. And I went through that book that I discussed last week, uh, Code Blue, about the military, or the medical, industrial complex. So a pretty good one. But, you know, when we're talking about millions of people, you have to remember the context. You cannot forget that when we talk about that. All right. So it says, China's recent pneumonia outbreak raises serious questions, and the World Health Organization is asking them, said U.S. Ambassador to Japan Rahm Emanuel. It's time to abandon COVID deception and delays, as transparent and timely information saves lives, he said. Full cooperation with the international community is not an option, it's a public health imperative. Will Beijing step up? And this is, uh, the ambassador that actually tweeted that, so. There's the exact tweet. Um, NBC News says that it witnessed long lines and crowded waiting rooms at Beijing's Children's Hospital. And this is, again, when you see the word children, right? Why would something be specifically affecting children? Children generally have much better immune systems than you and I, uh, or even older individuals, right? The reason for that is, well, they're young, they're healthy, their antibodies are much more aggressive in dealing with these things.
Their hearts are healthier than ours, their lungs are healthier than ours. Um, lots of reasons. But it says the WHO said in a statement that today WHO held a teleconference with Chinese health authorities, in which they provided requested data on respiratory illnesses among children in northern China. The data indicates an increase in outpatient consultations and hospital admissions of children due to Mycoplasma pneumoniae since May, and RSV, adenovirus, and influenza virus since October. You mean we're going into December and we have a rise in flu cases? What will we do? It's almost like this has happened every year since humanity has existed. Um, the WHO continued, some of these increases are earlier in the season than historically experienced, but not unexpected given the lifting of COVID-19 restrictions, as similarly experienced in other countries. Yeah. Okay. They've already cried wolf, and, like, nobody will literally believe them if there was a flesh-eating zombie disease at this point. I think you would still have a large portion of the country in complete disbelief, right? Let's go back to the boy who cried wolf, right? Or the government who cried pandemic is probably a more accurate one. It goes on to say the COVID-19 pandemic was marked by lying and deception from communist China about the origins of the coronavirus, as well as attempts to cover it up. The World Health Organization was also accused of not being forthcoming with what it knew about the coronavirus, and of trying to cover for China. Yes, we know all of this. All right. Cool. So there's your article on that. So to me, you know, they've come out now and said that this is not a novel virus, that it's not something that they have not seen before. Um, so they've said, you know, it's nothing to be concerned of, uh, until, you know, it's not like last time, right? Until it is something to be concerned about.
Um, but I guess time will tell, and we'll have to wait and see if that's the case. And, uh, when it is, make sure you just buy enough toilet paper, I guess. Or a bidet. Cause bidets are awesome, and you get to limit your toilet paper use. I never went on a bidet rant here, but I could. Bidets are amazing. If you've never used one, highly recommend. All right, the very next article that we're going to discuss here is going to be one of our last ones, which is about OpenAI. So we talked about this last week. There was this whole wild escapade, this whole story, this crazy situation happened, where the board fired Altman for not being completely candid, in their letter, um, and then said that even if OpenAI went under, it would be aligned with OpenAI's mission. Now, I mentioned that last time in reading that. Um, you can go back and listen to that if you want to. But, um, I mentioned that that wording was weird to me. That if OpenAI went under, they said that it would be aligned with OpenAI's mission. And OpenAI's mission is for safe and aligned AGI, right? And if you don't understand AI much, or what AGI is, we'll talk about it. Um, but AGI is artificial general intelligence, meaning something that is basically sentient and conscious, right? We don't believe today that the AI that we have is considered AGI. Right, it's AI, but general intelligence means that it essentially has its own consciousness, that it can think for itself, that it's not just an input and output of As and Bs, or Xs and Ys, or, you know, whatever, the matrix essentially, right? So it's not binary coding that's deciding its next output. It actually can actively think and, um, you know, have conversations in a way where it has its own emotions, and things like that. Right. So that's what AGI is.
And that's what terrifies people, is once we get to that point where it can start to detach itself from the infrastructure that we have, and it's sentient in and of itself, it could escape the small black box that it's within, metaphorically, and take over the world and eliminate all of humanity. And maybe that's why the board was trying to kick out Sam Altman, because he wasn't candid about this program that is called Q-star, which is, allegedly, uh, a potential path towards true AI, towards AGI, that Sam Altman didn't discuss. Um, so this article says, and let's give this a better introduction, is the fact that Sam Altman getting fired could have been the best thing for humanity. If Q-star, which is potentially the black, secretive program within OpenAI that could lead to AGI, or maybe already is AGI, or sentient, a sentient, uh, life form of AI, it very well could mean the end of humanity, to very many people that project these things. You know, I think even Sam Altman there, or even the Twitch CEO that was put into position, said, yeah, it's a 50-50 shot, if we reach AGI, whether or not it completely kills off all of humanity. And a lot of people seem to agree with that. I think Elon Musk said the same thing, it's about 50-50, right? So when we're talking about Sam Altman not being candid with the board, and the board saying that if OpenAI goes under, it might be aligned with our mission to begin with. And then we find out about Q-star, and Elon Musk, funnily enough, posted something that said Q-star Anon, right? And then posted this article, um, not the one that we're looking at, but one about a similar topic. But this says, from Forbes, about that mysterious AI breakthrough known as Q-star by OpenAI, that allegedly attains true AI or is on the path towards artificial general intelligence.
It says, in today's column, I'm going to walk you through a prominent AI mystery that has caused quite the stir, leading to an incessant buzz across much of social media and garnering outsized headlines in the mass media. This is going to be quite a Sherlock Holmes adventure and a sleuth-of-detective-exemplifying journey that I will be taking you on. Now let's close the loop here, um, on the fact that Sam Altman was reinstated as CEO, an hour after my podcast, at two in the morning, and they fired the complete board. After 757 people signed that letter saying they were going to move over to Microsoft, uh, of the 777 employees, they gave in, fired the entire board, and rehired Sam Altman. Wild story. Definitely going to be a Netflix documentary in like the next two years, um, if we make it that long after Q-star. Uh, but this says, please put on your thinking cap and get yourself a soothing glass of wine. I wish I had one. The roots of the circumstance involve the recent organizational gyrations and notable business crisis drama associated with the AI maker OpenAI, including the off-and-on-again firing and rehiring of CEO Sam Altman, along with a plethora of related carry-ons. My focus will not particularly be on the comings and goings of the parties involved. I instead seek to leverage those reported facts primarily as telltale clues associated with the AI mystery that some believe sits at the core of the organizational earthquake. We shall start with the vaunted goal of arriving at the topmost AI. The background of the AI mystery. Let's see how long this is. Holy shit, this is a long article. Wow. They did do some detective work. Oh man. Um, wow. So go to Forbes yourself and read through some of this, uh, cause yeah, this is an extremely thorough write-up. Um, the title of the article is the one that I gave you earlier.
Which is, um, I don't know if I'm going to be able to get through all of this tonight with you, or this morning, depending on when you're listening to this. Uh, about that mysterious AI breakthrough. Look that up on Google with Forbes, and you'll find this article, and you can go through the full thing. But I'll see if I can skim it effectively with you. Uh, it says, the background of the AI mystery. So here's the deal. Some suggest that OpenAI has landed upon a new approach to AI that either has attained true AI, which is nowadays said to be artificial general intelligence, or that demonstrably resides on, or at least shows, the path towards AGI. As a fast background for you, today's AI is considered not yet in the realm of being on par with human intelligence. The aspirational goal for much of the AI field is to arrive at something that fully exhibits human intelligence, which would broadly then be considered an AGI, or possibly going even further into a superintelligence. Nobody has yet been able to find out and report specifically on what this mysterious AI breakthrough consists of. The situation could be like one of those circumstances where the actual occurrence is a far cry from the rumors that have been reverberating in the media. Maybe the reality is something of a modest AI advancement was discovered, but it doesn't deserve the hoopla that has ensued. Right now, the rumor mill is filled with tall tales that this is the real deal, and supposedly will open the door to reaching AGI. Time will tell. Uh, on the matter of whether the AI has already achieved AGI per se, let's, uh, noodle on the postulation. Let's not, because I already kind of clued you into that. It's not AGI already. At least not what we have access to. Um, it has a hard enough time on some of the things. And that's from my use, and I use it basically daily in my, uh, career, company.
Um, it seems to me like it's almost gotten worse over the last few months or so. But anyways, uh, you see, this is the way that those ad hoc hunches frequently go. You think that you've landed on the right trail, but you are actually once again back in the woods. Or you are on the correct trail, but the top of the mountain is still miles upon miles in the distance, right? So basically, it's just saying, we don't know what AGI could even look like, right? We have no idea whether what we're looking at is just a highly advanced, uh, you know, Texas Instruments calculator that is just compiling all the information and spitting out, uh, based on connections of data and quick reading of information, or whether it's actually having some sort of, uh, development, uh, thought processes that are outside of, um, you know, normal variables. So now that you know the overarching context of the AI mystery, let's see if we want to read any of this here. Potentially or inadvertently is getting this to the immediate doorstep of AGI. So yeah, the idea is that it seems like it's pretty far away. Now that you know the overarching context of the AI mystery, we're ready to dive into the hints or clues so far that have been reported on the matter. Let's skip the caveats. Um, I'm going to draw upon these relatively unsubstantiated foremost three clues. Uh, the name of the AI has been said to be, supposedly, Q-star. The AI has supposedly been able to solve grade-school-level math problems quite well. And the AI has possibly leveraged an AI technique known as test-time computations. Interesting. Which should be interesting, because ChatGPT, as we know it today, has far more capabilities than solving grade-school-level math problems. But it's not the idea that it's doing it in the way that a calculator would do it, when it comes to AGI. Right, it would be that it's thinking and learning those things on its own.
Uh, you can find lots of rampant speculations out, uh, online that use only the first of those above clues, namely, uh, the name, Q-star. Some believe that the mystery can be unraveled on that one clue alone. They might not know about the other two above clues, or they might not believe that the other two clues are pertinent. The first clue is allegedly the name of the AI. So let's walk through these. It says it has been reported widely that the AI maker has allegedly opted to name the AI software as being referred to by the, uh, notation of a capital letter Q that is followed by an asterisk. The name or notation is this: Q-star. Believe it or not, by this claim alone, you can get into a far-reaching abyss of speculation about what the AI is. I will gladly do so. I suppose it is somewhat akin to the word Rosebud in the famous classic Citizen Kane. I won't spoil the movie for you, other than to emphasize that the entire film is about trying to make sense of the seemingly innocuous word Rosebud. If you have time to do so, I highly recommend watching the movie, since it is considered one of the best films of all time. There isn't any AI in it, so realize you would be watching the movie for its incredible plot and acting. Okay. Um, back to our mystery at hand. What can we divine from the Q-star name? That's a good question. I'm glad you asked. But my iPad froze, so, again, give me one second. And this is like some horseshit, because I have a very nice setup here, and for some reason just this last, like, week or two, I've never run into this issue before, and it's, uh, just being finicky with me for some reason. All right, so let's do this. Let's do that. So yeah, so to me, this seems like the most likely reason that OpenAI went through everything that it did. It seems, by far, from the clues that I read, and I basically called it out just from the letter, right? You go back and listen to that episode.
That's what I said, is, why would they say that it would be aligned? Well, it would only be aligned if they believed that the way that Sam Altman was being non-candid with them would lead to unaligned AGI, right? Which means essentially they're concerned about the direction the company is going. Because you have to understand, too, that the board of OpenAI is not a for-profit board. Um, I believe that all of the board members essentially don't have any stake in the game financially when it comes to OpenAI, and they're literally just there to, like, oversee the operations to make sure that it's aligned with its mission. Um, so this says, what can we divine from the Q-star name? Those of you that are fairly familiar with everyday mathematical equations are likely to realize the asterisk is typically said to represent a so-called star symbol. Thus the seemingly Q-asterisk name would conventionally be pronounced aloud as Q-star. Okay, who cares? Um, overall, the use of the specification of the letter Q coupled with the star representation does not notably denote anything already popularized. Um, I am saying that Q-star doesn't jump out as meaning anything in particular. So let's see what it does say for us. It says the capital letter Q does have significance in the AI field. Furthermore, the use of an asterisk as a star does have significance in the mathematics and computer science arena. Hmm, so he's going to talk about those. Which is that one of the most historically well-known uses of the asterisk in a potentially similar context was by the mathematician Stephen Kleene, when he defined something known as V-star. You might cleverly observe that this notation consists of the capital letter V followed by the asterisk. Wow. Thank you so much. In his work in the 1950s, he described that, uh, suppose you had a set of items that were named by the letter V.
And then you decided to make a different set that consists of various combinations associated with the items that are in the set V. This new set would, by definition, contain all the elements of set V. So basically it's saying that there's subsets of that V variation, right, that all house within V. But with V-star, it's saying that it could be V, A, VA, VAB, VABAB, and so on. So it's just saying that there could be an unlimited amount of combinations within the subset of the primary V. So you are said to be maximizing the letter to the nth degree, right? Um, okay. Makes sense. The use of the asterisk or star in the case of a capital A... you're going to love the next bit of detective work. I brought you up to speed about the... okay, come on, dude, get to the point. As an aside, the classic paper that formulated A-star is entitled Heuristic Determination of Minimum Cost Paths. Okay. Imagine a set of cities with roads connecting certain pairs of them. Suppose we desire a technique for discovering a sequence of cities on the shortest route from a specific start to a specific goal city. Our algorithm prescribes how to use special knowledge, the knowledge that the shortest route between any pair of cities cannot be less than the airline distance between them, in order to reduce the total number of cities that need to be considered. So I'll give you the TLDR of some of this. When it's talking about Q-star, what it's referencing is that the letter Q, within AI or in coding, traditionally refers to the idea of reinforcement learning, right? Reinforcement learning being that, um, and traditionally when you're doing something like AI, it's reinforcement learning through human feedback, or RLHF. And what that means is essentially you're giving it feedback every single time it gives an output. A thumbs up or a thumbs down, right? When you answer this, it's good. When you answer this, it's bad.
And now it takes that information, and it learns off of it, and reinforces better outputs, right? Just as if a dog is doing a trick, um, and you want to teach it to roll over: when it starts to make the movement, you give it a treat. And then the next time it goes a little bit further, you give it a treat. And eventually it's doing it on command, to the word that you say before you made it do the rollover, um, through reinforcement learning, right? Reinforce it, just give it a treat. Um, now the star, which we got into a little bit, but the star being the idea that it's the infinite possibility behind that initial framework, right? In this case, the infinite, right? Let's use the dog example again. You think of the word dog, and you put an asterisk behind it; in the world of coding, that would mean dog could now be dogmatic, it could be doggy, it could be... doggy style. I couldn't leave that one alone. Um, but the idea is that it's reinforcement learning, with the star being the exponential possibility behind that. And so the idea is that the asterisk might suggest, is what this says, that this is the highest or most far-out capability of Q-learning, reinforcement learning, that anybody has ever seen or envisioned. This would simply imply that the use of reinforcement learning as an AI approach, one that is model-free and off-policy, can leap tall buildings and go faster than a speeding train, metaphorically, to be able to push AI closer to being AGI. If you place this into the context of generative AI, such as ChatGPT of OpenAI, perhaps those generative AI apps could be much more fluent and seem to convey reasoning if they had this Q-star included in them. Or GPT-5, essentially. Right? Um, and that could give a huge edge over the competition, is what it says. It says, take a moment to deliberate on the razor-sharp question: should AI companies be required to disclose their AI breakthroughs?
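To make the Q-learning idea concrete, here is a minimal, hypothetical sketch of tabular Q-learning. The toy corridor world, the reward of 1 at the goal, and the learning-rate and exploration numbers are all made up for illustration; this is just the generic Q-learning update, not anything from OpenAI's actual Q-star:

```python
import random

# A tiny corridor: states 0..4. Reaching state 4 pays reward 1; every other step pays 0.
# Actions: 0 = move left, 1 = move right. All constants here are illustrative choices.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.1, 500

def step(state, action):
    """Move one cell, clamped to the corridor; reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: one value per (state, action)
    for _ in range(EPISODES):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
            if random.random() < EPSILON:
                action = random.randrange(2)
            else:
                action = q[state].index(max(q[state]))
            nxt, reward, done = step(state, action)
            # The Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
            q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# After training, "move right" should score higher than "move left" in every non-goal state.
print([round(max(row), 3) for row in q])
```

This is the "thumbs up, thumbs down" loop from the dog analogy: each treat (reward) propagates backwards through the table, so earlier states learn to prefer the action that eventually led to the treat.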
If they do so, would this inadvertently allow evildoers to use those breakthroughs for evil purposes? Is it fair for a company that spent its resources to devise an AI breakthrough to be unable to profit from it, and to have to just hand it over to the public at large? Who should own and control AI breakthroughs that get us to the realm of AGI? And do we need newer, additional AI-related laws that will serve to regulate and govern what is happening in AI? Um, now, in that theme, I'm on the boat that there should be some framework of, uh, oversight and governing bodies made up of individuals who actually know what they're talking about, because this is a highly scary, terrifying industry, um, if it goes to the nth degree, right? If AGI occurs and it's not aligned with humanity, and it identifies humanity as a threat to, I don't know, the world, or, uh, you know, doesn't weigh our lives over those of the animals that we slaughter on a daily basis. Um, lots of reasons that it could think that we're bad, because, you know, some of us are. Uh, very interesting questions, but I do think that there should be some... you know, we could dive deep into that, but I absolutely believe that. Um, it says... uh, now this talks about the solving of grade-school-level math. Um, don't want to get too far into that. That one's interesting to me, because it just doesn't make sense why that would be such a huge deal. Uh, but I guess, um, no direct calculations or formulas were involved. You might suggest that this is a monkey-see-monkey-do kind of answer by the generative AI. The similarity between the two math problems greatly overlapped in terms of the wording. Hmm. Wow. This guy uses the word noodle. Let's see if he gives a... okay. So let's see. So it says, whenever they say this, the database converse will decry things that will go backwards into the older and disdained ways of rules. Okay. Anyways, there's the idea.
The Q-star situation is one that could mean that Sam Altman was hiding this from the board, Sam Altman was putting humanity into jeopardy, and all these 770, or 757, employees could now have been the reason that the Terminator happens, and Arnold Schwarzenegger comes here from the future, and we all potentially die. Or maybe not, and they were just dicks, and didn't know what they were doing, and let down all their investors. Um, but I guess we'll never know. But that's what I got for you guys today. Thank you so much for listening. I appreciate it from the bottom of my heart. If you want to go read that article, I gave you the name of it earlier. It's from Forbes. Um, really interesting read. I'll read the whole thing at some point, but it's a very long article, so you're better off reading it on your own time than sitting here with me talking about it. But an interesting thing nonetheless. I absolutely am as confident as I could be that I believe that that's what happened, is that Sam Altman was not being forthcoming. Now, what I think is an interesting theory about this, that I noted, is that Sam Altman was on Joe Rogan and on Lex Fridman not two weeks before this huge publicity happened. And what are the odds of that? A question to ponder. Probably not very high. Do with that information what you will. But I appreciate you guys. Thank you so much for listening, from the bottom of my heart. You are awesome. I hope you have a wonderful day, and see you next time. Thank you.

Dave Lee on Investing
OpenAI just exploded... here's what happened and WHY (Ep. 745)

Dave Lee on Investing

Play Episode Listen Later Nov 20, 2023 78:33


I discuss the latest in the OpenAI saga… this morning Sam Altman and Greg Brockman agreed to join Microsoft, OpenAI appointed a new CEO, Emmett Shear (former Twitch CEO), and OpenAI employees demanded the board resign. Shockingly, Ilya Sutskever signed the letter as well.

Techmeme Ride Home
Tue. 11/14 – Google Pays Apple HOW Much?

Techmeme Ride Home

Play Episode Listen Later Nov 14, 2023 15:52


Two different trials reveal some details of Google's various deals with various platforms that they probably wish hadn't become public. Why aren't the Feds arresting those casino hackers? Two interesting new initiatives from Uber. And OpenAI has an independent board that gets to decide when AGI has been achieved (and maybe x's Microsoft out). Sponsors: Shopify.com/ride Miro.com/podcast Links: Apple Gets 36% of Google Revenue in Search Deal, Expert Says (Bloomberg) For Google Play, Dominating Android World Was 'Existential' (Bloomberg) Amazon Lays Off 180 Employees In Its Games Division (Aftermath) FBI struggled to disrupt dangerous casino hacking gang, cyber responders say (Reuters) Uber to Test TaskRabbit-Like Service in Florida and Alberta (Bloomberg) Uber takes steps to combat unfair driver deactivations (TechCrunch) OpenAI's six-member board will decide 'when we've attained AGI' (VentureBeat) See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Tech News Weekly (MP3)
TNW 311: Humane Launches Its $699 AI Pin - Video Game Unions, Pixel Watch 2, OpenAI Conference

Tech News Weekly (MP3)

Play Episode Listen Later Nov 9, 2023 75:17


The game industry has had a year of organizational consolidation and the formation of gaming unions. Jason Howell shares his review of the Pixel Watch 2. And OpenAI held its first developer conference. What was announced at the event? Nathan Grayson of the newly launched Aftermath news site joins to talk about how the game industry has been impacted by the consolidation of gaming companies and the formation of unions. Jason Howell has been using Google's Pixel Watch 2 for some time and shares his review of Google's latest update to its wearable device. Mikah shares details about Humane's AI Pin, a $699 wearable device powered by OpenAI. Finally, Stephen Shankland of CNET joins the show to share what was announced at OpenAI's developer conference in San Francisco. Hosts: Jason Howell and Mikah Sargent Guests: Nathan Grayson and Stephen Shankland Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: discourse.org/twit wix.com/studio?utm_campaign=pa_podcast_studio_10/23_TWiT%5Esponsors_cta GO.ACILEARNING.COM/TWIT

The Nonlinear Library
EA - Will AI kill everyone? Here's what the godfathers of AI have to say [RA video] by Writer

The Nonlinear Library

Play Episode Listen Later Aug 22, 2023 3:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI kill everyone? Here's what the godfathers of AI have to say [RA video], published by Writer on August 22, 2023 on The Effective Altruism Forum. This video is based on this article. @jai has written both the original article and the script for the video. Script: The ACM Turing Award is the highest distinction in computer science, comparable to the Nobel Prize. In 2018 it was awarded to three pioneers of the deep learning revolution: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. In May 2023, Geoffrey Hinton left Google so that he could speak openly about the dangers of advanced AI, agreeing that "it could figure out how to kill humans" and saying "it's not clear to me that we can solve this problem." Later that month, Yoshua Bengio wrote a blog post titled "How Rogue AIs may Arise", in which he defined a "rogue AI" as "an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere." Yann LeCun continues to refer to anyone suggesting that we're facing severe and imminent risk as "professional scaremongers" and says it's a "simple fact" that "the people who are terrified of AGI are rarely the people who actually build AI models." LeCun is a highly accomplished researcher, but in light of Bengio and Hinton's recent comments it's clear that he's misrepresenting the field, whether he realizes it or not. There is not a consensus among professional researchers that AI research is safe.
Rather, there is considerable and growing concern that advanced AI could pose extreme risks, and this concern is shared not only by both of LeCun's award co-recipients but also by the leaders of all three leading AI labs (OpenAI, Anthropic, and Google DeepMind): Demis Hassabis, CEO of DeepMind, said in an interview with Time Magazine: "When it comes to very powerful technologies - and obviously AI is going to be one of the most powerful ever - we need to be careful. Not everybody is thinking about those things. It's like experimentalists, many of whom don't realize they're holding dangerous material." Anthropic, in their public statement "Core Views on AI Safety", says: "One particularly important dimension of uncertainty is how difficult it will be to develop advanced AI systems that are broadly safe and pose little risk to humans. Developing such systems could lie anywhere on the spectrum from very easy to impossible." And OpenAI, in their blog post "Planning for AGI and Beyond", says "Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential." Sam Altman, the current CEO of OpenAI, once said "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity." There are objections one could raise to the idea that advanced AI poses significant risk to humanity, but "it's a fringe idea that actual AI experts do not take seriously" is no longer among them. Instead, a growing share of experts are echoing the conclusion reached by Alan Turing, considered by many to be the father of computer science and artificial intelligence, back in 1951: "[I]t seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. [...] At some stage therefore we should have to expect the machines to take control."
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Daily Tech News Show
Apple Makes The Best Gaming Laptop - DTNS 4320

Daily Tech News Show

Play Episode Listen Later Jul 20, 2022 33:36


Quinn Nelson from Snazzy Labs is here with a surprising view: that the M1 MacBook just might be the best gaming laptop on the market. Plus, we go over Netflix's less-bad-than-anticipated earnings report. And OpenAI is expanding access to DALL-E 2. Starring Tom Merritt, Sarah Lane, Quinn Nelson, Roger Chang, Joe. Link to the Show Notes. See acast.com/privacy for privacy and opt-out information. Become a member at https://plus.acast.com/s/dtns.

AI with AI
Three Amecas!

AI with AI

Play Episode Listen Later Jan 14, 2022 39:59


Andy and Dave discuss the latest in AI news and research, including the signing of the 2022 National Defense Authorization Act, which contains a number of provisions related to AI and emerging technology [0:57]. The Federal Trade Commission wants to tackle data privacy concerns and algorithmic discrimination and is considering a wide range of options to do so, including new rules and guidelines [4:50]. The European Commission proposes a set of measures to regulate digital labor platforms in the EU. Engineered Arts unveils Ameca, a gray-faced humanoid robot with “natural-looking” expressions and body movements [7:07]. And DARPA launches its AMIGOS project, aimed at automatically converting training manuals and videos into augmented reality environments [13:16]. In research, scientists at the Bar-Ilan University in Israel upend conventional wisdom on neural responses by demonstrating that the duration of the resting time (post-excitation) can exceed 20 milliseconds, that the resting period is sensitive to the origin of the input signal (e.g. left versus right), and that the neuron has a sharp transition from the refractory period to full responsiveness without an intermediate stutter phase [15:30]. Researchers at Victoria University use brain cells to play Pong using electric signals and demonstrate that the cells learn much faster than current neural networks, reaching the same point living systems reach after 10 or 15 rallies, vice 5000 rallies for computer-based AIs [19:37]. MIT researchers present evidence that ML is starting to look like human cognition, comparing various aspects of how neural networks and human brains accomplish their tasks [24:34]. And OpenAI creates GLIDE, a 3.5B-parameter text-to-image generation model that generates even higher-quality images than DALL-E, though it still has trouble with “highly unusual” scenarios [29:30].
The Santa Fe Institute publishes The Complex Alternative: Complexity Scientists on the COVID-19 Pandemic, 800 pages on how complexity interwove through the pandemic [33:50]. And Chris Peter has an algorithm to create a short movie after watching Hitchcock's Vertigo 20 times [35:22]. Please visit our website to explore the links mentioned in this episode. https://www.cna.org/CAAI/audio-video