Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier.
Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases.
Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems.
A METR study reveals AI tools actually slowed experienced developers down by roughly 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains.

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:02) News Preview

Tools & Apps
(00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch
(00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes
(00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch
(00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch
(00:33:27) Replit Launches New Feature for its Agent, CEO Calls it 'Deep Research for Coding'
(00:34:40) Cursor launches a web app to manage AI coding agents
(00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch

Applications & Business
(00:39:10) Lovable on track to raise $150M at $2B valuation
(00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far
(00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center – 1 million AI GPUs and up to 2 gigawatts of power under one roof, equivalent to powering 1.9 million homes
(00:48:16) Microsoft's own AI chip delayed six months in major setback – in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell
(00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross
(00:52:46) OpenAI's Stock Compensation Reflects Steep Costs of Talent Wars

Projects & Open Source
(00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost
(00:58:33) Kimi K2: Open Agentic Intelligence
(00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training

Research & Advancements
(01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning
(01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(01:13:03) Mitigating Goal Misgeneralization with Minimax Regret
(01:17:01) Correlated Errors in Large Language Models
(01:20:31) What skills does SWE-bench Verified evaluate?

Policy & Safety
(01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness
(01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors
(01:30:09) Why Do Some Language Models Fake Alignment While Others Don't?
(01:34:35) 'Positive review only': Researchers hide AI prompts in papers
(01:35:40) Google faces EU antitrust complaint over AI Overviews
(01:36:41) 'The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores
(01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark
This week, Jason Howell and Jeff Jarvis celebrate the arrival of AI Mode for Jeff! They pick apart all of the defections taking place in AI (much of it heading right for Meta's Superintelligence lab), how AI impersonation is becoming a thing even if it isn't very successful yet, and the arrival of Perplexity's $200 per month plan that includes first dibs at their agentic Comet browser. Subscribe to the YouTube channel! https://www.youtube.com/@aiinsideshow Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:
0:00:00 - Podcast begins
0:01:33 - Jeff finally got his AI Mode!
0:10:55 - Circle to Search Gets AI Mode, Gaming Help Arrives, and Pixel Watch Finally Gets Gemini
0:16:51 - Apple Loses Top AI Models Executive to Meta's Hiring Spree
0:24:16 - Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross
0:31:09 - xAI updated Grok to be more 'politically incorrect'
0:35:21 - Linda Yaccarino steps down as CEO of Elon Musk's X
0:37:34 - Fake AI voice impersonating Secretary of State Marco Rubio contacts foreign ministers and US officials
0:41:18 - OpenAI tightens the screws on security to keep away prying eyes
0:46:39 - Channel 4 to offer AI-generated ads for SMEs on its streaming service
0:52:31 - How the Owner of Hidden Valley Ranch Learned to Love AI
0:57:21 - Jeff shows off Epicure for AI-generated recipes
1:00:00 - Perplexity launches a $200 monthly subscription plan
1:00:28 - Perplexity launches Comet, an AI-powered web browser
1:04:44 - Excerpt from our interview with Babak Hodjat, co-founder of the tech driving Siri
1:12:57 - Nvidia Just Became the First Company to Hit a $4 Trillion Market Cap

Learn more about your ad choices. Visit megaphone.fm/adchoices
This week's recommendations on iVoox.com. Week of July 5 to 11, 2025.
Who controls artificial intelligence? And how much does it cost to sign the future? In this episode we lay bare the secret war for the planet's most valuable talent: the people who train models. Fair warning: there are millions of dollars, spurned CEOs, startups without a product... and a shower of cologne with the scent of tech ego.

KEY POINTS OF THE EPISODE:
Meta is out hunting premium brains: offers, obscene salaries, and signings that look straight out of PC Fútbol.
OpenAI feels plundered and responds with drama, recalibrations, and ethical perfumes.
Thinking Machines and other startups with no product but $10 billion valuations remind us that narrative rules here.
Mira Murati, Daniel Gross, Ilya Sutskever... everyone has a price or a proposal.
Musk and Trump premiere a new telenovela: little-pig parties, deportation threats, and wars of egos.
Surprise ranking: which model respects your privacy most? (Spoiler: it's not Meta, nor Gemini, nor Copilot.)
And yes, nobody talks about AGI anymore. Superintelligence is what's in now.

Piensa Poco, Scrollea Mucho: El Capitalismo Límbico Nos Tiene https://go.ivoox.com/rf/140187412
Ilya Sutskever y la Superinteligencia Segura: ¿Está el Ex-Jefe de OpenAI un Paso Adelante? https://go.ivoox.com/rf/134801029
HUMANIA: WIN-WIN Corporativo. La Era Trump-Musk https://go.ivoox.com/rf/135752500

Reference Articles:
https://www.wired.com/story/mark-zuckerberg-welcomes-superintelligence-team
https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million
https://www.entrepreneur.com/business-news/ai-startup-tml-from-ex-openai-exec-mira-murati-pays-500000/494108
https://www.elconfidencial.com/tecnologia/novaceno/2025-07-02/zuckerberg-inteligencia-artificial-openia-futuro-tencologia_4164371
https://www.xataka.com/robotica-e-ia/industria-ia-se-ha-convertido-juego-tronos-eso-revela-verdad-inquietante-ia-casi-todo-humo
https://www.wired.com/story/sam-altman-meta-ai-talent-poaching-spree-leaked-messages
https://www.businessinsider.es/economia/elon-musk-arremete-nuevo-partido-republicano-ley-presupuestaria-trump-ha-sido-batalla-1470327
https://www.businessinsider.es/economia/ultima-disputa-musk-trump-clavo-ataud-tesla-inversor-ross-gerber-1470868
https://es-us.noticias.yahoo.com/chatbot-inteligencia-artificial-protege-datos-183103697.html
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
A daily Chronicle of AI Innovations from July 01 to July 07, 2025. AI Builder's Toolkit. Hello AI Unraveled Listeners, in this week's AI News…
Taiwan Semiconductor Manufacturing is delaying construction of a second plant in Japan, Ilya Sutskever announced he will take on the CEO role at his AI startup, Safe Superintelligence, and the EU is proceeding with its AI Act despite tech companies' efforts to delay it. MP3. Please SUBSCRIBE HERE for free or get DTNS Live ad-free. Continue reading "The EU Is Proceeding With Its AI Act Despite Tech Companies' Efforts To Delay It – DTH"
Apple reportedly had plans to open up its own data centers to external parties, which would put the company in direct competition with the cloud services of Amazon, Microsoft, and Google. Whether the plan is still being pursued is unknown. The Information reports on Apple's plans, which are tied to Project ACDC, short for Apple Chips in Data Centers. That project was set up to design in-house chips for the company's AI data centers. It was initially meant to serve only Apple's own services and AI plans; today, for example, Private Cloud Compute and Siri's AI computations run on those data centers. But Michael Abbott had floated plans within Apple to also build a business model around the data centers by renting out their compute to external developers. Whether that plan is still being worked on is unknown and uncertain, because its initiator, Abbott, left the company in 2023. According to The Information, talks about it were still being held in 2024, but by now Apple mostly seems to have its hands full improving its own AI capabilities. Also in this Tech Update: Meta recruits Daniel Gross for its super AI team, but Ilya Sutskever resists the temptation and takes the CEO seat at startup Safe Superintelligence. Coming up in De Schaal van Hebben: the air-cooling fan jacket from FERNIDA. See omnystudio.com/listener for privacy information.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
A daily Chronicle of AI Innovations in July 2025: July 04th, 2025. AI Builder's Toolkit. Hello AI Unraveled Listeners, in today's AI Daily News…
Hey everyone, Alex here
This week, we unpack The Optimist, the new Sam Altman biography; revisit OpenAI's early days; and break down Coatue's AI strategy deck. Plus, tips for squeezing in side projects between thought leadership presentations. Watch the YouTube Live Recording of Episode 526 (https://www.youtube.com/live/1CnmEwdH6ME?si=64oVGDyCvXdzJeIj) Runner-up Titles Flow State Altman and AI Day 2 Thinking Growth Mindset Less of you You don't need a Harvard Business Review subscription to know that Running unnecessary hardware in your house Lifelong Costco member here. Pre-populate Everything There's no ROI on a good hotdog Rundown AI Native vs. AI Add-on (https://www.softwaredefinedtalk.com/525) AI Frenzy The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future (https://www.amazon.com/Optimist-Altman-OpenAI-Invent-Future/dp/1324075961?tag=googhydr-20&hvqmt=&hvbmt=%7BBidMatchType%7D&hvdev=c&ref=pd_sl_8w2bwd161h_e) Mira Murati's Thinking Machines Lab valued at $10bn after $2bn fundraising (https://www.ft.com/content/9edc67e6-96a9-4d2b-820d-57bc1279e358) ChatGPT's Enterprise Success Against Copilot Fuels OpenAI and Microsoft's Rivalry (https://www.bloomberg.com/news/articles/2025-06-24/chatgpt-vs-copilot-inside-the-openai-and-microsoft-rivalry) Iyo vs. Io — OpenAI and Jony Ive get sued (https://pivot-to-ai.com/2025/06/23/iyo-vs-io-openai-and-jony-ive-get-sued/) Zuckerberg Leads AI Recruitment Blitz Armed With $100 Million Pay Packages (https://www.wsj.com/tech/ai/meta-ai-recruiting-mark-zuckerberg-5c231f75) After trying to buy Ilya Sutskever's $32B AI startup, Meta looks to hire its CEO (https://techcrunch.com/2025/06/20/after-trying-to-buy-ilya-sutskevers-32b-ai-startup-meta-looks-to-hire-its-ceo/) Message from CEO Andy Jassy: Some thoughts on Generative AI (https://www.aboutamazon.com/news/company-news/amazon-ceo-andy-jassy-on-generative-ai) Clouded Judgement 6.19.25 - The Dropping Cost of Intelligence (https://cloudedjudgement.substack.com/p/clouded-judgement-61925-the-dropping?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48bf3cef-6d79-4e10-8bb4-ccf48a08341b_1189x729.png&open=false) Coatue's 2025 EMW Keynote Replay (https://www.coatue.com/blog/company-update/coatues-2025-emw-keynote-replay) Slides in online PDF (https://drive.google.com/file/d/1Srl8Y4pBoKtNVYZBxmfj2TEMYM5tp1mE/view) Coatue's Laffont Brothers. AI, Public & VC Mkts, Macro, US Debt, Crypto, IPO's, & more (https://www.youtube.com/watch?v=4JA7n0wTChw) Agents and the Web Remote MCP support in Claude Code (https://www.anthropic.com/news/claude-code-remote-mcp) Agentforce 3, it's agents all the way down. 
(https://siliconangle.com/2025/06/23/salesforce-launches-agentforce-3-greater-ai-agent-visibility-connectivity/) Google Cloud donates A2A to Linux Foundation- Google Developers Blog (https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/) Linux Foundation Appoints Jonathan Bryce as Executive Director, Cloud & Infrastructure and Chris Aniszczyk as CTO, Cloud & Infrastructure to Oversee Major Open Source Initiatives (https://www.cncf.io/announcements/2025/06/24/linux-foundation-appoints-jonathan-bryce-as-executive-director-cloud-infrastructure-and-chris-aniszczyk-as-cto-cloud-infrastructure-to-oversee-major-open-source-initiatives/) Relevant to your Interests Amazon orders employees to relocate to Seattle and other hubs (https://finance.yahoo.com/news/amazon-orders-employees-relocate-seattle-212945920.html) Microsoft announces advancement in quantum error correction (https://www.nextgov.com/emerging-tech/2025/06/microsoft-announces-advancement-quantum-error-correction/406175/) Datadog DASH: A Revolving Door Of Operations And Security Announcements (https://www.forrester.com/blogs/datadog-dash-a-revolving-door-of-operations-and-security-announcements/) the six-month recap: closing talk on AI at Web Directions, Melbourne, June 2025 (https://ghuntley.com/six-month-recap/) Snap acquires Saturn, a social calendar app for high school and college students (https://techcrunch.com/2025/06/20/snap-acquires-saturn-a-social-calendar-app-for-high-school-and-college-students/) Frequent reauth doesn't make you more secure (https://tailscale.com/blog/frequent-reauth-security?ck_subscriber_id=512840665&utm_source=convertkit&utm_medium=email&utm_campaign=%5BLast%20Week%20in%20AWS%5D%20Issue%20#428:%20One%20UI%20Gets%20Fixed,%20Another%20Falls%20-%2018055641) Checking In on AI and the Big Five (https://stratechery.com/2025/checking-in-on-ai-and-the-big-five/?access_token=eyJhbGciOiJSUzI1NiIsImtpZCI6InN0cmF0ZWNoZXJ5LnBhc3Nwb3J0Lm9ubGluZSIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJzdHJhdGVjaGVyeS5wYXNzcG9ydC5vbmxpbmUiLCJhenAiOiJIS0xjUzREd1Nod1AyWURLYmZQV00xIiwiZW50Ijp7InVyaSI6WyJodHRwczovL3N0cmF0ZWNoZXJ5LmNvbS8yMDI1L2NoZWNraW5nLWluLW9uLWFpLWFuZC10aGUtYmlnLWZpdmUvIl19LCJleHAiOjE3NTMyODQ4NzAsImlhdCI6MTc1MDY5Mjg3MCwiaXNzIjoiaHR0cHM6Ly9hcHAucGFzc3BvcnQub25saW5lL29hdXRoIiwic2NvcGUiOiJmZWVkOnJlYWQgYXJ0aWNsZTpyZWFkIGFzc2V0OnJlYWQgY2F0ZWdvcnk6cmVhZCBlbnRpdGxlbWVudHMiLCJzdWIiOiIxNjY4NDg4My04NTYzLTQ1ZGEtYjVhYy1hYWY2MmEyYzZhZTciLCJ1c2UiOiJhY2Nlc3MifQ.rg-oA59aKciV6Pvwn1GezC8ElCYxg92wPMQ9ORYS5KXLFvsuSRlJj1hjn9rlcpqmY3BtiPSHpPHDC1Sos9J5ZIPaW3Rn7o-5Yu6Rn_0HyGkqHUSCAsU36SZ-9Q9bf7Ibd_fWcRN7G6nuIe2j0OMURacJ30W3jMm6_dBtR-IacPllW7q6yDxlDW-pX50I_xhZ_pZfTa7B7HXimMTOWiJ5S-uddGLDOOqxihxgIa3w96SnK7wiiyx5bwe5r0A7IQBvHOe5yVzrTSOxm5DBSZJwbGx_f36MzDGPtdwsMOojbs3yN5gWRZnlre6h1GkiukeAXHqXTWImfUfxyBS1ebOjOQ) U.S. 
House tells staffers not to use Meta's WhatsApp (https://www.cnbc.com/2025/06/23/meta-whatsapp-us-house.html) How AlmaLinux and Rocky Linux Have Diverged Since CentOS (https://thenewstack.io/how-almalinux-and-rocky-linux-have-diverged-since-centos/) AI search finds publishers starved of referral traffic (https://www.theregister.com/2025/06/22/ai_search_starves_publishers/) 10 years of platform engineering at SIXT: Lessons in scaling and innovation - Boyan Dimitrov (https://www.youtube.com/watch?v=OtxWxkehkPE) What Would a Kubernetes 2.0 Look Like (https://matduggan.com/what-would-a-kubernetes-2-0-look-like/) kubectl-ai (https://github.com/GoogleCloudPlatform/kubectl-ai) Nonsense Costco Executive Members get extended hours (https://www.axios.com/2025/06/19/costco-hours-executive-members-early-shopping) Listener Feedback Warp (https://www.warp.dev/future) Conferences CF Day EU (https://events.linuxfoundation.org/cloud-foundry-day-europe/), Frankfurt, October 7th, 2025. SpringOne (https://www.vmware.com/explore/us/springone?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/watch?v=f_xOudsmUmk). Explore 2025 US (https://www.vmware.com/explore/us?utm_source=organic&utm_medium=social&utm_campaign=cote), Las Vegas, August 25th to 28th, 2025. See Coté's pitch (https://www.youtube.com/shorts/-COoeIJcFN4). Texas Linux Fest (https://2025.texaslinuxfest.org), Austin, October 3rd to 4th. CFP closes August 3rd (https://www.papercall.io/txlf2025). SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Discount Tire (https://www.discounttire.com/) Coté: Brimstone Angels (https://en.wikipedia.org/wiki/Brimstone_Angels) Rebels of Reason: The Long Road from Aristotle to ChatGPT and AI's Heroes Who Kept the Faith (https://www.amazon.com/Rebels-Reason-Aristotle-ChatGPT-Heroes/dp/B0FCD969SD?crid=2KBTZJS1P49C2&dib=eyJ2IjoiMSJ9.E2MZsF2Qb-y8u2F4mRTKt5KT39pbgvp_DiV9oA2bPgsqqPJMqdRhIlFh_wyf9wTvia5jPoenX4kfS9HWQAdt5LdXt4zy3NiHbluCozW2B0KUya8M4uCGKdxInNb6npHqJlko7hFE8pzIKtF1X8hJlk02C6nmAb1PN-MsiNB4mZVoFLa9KIFS1Y7zJ8QVc-K5ICucbOAsm6rH-ZgsoyiaO4eFT8-qlzMYHxM4TxUyXx8.hl_-MoO-eXVVzohj3CN42fh3IIQ5wWuiss_O0iiLuHI&dib_tag=se&keywords=John+Willis&qid=1750401917&sprefix=john+will,aps,186&sr=8-1&linkCode=sl1&tag=coteicomthecoteb&linkId=5da48a792d65369c5b69ff1b351b16d6&language=en_US&ref_=as_li_ss_tl) Photo Credits Header 
(https://unsplash.com/s/photos/Flow?license=free&orientation=landscape)
Hey folks, Alex here, writing from... an undisclosed tropical paradise location
Find this episode on YouTube: You are not simply a biological computer. No matter what the latest AI super nerd says. This is an emergency pod - listen up! ✒ Substack: https://johnheersftf.substack.com/ ⓧ https://x.com/johnfromftf
Meta has some new smartglasses. How long can the TikTok groundhog day go on? Masa Son wants to create a Shenzhen-like production city here in the US. Are your smart cameras a national security threat to the home front in a war? And, of course, the Weekend Longreads Suggestions.

Sponsors:
Factor75.com/ride

Links:
Meta announces Oakley smart glasses (The Verge)
Meta tried to buy Ilya Sutskever's $32 billion AI startup, but is now planning to hire its CEO (CNBC)
Trump extends TikTok ban deadline for a third time, without clear legal basis (AP)
Publishers facing existential threat from AI, Cloudflare CEO says (Axios)
Masa Son Pitches $1 Trillion US AI Hub to TSMC, Trump Team (Bloomberg)
Israeli Officials Warn Iran Is Hijacking Security Cameras to Spy (Bloomberg)

Weekend Longreads Suggestions:
Scientists once hoarded pre-nuclear steel; now we're hoarding pre-AI content (ArsTechnica)
Why Everything in the Universe Turns More Complex (QuantaMagazine)

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
A daily Chronicle of AI Innovations in June 2025: June 20th. Read Online. AI Builder's Toolkit. Hello AI Unraveled Listeners, in today's AI Daily News: ⚠️ OpenAI prepares for bioweapon risks
OpenAI's Sam Altman drops o3-Pro & sees "The Gentle Singularity", Ilya Sutskever prepares for superintelligence & Mark Zuckerberg is spending MEGA bucks on AI talent. WHAT GIVES? All of the major AI companies are not only preparing for AGI but for true "superintelligence", which is on the way, at least according to *them*. What does that mean for us? And how exactly do we prepare for it? Also, Apple's WWDC is a big AI letdown, Eleven Labs' new V3 model is AMAZING, Midjourney got sued and, oh yeah, those weird 1X Robotics androids are back and running through grassy fields. WHAT WILL HAPPEN WHEN AI IS SMARTER THAN US? ACTUALLY, IT PROB ALREADY IS. #ai #ainews #openai

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //
Ilya Sutskever's Commencement Speech About AI https://youtu.be/zuZ2zaotrJs?si=U_vHVpFEyTRMWSNa
Apple's Cringe Genmoji Video https://x.com/altryne/status/1932127782232076560
OpenAI's Sam Altman On Superintelligence "The Gentle Singularity" https://blog.samaltman.com/the-gentle-singularity
The Secret Mathematicians Meeting Where They Tried To Outsmart AI https://www.scientificamerican.com/article/inside-the-secret-meeting-where-mathematicians-struggled-to-outsmart-ai/
o3-Pro Released https://x.com/sama/status/1932532561080975797
The most expensive o3-Pro Hello https://x.com/Yuchenj_UW/status/1932544842405720540
Eleven Labs v3 https://x.com/elevenlabsio/status/1930689774278570003
o3 regular drops in price by 80% - cheaper than GPT-4o https://x.com/edwinarbus/status/1932534578469654552
Open weights model taking a 'little bit more time' https://x.com/sama/status/1932573231199707168
Meta Buys 49% of Scale AI + Alexandr Wang Comes In-House https://www.nytimes.com/2025/06/10/technology/meta-new-ai-lab-superintelligence.html
Apple Underwhelms at WWDC Re AI https://www.cnbc.com/2025/06/09/apple-wwdc-underwhelms-on-ai-software-biggest-facelift-in-decade-.html
BusinessWeek's Mark Gurman on WWDC https://x.com/markgurman/status/1932145561919991843
Joanna Stern Grills Apple https://youtu.be/NTLk53h7u_k?si=AvnxM9wefXl2Nyjn
Midjourney Sued by Disney & Comcast https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
1X Robotics' Redwood https://x.com/1x_tech/status/1932474830840082498 https://www.1x.tech/discover/redwood-ai
Redwood Mobility Video https://youtu.be/Dp6sqx9BGZs?si=UC09VxSx-PK77q--
Amazon Testing Humanoid Robots To Deliver Packages https://www.theinformation.com/articles/amazon-prepares-test-humanoid-robots-delivering-packages?rc=c3oojq&shared=736391f5cd5d0123
Autonomous Drone Beats Pilots For the First Time https://x.com/AISafetyMemes/status/1932465150151270644
Random GPT-4o Image Gen Pic https://www.reddit.com/r/ChatGPT/comments/1l7nnnz/what_do_you_get/?share_id=yWRAFxq3IMm9qBYxf-ZqR&utm_content=4&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1 https://x.com/AIForHumansShow/status/1932441561843093513
Jon Finger's Shoes to Cars With Luma's Modify Video https://x.com/mrjonfinger/status/1932529584442069392
(0:00) Intro
(1:49) About the podcast sponsor: The American College of Governance Counsel
(2:36) Introduction by Professor Anat Admati, Stanford Graduate School of Business. Read the event coverage from Stanford's CASI.
(4:14) Start of Interview
(4:45) What inspired Karen to write this book and how she got started with journalism.
(8:00) OpenAI's Nonprofit Origin Story
(8:45) Sam Altman and Elon Musk's Collaboration
(10:39) The Shift to For-Profit
(12:12) On the original split between Musk and Altman over control of OpenAI
(14:36) The Concept of AI Empires
(18:04) About the concept of "benefit to humanity" and OpenAI's mission "to ensure that AGI benefits all of humanity"
(20:30) On Sam Altman's Ouster and OpenAI's Boardroom Drama (Nov 2023): "Doomers vs Boomers"
(26:05) Investor Dynamics Post-Ouster of Sam Altman
(28:21) Prominent Departures from OpenAI (i.e. Elon Musk, Dario Amodei, Ilya Sutskever, Mira Murati, etc.)
(30:55) The Geopolitics of AI: U.S. vs. China
(32:37) The "What about China" card used by US companies to ward off regulation.
(34:26) "Scaling at All Costs is not leading us in a good place"
(36:46) Karen's preference on ethical AI development: "I really want there to be more participatory AI development. And I think about the full supply chain of AI development when I say that."
(39:53) Her biggest hope and fear for the future: "the greatest threat of these AI empires is the erosion of democracy."
(43:34) The case of Chilean Community Activism and Empowerment
(47:20) Recreating human intelligence and the example of Joseph Weizenbaum, MIT (Computer Power and Human Reason, 1976)
(51:15) OpenAI's current AI research capabilities: "I think it's asymptotic because they have started tapping out of their scaling paradigm"
(53:26) The state (and importance) of open source development of AI: "We need things to be more open"
(55:08) The Bill Gates demo on ChatGPT acing the AP Biology test.
(58:54) Funding academic AI research and the public policy question on the role of Government.
(1:01:11) Recommendations for Startups and Universities

Karen Hao is the author of Empire of AI (Penguin Press, May 2025) and an award-winning journalist covering the intersections of AI & society.

You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under a Attribution-Noncommercial-Share Alike 3.0 United States License
Jason Howell and Jeff Jarvis return for a deep dive into the week's AI news. We cover Apple's new research paper exposing the illusion of AI reasoning, industry leaders' superintelligence hype and hubris, Altman's "Gentle Singularity" vision, Ilya Sutskever's brain-as-computer analogy, Meta's massive superintelligence lab, LeCun and Pichai's call for new AGI ideas, Apple's on-device AI framework, NotebookLM's new sharing features, pairing NotebookLM with Perplexity, Hollywood's awkward embrace of AI tools, and the creative collision of AI and filmmaking. Subscribe to the YouTube channel! https://www.youtube.com/@aiinsideshow Enjoying the AI Inside podcast? Please rate us ⭐⭐⭐⭐⭐ in your podcatcher of choice! Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

CHAPTERS:
0:00:00 - Podcast begins
0:02:27 - Apple paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
0:05:50 - Sinofsky on the costs of anthropomorphizing LLMs
0:07:34 - Nate Jones: Let's Talk THAT Apple AI Paper—Here's the Takeaway Everyone is Ignoring
0:13:46 - Altman's latest manifesto might be worth mention in comparison
0:19:33 - Ilya Sutskever, a leader in AI and its responsible development, receives U of T honorary degree
0:25:52 - Meta Is Creating a New A.I. Lab to Pursue 'Superintelligence'
0:29:05 - Google CEO says AGI is impossible with today's tech
0:33:17 - WWDC: Apple opens its AI to developers but keeps its broader ambitions modest
0:39:57 - NotebookLM is adding a new way to share your own notebooks publicly.
0:42:01 - I paired NotebookLM with Perplexity for a week, and it feels like they're meant to work together
0:45:26 - The Googlers behind NotebookLM are launching their own AI audio startup. Here's a sneak peek.
0:50:48 - Could AI make a Scorsese movie? Demis Hassabis and Darren Aronofsky discuss
0:55:05 - Luca Guadagnino to Direct True-Life OpenAI Movie 'Artificial' for Amazon MGM
0:59:19 - Everyone Is Already Using AI (And Hiding It) "We can say, 'Do it in anime, make it PG-13.' Three hours later, I'll have the movie."

Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Hashtag Trending, Jim Love discusses several major tech and geopolitical developments. OpenAI co-founder Ilya Sutskever's controversial proposal for a doomsday bunker before achieving artificial general intelligence (AGI) highlights internal conflicts over AI safety. Apple announces a leap from iOS 18 to iOS 26, aligning software version names with the automotive industry model. The Trump administration targets China by halting software sales essential for chip design, escalating tech tensions between the two nations. Additionally, Instagram's battery drain issue on Android devices is explored, and Google co-founder Sergey Brin's surprising claim that threatening AI can improve performance is examined.

00:00 Introduction and Headlines
00:28 OpenAI's Doomsday Bunker Proposal
04:34 Apple's Leap to iOS 26
07:17 Google Confirms Instagram Battery Drain
08:34 US Blocks Chip Design Software Sales to China
11:55 Threatening AI for Better Performance
14:47 Conclusion and Sign-Off
# TOPIC
☢️ AGI BUNKER: What did Ilya Sutskever say?
# INDEX
# PRESENTED AND HOSTED BY
Beeping smoke detectors and haunted dolls. Ilya Sutskever, a co-founder and former chief scientist of OpenAI, reportedly said they would build a bunker before releasing artificial general intelligence. Getting to the airport two hours early. Snitzer would try Galaxy Gas. Man accuses his wife of cheating at his 40th birthday party. Charlie would out someone that wronged him. $1200 cap and gown. The Polk County Sheriff's Office conducted an operation called "fool around and find out" that arrested 250 people, including former Browns player Adarius Taylor. Rover believes prostitution should be legal. A couple married for 31 years schedule their sex life. Charlie and Rover would love to schedule their sex lives. Smelly vaginas. Controversy in the Paralympics after a gold medalist was banned for life. Rover will never get an Airbnb again. See omnystudio.com/listener for privacy information.
Ronen Bar is a social entrepreneur and the co-founder and former CEO of Sentient, a meta non-profit for animals focused on community building and developing tools to support animal rights advocates. He is currently focused on advancing a new community-building initiative, The Moral Alignment Center, to ensure AI development benefits all sentient beings, including animals, humans, and future digital minds. For over a decade, his work has been at the intersection of technological innovation and animal advocacy, particularly in the alternative protein and investigative reporting sectors.

In Sentientist Conversations we talk about the most important questions: "what's real?", "who matters?" and "how can we make a better world?" Sentientism answers those questions with "evidence, reason & compassion for all sentient beings." The video of our conversation is here on YouTube.

00:00 Clips
01:11 Welcome
02:40 Ronen's Intro
- Social entrepreneur "using storytelling to promote reason and compassion for all sentient beings"
- Investigative journalism (care homes, then slaughterhouses in Israel and abroad)
- Leading the Sentient NGO, including using on-animal investigative cameras to "enhance animal storytelling... of particular named animals", not just the story of a slaughterhouse
- Alternative protein non-profits
- The Moral Alignment Center, "making sure that #ai is a positive force for all sentient beings"
- "What is good?... I don't think those questions are asked enough in the AI space"
- Starting new fields and communities
- How the advent of powerful AI forces us to revisit these fundamental "what's real?", "what matters?" and "who matters?" questions
- The ethical question is neglected in AI, but "it is in the minds of people... Ilya Sutskever... Ray Kurzweil... Sam Altman..."
07:23 What's Real?
- Growing up in Israel, "a very religious country", but in a secular family
- Wider relatives #orthodox and ultra-orthodox
- Asking himself "what do I know for sure... 100%...? the obvious answer is subjective experiences at this moment"
- Being less sure of everything else, but "my subjective experience is certainly true"
- #illusionism? "It's funny to think of it [subjective experience] as an illusion because subjective experience is the only information you will ever receive in your life"
- "Science is just the discipline... of trying through rationality to predict the subjective experiences of humans" (even the results of scientific measurements come through our experiences)
- JW: "So if it is an illusion it's still all we've got!"
- "Starting from your own subjective experiences... it actually brings you more compassion..."
29:40 What Matters?
36:00 Who Matters?
41:03 A Better World?
01:33:20 Follow Ronen:
- Ronen on the EA forum
- Ronen on LinkedIn
- Moral Alignment Center on LinkedIn
- Alien Journalist Dictionary
- Email: ronenbar07@gmail.com
And more... full show notes at Sentientism.info.

Sentientism is "Evidence, reason & compassion for all sentient beings." More at Sentientism.info. Join our "I'm a Sentientist" wall via this simple form. Everyone, Sentientist or not, is welcome in our groups. The biggest so far is here on FaceBook. Come join us there!
AI Arms Race from ChatGPT to DeepSeek - AZ TRT S06 EP08 (269) 4-20-2025

What We Learned This Week:
The AI arms race is real, with the major tech companies involved.
ChatGPT by OpenAI is considered the top chat AI program.
Google has Gemini (was Bard), Microsoft has Copilot, and Amazon backs Anthropic's Claude and pairs it with Alexa.
DeepSeek is a startup from China that has disrupted the AI landscape with a more cost-effective AI model.
Costs and investment dollars going into AI are being rethought, as DeepSeek spent millions of dollars vs. Silicon Valley's billions.

Notes:

Seg 1: Major Tech Giants' AI Programs

Gemini (was Bard): Developed by Google, Gemini is known for its multimodal capabilities and integration with Google Search. It can analyze images, understand verbal prompts, and engage in verbal conversations.

ChatGPT: Developed by OpenAI, ChatGPT is known for its versatility and platform-agnostic solution for text generation and learning. It can write code in almost any language, and can also be used to provide research assistance, generate writing prompts, and answer questions.

Microsoft Copilot: Developed by Microsoft, Copilot is known for its integration with applications like Word, Excel, and Power BI. It's particularly well-suited for document automation.

Amazon Alexa w/ Claude: Claude is a powerful AI model from Anthropic, known for its strengths in natural language processing and conversational AI, as noted in the video and other sources.

Industry 3.0 (1969-2010): The Third Industrial Revolution, or the Digital Revolution, was marked by the automation of production through the use of computers, information technology, and the internet. This era saw the widespread adoption of digital technologies, including programmable logic controllers and robots.

Industry 4.0 (2010-present): The Fourth Industrial Revolution is characterized by the integration of digital technologies, including the Internet of Things (IoT), artificial intelligence (AI), big data, and cyber-physical systems, into manufacturing and industrial processes. This era is focused on creating "smart factories" and "smart products" that can communicate and interact with each other, leading to increased efficiency, customization, and sustainability.

Top AI programs include a range of software, platforms, and resources for learning and working with artificial intelligence. Some of the most popular AI software tools include Viso Suite, ChatGPT, Jupyter Notebooks, and Google Cloud AI Platform, while popular AI platforms include TensorFlow and PyTorch. Educational resources like Coursera's AI Professional Certificate and Fast.ai's practical deep learning course also offer valuable learning opportunities.

ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022. It is based on large language models (LLMs) such as GPT-4o. ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence (AI). Some observers have raised concern about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.

OpenAI was founded in December 2015 by Sam Altman, Greg Brockman, Elon Musk, Ilya Sutskever, Wojciech Zaremba, and John Schulman.
The founding team combined their diverse expertise in technology entrepreneurship, machine learning, and software engineering to create an organization focused on advancing artificial intelligence in a way that benefits humanity. Elon Musk is no longer involved in OpenAI, and Sam Altman is the current CEO of the organization.

ChatGPT has had a profound influence on the evolution of AI, paving the way for advancements in natural language understanding and generation. It has demonstrated the effectiveness of transformer-based models for language tasks, which has encouraged other AI researchers to adopt and refine this architecture. The model's success has also stimulated interest in LLMs, leading to a wave of research and development in this area.

Seg 2: DeepSeek is a private Chinese company founded in July 2023 by Liang Wenfeng, a graduate of Zhejiang University, one of China's top universities, who funded the startup via his hedge fund, according to the MIT Technology Review. Liang has about $8 billion in assets, Ives wrote in a Jan. 27 research note.

Chinese startup DeepSeek's launch of its latest AI models, which it says are on a par with or better than industry-leading models in the United States at a fraction of the cost, is threatening to upset the technology world order. The company attracted attention in global AI circles after writing in a paper last month that the training of DeepSeek-V3 required less than $6 million worth of computing power from Nvidia H800 chips. DeepSeek's AI Assistant, powered by DeepSeek-V3, has overtaken rival ChatGPT to become the top-rated free application available on Apple's App Store in the United States. This has raised doubts about the reasoning behind some U.S. tech companies' decision to pledge billions of dollars in AI investment, and shares of several big tech players, including Nvidia, have been hit.

NVIDIA Blackwell Ultra Enables AI Reasoning
The NVIDIA GB300 NVL72 connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based NVIDIA Grace™ CPUs in a rack-scale design, acting as a single massive GPU built for test-time scaling. With the NVIDIA GB300 NVL72, AI models can access the platform's increased compute capacity to explore different solutions to problems and break down complex requests into multiple steps, resulting in higher-quality responses (a minimal illustration of the test-time scaling idea follows below). GB300 NVL72 is also expected to be available on NVIDIA DGX™ Cloud, an end-to-end, fully managed AI platform on leading clouds that optimizes performance with software, services and AI expertise for evolving workloads. NVIDIA DGX SuperPOD™ with DGX GB300 systems uses the GB300 NVL72 rack design to provide customers with a turnkey AI factory. The NVIDIA HGX B300 NVL16 features 11x faster inference on large language models, 7x more compute, and 4x larger memory compared with the Hopper generation to deliver breakthrough performance for the most complex workloads, such as AI reasoning.

AZ TRT Shows – related to AI
Topic Link: https://brt-show.libsyn.com/size/5/?search=ai+
Biotech Shows: https://brt-show.libsyn.com/category/Biotech-Life+Sciences-Science
AZ Tech Council Shows: https://brt-show.libsyn.com/size/5/?search=az+tech+council
*Includes Best of AZ Tech Council show from 2/12/2023
Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science
Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech
'Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT

Thanks for Listening. Please Subscribe to the AZ TRT Podcast.
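The "test-time scaling" pitch above boils down to spending extra inference compute per query to get better answers. Here is a minimal best-of-N sampling sketch of that idea in Python; it is an illustration of the general technique, not NVIDIA's implementation, and the `generate` and `score` functions are hypothetical placeholders for a model call and a verifier.

```python
# Minimal sketch of test-time scaling: sample several candidate answers at
# inference time and keep the best one according to a scorer. `generate` and
# `score` are hypothetical placeholders, not any vendor's real API.
import random

def generate(prompt: str) -> str:
    # Placeholder: a real system would sample an LLM completion here.
    return f"candidate-{random.randint(0, 9999)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    # Placeholder: a real system would use a verifier or reward model.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """More compute (larger n) buys more candidates, hence better odds."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))

print(best_of_n("Break this request into steps and solve it."))
```

Bigger racks raise the affordable n, and the length of each step-by-step attempt, which is the premise behind inference-focused hardware like the GB300 NVL72.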
AZ Tech Roundtable 2.0 with Matt Battaglia The show where Entrepreneurs, Top Executives, Founders, and Investors come to share insights about the future of business. AZ TRT 2.0 looks at the new trends in business, & how classic industries are evolving. Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more… AZ TRT Podcast Home Page: http://aztrtshow.com/ ‘Best Of' AZ TRT Podcast: Click Here Podcast on Google: Click Here Podcast on Spotify: Click Here More Info: https://www.economicknight.com/azpodcast/ KFNX Info: https://1100kfnx.com/weekend-featured-shows/ Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.
Our 207th episode with a summary and discussion of last week's big AI news! Recorded on 04/14/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/. Join our Discord here! https://discord.gg/nTyezGSKwP

In this episode:
OpenAI introduces GPT-4.1 with optimized coding and instruction-following capabilities, featuring variants like GPT-4.1 Mini and Nano, and a million-token context window.
Concerns arise as OpenAI reduces resources for safety testing, sparking internal and external criticism.
xAI's newly launched API for Grok 3 showcases significant capabilities comparable to other leading models.
Meta faces allegations of aiding China in AI development for business advantages, with potential compliance issues and public scrutiny looming.

Timestamps + Links:

Tools & Apps
(00:03:13) OpenAI's new GPT-4.1 AI models focus on coding
(00:08:12) ChatGPT will now remember your old conversations
(00:11:16) Google's newest Gemini AI model focuses on efficiency
(00:14:27) Elon Musk's AI company, xAI, launches an API for Grok 3
(00:18:35) Canva is now in the coding and spreadsheet business
(00:20:31) Meta's vanilla Maverick AI model ranks below rivals on a popular chat benchmark

Applications & Business
(00:25:46) Ironwood: The first Google TPU for the age of inference
(00:34:15) Anthropic rolls out a $200-per-month Claude subscription
(00:37:17) OpenAI co-founder Ilya Sutskever's Safe Superintelligence reportedly valued at $32B
(00:40:20) Mira Murati's AI startup gains prominent ex-OpenAI advisers
(00:42:52) Hugging Face buys a humanoid robotics startup
(00:44:58) Stargate developer Crusoe could spend $3.5 billion on a Texas data center. Most of it will be tax-free.

Projects & Open Source
(00:48:14) OpenAI Open Sources BrowseComp: A New Benchmark for Measuring the Ability for AI Agents to Browse the Web

Research & Advancements
(00:56:09) Sample, Don't Search: Rethinking Test-Time Alignment for Language Models
(01:03:32) Concise Reasoning via Reinforcement Learning
(01:09:37) Going beyond open data – increasing transparency and trust in language models with OLMoTrace
(01:15:34) Independent evaluations of Grok-3 and Grok-3 mini on our suite of benchmarks

Policy & Safety
(01:17:58) OpenAI countersues Elon Musk, calls for enjoinment from 'further unlawful and unfair action'
(01:24:33) OpenAI slashes AI model safety testing time
(01:27:55) Ex-OpenAI staffers file amicus brief opposing the company's for-profit transition
(01:32:25) Access to future AI models in OpenAI's API may require a verified ID
(01:34:53) Meta whistleblower claims tech giant built $18 billion business by aiding China in AI race and undermining U.S. national security
Mira Murati and Ilya Sutskever, both ex-OpenAI, are the protagonists of two of the largest funding rounds the startup world has ever seen. Meanwhile, the FTC's trial against Meta has begun, probably one of the most anticipated and potentially disruptive in tech in the last twenty years. In the Big Story, together with Fabio Bocchiola, Country Manager at Repower Italia, we talk about renewable energy and renewable plants, seen as the only real alternative for ending dependence on gas, not just as an opportunity to produce "green energy". This podcast and our other content are free thanks in part to those who support us with Will Makers. Support us and access exclusive content at willmedia.it/abbonati. Learn more about your ad choices. Visit megaphone.fm/adchoices
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
On April 14th, 2025, the AI landscape saw significant activity, including substantial new funding for Ilya Sutskever's safe AI venture, Safe Superintelligence Inc. (SSI), highlighting the ongoing focus on AI safety. AI also demonstrated practical advancements, outperforming experts in tuberculosis diagnosis using ultrasound technology. Meanwhile, concerns arose regarding OpenAI's shift towards a for-profit model, voiced by former employees. Further developments included Nvidia's ambitious plan to manufacture AI supercomputers in the US and Google's creation of DolphinGemma to decode dolphin communication. Additionally, a high school student used AI to identify a vast number of unknown space objects, illustrating AI's expanding applications.
Microsoft Shifts Away from OpenAI with A New AI Strategy. Safe Superintelligence Inc. (SSI), the startup founded by former OpenAI chief scientist Ilya Sutskever, is reportedly raising over $2 billion in a new funding round that values the company at a staggering $30 billion. Artie Intel and Micheline Learning report on Artificial Intelligence for The AI Report. This message brought to you by Amazon. Do More at Amazon.com Chinese AI companies like DeepSeek are kicking America's Ass. The US Army's TRADOC is using an AI tool, CamoGPT, to identify and remove DEI references from training materials per an executive order by President Trump. CamoGPT, developed by the Army's AI Integration Center, scans documents for specific keywords and has about 4,000 users. The initiative is part of a wider government effort to eliminate DEI content, leveraging AI for increased efficiency in aligning with national security objectives. The AI Report
Welcome to episode 294 of The Cloud Pod – where the forecast is always cloudy! Ilya Boy, do we have a news-packed week for you! Sutskever raised money at a $30 billion valuation without a product, Mira Murati launched her own AI lab, and Claude 3.7 now thinks before it speaks. Meanwhile, Microsoft casually invented new matter for quantum computing, Google built an AI scientist, and AWS killed Chime (RIP). At this rate, AI is either going to save the world or speedrun becoming Ultron. Let's all find out together – today on The Cloud Pod!

Titles we almost went with this week:
Ding – Chime is Dead
Does your container really need 192 cores
Quantum is the new AI
AI is now IN the robots

A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info.

AI Is Going Great – Or How ML Makes All Its Money

02:41 Ilya Sutskever's Startup in Talks to Raise Financing at $30 Billion Valuation
It's been a minute since we talked about former OpenAI executives and what they're up to. Let's start with Ilya Sutskever and Mira Murati and their post-OpenAI careers. The Information reports that Ilya Sutskever's startup Safe Superintelligence is in talks to raise $1 billion in a round that would value the startup at $30 billion. The company has yet to release a product, but based on the name we can guess what they're working on…

03:22 Ryan – "It's so nuts to me that they can raise that much without – really just an idea. Doesn't have to have any proof or POC…"

07:07 Murati Joins Crowded AI Startup Sector
Mira Murati confirmed one of the worst-kept secrets in AI by revealing her lab, Thinking Machines Lab. Murati has lured roughly two-thirds of her team away from OpenAI. We'll be waiting to see how the funding goes for this one.

08:02 Claude 3.7 Sonnet and Claude Code
Anthropic is releasing its latest model, Claude 3.7 Sonnet, its most intelligent model to date and the first hybrid reasoning model on the market. Claude 3.7 Sonnet can produce near-instant responses or extended, step-by-step thinking that is made visible to the user. API users also have fine-grained control over how long the model can think (a sketch follows below).
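For the Claude 3.7 item above, Anthropic's Messages API exposes that control through a `thinking` parameter with a token budget. A minimal sketch, assuming the `anthropic` Python SDK and treating the exact model ID and budget below as illustrative (check Anthropic's docs for current values):

```python
# Hedged sketch: ask Claude 3.7 Sonnet for visible, budgeted extended thinking.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # illustrative model ID
    max_tokens=2048,                     # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 1024},  # "how long it thinks"
    messages=[{"role": "user", "content": "Plan a zero-downtime DB migration."}],
)

# The reply interleaves visible "thinking" blocks with the final "text" blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "text":
        print(block.text)
```

Raising `budget_tokens` is the knob for longer step-by-step reasoning; omitting the `thinking` parameter gets the near-instant mode described above.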
Ilya Sutskever, former chief scientist at OpenAI, founded a new startup called Safe Superintelligence that's already worth $30 billion. But what are investors backing beyond Sutskever's reputation? WSJ reporter Berber Jin shares what we know so far about the secretive startup. Plus, AI coding tools can automate large portions of code development. How could this affect human coders? Charlotte Gartenberg hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
xAI, multibillionaire Elon Musk's artificial intelligence (AI) startup, has launched the latest version of its chatbot, Grok-3. Joe van Burik covers it in this Tech Update. With the chatbot, which Musk calls the "smartest AI on Earth", the company aims to take on the chatbots from OpenAI and China's DeepSeek. Grok-3 is immediately available to Premium+ subscribers on X, Musk's social media platform. The company is also launching a new subscription called SuperGrok for the chatbot's mobile app and the Grok.com website. xAI additionally plans to open-source earlier versions of its Grok models within a few months, making the technology freely accessible to third parties. Grok-3 has "more than ten times" the compute of its predecessor, Musk said at the presentation of the new chatbot. On math, science, and coding, Grok-3 beats rival AI models such as Alphabet's Google Gemini, DeepSeek's V3 model, and OpenAI's GPT-4o, Musk claims. Also in this Tech Update: OpenAI is urgently looking for ways to fend off hostile takeovers, such as Elon Musk's; and OpenAI's other co-founder, Ilya Sutskever, is approaching a $30 billion valuation with his own startup Safe Superintelligence (SSI). See omnystudio.com/listener for privacy information.
What should a cap table look like? Listener question: consulting or Founders Associate? Are robots from Apple and Meta coming soon? Perplexity now does deep research too. Pip plays with Elon's Grok 3, and Ilya Sutskever's AI startup is worth $30 billion. And of course we have to talk about JD Vance's speech. Discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about:
(00:00:00) Go vote
(00:05:20) Cap table
(00:17:20) First job: consulting vs. startup
(00:34:50) Humanoids
(00:40:20) ChatGPT, Perplexity, Grok
(00:54:40) Safe Superintelligence
(01:01:00) JD Vance
(01:20:45) China
(01:22:15) Elon & Genghis Khan
(01:31:00) Milei

Shownotes:
Apple and Meta are battling over humanoid robots (Bloomberg)
AI answer engine Perplexity unveils new deep research feature (decoder)
How Elon Musk is boosting far-right politics around the world (nbcnews)
Genghis Khan killed enough people to cool the planet (iflscience)
ChatGPT is anything but "typical" (Twitter)
Summer Hit 2025: Hostile Government Takeover (EDM Remix) (YouTube)
Live from an ESG-flavored 2025, it's an all-new Wacky Wednesday edition of Business Pants. Joined by Analyst-Hole Matt Moscardi! On today's Costco lovefest called January 8th 2025: Headlines We Missed since the end of December and the new comic book superhero named Costco!

Our show today is being sponsored by Free Float Analytics, the only platform measuring board power, connections, and performance for FREE.

DAMION
Shit We Missed (in no particular order):

Tech Bros

Zuck
Dana White, UFC CEO and Trump ally, to join Meta's board of directors
Zuckerberg Announces New Measures to Increase Hate Speech on Facebook
Mark Zuckerberg's Meta is moving moderators out of California to combat concerns about bias and censorship
"Huge problems" with axing fact-checkers, Meta oversight board says
Co-chair Helle Thorning-Schmidt said she is "very concerned" about how parent company Meta's decision to ditch fact-checkers will affect minority groups: "We are seeing many instances where hate speech can lead to real-life harm, so we will be watching that space very carefully," she added.
Meta Drops Rules Protecting LGBTQ Community as Part of Content Moderation Overhaul
The changes included allowing users to share "allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality."
Meta replaces policy chief Nick Clegg with former Republican staffer Joel Kaplan ahead of Trump inauguration

Sam
Sam Altman Explodes at Board Members Who Fired Him
"And all those people that I feel like really fucked me and fucked the company were gone, and now I had to clean up their mess," adding that he was "fucking depressed and tired." "And it felt so unfair," the billionaire told Bloomberg. "It was just a crazy thing to have to go through and then have no time to recover, because the house was on fire."
The board's primary fiduciary duty was not to maintain shareholder value or profits, but rather to stay true to OpenAI's mission of creating safe artificial general intelligence (AGI) that benefits humanity.
Helen Toner: the director of strategy at Georgetown's Center for Security and Emerging Technology.
Tasha McCauley: an adjunct senior management scientist at think tank RAND Corporation. McCauley was also on the advisory board of the Centre for Effective Altruism. In 2017 she signed the Asilomar AI Principles on ethical AI development alongside Altman, OpenAI co-founder Ilya Sutskever, and former board member Elon Musk.
OpenAI CEO Sam Altman denies sexual abuse allegations made by his sister in lawsuit

Musk
Maga v Musk: Trump camp divided in bitter fight over immigration policy
Elon Musk Endorses Nazi-Linked German Party, Even Though It Opposed Tesla's Gigafactory

Tech Bro Wealth
12 US billionaires gained almost $1 trillion in wealth in 2024 as the stock market delivered another year of massive returns
NYT Report Says Jensen Huang, The CEO Of Nvidia And The 10th-Richest Person In The U.S., Trying To Allegedly Avoid $8 Billion In Taxes
Mark Zuckerberg says he doesn't have a Hawaiian doomsday bunker, just a 'little shelter.'
It's bigger than most houses.You could live next door to Jeff Bezos on 'Billionaire Bunker' island for $200 millionMusk urges Bezos to throw an ‘epic wedding' after Amazon founder blasts report of $600 million nuptials as ‘completely false'Elon Musk takes aim at MacKenzie Scott again for giving billions to liberal causes, calling the gifts 'concerning'How Jensen Huang and 3 Nvidia Board Members Became BillionairesMark Zuckerberg sported a $900,000 piece of wrist candy as he announced the end of fact-checking on MetaDEI/ESG Flip-FloppingWhen an anti-DEI activist took a swing at Costco, the board hit backA Costco shareholder proposal brought by conservative activist The National Center for Public Policy Research asked the company to probe its diversity, equity and inclusion policies, with an eye toward eliminating them.The thrust of the proposal is that certain DEI initiatives could open Costco up to financial risks over discrimination lawsuits from employees who are “white, Asian, male or straight.”The company's board of directors unanimously urged shareholders to reject the proposal and made the case that Costco's success depends on establishing a racially diverse, inclusive workplace: “We believe that our diversity, equity and inclusion efforts are legally appropriate, and nothing in the (Center for Public Policy Research) proposal demonstrates otherwise,” the board's statement said.The statement went on to rebuke the Center for Public Policy Research, saying that they and others were the ones responsible for inflicting financial and legal burdens on companies. “The proponent's broader agenda is not reducing the risk for the Company but abolition of diversity programs,” the board said.Costco board member defends DEI practices, rebukes companies scrapping policiesJeff Raikes, co-founder of the Raikes Foundation and former CEO of the Bill & Melinda Gates Foundation, who has served on Costco's board of directors since 2008: "Attacks on DEI aren't just bad for business—they hurt our economy. A diverse workforce drives innovation, expands markets, and fuels growth. Let's focus on building a future where all talent thrives." He concluded his post on X with the hashtag, "InclusiveEconomy." While businesses began to announce their departures from DEI policies last year, Raikes urged companies to expand such practices at work, insisting that scaling down DEI in businesses would harm the economy.Robbie Starbuck: “I fully endorse cancelling memberships at this point.”McDonald's rolls back DEI programs, ending push for greater diversityFour years after launching a push for more diversity in its ranks,McDonald's said it will retire specific goals for achieving diversity at senior leadership levels. It also intends to end a program that encourages its suppliers to develop diversity training and to increase the number of minority group members represented within their own leadership ranks.Managers 'touch up' staff: McDonald's faces fresh abuse claimsFast-food chain McDonald's has been hit by fresh allegations of sexual and homophobic abuse as staff members allege they have been 'touched up' by managers and offered extra shifts for sex.The chain first faced bombshell claims of widespread sexual abuse and harassment at its stores in July 2023 and has since been reported more than 300 times for harassment to the UK's equality watchdog.Allegations have included racist abuse, sexual assault and harassment and bullying. 
BlackRock Cuts Back on Board Diversity Push in Proxy-Vote GuidelinesThe policy updates remove both (a) numerical diversity targets (i.e., boards should aspire to 30% diversity of membership and have at least 2 women directors and 1 director from an underrepresented group) and (b) the related disclosure-based voting policy (i.e., BlackRock previously would consider taking voting action if a company did not adequately explain its approach to board diversity) – but provides that BlackRock may consider taking voting action if an S&P 500 board is not sufficiently diverse (BlackRock includes a footnote in the policy update suggesting that 30% diversity may still be the expectation).BlackRock's investment stewardship team tweaked the language used to describe how it approaches votes for other companies' boards. It didn't explicitly recommend that boards should aspire to at least 30% diversity of their members, after having done so in previous years.The report noted, however, that all but 2% of the boards of companies in the S&P 500 have diverse representation of at least 30%—and that if companies were out of step with those norms, BlackRock may cast opposing votes on a case-by-case basis. JPMorgan Leaves Net Zero Banking Group, Completing Departure of Major U.S. Banks Stakeholder Anger (or Anger at Stakeholders)Poll finds many Americans pin partial blame on insurance companies in UHC CEO killingA recent survey from the University of Chicago, found that, while 8 out of 10 U.S. adults believe the person who killed Brian Thompson bears the responsibility for the murder, 7 in 10 shared the belief that healthcare companies are also to blame. Luigi Mangione mention on SNL met with applause, critics slam 'woke' audience: 'Wooing for justice?'New York to charge fossil fuel companies for damage from climate changeThe new law requires companies responsible for substantial greenhouse gas emissions to pay into a state fund for infrastructure projects meant to repair or avoid future damage from climate change.Albania bans TikTok for a year after fatal stabbing of teenager last monthTeens in Vietnam will now be limited to one hour of gaming per sessionStarbucks baristas set to strike as new CEO makes $100 millionWashington Post Cartoonist Quits After Jeff Bezos Cartoon Is KilledNorway on track to be the first to ‘erase petrol and diesel engine cars'Fully electric vehicles accounted for 88.9% of new cars sold in 2024Exxon Sues California Official, Claiming He Defamed the CompanyExxon Mobil sued California's attorney general, the Sierra Club and other environmental groups on Monday, alleging that they conspired to defame the oil giant and kneecap its business prospects amid a debate over whether plastics can be recycled effectively.DystopiaMan Trying to Catch Flight Alarmed as His Driverless Waymo Gets Stuck Driving in Loop Around Parking LotAsked to Write a Screenplay, ChatGPT Started Procrastinating and Making ExcusesKlarna's CEO says AI is capable of doing his job and it makes him feel 'gloomy'Governance newsShari Redstone is saying goodbye to Paramount GlobalCharles Dolan, TV pioneer who founded HBO and Cablevision, dies at 98Richard Parsons, former Time Warner CEO, dies at age 76 Dye & Durham board resigns, activist nominees take control, interim CEO named The Fortune 500 has two new female CEOs—finally pushing that milestone above 11%And we end with a few classics:Boeing ends a troubled year with a jet-crash disaster in South KoreaMan who exploded Tesla Cybertruck outside Trump hotel used ChatGPT to plan 
the attackNorovirus rates have skyrocketed by 340% this season. Here's where the ‘winter vomiting disease' is spreading and whyMATT1CostcoNational Center for Public Policy Research filed the proxy with CostcoTheir arguments include…US Supreme court decision at HarvardA $25m judgment in PA for white regional manager at Starbucks who was fired after two black patrons were arrested for being blackThis gem: “With 310,000 employees, Costco likely has at least 200,000 employees who are potentially victims of this type of illegal discrimination because they are white, Asian, male or straight.”This, perhaps, is the greatest ironic argument for “meritocracy” ever made in historyThey point out that the MAJORITY OF THE STAFF is white, Asian, male, or straight… but they don't even use Costco's data, they source census data and just guessThe real numbers:Non management is 44.2% white, management is 58% white - a 14% increase in meritocracyExecutives are 80.6% white - a whopping 36.4% more meritHispanics are 33.1% of non management, 23.3% of management - 9.8% less merit!Executives are 5.8% Hispanic, 26.3% less meritAsians are 8.5% and 7.1%, so 1.4% less merit7.9% executive - so even merit?US Exec management is 72.3% maleSo 80.6% of executives are white, and 72.3% are male - and the argument NCPPR is making is that BECAUSE there are a lot of white males, there is a lot of RISK that THE WHITE MALES WILL SUE YOU if they think they're discriminated againstThink of what they're saying - because you have so many non diverse people, you can't have diversity programs for risk of lawsuitThe response dropped the pretense that the proxy was anything except racismThe proponent professes concern about legal and financial risks to the Company and its shareholders associated with the diversity initiatives. The proponent's broader agenda is not reducing risk for the Company but abolition of diversity initiatives. A 2023 federal district court decision, in a case brought by the proponent, noted that the proponent had "published a document called 'Balancing the Boardroom 2022,' which describes its shareholder activism as 'fighting back' against 'the evils of woke politicized capital and companies.' [The proponent went] on to describe 'CEOs and other corporate executives who are most woke and most hard-left political in their management of their corporations' as 'inimical to the Republic and its blessings of liberty' and 'committed to critical race theory and the socialist foundations of woke' or 'shameless monsters who are willing to sacrifice our future for their comforts.'" National Center for Public Policy Research v. Schultz, E.D. WA. (Sept. 11, 2023). And the proponent's efforts to demonstrate retrenchment on the part of companies are misleading, at best. For example, the assertion that "Microsoft laid off an entirea[sic] DEI team" is simply wrong. It was later reported that Microsoft stated that the two positions eliminated were redundant roles on its events team and that Microsoft's diversity and inclusion commitments remain unchanged, according to Jeff Jones, a Microsoft spokesperson: “Our focus on diversity and inclusion is unwavering and we are holding firm on our expectations, prioritizing accountability, and continuing to focus on this work.” Colvin, Caroline. Amid DEI cuts, Microsoft works to distinguish itself from those responding to ‘woke' backlash. 
HR Dive, July 24, 2024.Reason Costco might be pushing back?Racism is basically unveiledOf all the companies targeted by a proposal or Robbie Starbuck, Costco has the lowest deviation in board member influence - as in, nearly the entire board has equal power, it's highly democratic - women, men, diverse cohorts are more or less equally powerful to anyone else in the roomNo connections to any board member on another DEI flipper companyMeanwhile, the anti DEI, anti immigrant movement has begun to eat itself before Trump even takes officeIn defense of more HB1 visas and foreign workers, Vivek Ramaswamy says we venerate jocks over valedictorians on Twitter, and Americans aren't as good employeesThe rebuttal was MAGA Trumpers saying Vivek is fake MAGAAlso this: “His entire argument is a terrible proposition,” he adds. “Children raised to be good little robots might grow up to build robots of their own someday, and become rich. Asians are the highest-earning racial group in America, but are they happier for it? Suicide is the leading cause of death for Asians aged 15-24 … and the second-leading cause of death for those aged 25-34.” Page points to a Psychology Today post that blames tiger parenting for causing anxiety and depression and then asks, “Do we really want this country to be even more stressed-out?”Costco proxy says Asians are discriminated againstTwitch gamers are streaming about “meritocracy”
Is pre-training a thing of the past? In Episode 34 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Vagner Santana and Volkmar Uhlig to debrief this week in AI. First, OpenAI cofounder Ilya Sutskever said that "peak data" has been reached; does this mean there is no longer a need for model pre-training? Next, IBM released Granite 3.1 with a slew of features; we cover them all. Then, there is a new way to steal AI models: how do we protect against model exfiltration? Finally, can NVIDIA Jetson for AI developers really increase hardware accessibility? Tune in for more!

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

00:01 — Intro
00:49 — Is pre-training over?
10:25 — Granite 3.1
22:23 — AI model stealing
33:38 — NVIDIA Jetson
Google's VEO 2 and Gemini updates are… freaking good?! Plus, OpenAI lets you call ChatGPT, o1 is in the API now & Nvidia's Jensen Huang is giving us a new robot brain. Plus, new Pika 2.0 AI video tools, Ilya Sutskever returns to tell us pre-training is over, YouTube and CAA come together on a deal for protecting celebrity likenesses, Google's Whisk tool is a fun AI toy and we meet an angry old man who calls us wanting help for his furnace, whom we then get to drink Monster Milk. IT'S A NEW SHOW, A PRESENT JUST FOR YOU

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// SHOW LINKS //
Call ChatGPT aka 1-800-ChatGPT: https://www.youtube.com/live/LWa6OHeNK3s?si=0vogE9s_qVOmp81-
VEO 2 is actually pretty insane: https://deepmind.google/technologies/veo/veo-2/
Knight on a Zebra: https://x.com/emollick/status/1868897308529787248
Steak-Off: https://x.com/blizaine/status/1868850653759783033
Tomato Cutting Vs Sora: https://x.com/joecarlsonshow/status/1868822801546985685
Google Whisk: https://x.com/Google/status/1868781358635442359
New Gemini 2.0 Flash Experimental Advanced: https://x.com/sundarpichai/status/1869066293426655459
o1 in API + fine tuning in Real Time Voice: https://x.com/OpenAIDevs/status/1869134054190448874
$2000 a month OAI Sub??: https://x.com/tsarnick/status/1868201597727342941
RUMORED TASKS/TO-DO BETA: https://x.com/testingcatalog/status/1869364027769377146
o1 Preview Vastly Better Than Doctors at Reasoning: https://x.com/deedydas/status/1869049071346102729
The Return of Ilya & The End of Pre-training: https://www.youtube.com/watch?v=1yvBqasHLZs
Pika Labs 2.0: https://x.com/pika_labs/status/1867651381840040304
YT+CAA Deal in Celebrity Licenses: https://www.hollywoodreporter.com/business/digital/youtube-caa-generative-ai-celebrity-likeness-deal-1236088491/
Nvidia New $250 Computer Jetson Nano Super: https://youtu.be/S9L2WGf1KrM?si=hc10pdLVNuMZtCcn
MJ Prompt: A person protesting something weird: https://www.reddit.com/r/midjourney/comments/1heki7l/prompt_a_person_protesting_something_weird/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Cap4D: https://x.com/taubnerfelix/status/1869076254051151995
Video Seal (Meta Watermarking): https://www.threads.net/@luokai/post/DDkBu5lvXG_?xmt=AQGzHOz9TCBGkd77lKX4VFZsl9IFjjn9Nc95J3oLmVRF7A
The AI Breakdown: Daily Artificial Intelligence News and Discussions
At a recent conference appearance, SSI founder (and former OpenAI leader) Ilya Sutskever claimed that we had reached peak data and that the era of pre-training as a scaling method had come to a close. NLW explores the implications. Plus, NotebookLM releases an enterprise edition. Brought to you by: Vanta - Simplify compliance - https://vanta.com/nlw The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Metanoia Lab | Leadership, innovation and digital transformation, by Andrea Iorio
In this episode of the fourth season of Metanoia Lab, sponsored by Oi Soluções, Andrea (andreaiorio.com) analyzes a statement by Ilya Sutskever, cofounder of OpenAI and today cofounder of Safe Superintelligence, about the reasons why Large Language Models have now hit a limit in their development and improvement, and why Small Language Models represent the near future of the Artificial Intelligence world.
While Santa's loading his sleigh, Silicon Valley's dropping AI breakthroughs by the hour. OpenAI's "12 Days of Shipmas" keeps the gifts coming with ChatGPT Canvas, Apple Intelligence integration, and game-changing video capabilities. Not to be outdone, Google jumps in with Gemini 2.0 and its impressive Deep Research tool. Join Paul Roetzer and Mike Kaput as they unwrap these developments, plus rapid-fire updates on Andreessen's AI censorship bombshell, an OpenAI employee's AGI claims, and the latest product launches and funding shaking up the industry. Access the show notes and show links here. This episode is brought to you by our AI Mastery Membership. This 12-month membership gives you access to all the education, insights, and answers you need to master AI for your company and career. To learn more about the membership, go to www.smarterx.ai/ai-mastery. As a special thank you to our podcast audience, you can use the code POD150 to save $150 on a membership.

Timestamps:
00:05:39 — OpenAI 12 Days of Shipmas: Days 4 - 8
00:18:54 — Gemini 2 Release + Deep Research
00:33:03 — Hands-On with o1
00:46:18 — Perplexity Growth
00:50:46 — Andreessen AI Tech Censorship Comments
00:56:22 — OpenAI AGI
01:00:38 — Amazon Agent Lab
01:03:38 — Pricing for AI Agents
01:07:45 — OpenAI Faces Opposition to For-Profit Status
01:11:13 — Ilya Sutskever at NeurIPS
01:14:20 — Mollick Essay on When to Use AI
01:16:15 — Product and Funding Updates

Visit our website. Receive our weekly newsletter. Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook. Looking for content and resources? Register for a free webinar. Come to our next Marketing AI Conference. Enroll in AI Academy for Marketers.
OpenAI's Project Mode, AI Industry's Future, and Workforce Shifts: Hashtag Trending

In today's episode of Hashtag Trending, host Jim Love delves into OpenAI's innovative Project Mode, the escalating AI rivalry with figures like Elon Musk and Mark Zuckerberg, and the anticipated mass resignation of Gen Z and Millennials in 2025. Key points include demonstrations of Project Mode's capabilities, industry-wide challenges and opportunities, and insights from AI thought leaders like Ilya Sutskever on the future of data and AI systems. Tune in for a comprehensive overview of the latest in AI and workplace trends.

00:00 Introduction and Overview
00:31 OpenAI's Project Mode: A Game Changer
01:03 Live Demonstrations and Practical Uses
02:25 OpenAI's Commitment to Delivery
03:25 Rivalries and Legal Battles
06:40 AI Industry Insights and Future Predictions
08:27 Workplace Trends and Future Resignations
10:40 Conclusion and Sign Off
Elon Musk Expands Legal Battle Against OpenAI and Microsoft

Episode Title: Elon Musk vs. OpenAI & Microsoft: Antitrust Battle and AI Power Struggles Unveiled

Episode Description: What started as a complaint over OpenAI's transformation from a nonprofit to a profit-driven powerhouse has escalated into a major antitrust legal battle. Musk is now alleging that Microsoft and OpenAI conspired to monopolize the generative AI market, sidelining competitors and potentially breaching federal antitrust laws. We dive into the history of OpenAI, the internal power struggles, and what this lawsuit could mean for the future of artificial intelligence.

Key Topics Discussed:
The Lawsuit's Expansion: We explore how Musk's original August complaint has evolved, now including new claims against Microsoft for allegedly colluding with OpenAI to dominate the AI market. We break down the legal arguments and what Musk is seeking from the court.
OpenAI's Controversial Transformation: Originally founded as a nonprofit, OpenAI shifted gears in 2019, attracting billions in investment from Microsoft. We discuss how this change in business model became a point of contention for Musk and set the stage for the current legal conflict.
Behind-the-Scenes Drama: Newly revealed emails between Musk, Sam Altman, Ilya Sutskever, and other OpenAI co-founders offer a rare glimpse into the early days of OpenAI. We dive into the disagreements over leadership, Musk's quest for control, and the internal debates about the company's mission.
Microsoft's Role and Investment: Microsoft's billion-dollar partnership with OpenAI is at the heart of Musk's complaint. We examine the timeline of this collaboration, the exclusive licensing agreements, and why Musk views this as an anticompetitive move.
Musk's Fear of an 'AGI Dictatorship': Emails from as early as 2016 show Musk's concerns about Google's DeepMind and its potential to dominate the AI space. We discuss Musk's fears of a single company controlling AGI (Artificial General Intelligence) and how these concerns influenced the founding of OpenAI.
Intel's Missed Opportunity: We touch on Intel's decision to pass on a $1 billion investment in OpenAI back in 2017, a move that now appears shortsighted given OpenAI's current valuation and market influence.
The Legal Stakes and Future Implications: What could this lawsuit mean for the future of AI development and industry partnerships? We break down the potential consequences for OpenAI, Microsoft, and the broader tech landscape.

Featured Quotes:
Marc Toberoff (Musk's attorney): "Microsoft's anticompetitive practices have escalated. Sunlight is the best disinfectant."
Elon Musk (internal email): "DeepMind is causing me extreme mental stress. If they win, it will be really bad news with their one mind to rule the world philosophy."

Why It Matters: This case isn't just about corporate rivalry; it's about the future control of artificial intelligence and the ethical concerns surrounding its development. As the AI race intensifies, Musk's lawsuit raises questions about monopolistic practices, transparency, and the potential consequences of unchecked power in the tech industry.

Tune In To Learn:
Why Musk believes Microsoft and OpenAI's partnership is illegal and anticompetitive.
How internal power struggles shaped the trajectory of OpenAI and influenced Musk's departure.
What the disclosed emails reveal about the early vision for OpenAI and the concerns about AGI dominance.
Resources Mentioned:
Musk's original lawsuit filing (August 2024)
OpenAI's response to the amended complaint
Email exchanges between OpenAI co-founders (2015-2018)
We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show! Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio.

After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."

The History of Dust

Dust's journey can be broken down into three phases:

* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed – LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and better observability that wasn't just `print` statements.

* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content. This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.

* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.

The Case for Being Horizontal

The big discussion for early stage companies today is whether or not to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value, becoming more of "Services as Software". Dust, on the other hand, is a platform for users to build their own experiences, which has had a few advantages:

* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.

* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.

* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.

The Vertical Challenge

However, this approach comes with trade-offs:

* Harder Go-to-Market: As Stan talked about: "We spike at penetration... but it makes our go-to-market much harder.
Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"

* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately – from structured Salesforce data to unstructured Notion pages. As you scale integrations, the cost of maintaining them also scales.

* Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions."

The Future of AI Platforms

Stan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.

This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.

Full YouTube Episode

Chapters
* 00:00:00 Introductions
* 00:04:33 Joining OpenAI from Paris
* 00:09:54 Research evolution and compute allocation at OpenAI
* 00:13:12 Working with Ilya Sutskever and OpenAI's vision
* 00:15:51 Leaving OpenAI to start Dust
* 00:18:15 Early focus on browser extension and WebGPT-like functionality
* 00:20:20 Dust as the infrastructure for agents
* 00:24:03 Challenges of building with early AI models
* 00:28:17 LLMs and Workflow Automation
* 00:35:28 Building dependency graphs of agents
* 00:37:34 Simulating API endpoints
* 00:40:41 State of AI models
* 00:43:19 Running evals
* 00:46:36 Challenges in building AI agents infra
* 00:49:21 Buy vs. build decisions for infrastructure components
* 00:51:02 Future of SaaS and AI's Impact on Software
* 00:53:07 The single employee $1B company race
* 00:56:32 Horizontal vs. vertical approaches to AI agents

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.

Stan [00:00:14]: Thank you very much for having me.

Swyx [00:00:16]: Visiting from Paris.

Stan [00:00:17]: Paris.

Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college in both École Polytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll spend a little bit of time on that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.

Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.

Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, from back in the day, like we were talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI.
I think the Stripe culture has come into OpenAI quite a bit.

Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.

Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.

Stan [00:01:34]: My journey started as anybody else's: you're fascinated with computer science and you want to make computers think, it's awesome, but it doesn't work. I mean, it was a long time ago, I was like maybe 16, so it was 25 years ago. Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose how old I am, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was half features for vision and an A-star algorithm. So it was fun. But it was the early days of deep learning. A few years after that, I think, there was the first project at Google, you know, that cat face or the human face trained from many images. I hesitated doing a PhD, more in systems, and eventually decided to go get a job. Went to Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started interesting myself in AI again, felt like it was the time; you had the Atari games, you had the self-driving craziness at the time. And I started exploring projects. It felt like the Atari games were incredible, but they were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things: self-driving cars, cybersecurity and AI, and math and AI. I'm listing them in decreasing order of impact on the world, I guess.

Swyx [00:03:01]: Discovering new math would be very foundational.

Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.

Swyx [00:03:07]: Sorry, you're doing this at Stripe, you're like thinking about your next move.

Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team wasn't there. And also I realized that if I wake up one day and, because of a bug I wrote, I killed a family, it would be a bad experience. And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We were trying to apply transformers to code fuzzing. So code fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. Didn't work at all because the transformers are so slow compared to evolutionary algorithms that it kind of didn't work. Then I got interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was kind of starting the reasoning team that was tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that.
The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.

Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.

Stan [00:04:38]: The search.

Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.

Stan [00:04:45]: I'm going to be ashamed to say that.

Swyx [00:04:47]: You were searching before. I was searching before.

Stan [00:04:49]: I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.

Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris and from like, obviously you had worked with Greg, but not

Stan [00:05:13]: anyone else. No. Yeah. So I had worked with Greg, but not Ilya, but I had started chatting with Ilya and Ilya was kind of excited because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher, didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was, he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, he's like, hey, Stan, it's awesome. You've got an offer. When are you coming to SF? I'm like, hey, it's awesome. I'm not coming to SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that. But that's basically the idea. And it took maybe a couple more times of chatting, and they eventually decided to try a contractor setup. And that's how I kind of started working at OpenAI, officially as a contractor, but in practice it really felt like being an employee.

Swyx [00:06:14]: What did you work on?

Stan [00:06:15]: So it was solely focused on math and AI. And in particular in the application, so the study of large language models' mathematical reasoning capabilities, and in particular in the context of formal mathematics. The motivation was simple: transformers are very creative, but yet they make mistakes. Formal math systems have the ability to verify a proof, but the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the kind of verification capabilities of the formal system. A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system that is so evolved that you can verify the program. If the type checks, it means that the program is correct.

Swyx [00:07:06]: Is the verification much faster than actually executing the program?

Stan [00:07:12]: Verification is instantaneous, basically. So the truth is that what you code in involves tactics that may involve computation to search for solutions. So it's not instantaneous. You do have to do the computation to expand the tactics into the actual proof.
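To make the "proof as program" point concrete, here is a minimal Lean 4 sketch (the theorem name is illustrative, not from the episode; `Nat.add_comm` is a core-library lemma). The proposition is a type, the proof term is a program, and if the kernel accepts the term, the proof is verified:

```lean
-- A proposition is a type; a proof is a program inhabiting that type.
-- If this file type-checks, the proof is verified; no execution is needed.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- reusing the core-library lemma as the proof term
```

Finding the proof term is where search, and the LLM's creativity, comes in; checking it is the cheap, mechanical part Stan describes.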
The verification of the proof at the very low level is instantaneous.

Swyx [00:07:32]: How quickly do you run into, like, you know, halting problem, P vs NP type things, like impossibilities where you're just like that?

Stan [00:07:39]: I mean, you don't run into it at the time. It was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the MATH benchmark that everybody knows today. The Dan Hendrycks one. The Dan Hendrycks one, yeah. And I think it was the low-end part of the MATH benchmark at the time, because that benchmark includes AMC problems, AMC 8, AMC 10, 12. So these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, like crazy hard.

Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally the grade of like high school, grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on this again. There's a bit of work with like Lean, and then with, you know, more recently with DeepMind doing like scoring like silver on the IMO. Any commentary on like how math has evolved from your early work to today?

Stan [00:08:34]: I mean, that result is mind blowing. I mean, from my perspective, I spent three years on that. At the same time, Guillaume Lample in Paris, we were both in Paris, actually. He was at FAIR, was working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there. But the idea of getting a medal at an IMO was like just remote. So this is an impressive result. And I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago where he goes a little bit into more details. It feels like there's nothing magical there. It's really applying reinforcement learning and scaling up the amount of data that can be generated through autoformalization. So we can dig into what autoformalization means if you want.

Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI. So you joined, and you're like, I'm going to work on math and do all of these things. I saw on one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2, and then getting closer to DaVinci 003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.

Stan [00:09:54]: I can give you my perspective of it. I think at OpenAI, there's always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. So it was pre-Anthropic split. Most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not core research teams that were trying to explore maybe more specific problems or maybe the algorithm part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them. But in that space, there's a managing tool that is great, which is compute allocation. Basically by managing the compute allocation, you can message the team of where you think the priority should go. And so it was really a question of, you were free as a researcher to work on whatever you wanted.
But if it was not aligned with the OpenAI mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI. And so I was lucky to generally get the compute I needed to make good progress.

Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?

Stan [00:11:12]: It's an imperfect process because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company. So it's much easier than going into something much riskier, I guess. You have to show incremental progress, I guess. It's like you ask for a certain amount of compute and you deliver a few weeks after and you demonstrate that you have progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get, or much more interesting, than a positive result. And then it generally goes into, as in any organization, you would have people finding your project or any other project cool and fancy. And so you would have that kind of phase of growing compute allocation for it all the way to a point. And then maybe you reach an apex and then maybe you go back mostly to zero and restart the process because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. Exactly. It's a reinforcement learning approach.

Swyx [00:12:14]: Classic PhD student search process.

Alessio [00:12:17]: And you were reporting to Ilya, like the results you were kind of bringing back to him or like what's the structure? It's almost like when you're doing such cutting edge research, you need to report to somebody who is actually really smart to understand that the direction is right.

Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me into OpenAI, I was lucky, mostly during the first years, to have kind of a direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? That was his job, and I think he really enjoyed it and he did it super well: going through the teams and saying, this is where we should be going and trying to, you know, flock the different teams together towards an objective.

Swyx [00:13:12]: I would say like the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally, what does the public not know about how he works?

Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...

Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.

Stan [00:13:42]: Vision is like, yeah, it's the belief in compression and scaling compute. I remember when I started working on the Reasoning team, the excitement was really about scaling the compute around Reasoning and that was really the belief we wanted to ingrain in the team.
And that's what has been useful to the team; the DeepMind results show that it was the right approach, and the success of GPT-4 and stuff shows that it was the right approach.

Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?

Stan [00:14:12]: I think it was before that, because those ones came with GPT-3, basically at the time of GPT-3 being released or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything. And that was just a question of scaling.

Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.

Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs and rightfully so. One thing about Sam Altman, he really impressed me because when I joined, he had joined not that long ago and it felt like he was kind of a very high level CEO. And I was mind blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where, by year two, when I was having lunch with him at OpenAI, he would just know quite deeply what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question about, you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what's the goal and what's being done and what are the recent results and all of that. And we could have kind of a very productive discussion. And that really impressed me, given the size at the time of OpenAI, which was not negligible.

Swyx [00:15:44]: Yeah. I mean, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?

Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and being in the know of what's happening on the ground is something that I feel is really enlightening. That's not a place in which I ever was as a founder, because first company, we went all the way to 10 people. Current company, there's 25 of us. So the high level, the sky and the ground are pretty much at the same place. No, you're being too humble.

Swyx [00:16:21]: I mean, Stripe was also like a huge rocket ship.

Stan [00:16:23]: Stripe, I wasn't a founder. So I was, like at OpenAI, I was really happy being on the ground, pushing the machine, making it work. Yeah.

Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time, you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And then like, does that have any similarities now? Like, are we going to see a new Anthropic emerge from these folks that just left?

Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising because they had been training GPT-3, it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement about the commercialization of that technology.
I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement? I don't know.

Swyx [00:17:25]: Was it safety?

Stan [00:17:26]: Was it commercialization?

Swyx [00:17:27]: Or did they just want to start a company?

Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised by is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org and the mission was so clear that some divergence in some teams, some people leave, the mission is still there. We have the compute. We have a site. So it just keeps going.

Swyx [00:17:50]: Very deep bench. Like just a lot of talent. Yeah.

Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like, and why did you decide to start with a more developer-focused, kind of like AI engineer tool, rather than going back into some more research and something else?

Stan [00:18:15]: Yeah. First, I'm not a trained researcher. So going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a trained, like formally trained researcher, and having a research career wasn't necessarily an ambition of mine. And I felt the hardness of it. I enjoyed it a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was like, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for like trying to go there. So that's kind of the core motivation at the beginning, personally. And the motivation for starting a company was pretty simple. I had seen GPT-4 internally at the time, it was September 2022. So it was pre-ChatGPT, but GPT-4 was ready; I mean, it'd been ready for a few months internally. I was like, okay, that's obvious, the capabilities are there to create an insane amount of value to the world. And yet the deployment is not there yet. The revenue of OpenAI at the time was ridiculously small compared to what it is today. So the thesis was, there's probably a lot to be done at the product level to unlock the usage.

Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of like the WebGPT-like thing, like using the models to traverse the web and like summarize things. And the browser was really the interface. Why did you start with the browser? Like why was it important? And then you built XP1, which was kind of like the browser extension.

Stan [00:20:09]: So the starting point at the time was, if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and to some extent, very early adopters, very early engineers.
It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one on marketing, I don't remember its name, Jasper. But so the natural first intention, the first, first, first intention was to go to the developers and try to create tooling for them to create products on top of those models. And so that's what Dust was originally. It was quite different from LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.

Swyx [00:20:53]: You were cloud, and closed source. They were open source.

Stan [00:20:56]: Yeah. So technically we were open source and we still are open source, but I think that doesn't really matter. I had the strong belief from my research time that you cannot create an LLM-based workflow on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have like a messy stream of tokens going out and it's very hard to observe what's going on there. And so the idea was to go with a UI so that you could kind of introspect easily the output of each interaction with the model and dig into there through a UI, which is-

Swyx [00:21:42]: Was that open source? I actually didn't come across it.

Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source even today. We're not going for an open source-

Swyx [00:21:48]: If it matters, I didn't know that.

Stan [00:21:49]: No, no, no, no, no. The reason why is because we're not open source because we're not doing an open source strategy. It's not an open source go-to-market at all. We're open source because we can and it's fun.

Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is like people can clone you.

Stan [00:22:03]: But I think that downside is a big fallacy. Okay. Yes, anybody can clone Dust today, but the value of Dust is not the current state. The value of Dust is the number of eyeballs and hands of developers that are creating with it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with the security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request, show the, show the, exactly, oh, PR welcome. That doesn't happen that much, but you can show the progress, and if the person that you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation. But the truth is that your vector of attack is facilitated by you being open source. But at the same time, it's a good thing because if you're doing anything like bug bounties or stuff like that, you just give much more tools to the bug bounty hunters so that their output is much better. So there's many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market and the product and all of those things that are around the code base. Obviously, that's not true for every code base.
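To ground Stan's earlier point about needing a dozen examples, parallel runs, and per-step observability when iterating on an orchestration, here is a minimal TypeScript sketch. Everything in it (the `callModel` stub, the two-step workflow, the example set) is illustrative, not Dust's actual code:

```typescript
// Sketch: run a dozen examples through a two-step LLM workflow in parallel,
// logging each intermediate output so every step can be inspected.

type Example = { input: string };

// Stand-in for whatever LLM API is in use; returns a canned string here.
async function callModel(prompt: string): Promise<string> {
  return `model output for: ${prompt.slice(0, 40)}...`;
}

async function runWorkflow(ex: Example, i: number): Promise<string> {
  // Step 1: extract, and log the intermediate result (not just final tokens).
  const extracted = await callModel(`Extract the key facts from: ${ex.input}`);
  console.log(`example ${i}, step 1:`, extracted);

  // Step 2: summarize the extraction, and log it too.
  const summary = await callModel(`Summarize these facts: ${extracted}`);
  console.log(`example ${i}, step 2:`, summary);
  return summary;
}

async function main() {
  // One example would overfit the orchestration; a dozen gives real signal,
  // and running them in parallel is what makes raw console output unreadable.
  const examples: Example[] = Array.from({ length: 12 }, (_, i) => ({
    input: `sample document #${i}`,
  }));
  const results = await Promise.all(examples.map(runWorkflow));
  console.log(results.length, "runs complete");
}

main();
```

The interleaved logs from twelve parallel runs are exactly the "messy stream of tokens" he describes; a UI that groups output per example and per step is the alternative Dust bet on.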
If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.

Alessio [00:23:39]: I signed up for XP1, I was looking, January 2023. I think at the time you were on DaVinci 003. Given that you had seen GPT-4, how did you feel having to push a product out that was using this model that was so inferior? And you're like, please, just use it today. I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but you're just expecting the new model to be better?

Stan [00:24:03]: Yeah, so actually, XP1 was even on a smaller one that was the post-GPT release, small version, so it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GPT, basically. I don't remember its name. Yes, you have a frustration there. But at the same time, I think XP1 was designed, was an experiment, but was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article from a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of a frustration because you know what's out there and you know that you don't have access to it yet. It's also interesting to try to find a product that works with the current capability.

Alessio [00:24:55]: And we highlighted XP1 in our anatomy of autonomy post in April of last year, which was, you know, where are all the agents, right? So now we spent 30 minutes getting to what you're building now. So you basically had a developer framework, then you had a browser extension, then you had all these things, and then you kind of got to where Dust is today. So maybe just give people an overview of what Dust is today and the core thesis behind it. Yeah, of course.

Stan [00:25:20]: So Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature because we strongly believe in the emergence of use cases from the people having access to creating agents, people who don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there's two focuses, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play. Maintaining connections to Notion, Slack, GitHub, all of them is a lot of work. It is boring work, boring infrastructure work, but that's something that we know is extremely valuable in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there it's fascinating because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization.
And we haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So this is really our mission: to create the product that lets people equip themselves to take away all the work that can be automated or assisted by LLMs.

Alessio [00:26:57]: And can you just comment on the different takes that people had? So maybe the most open is AutoGPT: it's just trying to do anything, it's all magic, there's no way for you to do anything. Then you had Adept, you know, we had David on the podcast, and they're super hands-on with each individual customer, building something super tailored. How do you decide where to draw the line between "this is magic" and "this is exposed to you", especially in a market where most people don't know how to build with AI at all? So if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.

Stan [00:27:29]: So the AutoGPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting where it works. Same with XP1. And where it works is pretty simple. It's simple workflows that involve a couple of tools, where you don't even need to have the model decide which tools to use, in the sense that you just want people to put it in the instructions. It's like: take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right? In terms of orchestrating the tools, it's mostly using English for people to program a workflow where you don't have the constraint of having compatible APIs between the tools.

Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if this, then that, and then, you know, do this, then this. You're programming with English?

Stan [00:28:28]: So you're programming with English. So you're just saying, oh, do this and then that. You can even create some form of APIs. You say, when I give you the command X, do this; when I give you the command Y, do this. And you describe the workflow. But you don't have to create boxes and build the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying a structured database. The tool can be searching the web. And obviously, the interesting tools that we're only starting to scratch are actually external actions, like reimbursing something on Stripe, sending an email, clicking on a button in an admin interface, or something like that.

Swyx [00:29:11]: Do you maintain all these integrations?

Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to custom-integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce, because Salesforce is basically a database and a UI, and every company does whatever the f**k they want with it. So every company has different models and stuff like that. So right now, we don't support it natively, and real native support will be slightly more complex than just OAuthing into it, like is the case with Slack, as an example. Because it's probably going to be: oh, you want to connect your Salesforce to us? Give us the SOQL, the Salesforce query language. Give us the queries you want us to run on it and inject into the context of Dust. So it's interesting how integrations are not all equal: some of them require a bit of work from the user. And for some that are really valuable to our users but that we don't support yet, they can just build them internally and push the data to us.
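A rough sketch of that "bring your own SOQL" configuration: the user supplies labeled queries, and the results are injected into the agent's context. `runSoql` here is a hypothetical placeholder for a real Salesforce client such as jsforce; the config shape is invented for illustration.

```typescript
// "Bring your own SOQL": the user supplies labeled queries and the results
// are injected into the agent's context. `runSoql` is a hypothetical
// placeholder for a real Salesforce client.

type SalesforceConfig = {
  queries: { label: string; soql: string }[];
};

async function runSoql(soql: string): Promise<Record<string, unknown>[]> {
  return []; // placeholder: a real client would execute the query
}

async function buildContext(config: SalesforceConfig): Promise<string> {
  const sections: string[] = [];
  for (const q of config.queries) {
    const rows = await runSoql(q.soql);
    sections.push(`${q.label}:\n${JSON.stringify(rows, null, 2)}`);
  }
  return sections.join("\n\n"); // this string goes into the agent's context
}

// The user decides what their Salesforce model looks like.
buildContext({
  queries: [
    {
      label: "Open opportunities",
      soql: "SELECT Name, Amount FROM Opportunity WHERE IsClosed = false",
    },
  ],
}).then(console.log);
```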
Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify: are you using browser automation because there's no API for something?

Stan [00:30:24]: No, no, no, no. So we do have browser automation, for all the use cases that involve the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.

Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?

Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer looking at an agent clicking on stuff, then I'll hate my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. If the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world moves forward, that's disappearing. The core RPA value in the past has really been: oh, this old 90s product doesn't have an API, so I need to use the UI to automate. I think for most of the companies that are ICP for us, the scale-ups between 500 and 5,000 people, tech companies, most of the SaaS they use has APIs. Now there's an interesting question for the open web, because there is stuff you want to do that involves websites that don't necessarily have APIs. And the current state of web integration, from us, OpenAI, and Anthropic, and I don't even know if they have web navigation, but I don't think so, the current state of affairs is really, really broken. Because what do you have? You have basically search and headless browsing. But with headless browsing, I think everybody's doing basically body.innerText and feeding that into the model, right?

Swyx [00:31:56]: There are parsers into Markdown and stuff.

Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page in a way that is compatible with a model: being able to maintain the selectors, that is, the places to click in the page, through that process; exposing the actions to the model; having the model select an action in a form that is compatible with the model, which is not a big page of full DOM that is very noisy; and then being able to decompress that back to the original page and take the action. That's something that is really exciting and that will change the level of things that agents can do on the web. That I find exciting, but I also feel that the bulk of the useful stuff you can do within a company can be done through APIs. The data can be retrieved by API. The actions can be taken through API.

Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan.

Stan: Exactly, exactly.

Swyx: We've seen it since this summer: Adept is where it is, and Dust is where it is. So Dust is still standing.
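A toy illustration of the render-and-decompress idea Stan describes: compress interactive elements into short numbered lines for the model, keep a selector map, and translate the model's chosen action id back into a clickable target. All names are illustrative, not any particular vendor's API.

```typescript
// Compress interactive elements into short numbered lines for the model,
// keep a selector map, then translate the model's chosen id back into a
// clickable target. Purely illustrative.

type ActionRef = { id: number; description: string; selector: string };

function compressPage(
  elements: { text: string; selector: string }[]
): { prompt: string; refs: Map<number, ActionRef> } {
  const refs = new Map<number, ActionRef>();
  const lines = elements.map((el, i) => {
    refs.set(i, { id: i, description: el.text, selector: el.selector });
    return `[${i}] ${el.text}`; // low-noise line instead of raw DOM
  });
  return { prompt: lines.join("\n"), refs };
}

const { prompt, refs } = compressPage([
  { text: "Add to cart", selector: "button#add-to-cart" },
  { text: "Checkout", selector: "a.checkout" },
]);
// The model sees the numbered prompt, answers e.g. "0", and we decompress:
const chosen = refs.get(0);
console.log(prompt, "\nwould click:", chosen?.selector);
```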
Alessio [00:32:55]: Can we just quickly comment on function calling? You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just that you don't want to put the complexity in there? Is there any room for improvement left in function calling? Or do you feel you consistently get the right response, the right parameters, and all of that?

Stan [00:33:15]: So that's a tricky product question. Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. The model will just look at the script and follow it and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of deduced from the state of the conversation, so I'll just go with it. If you provide a very high-level, AutoGPT-esque level of instructions and give 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress to be made on the capabilities. But the interesting part is that there is already so much work that can be assisted, augmented, or accelerated by just going with pretty simple, scripted-actions agents. What I'm excited about in pushing our users to create rather simple agents is that once you have those working really well, you can create meta-agents that use the agents as actions. And all of a sudden, you can have a hierarchy of responsibility that will probably get you almost to the point of the AutoGPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you an example. Our incidents are shared in a specific Slack channel, and the stuff we ship is shared in Slack as well. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just goes and finds the right data on Slack and creates the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table. And then in that weekly meeting we obviously have some graphs and reporting about our financials and our progress and our ARR, and we've created assistants to generate those graphs directly. And those assistants work great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to a world where we'll have a weekly meeting assistant. We'll just call it. You don't need to prompt it. You don't need to say anything. It's going to run those different assistants and get that Notion page just ready. And by doing that, if you get there, and that's an objective for us, using Dust ourselves, to get there, you're saving an hour of company time every time you run it. Yeah.

Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?
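A minimal sketch of that agents-as-dependencies idea: a meta-agent that calls simpler agents as actions and assembles their outputs, in the spirit of the weekly-meeting assistant described above. Every name here is made up for illustration.

```typescript
// Agents as actions: each sub-agent is just a function from an instruction
// to text, and the meta-agent composes them. All names are hypothetical.

type Agent = (input: string) => Promise<string>;

// Simple agents that already work well on their own.
const incidentTableAgent: Agent = async (window) =>
  `| date | incident |\n| ---- | -------- |\n| ...  | (from Slack, ${window}) |`;
const arrGraphAgent: Agent = async (window) =>
  `(ARR graph for ${window}, rendered elsewhere)`;

// The meta-agent runs the sub-agents and assembles one document,
// like the weekly-meeting Notion page.
async function weeklyMeetingAgent(
  subAgents: Record<string, Agent>
): Promise<string> {
  const sections = await Promise.all(
    Object.entries(subAgents).map(
      async ([name, run]) => `${name}:\n${await run("last 7 days")}`
    )
  );
  return sections.join("\n\n");
}

weeklyMeetingAgent({ incidents: incidentTableAgent, financials: arrGraphAgent })
  .then(console.log);
```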
Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. What have you discovered as best practice for, let's say, a manager agent controlling a bunch of small agents? Is it two-way communication? I don't know if there should be a protocol format.

Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored the meta-agents yet. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start. And it's also what the market understands. If you go to a company, a random B2B SaaS company, not necessarily specialized in AI, and you take an operational team and tell them, build some tooling for yourself, they'll understand the small agents. If you tell them, build AutoGPT, they'll be like, Auto what?

Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You say instruction instead of system prompt, right? That's very conscious.

Stan [00:36:41]: Yeah, it's very conscious. It's a mark of our designer, Ed, who pushed us to create a friendly product. I was knee-deep in AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well; we started a company together that got acquired by Stripe 15 years ago. Then he was at Alan, a healthcare company in Paris. So he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make this technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it felt scary to users. And so we were very proactive and very deliberate about creating a brand that doesn't feel too scary, and creating a wording and a language, as you say, that really tries to communicate the fact that it's going to be fine, it's going to be easy, you're going to make it.

Alessio [00:37:34]: And another big point that David had about Adept is that we need to build an environment for the agents to act in. And if you have the environment, you can simulate what they do. How is that different when you're interacting with APIs and touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.

Stan [00:37:52]: So I think that goes back to the DNA of the companies, which are very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model. And that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs. We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, to answer your question, when you're interacting with the real world, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations, so that you can at least get feedback and some directional information about the performance of the assistants. But if you take an actual trace of humans interacting with those agents, it is extremely hard, even for us humans, to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So being extremely, extremely, extremely pragmatic here, it becomes a product issue. We have to build a product that incentivizes the end users to provide feedback, so that as a first step, the person that is building the agent can iterate on it. As a second step, maybe later, when we start training models and post-training, et cetera, we can optimize around that for each of those companies. Yeah.
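One hedged sketch of the model-evaluates-conversations idea: an LLM-as-judge pass that scores a raw trace, giving at least directional feedback where humans cannot tell whether the user left happy. `callModel` is again a hypothetical helper, not a real client.

```typescript
// Score a conversation trace with a judge model to get directional
// feedback. `callModel` is a hypothetical stand-in for a provider call.

async function callModel(prompt: string): Promise<string> {
  return "4"; // placeholder answer
}

async function scoreConversation(trace: string): Promise<number> {
  const judgment = await callModel(
    "Rate from 1 to 5 how well the agent served the user in this " +
      "conversation. Answer with a single digit.\n\n" + trace
  );
  const score = Number.parseInt(judgment.trim(), 10);
  return Number.isNaN(score) ? 0 : score; // 0 = could not parse a score
}

scoreConversation("User: ...\nAgent: ...").then((s) => console.log(`score: ${s}`));
```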
Alessio [00:39:17]: Do you see products in the future offering kind of a simulation environment, the same way all SaaS now offers APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulated environments so that you can then red-team them with agents, but I haven't really seen that here.

Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much, because you need to simulate to generate data, and you need data to train models. And the question at the end is: are we going to be training models, or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion. It might be the case that we'll be training models, because in all of those AI-first products, the model is so close to the product surface that as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training, that would be crazy. But at least having an internal post-training, realignment loop makes a lot of sense. And if we see many companies going towards that over time, then there might be incentives for the SaaS's of the world to provide assistance in getting there. But at the same time, there's a tension, because those SaaS don't want to be interacted with by agents; they want the human to click on the button. Yeah, they've got to sell seats. Exactly.

Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?

Stan [00:40:53]: We've seen over the past two years kind of a race between models. At times, it's the OpenAI model that is the best. At times, it's the Anthropic models that are the best. Our take is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...

Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?

Stan [00:41:20]: We have a sane default. So we move the default to the latest cool model. And it's actually not very visible: in our flow to create an agent, you would have to go into the advanced settings to pick your model. So this is something that the technical person will care about, but it's a bit too complicated for everybody else.

Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?

Stan [00:41:44]: I think we care most about function calling, because there's nothing worse than a function call including incorrect parameters or being a bit off, because it just drives the whole interaction off track.

Swyx [00:41:56]: Yeah, so you've got the Berkeley function calling leaderboard.

Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling. I personally don't have proof, but I know many people, and I'm probably part of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They innovated in an interesting way, which was never quite publicized: they have that kind of chain-of-thought step whenever you use a Claude model, a Sonnet model, with function calling. That chain-of-thought step doesn't exist when you just interact with it for answering questions, but when you use function calling, you get that step, and it really helps to get better function calling.
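To illustrate why one off function call can derail an interaction, here is a sketch of a tool definition in the JSON-Schema style most function-calling APIs use, plus a guard that rejects malformed arguments before execution. The tool name and its fields are invented for illustration.

```typescript
// A tool definition in the JSON-Schema style used by most function-calling
// APIs, plus a guard that rejects malformed arguments before execution.
// The tool and its fields are hypothetical.

const searchTool = {
  name: "semantic_search",
  description: "Search the company knowledge base.",
  parameters: {
    type: "object",
    properties: {
      query: { type: "string" },
      topK: { type: "number" },
    },
    required: ["query"],
  },
} as const;

function validateCall(args: Record<string, unknown>): string | null {
  if (typeof args.query !== "string" || args.query.length === 0) {
    return "missing or empty `query`";
  }
  if ("topK" in args && typeof args.topK !== "number") {
    return "`topK` must be a number";
  }
  return null; // arguments look sane
}

const err = validateCall({ query: "Q3 incidents", topK: 5 });
console.log(err ?? `ok, executing ${searchTool.name}`);
```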
Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.

Stan [00:42:49]: Yeah.

Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.

Stan [00:42:53]: Turbo is on top. Turbo is above 4o.

Swyx [00:42:54]: And then the third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.

Stan [00:43:01]: Yep.

Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.

Stan [00:43:05]: But arguably, o1-mini hasn't been aligned for that. Yeah.

Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals? I mean, this is kind of intuitive, right? Using the older model is better. I think most people just upgrade. Yeah. What's the eval process like?

Stan [00:43:19]: It's funny, because I did research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have crazy penetration. The highest penetration we have is 88% daily active users within the entire employee base of the company. The average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals, than getting the best model. Because there are so many places where you can create product or do stuff that will give you the 80% improvement with the work you do, whereas deciding between GPT-4 and GPT-4 Turbo, et cetera, will just give you a 5% improvement. The reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something we'll have to do eventually, because we still want to be serious people.

Swyx [00:44:24]: It's funny, because in some ways, the model labs are competing for you, right? You don't have to do any effort. You just switch models and then it'll grow. What are you really limited by? Is it additional sources?

Stan [00:44:36]: It's not models, right?

Swyx [00:44:37]: You're not really limited by quality of model.

Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability for users to easily connect to all the data they need to do the job they want to do.

Swyx [00:44:51]: Because you maintain all your own stuff. You know, there are companies out there that are starting to provide integrations as a service, right? I used to work in an integrations company.
Stan [00:44:59]: Yeah, I know. It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at one end of the spectrum, you could say, oh, I'm going to support AirByte, and AirByte has-

Swyx [00:45:12]: I used to work at AirByte.

Stan [00:45:13]: Oh, really?

Swyx [00:45:14]: Yeah. They're French founders as well.

Stan [00:45:15]: That makes sense. I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, AirByte does the job of taking Notion and putting it in a structured way. But the way it does it is not really usable to actually make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases-

Swyx [00:45:35]: It's for data scientists and not for AI.

Stan [00:45:38]: The reality of Notion is that when you have a page, there's a lot of structure in it, and you want to capture that structure and chunk the information in a way that respects it. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.

Swyx [00:46:15]: That's why I don't invest in these; there's Composio, there's All Hands from Graham Neubig, there are all these other companies that are like, we will do the integrations for you, you just use the open source community, we'll do it off the shelf. But then you are so specific in your needs that you want to own it.

Swyx [00:46:28]: Yeah, exactly.

Stan [00:46:29]: You can talk to Michel about that.

Swyx [00:46:30]: You know, he wants to put the AI in there, but you know. Yeah, I will. I will.

Stan [00:46:35]: Cool.

Alessio [00:46:36]: What are we missing? You know, what are the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?

Stan [00:46:43]: The really hard part, as we've touched on throughout the conversation, is building the infra that works for those agents, because it's tedious work. It's an evergreen piece of work, because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, we have the firehose of information of those companies, and there are not going to be that many companies that capture the firehose of information. When you have the firehose, you can do a ton of stuff with models that is not just accelerating people but giving them superhuman capability, even with current model capability, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, and somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is now possible. You get an email saying, oh, look at that Slack message, it says the opposite of what you have in that paragraph; maybe you want to update it, or just ping that person. I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models. And that's a problem that's extremely hard and extremely exciting.
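A minimal sketch of that documentation-repair loop: check each firehose message against an owned page and ping the owner on contradiction. The helpers are hypothetical placeholders, not Dust's implementation.

```typescript
// Watch the firehose of new messages, check each against an owned page,
// and ping the owner on contradiction. Helpers are hypothetical.

async function callModel(prompt: string): Promise<string> {
  return "NO"; // placeholder verdict
}

async function contradicts(page: string, message: string): Promise<boolean> {
  const verdict = await callModel(
    `Does this message contradict this page? Answer YES or NO.\n\n` +
      `PAGE:\n${page}\n\nMESSAGE:\n${message}`
  );
  return verdict.trim().toUpperCase().startsWith("YES");
}

async function watchPage(page: string, owner: string, firehose: string[]) {
  for (const message of firehose) {
    if (await contradicts(page, message)) {
      // In a real product this would be an email or a Slack ping.
      console.log(`@${owner}: "${message}" may contradict your page.`);
    }
  }
}

watchPage("Deploys happen on Fridays.", "stan", [
  "Reminder: we no longer deploy on Fridays.",
]);
```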
Swyx [00:48:00]: One thing you keep mentioning about infra work: obviously, Dust is building that infra and serving it in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things where you're doing asynchronous work. For example, the simplest one is a cron job: you just schedule things. But also, for if-this-then-that, you have to wait for something to be executed before proceeding to the next task. I used to work on an orchestrator as well, Temporal.

Stan [00:48:31]: We used Temporal.

Swyx [00:48:32]: Oh, you used Temporal? Yeah. Oh, how was the experience? I need the NPS.

Stan [00:48:36]: We're doing a self-discovery call now.

Swyx [00:48:39]: But you can also complain to me, because I don't work there anymore.

Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough. And you would say, why is it so complicated?

Swyx [00:48:49]: It's always versioning.

Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said: managing the entire set of stuff that needs to happen so that in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, we maybe trigger workflows to run agents, because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.

Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...

Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal, or any other competitive product. They're very general. There's an interesting theory about buy versus build. I think in that case, when you're a high-growth company, your buy-versus-build trade-off is very much on the side of buy: if you can buy the capability, you're just going to be saving time and you can focus on your core competency, et cetera. And it's funny because we're starting to see the post-high-growth companies going back on that trade-off, interestingly. So that's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?

Alessio [00:49:56]: Yeah, I did a podcast with them.

Stan [00:49:58]: Oh, yeah?

Alessio [00:49:58]: It's true.

Swyx [00:49:59]: No, no, I know. Of course they say it's true, but also, how well is it going to go?

Stan [00:50:02]: So I'm not talking about deflecting the customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. All of a sudden, your product surface becomes much smaller because you're interacting with an AI system that will take some actions, and so you don't need the product layer anymore. And you realize that, oh, those things are just databases that I pay a hundred times the price for, right? Because you're a post-high-growth company, you have tech capabilities, you're incentivized to reduce your costs, and you have the capability to do so. And then it makes sense to just scratch the SaaS away. So it's interesting: we might see kind of a bad time for SaaS in post-hyper-growth tech companies. It's still a big market, but it's not that big, because if you're not a tech company, you don't have the capabilities to reduce that cost, and if you're a high-growth company, you're always going to be buying, because you go faster that way. But that's an interesting new category of companies that might remove some SaaS.

Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.

Alessio [00:51:05]: Service as a software, we call it. The most extreme version is: why is there any software at all? Ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent, or whatnot.

Stan [00:51:17]: Yeah, yeah, that's interesting.

Swyx [00:51:19]: I have to ask: are you paying for Temporal Cloud, or are you self-hosting?

Stan [00:51:22]: Oh, no, we're paying, we're paying.

Swyx [00:51:24]: Oh, okay, interesting.

Stan [00:51:26]: We're paying way too much. It's crazy expensive, but it makes us-

Swyx [00:51:28]: That's why, as a shareholder, I like to hear that.

Stan [00:51:31]: It makes us go faster, so we're happy to pay.
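A minimal sketch of the kind of Temporal workflow Stan describes, polling a source in semi-real time and triggering agents on new updates, using Temporal's TypeScript SDK. The activity names are hypothetical, and a production workflow would also call continueAsNew periodically rather than loop indefinitely.

```typescript
import { proxyActivities, sleep } from "@temporalio/workflow";

// Activity interface; implementations live in the worker process.
type Activities = {
  fetchSlackUpdates(channel: string): Promise<string[]>;
  runAlertAgent(updates: string[]): Promise<void>;
};

const { fetchSlackUpdates, runAlertAgent } = proxyActivities<Activities>({
  startToCloseTimeout: "1 minute",
});

// Poll a channel in semi-real time and trigger an agent when new
// information flows through. Bounded loop: a real workflow would hand off
// with continueAsNew to keep its event history small.
export async function slackSyncWorkflow(channel: string): Promise<void> {
  for (let i = 0; i < 1000; i++) {
    const updates = await fetchSlackUpdates(channel);
    if (updates.length > 0) {
      await runAlertAgent(updates); // e.g. alert users, update pages
    }
    await sleep("5 minutes");
  }
}
```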
Swyx [00:51:33]: Other things in the infra stack, I just want a list for other founders to think about. Ops, API gateway, evals, you know, anything interesting there that you build or buy?

Stan [00:51:41]: There's always an interesting question there. We've been building a lot around the interface to models, because Dust, the original version, was an orchestration platform, and we basically provide a unified interface to all the model providers.

Swyx [00:51:56]: That's what I call a gateway.

Stan [00:51:57]: We have that because Dust was that originally, and so we continued building upon it and we own it. But that's an interesting question: do you want to build that or buy it?

Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.

Stan [00:52:09]: Exactly, yeah. There's an interesting question there.

Swyx [00:52:12]: Ops, Datadog, just tracking.

Stan [00:52:14]: Oh yeah, Datadog is an obvious one. What are the mistakes that I regret? I started with pure JavaScript, not TypeScript, and if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript: no, don't. Just start with TypeScript.

Swyx [00:52:30]: I see, okay. So interesting, you are a research engineer that came out of OpenAI that bet on TypeScript.

Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next, we're using Next as an example. It's
As a leading company in global AI, OpenAI's every move draws close attention. Recently, several heavyweights, including CTO Mira Murati, co-founder Andrej Karpathy, chief scientist Ilya Sutskever, and safety lead Jan Leike, announced their departures, setting off a wave of executive exits. In the latest podcast episode, hosts 俞骅 (Yu Hua) and Poy Zhong dig into the reasons behind the news. Tune in for more detailed analysis! You can find the show by searching for "柠檬变成柠檬水" ("Turn Lemons Into Lemonade") on Apple Podcasts, 小宇宙APP, Spotify, iHeart Radio, YouTube, Amazon Music, and more. Thank you for listening to our podcasts. We also welcome you to join the "Turn Lemons Into Lemonade" LinkedIn page!
Welcome to episode 279 of The Cloud Pod, where the forecast is always cloudy! This week Justin, Jonathan, and Matthew are your guides through the Cloud. We're talking about everything from BigQuery to Google's nuclear power plans, and everything in between! Titles we almost went with this week: AWS SKYNET (Q) now controls the supply chain AWS Supply Chain: Where skynet meets your shopping list Digital Ocean follows Azure with the Premium everything EKS mounts S3 GCP now a nuclear Big query don't hit that iceberg Big Query Yells: "ICEBERG AHEAD" The Cloud Pod: Now with 50% more meltdown protection The Cloud Pod radiates excitement over Google's nuclear deal A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our slack channel for more info. Follow Up 00:46 OpenAI's Newest Possible Threat: Ex-CTO Murati Apologies listeners – paywall article. Given the recent departure of ex-CTO Mira Murati from OpenAI, we speculated that she might be starting something new…and the rumors are rumorin'. Rumors have been running wild since her last day on October 4th, with several people reporting that there has been a lot of churn. Speculation is that Murati may join former OpenAI VP Barret Zoph at his new startup. It may be easy to steal some people, as the research organization at OpenAI is reportedly in upheaval after Liam Fedus's promotion to lead post-training – several researchers have asked to switch teams. In addition, Ilya Sutskever, an OpenAI co-founder and former chief scientist, also has a new startup. We'll definitely be keeping an eye on this particular soap opera. 2:00 Jonathan – "I kind of wonder what these other startups will bring that's different than what OpenAI are doing, or Anthropic or anybody else. I mean, they're all going to be taking the same training data sets because that's what's available. It's not like they're going to invent some data from somewhere else and have an edge. I mean, I guess they could do different things like be mindful about licensing." General News 4:41 Introducing New 48vCPU and 60vCPU Optimized Premium Droplets on DigitalOcean Those raindrops are getting pretty heavy as Digital Ocean announces their new 48vCPU memory- and storage-optimized Premium Droplets, and 60vCPU general-purpose and CPU-optimized Premium Droplets. Droplets are DO's Linux-based virtual machines. Premium Optimized Droplets are dedicated CPU instances with access to the full hyperthread, as well as 10 Gbps of outbound data transfer. The 48vCPU boxes have 384GB of memory, and the 60vCPU boxes have 160GB. 6:02 Justin – "I've been watchi
On today's podcast episode, we discuss what to make of former Apple Chief Design Officer Jony Ive working on a new AI device, what an AI model with “reasoning abilities” can actually do, and whether Ilya Sutskever's new AI startup can create safe superintelligence. Join host Marcus Johnson, along with analysts Jacob Bourne and Grace Harmon, for the conversation. Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode click here: https://www.emarketer.com/content/ © 2024 EMARKETER
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Russia funding the political YouTube network Tenet Media 2) Should the commentators have known better? 3) How big does the influence operation go? 4) Is Ranjan being paid off by a foreign government? 5) Twitter suspended in Brazil 6) Was Elon Musk right in standing up to Brazil? 7) More on the amorphous nature of online popularity 8) Talk Tuah 9) Ilya Sutskever raises $1 billion from a16z and others 10) Founder Mode vs. Manager Mode --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
The AI Breakdown: Daily Artificial Intelligence News and Discussions
OpenAI co-founder Ilya Sutskever recently announced his new company, Safe Superintelligence. Now he's announced a $1B pre-product raise. Concerned about being spied on? Tired of censored responses? AI Daily Brief listeners receive a 20% discount on Venice Pro. Visit https://venice.ai/nlw and enter the discount code NLWDAILYBRIEF. Learn how to use AI with the world's biggest library of fun and useful tutorials: https://besuper.ai/ Use code 'podcast' for 50% off your first month. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
Plus: OpenAI co-founder Ilya Sutskever's new firm raises $1 billion as it aims to build 'safe' AI models. And, Stellantis presses pause on production for two of its top-selling U.S. models. Kate Bullivant hosts. Sign up for WSJ's free What's News newsletter.
Win a free year of ChatGPT or other prizes! Find out how. What the heck is happening at OpenAI? In a somewhat shocking development, an OpenAI co-founder has left OpenAI for rival Anthropic. And President Greg Brockman is taking an 'extended leave of absence.' What's it all mean?
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on OpenAI
Related Episodes: Ep 318: GPT-4o Mini: What you need to know and what no one's talking about | Ep 149: Sam Altman leaving and the future of OpenAI – 7 things you need to know
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode: 1. Major Changes at OpenAI 2. Legal Trouble for OpenAI 3. OpenAI's Technology and Impact 4. Future of OpenAI
Timestamps: 02:00 Daily AI news | 06:15 Multiple high-level departures at OpenAI, significant impact. | 12:47 GPT technology widely used by large companies. | 16:08 Employees threatened to leave if demands not met. | 18:22 Key OpenAI figures change, raising concerns. | 21:05 Economic chaos and political instability in 72 hours. | 25:22 Apple rebranding AI as 'Apple Intelligence.' GPT technology used. | 27:16 Microsoft's early commitment to AI pays off. | 30:32 NVIDIA is least reliant on OpenAI. | 35:08 AI advancements raise immense safety concerns and risks. | 40:16 Ilya Sutskever left OpenAI to start SSI. | 41:16 OpenAI's new model amidst reporting and rumors. | 44:20 OpenAI's incredible capabilities are beyond imagination.
Keywords: OpenAI, Jordan Wilson, Everyday AI, OpenAI drama, co-founder departure, OpenAI president, extended leave, AI news, Figure humanoid AI robot, NVIDIA, copyright violations, Elon Musk, Sam Altman, lawsuit, Peter Deng, John Schulman, Greg Brockman, OpenAI leadership changes, Andrej Karpathy, Ilya Sutskever, Microsoft, artificial intelligence, AGI, Jan Leike, Anthropic, GPT-5, GPT NEXT, Apple Intelligence, US economy, global economic turmoil.
Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Episode 349: Neal and Toby discuss ex-OpenAI co-founder Ilya Sutskever starting his own AI company that will prioritize safety vs. profits. Should OpenAI be concerned about a rivalry? Then, climate activists just painted Stonehenge with orange paint in the latest episode of defacing historical landmarks across Europe. Next, US car dealerships are scrambling as a widely used software for transactions has been hacked, not once, but twice. Meanwhile, GenZers and millennials who have money to spend are spending it on luxury items vs. traditional assets. Also, Amazon is ditching its plastic air pillows in boxes and replacing them with loads of recyclable paper. Lastly, a harrowing escape from war-torn Ukraine featuring 2 beluga whales traveling by…car? Download the Yahoo Finance App (on the Play and App store) for real-time alerts on news and insights tailored to your portfolio and stock watchlists. Get your Morning Brew Daily Mug HERE: https://shop.morningbrew.com/products/morning-brew-daily-mug?utm_medium=youtube&utm_source=mbd&utm_campaign=mug Listen to Morning Brew Daily Here: https://link.chtbl.com/MBD Watch Morning Brew Daily Here: https://www.youtube.com/@MorningBrewDailyShow
OpenAI co-founder Ilya Sutskever's move to fire Sam Altman with no warning kicks off five days of chaos for the tech company. After he gets over the shock, Altman uses all his corporate savvy to fight back. For days, over McDonald's and boba tea, Altman and the board feud over the path forward for Altman and OpenAI, while the business community waits on tenterhooks to find out the future of one of the most important AI companies in the world.