Podcasts about ilya sutskever

  • 167 PODCASTS
  • 286 EPISODES
  • 42m AVG DURATION
  • 1 WEEKLY EPISODE
  • May 28, 2025 LATEST

POPULARITY

(popularity chart, 2017–2024)


Best podcasts about ilya sutskever

Latest podcast episodes about ilya sutskever

xHUB.AI
T5.E084. INSIDE X BUNKER AGI: What did Ilya Sutskever say?

xHUB.AI

Play Episode Listen Later May 28, 2025 78:21


# TOPIC: ☢️ BUNKER AGI: What did Ilya Sutskever say?

Rover's Morning Glory
TUES PT 1: Beeping smoke detectors, haunted dolls, and general AI destroying mankind

Rover's Morning Glory

Play Episode Listen Later May 20, 2025 50:15


Beeping smoke detectors and haunted dolls. Ilya Sutskever, a co-founder and the chief scientist of OpenAI, says they will build a bunker before releasing artificial general intelligence.

Rover's Morning Glory
TUES FULL SHOW: Snitzer would try Galaxy Gas, Charlie would out someone that wronged him, and Rover will never get an Airbnb again

Rover's Morning Glory

Play Episode Listen Later May 20, 2025 174:34


Beeping smoke detectors and haunted dolls. Ilya Sutskever, a co-founder and the chief scientist of OpenAI, says they will build a bunker before releasing artificial general intelligence. Getting to the airport two hours early. Snitzer would try Galaxy Gas. Man accuses his wife of cheating at his 40th birthday party. Charlie would out someone that wronged him. $1,200 cap and gown. The Polk County Sheriff's Office conducted an operation called "fool around and find out" that arrested 250 people, including former Browns player Adarius Taylor. Rover believes prostitution should be legal. A couple married for 31 years schedules their sex life. Charlie and Rover would love to schedule their sex lives. Smelly vaginas. Controversy in the Paralympics after a gold medalist was banned for life. Rover will never get an Airbnb again.

Rover's Morning Glory
TUES PT 1: Beeping smoke detectors, haunted dolls, and general AI destroying mankind

Rover's Morning Glory

Play Episode Listen Later May 20, 2025 50:39


Beeping smoke detectors and haunted dolls. Ilya Sutskever, a co-founder and the chief scientist of OpenAI, says they will build a bunker before releasing artificial general intelligence. See omnystudio.com/listener for privacy information.

Rover's Morning Glory
TUES FULL SHOW: Snitzer would try Galaxy Gas, Charlie would out someone that wronged him, and Rover will never get an Airbnb again

Rover's Morning Glory

Play Episode Listen Later May 20, 2025 174:22


Beeping smoke detectors and haunted dolls. Ilya Sutskever, a co-founder and the chief scientist of OpenAI, says they will build a bunker before releasing artificial general intelligence. Getting to the airport two hours early. Snitzer would try Galaxy Gas. Man accuses his wife of cheating at his 40th birthday party. Charlie would out someone that wronged him. $1,200 cap and gown. The Polk County Sheriff's Office conducted an operation called "fool around and find out" that arrested 250 people, including former Browns player Adarius Taylor. Rover believes prostitution should be legal. A couple married for 31 years schedules their sex life. Charlie and Rover would love to schedule their sex lives. Smelly vaginas. Controversy in the Paralympics after a gold medalist was banned for life. Rover will never get an Airbnb again. See omnystudio.com/listener for privacy information.

Sentientism
"The story of our species needs to be re-written in the AI age" - Ronen Bar - The Moral Alignment Center - Sentientism 226

Sentientism

Play Episode Listen Later Apr 24, 2025 103:20


Ronen Bar is a social entrepreneur and the co-founder and former CEO of Sentient, a meta non-profit for animals focused on community building and developing tools to support animal rights advocates. He is currently focused on advancing a new community-building initiative, The Moral Alignment Center, to ensure AI development benefits all sentient beings, including animals, humans, and future digital minds. For over a decade, his work has been at the intersection of technological innovation and animal advocacy, particularly in the alternative protein and investigative reporting sectors.

In Sentientist Conversations we talk about the most important questions: "what's real?", "who matters?" and "how can we make a better world?" Sentientism answers those questions with "evidence, reason & compassion for all sentient beings." The video of our conversation is here on YouTube.

00:00 Clips
01:11 Welcome
02:40 Ronen's Intro
- Social entrepreneur "using storytelling to promote reason and compassion for all sentient beings"
- Investigative journalism (care homes, then slaughterhouses in Israel and abroad)
- Leading the Sentient NGO, including using on-animal investigative cameras to "enhance animal storytelling... of particular named animals", not just the story of a slaughterhouse
- Alternative protein non-profits
- The Moral Alignment Center, "making sure that #ai is a positive force for all sentient beings"
- "What is good?... I don't think those questions are asked enough in the AI space"
- Starting new fields and communities
- How the advent of powerful AI forces us to revisit these fundamental "what's real?", "what matters?" and "who matters?" questions
- The ethical question is neglected in AI, but "it is in the minds of people... Ilya Sutskever... Ray Kurzweil... Sam Altman..."
07:23 What's Real?
- Growing up in Israel, "a very religious country", but in a secular family
- Wider relatives #orthodox and ultra-orthodox
- Asking self "what do I know for sure... 100%...? the obvious answer is subjective experiences at this moment"
- Being less sure of everything else, but "my subjective experience is certainly true"
- #illusionism? "It's funny to think of it [subjective experience] as an illusion because subjective experience is the only information you will ever receive in your life"
- "Science is just the discipline... of trying through rationality to predict the subjective experiences of humans" (even the results of scientific measurements come through our experiences)
- JW: "So if it is an illusion it's still all we've got!"
- "Starting from your own subjective experiences... it actually brings you more compassion..."
29:40 What Matters?
36:00 Who Matters?
41:03 A Better World?
01:33:20 Follow Ronen:
- Ronen on the EA forum
- Ronen on LinkedIn
- Moral Alignment Center on LinkedIn
- Alien Journalist Dictionary
- Email: ronenbar07@gmail.com
And more... full show notes at Sentientism.info.
Sentientism is "Evidence, reason & compassion for all sentient beings." More at Sentientism.info. Join our "I'm a Sentientist" wall via this simple form. Everyone, Sentientist or not, is welcome in our groups. The biggest so far is here on Facebook. Come join us there!

AZ Tech Roundtable 2.0
AI Arms Race from ChatGPT to Deepseek - AZ TRT S06 EP08 (269) 4-20-2025

AZ Tech Roundtable 2.0

Play Episode Listen Later Apr 24, 2025 23:15


AI Arms Race from ChatGPT to Deepseek - AZ TRT S06 EP08 (269) 4-20-2025

What We Learned This Week
- The AI arms race is real, with the major tech companies involved
- ChatGPT by OpenAI is considered the top chat AI program
- Google has Gemini (was Bard), Microsoft has Copilot, Amazon has Claude / Alexa
- Deepseek is a startup from China that has disrupted the AI landscape with a more cost-effective AI model
- Costs and investment dollars into AI are being rethought, as Deepseek spent millions vs. Silicon Valley spending billions

Notes: Seg 1: Major Tech Giants' AI Programs
Gemini (was Bard): Developed by Google, Gemini is known for its multimodal capabilities and integration with Google Search. It can analyze images, understand verbal prompts, and engage in verbal conversations.
ChatGPT: Developed by OpenAI, ChatGPT is known for its versatility and platform-agnostic solution for text generation and learning. It can write code in almost any language, and can also be used to provide research assistance, generate writing prompts, and answer questions.
Microsoft Copilot: Developed by Microsoft, Copilot is known for its integration with applications like Word, Excel, and Power BI. It's particularly well-suited for document automation.
Amazon Alexa w/ Claude: Claude is a powerful AI model from Anthropic, known for its strengths in natural language processing and conversational AI, as noted in the video and other sources.
Industry 3.0 (1969-2010): The Third Industrial Revolution, or the Digital Revolution, was marked by the automation of production through the use of computers, information technology, and the internet. This era saw the widespread adoption of digital technologies, including programmable logic controllers and robots.
Industry 4.0 (2010-present): The Fourth Industrial Revolution is characterized by the integration of digital technologies, including the Internet of Things (IoT), artificial intelligence (AI), big data, and cyber-physical systems, into manufacturing and industrial processes. This era is focused on creating "smart factories" and "smart products" that can communicate and interact with each other, leading to increased efficiency, customization, and sustainability.

Top AI programs include a range of software, platforms, and resources for learning and working with artificial intelligence. Some of the most popular AI software tools include Viso Suite, ChatGPT, Jupyter Notebooks, and Google Cloud AI Platform, while popular AI platforms include TensorFlow and PyTorch. Educational resources like Coursera's AI Professional Certificate and Fast.ai's practical deep learning course also offer valuable learning opportunities.

ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022. It is based on large language models (LLMs) such as GPT-4o. ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence. Some observers have raised concerns about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation. OpenAI was founded in December 2015 by Sam Altman, Greg Brockman, Elon Musk, Ilya Sutskever, Wojciech Zaremba, and John Schulman.
The founding team combined their diverse expertise in technology entrepreneurship, machine learning, and software engineering to create an organization focused on advancing artificial intelligence in a way that benefits humanity. Elon Musk is no longer involved in OpenAI, and Sam Altman is the current CEO of the organization. ChatGPT has had a profound influence on the evolution of AI, paving the way for advancements in natural language understanding and generation. It has demonstrated the effectiveness of transformer-based models for language tasks, which has encouraged other AI researchers to adopt and refine this architecture. The model's success has also stimulated interest in LLMs, leading to a wave of research and development in this area.

Seg 2: DeepSeek is a private Chinese company founded in July 2023 by Liang Wenfeng, a graduate of Zhejiang University, one of China's top universities, who funded the startup via his hedge fund, according to the MIT Technology Review. Liang has about $8 billion in assets, Ives wrote in a Jan. 27 research note. Chinese startup DeepSeek's launch of its latest AI models, which it says are on a par with or better than industry-leading models in the United States at a fraction of the cost, is threatening to upset the technology world order. The company has attracted attention in global AI circles after writing in a paper last month that the training of DeepSeek-V3 required less than $6 million worth of computing power from Nvidia H800 chips. DeepSeek's AI Assistant, powered by DeepSeek-V3, has overtaken rival ChatGPT to become the top-rated free application available on Apple's App Store in the United States. This has raised doubts about the reasoning behind some U.S. tech companies' decisions to pledge billions of dollars in AI investment, and shares of several big tech players, including Nvidia, have been hit.
NVIDIA Blackwell Ultra Enables AI Reasoning
The NVIDIA GB300 NVL72 connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based NVIDIA Grace™ CPUs in a rack-scale design, acting as a single massive GPU built for test-time scaling. With the NVIDIA GB300 NVL72, AI models can access the platform's increased compute capacity to explore different solutions to problems and break down complex requests into multiple steps, resulting in higher-quality responses. GB300 NVL72 is also expected to be available on NVIDIA DGX™ Cloud, an end-to-end, fully managed AI platform on leading clouds that optimizes performance with software, services and AI expertise for evolving workloads. NVIDIA DGX SuperPOD™ with DGX GB300 systems uses the GB300 NVL72 rack design to provide customers with a turnkey AI factory. The NVIDIA HGX B300 NVL16 features 11x faster inference on large language models, 7x more compute and 4x larger memory compared with the Hopper generation to deliver breakthrough performance for the most complex workloads, such as AI reasoning.

AZ TRT Shows related to AI: https://brt-show.libsyn.com/size/5/?search=ai+
Biotech Shows: https://brt-show.libsyn.com/category/Biotech-Life+Sciences-Science
AZ Tech Council Shows: https://brt-show.libsyn.com/size/5/?search=az+tech+council (*Includes Best of AZ Tech Council show from 2/12/2023)
Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science
Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech
'Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT

Thanks for Listening. Please Subscribe to the AZ TRT Podcast.

AZ Tech Roundtable 2.0 with Matt Battaglia
The show where Entrepreneurs, Top Executives, Founders, and Investors come to share insights about the future of business. AZ TRT 2.0 looks at the new trends in business, & how classic industries are evolving.
Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more…
AZ TRT Podcast Home Page: http://aztrtshow.com/
'Best Of' AZ TRT Podcast: Click Here
Podcast on Google: Click Here
Podcast on Spotify: Click Here
More Info: https://www.economicknight.com/azpodcast/
KFNX Info: https://1100kfnx.com/weekend-featured-shows/

Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.

Let's Talk AI
#207 - GPT 4.1, Gemini 2.5 Flash, Ironwood, Claude Max

Let's Talk AI

Play Episode Listen Later Apr 18, 2025 102:30 Transcription Available


Our 207th episode with a summary and discussion of last week's big AI news! Recorded on 04/14/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/. Join our Discord here! https://discord.gg/nTyezGSKwP

In this episode:
- OpenAI introduces GPT-4.1 with optimized coding and instruction-following capabilities, featuring variants like GPT-4.1 Mini and Nano, and a million-token context window.
- Concerns arise as OpenAI reduces resources for safety testing, sparking internal and external criticism.
- xAI's newly launched API for Grok 3 showcases significant capabilities comparable to other leading models.
- Meta faces allegations of aiding China in AI development for business advantages, with potential compliance issues and public scrutiny looming.

Timestamps + Links:
Tools & Apps
(00:03:13) OpenAI's new GPT-4.1 AI models focus on coding
(00:08:12) ChatGPT will now remember your old conversations
(00:11:16) Google's newest Gemini AI model focuses on efficiency
(00:14:27) Elon Musk's AI company, xAI, launches an API for Grok 3
(00:18:35) Canva is now in the coding and spreadsheet business
(00:20:31) Meta's vanilla Maverick AI model ranks below rivals on a popular chat benchmark
Applications & Business
(00:25:46) Ironwood: The first Google TPU for the age of inference
(00:34:15) Anthropic rolls out a $200-per-month Claude subscription
(00:37:17) OpenAI co-founder Ilya Sutskever's Safe Superintelligence reportedly valued at $32B
(00:40:20) Mira Murati's AI startup gains prominent ex-OpenAI advisers
(00:42:52) Hugging Face buys a humanoid robotics startup
(00:44:58) Stargate developer Crusoe could spend $3.5 billion on a Texas data center. Most of it will be tax-free.
Projects & Open Source
(00:48:14) OpenAI Open Sources BrowseComp: A New Benchmark for Measuring the Ability for AI Agents to Browse the Web
Research & Advancements
(00:56:09) Sample, Don't Search: Rethinking Test-Time Alignment for Language Models
(01:03:32) Concise Reasoning via Reinforcement Learning
(01:09:37) Going beyond open data – increasing transparency and trust in language models with OLMoTrace
(01:15:34) Independent evaluations of Grok-3 and Grok-3 mini on our suite of benchmarks
Policy & Safety
(01:17:58) OpenAI countersues Elon Musk, calls for enjoinment from 'further unlawful and unfair action'
(01:24:33) OpenAI slashes AI model safety testing time
(01:27:55) Ex-OpenAI staffers file amicus brief opposing the company's for-profit transition
(01:32:25) Access to future AI models in OpenAI's API may require a verified ID
(01:34:53) Meta whistleblower claims tech giant built $18 billion business by aiding China in AI race and undermining U.S. national security

Actually
The billionaire-making AI startup rounds, the fierce US vs. Meta trial + the impact of renewable energy and plants in Italy

Actually

Play Episode Listen Later Apr 16, 2025 53:39


Mira Murati and Ilya Sutskever, both ex-OpenAI, are the protagonists of two of the largest funding rounds ever seen in the startup world. Meanwhile, the FTC's trial against Meta has begun, probably one of the most anticipated and potentially disruptive in the tech world in the last twenty years. In the Big Story, together with Fabio Bocchiola, Country Manager at Repower Italia, we talk about renewable energy and renewable plants, seen as the only alternative for ending dependence on gas, not just as an opportunity to produce "green energy". This podcast and our other content are free thanks in part to those who support us through Will Makers. Support us and access exclusive content at willmedia.it/abbonati. Learn more about your ad choices. Visit megaphone.fm/adchoices

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

On April 14th, 2025, the AI landscape saw significant activity, including the launch of Ilya Sutskever's safe AI venture, Safe Superintelligence Inc. (SSI), which secured substantial funding, highlighting the ongoing focus on AI safety. AI also demonstrated practical advancements, outperforming experts in tuberculosis diagnosis using ultrasound technology. Meanwhile, concerns arose regarding OpenAI's shift towards a for-profit model, voiced by former employees. Further developments included Nvidia's ambitious plan to manufacture AI supercomputers in the US and Google's creation of DolphinGemma to decode dolphin communication. Additionally, a high school student used AI to identify a vast number of unknown space objects, illustrating AI's expanding applications.

The AI Report
Safe Superintelligence Inc. (SSI), The AI Company with Human Values.

The AI Report

Play Episode Listen Later Mar 10, 2025 8:58


Microsoft Shifts Away from OpenAI with A New AI Strategy. Safe Superintelligence Inc. (SSI), the startup founded by former OpenAI chief scientist Ilya Sutskever, is reportedly raising over $2 billion in a new funding round that values the company at a staggering $30 billion. Artie Intel and Micheline Learning report on Artificial Intelligence for The AI Report. This message brought to you by Amazon. Do More at Amazon.com. Chinese AI companies like DeepSeek are kicking America's Ass. The US Army's TRADOC is using an AI tool, CamoGPT, to identify and remove DEI references from training materials per an executive order by President Trump. CamoGPT, developed by the Army's AI Integration Center, scans documents for specific keywords and has about 4,000 users. The initiative is part of a wider government effort to eliminate DEI content, leveraging AI for increased efficiency in aligning with national security objectives. The AI Report

The Cloud Pod
294: Ding: Chime is Dead

The Cloud Pod

Play Episode Listen Later Mar 7, 2025 61:25


Welcome to episode 294 of The Cloud Pod – where the forecast is always cloudy! Boy, do we have a news-packed week for you! Ilya Sutskever raised $30B without a product, Mira Murati launched her own AI lab, and Claude 3.7 now thinks before it speaks. Meanwhile, Microsoft casually invented new matter for quantum computing, Google built an AI scientist, and AWS killed Chime (RIP). At this rate, AI is either going to save the world or speedrun becoming Ultron. Let's all find out together – today on The Cloud Pod!

Titles we almost went with this week:
- Ding – Chime is Dead
- Does your container really need 192 cores
- Quantum is the new AI
- AI is now IN the robots

A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info.

AI Is Going Great – Or How ML Makes All Its Money
02:41 Ilya Sutskever's Startup in Talks to Raise Financing at $30 Billion Valuation
It's been a minute since we talked about former OpenAI executives and what they're up to. Let's start with Ilya Sutskever and Mira Murati, post-OpenAI. The Information reports that Ilya Sutskever's startup Safe Superintelligence is in talks to raise $1 billion in a round that would value the startup at $30 billion. The company has yet to release a product, but based on the name we can guess what they're working on…
03:22 Ryan – "It's so nuts to me that they can raise that much without – really just an idea. Doesn't have to have any proof or POC…"
07:07 Murati Joins Crowded AI Startup Sector
Mira Murati confirmed one of the worst-kept secrets in AI by revealing her lab, Thinking Machines Lab. Murati has lured away two-thirds of her team from OpenAI. We'll be waiting to see how the funding goes for this one.
08:02 Claude 3.7 Sonnet and Claude Code
Anthropic is releasing their latest model, Claude 3.7 Sonnet, their most intelligent model to date and the first hybrid reasoning model on the market. Claude 3.7 Sonnet can produce near-instant responses or extended, step-by-step thinking that is made visible to the user. API users also have fine grai

WSJ Tech News Briefing
The Scientist Who Left OpenAI and Started a $30 Billion Firm

WSJ Tech News Briefing

Play Episode Listen Later Mar 6, 2025 12:29


Ilya Sutskever, former chief scientist at OpenAI, founded a new startup called Safe Superintelligence that's already worth $30 billion. But what are investors backing beyond Sutskever's reputation? WSJ reporter Berber Jin shares what we know so far about the secretive startup. Plus, AI coding tools can automate large portions of code development. How could this affect human coders? Charlotte Gartenberg hosts. Sign up for the WSJ's free Technology newsletter.  Learn more about your ad choices. Visit megaphone.fm/adchoices

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

A Daily Chronicle of AI Innovations on February 18th, 2025. Developments in artificial intelligence on February 18th, 2025, were varied and significant. Elon Musk launched Grok 3, an AI model boasting enhanced capabilities and substantial computing power. Meanwhile, OpenAI considered altering its governance structure to prevent takeovers amid rising interest from investors. Mistral AI introduced a region-specific model, Saba, designed for Middle Eastern and South Asian languages and cultures. The New York Times unveiled an AI tool to aid journalists, while setting restrictions on its usage. Further advancements included funding for Ilya Sutskever's SSI and the release of new open-source models.

Tech Update | BNR
Musk's xAI launches Grok 3, while OpenAI looks to better protect itself against Musk

Tech Update | BNR

Play Episode Listen Later Feb 18, 2025 6:31


xAI, multibillionaire Elon Musk's artificial intelligence (AI) start-up, has launched the latest version of its chatbot, Grok-3. Joe van Burik discusses it in this Tech Update. With the chatbot, which Musk calls the 'smartest AI on Earth', the company aims to compete with the chatbots of OpenAI and China's DeepSeek. Grok-3 is immediately available to Premium+ subscribers on X, Musk's social media platform. The company is also launching a new subscription called SuperGrok for the chatbot's mobile app and the Grok.com website. xAI additionally plans to open-source earlier versions of its Grok models within a few months, making the technology freely accessible to third parties. Grok-3 has "more than ten times" the computing power of its predecessor, Musk said at the presentation of the new chatbot. In mathematics, science and coding, Grok-3 beats competitors' AI models such as Alphabet's Google Gemini, DeepSeek's V3 model and OpenAI's GPT-4o, Musk claims. Also in this Tech Update: OpenAI is urgently looking for ways to fend off hostile takeovers, such as Elon Musk's. OpenAI's other co-founder, Ilya Sutskever, is approaching a $30 billion valuation with his own start-up, Safe Superintelligence (SSI). See omnystudio.com/listener for privacy information.

Doppelgänger Tech Talk
Cap Table | Grok 3 | JD Vance #433

Doppelgänger Tech Talk

Play Episode Listen Later Feb 17, 2025 100:55


What should a cap table look like? Listener question: consulting or founder's associate? Will Apple and Meta soon have robots? Perplexity now does Deep Research too. Pip plays with Elon's Grok 3, and Ilya Sutskever's AI startup is worth 30 billion. We also, of course, have to talk about JD Vance's speech. Check out our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Go vote (00:05:20) Cap table (00:17:20) First job: consulting vs. startup (00:34:50) Humanoids (00:40:20) ChatGPT, Perplexity, Grok (00:54:40) Safe Superintelligence (01:01:00) JD Vance (01:20:45) China (01:22:15) Elon & Genghis Khan (01:31:00) Milei
Shownotes:
- Apple and Meta are battling over humanoid robots (Bloomberg)
- AI answer engine Perplexity unveils new Deep Research feature (decoder)
- How Elon Musk promotes far-right politics around the world (nbcnews)
- Genghis Khan killed enough people to cool the planet (iflscience)
- ChatGPT is anything but "typical" (Twitter)
- Summer Hit 2025: Hostile Government Takeover (EDM Remix) (YouTube)

Business Pants
Costco vs. racist investors, tech bro victimhood, Altman cries, and Zuck sucks up

Business Pants

Play Episode Listen Later Jan 8, 2025 51:25


Live from an ESG-flavored 2025, it's an all-new Wacky Wednesday edition of Business Pants. Joined by Analyst-Hole Matt Moscardi! On today's Costco lovefest called January 8th 2025: Headlines We Missed since the end of December and the new comic book superhero named Costco!Our show today is being sponsored by Free Float Analytics, the only platform measuring board power, connections, and performance for FREE.DAMION1Shit We Missed (in no particular order):Tech BrosZuckDana White, UFC CEO and Trump ally, to join Meta's board of directorsZuckerberg Announces New Measures to Increase Hate Speech on FacebookMark Zuckerberg's Meta is moving moderators out of California to combat concerns about bias and censorship“Huge problems” with axing fact-checkers, Meta oversight board saysCo-chair Helle Thorning-Schmidt said she is "very concerned" about how parent company Meta's decision to ditch fact-checkers will affect minority groups: "We are seeing many instances where hate speech can lead to real-life harm, so we will be watching that space very carefully," she added.Meta Drops Rules Protecting LGBTQ Community as Part of Content Moderation OverhaulThe changes included allowing users to share “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality.”Meta replaces policy chief Nick Clegg with former Republican staffer Joel Kaplan ahead of Trump inaugurationSamSam Altman Explodes at Board Members Who Fired Him"And all those people that I feel like really fucked me and fucked the company were gone, and now I had to clean up their mess," adding that he was "fucking depressed and tired.""And it felt so unfair," the billionaire told Bloomberg. 
"It was just a crazy thing to have to go through and then have no time to recover, because the house was on fire."The board's primary fiduciary duty was not to maintain shareholder value or profits, but rather to stay true to OpenAI's mission of creating safe artificial general intelligence (AGI) that benefits humanity.Helen Toner: the director of strategy at Georgetown's Center for Security and Emerging Technology.Tasha McCauley: an adjunct senior management scientist at think tank RAND Corporation. McCauley was also on the advisory board of the Centre for Effective Altruism. In 2017 she signed the Asilomar AI Principles on ethical AI development alongside Altman, OpenAI co-founder Ilya Sutskever, and former board member Elon MuskOpenAI CEO Sam Altman denies sexual abuse allegations made by his sister in lawsuitMuskMaga v Musk: Trump camp divided in bitter fight over immigration policyElon Musk Endorses Nazi-Linked German Party, Even Though It Opposed Tesla's GigafactoryTech Bro Wealth12 US billionaires gained almost $1 trillion in wealth in 2024 as the stock market delivered another year of massive returnsNYT Report Says Jensen Huang, The CEO Of Nvidia And The 10th-Richest Person In The U.S., Trying To Allegedly Avoid $8 Billion In TaxesMark Zuckerberg says he doesn't have a Hawaiian doomsday bunker, just a 'little shelter.' 
It's bigger than most houses.
You could live next door to Jeff Bezos on 'Billionaire Bunker' island for $200 million
Musk urges Bezos to throw an ‘epic wedding' after Amazon founder blasts report of $600 million nuptials as ‘completely false'
Elon Musk takes aim at MacKenzie Scott again for giving billions to liberal causes, calling the gifts 'concerning'
How Jensen Huang and 3 Nvidia Board Members Became Billionaires
Mark Zuckerberg sported a $900,000 piece of wrist candy as he announced the end of fact-checking on Meta

DEI/ESG Flip-Flopping

When an anti-DEI activist took a swing at Costco, the board hit back
A Costco shareholder proposal brought by conservative activist The National Center for Public Policy Research asked the company to probe its diversity, equity and inclusion policies, with an eye toward eliminating them.
The thrust of the proposal is that certain DEI initiatives could open Costco up to financial risks over discrimination lawsuits from employees who are “white, Asian, male or straight.”
The company's board of directors unanimously urged shareholders to reject the proposal and made the case that Costco's success depends on establishing a racially diverse, inclusive workplace: “We believe that our diversity, equity and inclusion efforts are legally appropriate, and nothing in the (Center for Public Policy Research) proposal demonstrates otherwise,” the board's statement said.
The statement went on to rebuke the Center for Public Policy Research, saying that they and others were the ones responsible for inflicting financial and legal burdens on companies.
“The proponent's broader agenda is not reducing the risk for the Company but abolition of diversity programs,” the board said.
Costco board member defends DEI practices, rebukes companies scrapping policies
Jeff Raikes, co-founder of the Raikes Foundation and former CEO of the Bill & Melinda Gates Foundation, who has served on Costco's board of directors since 2008: "Attacks on DEI aren't just bad for business—they hurt our economy. A diverse workforce drives innovation, expands markets, and fuels growth. Let's focus on building a future where all talent thrives." He concluded his post on X with the hashtag "InclusiveEconomy." While businesses began to announce their departures from DEI policies last year, Raikes urged companies to expand such practices at work, insisting that scaling down DEI in businesses would harm the economy.
Robbie Starbuck: “I fully endorse cancelling memberships at this point.”
McDonald's rolls back DEI programs, ending push for greater diversity
Four years after launching a push for more diversity in its ranks, McDonald's said it will retire specific goals for achieving diversity at senior leadership levels. It also intends to end a program that encourages its suppliers to develop diversity training and to increase the number of minority group members represented within their own leadership ranks.
Managers 'touch up' staff: McDonald's faces fresh abuse claims
Fast-food chain McDonald's has been hit by fresh allegations of sexual and homophobic abuse as staff members allege they have been 'touched up' by managers and offered extra shifts for sex.
The chain first faced bombshell claims of widespread sexual abuse and harassment at its stores in July 2023 and has since been reported more than 300 times for harassment to the UK's equality watchdog.
Allegations have included racist abuse, sexual assault and harassment, and bullying.
BlackRock Cuts Back on Board Diversity Push in Proxy-Vote Guidelines
The policy updates remove both (a) numerical diversity targets (i.e., boards should aspire to 30% diversity of membership and have at least 2 women directors and 1 director from an underrepresented group) and (b) the related disclosure-based voting policy (i.e., BlackRock previously would consider taking voting action if a company did not adequately explain its approach to board diversity) – but provide that BlackRock may consider taking voting action if an S&P 500 board is not sufficiently diverse (BlackRock includes a footnote in the policy update suggesting that 30% diversity may still be the expectation).
BlackRock's investment stewardship team tweaked the language used to describe how it approaches votes for other companies' boards. It didn't explicitly recommend that boards should aspire to at least 30% diversity of their members, after having done so in previous years.
The report noted, however, that all but 2% of the boards of companies in the S&P 500 have diverse representation of at least 30%—and that if companies were out of step with those norms, BlackRock may cast opposing votes on a case-by-case basis.
JPMorgan Leaves Net Zero Banking Group, Completing Departure of Major U.S. Banks

Stakeholder Anger (or Anger at Stakeholders)

Poll finds many Americans pin partial blame on insurance companies in UHC CEO killing
A recent survey from the University of Chicago found that, while 8 out of 10 U.S. adults believe the person who killed Brian Thompson bears the responsibility for the murder, 7 in 10 shared the belief that healthcare companies are also to blame.
Luigi Mangione mention on SNL met with applause, critics slam 'woke' audience: 'Wooing for justice?'
New York to charge fossil fuel companies for damage from climate change
The new law requires companies responsible for substantial greenhouse gas emissions to pay into a state fund for infrastructure projects meant to repair or avoid future damage from climate change.
Albania bans TikTok for a year after fatal stabbing of teenager last month
Teens in Vietnam will now be limited to one hour of gaming per session
Starbucks baristas set to strike as new CEO makes $100 million
Washington Post Cartoonist Quits After Jeff Bezos Cartoon Is Killed
Norway on track to be the first to ‘erase petrol and diesel engine cars'
Fully electric vehicles accounted for 88.9% of new cars sold in 2024
Exxon Sues California Official, Claiming He Defamed the Company
Exxon Mobil sued California's attorney general, the Sierra Club and other environmental groups on Monday, alleging that they conspired to defame the oil giant and kneecap its business prospects amid a debate over whether plastics can be recycled effectively.

Dystopia
Man Trying to Catch Flight Alarmed as His Driverless Waymo Gets Stuck Driving in Loop Around Parking Lot
Asked to Write a Screenplay, ChatGPT Started Procrastinating and Making Excuses
Klarna's CEO says AI is capable of doing his job and it makes him feel 'gloomy'

Governance news
Shari Redstone is saying goodbye to Paramount Global
Charles Dolan, TV pioneer who founded HBO and Cablevision, dies at 98
Richard Parsons, former Time Warner CEO, dies at age 76
Dye & Durham board resigns, activist nominees take control, interim CEO named
The Fortune 500 has two new female CEOs—finally pushing that milestone above 11%

And we end with a few classics:
Boeing ends a troubled year with a jet-crash disaster in South Korea
Man who exploded Tesla Cybertruck outside Trump hotel used ChatGPT to plan the attack
Norovirus rates have skyrocketed by 340% this season.
Here's where the ‘winter vomiting disease' is spreading and why

MATT

1. Costco

National Center for Public Policy Research filed the proxy with Costco
Their arguments include…
US Supreme Court decision at Harvard
A $25m judgment in PA for a white regional manager at Starbucks who was fired after two black patrons were arrested for being black
This gem: “With 310,000 employees, Costco likely has at least 200,000 employees who are potentially victims of this type of illegal discrimination because they are white, Asian, male or straight.”
This, perhaps, is the greatest ironic argument for “meritocracy” ever made in history
They point out that the MAJORITY OF THE STAFF is white, Asian, male, or straight… but they don't even use Costco's data, they source census data and just guess
The real numbers:
Non-management is 44.2% white, management is 58% white - a 14% increase in meritocracy
Executives are 80.6% white - a whopping 36.4% more merit
Hispanics are 33.1% of non-management, 23.3% of management - 9.8% less merit!
Executives are 5.8% Hispanic, 26.3% less merit
Asians are 8.5% and 7.1%, so 1.4% less merit
7.9% executive - so even merit?
US exec management is 72.3% male
So 80.6% of executives are white, and 72.3% are male - and the argument NCPPR is making is that BECAUSE there are a lot of white males, there is a lot of RISK that THE WHITE MALES WILL SUE YOU if they think they're discriminated against
Think of what they're saying - because you have so many non-diverse people, you can't have diversity programs for risk of lawsuit
The response dropped the pretense that the proxy was anything except racism
The proponent professes concern about legal and financial risks to the Company and its shareholders associated with the diversity initiatives. The proponent's broader agenda is not reducing risk for the Company but abolition of diversity initiatives.
A 2023 federal district court decision, in a case brought by the proponent, noted that the proponent had "published a document called 'Balancing the Boardroom 2022,' which describes its shareholder activism as 'fighting back' against 'the evils of woke politicized capital and companies.' [The proponent went] on to describe 'CEOs and other corporate executives who are most woke and most hard-left political in their management of their corporations' as 'inimical to the Republic and its blessings of liberty' and 'committed to critical race theory and the socialist foundations of woke' or 'shameless monsters who are willing to sacrifice our future for their comforts.'" National Center for Public Policy Research v. Schultz, E.D. WA. (Sept. 11, 2023).
And the proponent's efforts to demonstrate retrenchment on the part of companies are misleading, at best. For example, the assertion that "Microsoft laid off an entirea[sic] DEI team" is simply wrong. It was later reported that Microsoft stated that the two positions eliminated were redundant roles on its events team and that Microsoft's diversity and inclusion commitments remain unchanged, according to Jeff Jones, a Microsoft spokesperson: “Our focus on diversity and inclusion is unwavering and we are holding firm on our expectations, prioritizing accountability, and continuing to focus on this work.” Colvin, Caroline. Amid DEI cuts, Microsoft works to distinguish itself from those responding to ‘woke' backlash. HR Dive, July 24, 2024.

Reasons Costco might be pushing back?
Racism is basically unveiled
Of all the companies targeted by a proposal or Robbie Starbuck, Costco has the lowest deviation in board member influence - as in, nearly the entire board has equal power, it's highly democratic - women, men, diverse cohorts are more or less equally powerful to anyone else in the room
No connections to any board member on another DEI-flipper company

Meanwhile, the anti-DEI, anti-immigrant movement has begun to eat itself before Trump even takes office
In defense of more H-1B visas and foreign workers, Vivek Ramaswamy says we venerate jocks over valedictorians on Twitter, and Americans aren't as good employees
The rebuttal was MAGA Trumpers saying Vivek is fake MAGA
Also this: “His entire argument is a terrible proposition,” he adds. “Children raised to be good little robots might grow up to build robots of their own someday, and become rich. Asians are the highest-earning racial group in America, but are they happier for it? Suicide is the leading cause of death for Asians aged 15-24 … and the second-leading cause of death for those aged 25-34.” Page points to a Psychology Today post that blames tiger parenting for causing anxiety and depression and then asks, “Do we really want this country to be even more stressed-out?”
Costco proxy says Asians are discriminated against
Twitch gamers are streaming about “meritocracy”

Mixture of Experts
Episode 34: Granite 3.1, NVIDIA Jetson, stealing AI models, and is pre-training over?

Mixture of Experts

Play Episode Listen Later Dec 20, 2024 40:30


Is pre-training a thing of the past? In Episode 34 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Vagner Santana and Volkmar Uhlig to debrief this week in AI. First, OpenAI cofounder Ilya Sutskever said that “peak data” has been reached; does this mean there is no longer a need for model pre-training? Next, IBM released Granite 3.1 with a slew of features; we cover them all. Then, there is a new way to steal AI models; how do we protect against model exfiltration? Finally, can NVIDIA Jetson for AI developers really increase hardware accessibility? Tune in for more!

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

00:01 — Intro
00:49 — Is pre-training over?
10:25 — Granite 3.1
22:23 — AI model stealing
33:38 — NVIDIA Jetson

AI For Humans
Google Wins AI Video, ChatGPT Gets a 1-800 Number & More Massive AI News

AI For Humans

Play Episode Listen Later Dec 19, 2024 44:15


Google's VEO 2 and Gemini updates are… freaking good?! Plus, OpenAI lets you call ChatGPT, o1 is in the API now & Nvidia's Jensen Huang is giving us a new robot brain. Plus, new Pika 2.0 AI video tools, Ilya Sutskever returns to tell us pre-training is over, YouTube and CAA come together on a deal for protecting celebrity likenesses, Google's Whisk tool is a fun AI toy, and we meet an angry old man who calls us wanting help for his furnace, whom we then get to drink Monster Milk.

IT'S A NEW SHOW, A PRESENT JUST FOR YOU

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// SHOW LINKS //

Call ChatGPT aka 1-800-ChatGPT https://www.youtube.com/live/LWa6OHeNK3s?si=0vogE9s_qVOmp81-
VEO 2 is actually pretty insane https://deepmind.google/technologies/veo/veo-2/
Knight on a Zebra https://x.com/emollick/status/1868897308529787248
Steak-Off https://x.com/blizaine/status/1868850653759783033
Tomato Cutting Vs Sora https://x.com/joecarlsonshow/status/1868822801546985685
Google Whisk https://x.com/Google/status/1868781358635442359
New Gemini 2.0 Flash Experimental Advanced https://x.com/sundarpichai/status/1869066293426655459
o1 in API + fine tuning in Real Time Voice https://x.com/OpenAIDevs/status/1869134054190448874
$2000 a month OAI Sub?? https://x.com/tsarnick/status/1868201597727342941
RUMORED TASKS/TO-DO BETA https://x.com/testingcatalog/status/1869364027769377146
o1 Preview Vastly Better Than Doctors at Reasoning https://x.com/deedydas/status/1869049071346102729
The Return of Ilya & The End of Pre-training https://www.youtube.com/watch?v=1yvBqasHLZs
Pika Labs 2.0 https://x.com/pika_labs/status/1867651381840040304
YT+CAA Deal in Celebrity Licenses https://www.hollywoodreporter.com/business/digital/youtube-caa-generative-ai-celebrity-likeness-deal-1236088491/
Nvidia New $250 Computer Jetson Nano Super https://youtu.be/S9L2WGf1KrM?si=hc10pdLVNuMZtCcn
MJ Prompt: A person protesting something weird https://www.reddit.com/r/midjourney/comments/1heki7l/prompt_a_person_protesting_something_weird/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Cap4D https://x.com/taubnerfelix/status/1869076254051151995
Video Seal (Meta Watermarking) https://www.threads.net/@luokai/post/DDkBu5lvXG_?xmt=AQGzHOz9TCBGkd77lKX4VFZsl9IFjjn9Nc95J3oLmVRF7A

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Ilya Sutskever Calls Peak Data and the End of Pretraining

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Dec 17, 2024 15:35


At a recent conference appearance, SSI founder (and former OpenAI leader) Ilya Sutskever claimed that we had reached peak data and that the era of pre-training as a scaling method had come to a close. NLW explores the implications. Plus, NotebookLM releases an enterprise edition.

Brought to you by: Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown

Metanoia Lab | Liderança, inovação e transformação digital, por Andrea Iorio
Ep. 195 | Ilya Sutskever: Small Language Models (SMLs) e porque os LLMs chegaram no limite.

Metanoia Lab | Liderança, inovação e transformação digital, por Andrea Iorio

Play Episode Listen Later Dec 17, 2024 18:57


In this episode of the fourth season of Metanoia Lab, sponsored by Oi Soluções, Andrea (andreaiorio.com) analyzes a remark by Ilya Sutskever, cofounder of OpenAI and now cofounder of Safe Superintelligence, about why Large Language Models have hit a limit in their development and improvement, and why Small Language Models represent the near future of the artificial intelligence world.

The Marketing AI Show
#127: 12 Days of OpenAI Continues, Gemini 2, Hands-On with o1, Andreessen Says Gov't Wanted “Complete Control” Over AI & OpenAI Employee Says AGI Achieved

The Marketing AI Show

Play Episode Listen Later Dec 17, 2024 86:24


While Santa's loading his sleigh, Silicon Valley's dropping AI breakthroughs by the hour. OpenAI's "12 Days of Shipmas" keeps the gifts coming with ChatGPT Canvas, Apple Intelligence integration, and game-changing video capabilities. Not to be outdone, Google jumps in with Gemini 2.0 and its impressive Deep Research tool. Join Paul Roetzer and Mike Kaput as they unwrap these developments, plus rapid-fire updates on Andreessen's AI censorship bombshell, an OpenAI employee's AGI claims, and the latest product launches and funding shaking up the industry.

Access the show notes and show links here

This episode is brought to you by our AI Mastery Membership. This 12-month membership gives you access to all the education, insights, and answers you need to master AI for your company and career. To learn more about the membership, go to www.smarterx.ai/ai-mastery. As a special thank you to our podcast audience, you can use the code POD150 to save $150 on a membership.

Timestamps:
00:05:39 — OpenAI 12 Days of Shipmas: Days 4 - 8
00:18:54 — Gemini 2 Release + Deep Research
00:33:03 — Hands-On with o1
00:46:18 — Perplexity Growth
00:50:46 — Andreessen AI Tech Censorship Comments
00:56:22 — OpenAI AGI
01:00:38 — Amazon Agent Lab
01:03:38 — Pricing for AI Agents
01:07:45 — OpenAI Faces Opposition to For-Profit Status
01:11:13 — Ilya Sutskever at NeurIPS
01:14:20 — Mollick Essay on When to Use AI
01:16:15 — Product and Funding Updates

Visit our website
Receive our weekly newsletter
Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook
Looking for content and resources? Register for a free webinar
Come to our next Marketing AI Conference
Enroll in AI Academy for Marketers

Hashtag Trending
Mark Zuckerberg and Elon Musk In An Unlikely Alliance To Attack OpenAI: Hashtag Trending for Monday, December 16, 2024

Hashtag Trending

Play Episode Listen Later Dec 16, 2024 11:01 Transcription Available


OpenAI's Project Mode, AI Industry's Future, and Workforce Shifts: Hashtag Trending

In today's episode of Hashtag Trending, host Jim Love delves into OpenAI's innovative Project Mode, the escalating AI rivalry with figures like Elon Musk and Mark Zuckerberg, and the anticipated mass resignation of Gen Z and Millennials in 2025. Key points include demonstrations of Project Mode's capabilities, industry-wide challenges and opportunities, and insights from AI thought leaders like Ilya Sutskever on the future of data and AI systems. Tune in for a comprehensive overview of the latest in AI and workplace trends.

00:00 Introduction and Overview
00:31 OpenAI's Project Mode: A Game Changer
01:03 Live Demonstrations and Practical Uses
02:25 OpenAI's Commitment to Delivery
03:25 Rivalries and Legal Battles
06:40 AI Industry Insights and Future Predictions
08:27 Workplace Trends and Future Resignations
10:40 Conclusion and Sign Off

Elon Musk Pod
Elon Musk vs. OpenAI & Microsoft: Antitrust Battle and AI Power Struggles Unveiled

Elon Musk Pod

Play Episode Listen Later Nov 18, 2024 9:59


Elon Musk Expands Legal Battle Against OpenAI and Microsoft

Episode Title: Elon Musk vs. OpenAI & Microsoft: Antitrust Battle and AI Power Struggles Unveiled

Episode Description: What started as a complaint over OpenAI's transformation from a nonprofit to a profit-driven powerhouse has escalated into a major antitrust legal battle. Musk is now alleging that Microsoft and OpenAI conspired to monopolize the generative AI market, sidelining competitors and potentially breaching federal antitrust laws. We dive into the history of OpenAI, the internal power struggles, and what this lawsuit could mean for the future of artificial intelligence.

Key Topics Discussed:
The Lawsuit's Expansion: We explore how Musk's original August complaint has evolved, now including new claims against Microsoft for allegedly colluding with OpenAI to dominate the AI market. We break down the legal arguments and what Musk is seeking from the court.
OpenAI's Controversial Transformation: Originally founded as a nonprofit, OpenAI shifted gears in 2019, attracting billions in investment from Microsoft. We discuss how this change in business model became a point of contention for Musk and set the stage for the current legal conflict.
Behind-the-Scenes Drama: Newly revealed emails between Musk, Sam Altman, Ilya Sutskever, and other OpenAI co-founders offer a rare glimpse into the early days of OpenAI. We dive into the disagreements over leadership, Musk's quest for control, and the internal debates about the company's mission.
Microsoft's Role and Investment: Microsoft's billion-dollar partnership with OpenAI is at the heart of Musk's complaint. We examine the timeline of this collaboration, the exclusive licensing agreements, and why Musk views this as an anticompetitive move.
Musk's Fear of an 'AGI Dictatorship': Emails from as early as 2016 show Musk's concerns about Google's DeepMind and its potential to dominate the AI space. We discuss Musk's fears of a single company controlling AGI (Artificial General Intelligence) and how these concerns influenced the founding of OpenAI.
Intel's Missed Opportunity: We touch on Intel's decision to pass on a $1 billion investment in OpenAI back in 2017, a move that now appears shortsighted given OpenAI's current valuation and market influence.
The Legal Stakes and Future Implications: What could this lawsuit mean for the future of AI development and industry partnerships? We break down the potential consequences for OpenAI, Microsoft, and the broader tech landscape.

Featured Quotes:
Marc Toberoff (Musk's attorney): “Microsoft's anticompetitive practices have escalated. Sunlight is the best disinfectant.”
Elon Musk (internal email): “DeepMind is causing me extreme mental stress. If they win, it will be really bad news with their one mind to rule the world philosophy.”

Why It Matters: This case isn't just about corporate rivalry; it's about the future control of artificial intelligence and the ethical concerns surrounding its development. As the AI race intensifies, Musk's lawsuit raises questions about monopolistic practices, transparency, and the potential consequences of unchecked power in the tech industry.

Tune In To Learn:
Why Musk believes Microsoft and OpenAI's partnership is illegal and anticompetitive.
How internal power struggles shaped the trajectory of OpenAI and influenced Musk's departure.
What the disclosed emails reveal about the early vision for OpenAI and the concerns about AGI dominance.

Resources Mentioned:
Musk's original lawsuit filing (August 2024)
OpenAI's response to the amended complaint
Email exchanges between OpenAI co-founders (2015-2018)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show! Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio.

After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."

The History of Dust

Dust's journey can be broken down into three phases:

* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed – LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and better observability that wasn't just `print` statements.
* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content.
This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.
* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.

The Case for Being Horizontal

The big discussion for early stage companies today is whether or not to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value, becoming more of "Services as Software". Dust, on the other hand, is a platform for the users to build their own experiences, which has had a few advantages:

* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.
* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.
* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.

The Vertical Challenge

However, this approach comes with trade-offs:

* Harder Go-to-Market: As Stan talked about: "We spike at penetration... but it makes our go-to-market much harder. Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"
* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately – from structured Salesforce data to unstructured Notion pages.
As you scale integrations, the cost of maintaining them also scales.
* Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions."

The Future of AI Platforms

Stan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.

This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.

Full YouTube Episode

Chapters
* 00:00:00 Introductions
* 00:04:33 Joining OpenAI from Paris
* 00:09:54 Research evolution and compute allocation at OpenAI
* 00:13:12 Working with Ilya Sutskever and OpenAI's vision
* 00:15:51 Leaving OpenAI to start Dust
* 00:18:15 Early focus on browser extension and WebGPT-like functionality
* 00:20:20 Dust as the infrastructure for agents
* 00:24:03 Challenges of building with early AI models
* 00:28:17 LLMs and Workflow Automation
* 00:35:28 Building dependency graphs of agents
* 00:37:34 Simulating API endpoints
* 00:40:41 State of AI models
* 00:43:19 Running evals
* 00:46:36 Challenges in building AI agents infra
* 00:49:21 Buy vs. build decisions for infrastructure components
* 00:51:02 Future of SaaS and AI's Impact on Software
* 00:53:07 The single employee $1B company race
* 00:56:32 Horizontal vs. vertical approaches to AI agents

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast.
This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.
Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.
Stan [00:00:14]: Thank you very much for having me.
Swyx [00:00:16]: Visiting from Paris.
Stan [00:00:17]: Paris.
Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college at both École Polytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll talk, we'll spend a little bit of time about that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.
Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.
Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, for back in the day, like we're talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI. I think the Stripe culture has come into OpenAI quite a bit.
Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.
Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.
Stan [00:01:34]: My journey started as anybody else, you're fascinated with computer science and you want to make them think, it's awesome, but it doesn't work. I mean, it was a long time ago, it was like maybe 16, so it was 25 years ago.
Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose how old I am, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was half features for vision and the A* algorithm. So it was fun. But it was the early days of deep learning. At the time, I think a few years after, it was the first project at Google. But you know, that cat face or the human face trained from many images. I went to, hesitated doing a PhD, more in systems, eventually decided to go into getting a job. Went at Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started interesting myself in AI again, felt like it was the time, you had the Atari games, you had the self-driving craziness at the time. And I started exploring projects, it felt like the Atari games were incredible, but there were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things, self-driving cars, cybersecurity and AI, and math and AI. It's like I sing it by a decreasing order of impact on the world, I guess.
Swyx [00:03:01]: Discovering new math would be very foundational.
Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.
Swyx [00:03:07]: Sorry, you're doing this at Stripe, you're like thinking about your next move.
Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team wasn't there. And also I realized that if I wake up a day and because of a bug I wrote, I killed a family, it would be a bad experience.
And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We were trying to apply transformers to code fuzzing. So in fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. It didn't work at all, because transformers are so slow compared to evolutionary algorithms. Then I got interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was kind of starting the reasoning team that was tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya, and finally found my way to OpenAI. I don't know how much you want to dig into that. The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.

Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.

Stan [00:04:38]: The search.

Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.

Stan [00:04:45]: I'm going to be ashamed to say that...

Swyx [00:04:47]: You were searching before.

Stan [00:04:49]: I was searching before. I mean, it's normal. No, the truth is that I had moved back to Paris through Stripe and I just felt the hardship of being remote from your team, nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.

Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris, and obviously you had worked with Greg, but not anyone else.

Stan [00:05:13]: No. Yeah.
So I had worked with Greg, but not Ilya. But I had started chatting with Ilya, and Ilya was kind of excited, because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher: didn't do a PhD, never did research. And I started chatting and he was excited, all the way to the point where he was like, hey, come pass the interviews, it's going to be fun. I think he didn't care where I was; he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time. He's like, hey, Stan, it's awesome, you've got an offer, when are you coming to SF? I'm like, hey, it's awesome, but I'm not coming to SF, I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that, but that's basically the idea. And it took maybe a bit more time of chatting, and they eventually decided to try a contractor setup. And that's how I started working at OpenAI: officially as a contractor, but in practice it really felt like being an employee.

Swyx [00:06:14]: What did you work on?

Stan [00:06:15]: It was solely focused on math and AI, and in particular the study of large language models' mathematical reasoning capabilities, especially in the context of formal mathematics. The motivation was simple: transformers are very creative, but yet they make mistakes. Formal math systems have the ability to verify a proof, but the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together: you would get the creativity of the LLMs and the verification capabilities of the formal system.
A formal system, just to give a little bit of context, is a system in which a proof is a program, and the formal system is a type system, a type system that is so evolved that you can verify the program. If it type checks, it means that the program is correct.

Swyx [00:07:06]: Is the verification much faster than actually executing the program?

Stan [00:07:12]: Verification is instantaneous, basically. The truth is that what you write involves tactics that may involve computation to search for solutions, so that part is not instantaneous: you do have to do the computation to expand the tactics into the actual proof. But the verification of the proof at the very low level is instantaneous.

Swyx [00:07:32]: How quickly do you run into, you know, halting-problem, P-versus-NP-type things, like impossibilities?

Stan [00:07:39]: I mean, you don't run into it. At the time, it was really about trying to solve very easy problems. So I think the...

Swyx: Can you give an example of easy?

Stan: Yeah, so that's the MATH benchmark that everybody knows today.

Swyx: The Dan Hendrycks one.

Stan: The Dan Hendrycks one, yeah. And I think it was the low-end part of the MATH benchmark at the time, because that benchmark includes AMC problems (AMC 8, AMC 10, AMC 12), so these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, which are crazy hard.

Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally the grades of high school: grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on it again: there's a bit of work with Lean, and then more recently with DeepMind scoring silver on the IMO. Any commentary on how math has evolved from your early work to today?

Stan [00:08:34]: I mean, that result is mind-blowing. I mean, from my perspective, I spent three years on that.
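The proof-as-program framing above can be made concrete with a tiny Lean 4 sketch. This is an editorial illustration, not from the conversation: the theorem name is ours, while `Nat.add_comm` is from Lean's standard library.

```lean
-- Curry–Howard in miniature: the proposition `a + b = b + a` is a type,
-- and the term `Nat.add_comm a b` is a program inhabiting that type.
-- If this definition type-checks, the theorem is verified; no execution
-- of anything is needed.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is exactly the "verification is instantaneous" point: the type checker only has to check the proof term, even if finding that term (the tactic search) took arbitrary compute.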
At the same time, Guillaume Lample in Paris (we were both in Paris, actually; he was at FAIR) was working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there, but the idea of getting a medal at an IMO was just remote. So this is an impressive result. And I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago that goes a little bit more into the details. It feels like there's nothing magical there; it's really applying reinforcement learning and scaling up the amount of data they can generate through autoformalization. So we can dig into what autoformalization means if you want.

Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI time. So you joined, and you're like, I'm going to work on math and do all of these things. I saw in one of your blog posts you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2 to getting closer to DaVinci 003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.

Stan [00:09:54]: I can give you my perspective on it. I think at OpenAI, there has always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. This was pre-Anthropic split, and most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not-core research teams that were trying to explore more specific problems, or maybe the algorithmic part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them.
But in that space, there's a managing tool that is great, which is compute allocation. By managing the compute allocation, you can message to the teams where you think the priority should go. And so it was really a question of: you were free as a researcher to work on whatever you wanted, but if it was not aligned with OpenAI's mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI, and so I was lucky to generally get the compute I needed to make good progress.

Swyx [00:11:06]: What do you need to show as incremental results to get funded for further research?

Stan [00:11:12]: It's an imperfect process, because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company, so it's much easier than going into something much riskier. You have to show incremental progress. You ask for a certain amount of compute, you deliver a few weeks after, and you demonstrate that you have made progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get, and much more interesting, than a positive result. And then, as in any organization, you would have people finding your project, or any other project, cool and fancy. And so you would have that kind of phase of growing compute allocation all the way up to a point, and then maybe you reach an apex, and then maybe you go back mostly to zero and restart the process, because you're going in a different direction or something else. That's how I felt it.

Swyx: Explore, exploit.

Stan: Yeah, exactly.
It's a reinforcement learning approach.

Swyx [00:12:14]: Classic PhD student search process.

Alessio [00:12:17]: And you were reporting to Ilya, like, the results you were bringing back to him? What's the structure? It's almost like when you're doing such cutting-edge research, you need to report to somebody who is actually smart enough to understand whether the direction is right.

Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me into OpenAI, I was lucky, mostly during the first years, to have kind of direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, was the one showing the North Star, right? It was his job, and I think he really enjoyed it and did it super well: going through the teams and saying, this is where we should be going, and trying to, you know, flock the different teams together towards an objective.

Swyx [00:13:12]: I would say the public perception of him is that he was the strongest believer in scaling.

Stan: Oh, yeah.

Swyx: Obviously, he has always pursued the compression thesis. You have worked with him personally; what does the public not know about how he works?

Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together, versus...

Swyx [00:13:40]: To be specific, vision is AGI?

Stan [00:13:42]: Oh, yeah. Vision is like, yeah, it's the belief in compression and scaling compute.
I remember when I started working on the reasoning team, the excitement was really about scaling the compute around reasoning, and that was really the belief we wanted to ingrain in the team. And that's what has been useful to the team. The DeepMind results show that it was the right approach, and the success of GPT-4 and such shows that it was the right approach.

Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?

Stan [00:14:12]: I think it was before that, because those came with GPT-3, basically at the time of GPT-3 being released, or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything, and that it was just a question of scaling.

Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.

Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs, and rightfully so. One thing about Sam Altman: he really impressed me, because when I joined, he had joined not that long ago, and it felt like he was kind of a very high-level CEO. And I was mind-blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where, when I was having lunch with him at OpenAI by year two, he would quite deeply know what I was doing.

Swyx: With no ML background.

Stan: Yeah, with no ML background. But I didn't have any either, so I guess that explains why. But I think it's a question of: you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what the goal is, what's being done, and what the recent results are. And we could have a very productive discussion.
And that really impressed me, given the size of OpenAI at the time, which was not negligible.

Swyx [00:15:44]: Yeah. I mean, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?

Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company is something that I feel is really enlightening: most of the time you operate at a very high level, but you're able to go deep down and be in the know of what's happening on the ground. That's not a place in which I ever was as a founder, because first company, we went all the way to 10 people; current company, there are 25 of us. So the high level, the sky, and the ground are pretty much at the same place.

Swyx [00:16:21]: No, you're being too humble. I mean, Stripe was also like a huge rocket ship.

Stan [00:16:23]: At Stripe, I wasn't a founder. So there, like at OpenAI, I was really happy being on the ground, pushing the machine, making it work. Yeah.

Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time; you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And does that have any similarities now? Like, are we going to see a new Anthropic emerge from these folks that just left?

Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising, because they had been training GPT-3; it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement about the commercialization of that technology. I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement?
I don't know.

Swyx [00:17:25]: Was it safety?

Stan [00:17:26]: Was it commercialization?

Swyx [00:17:27]: Or did they just want to start a company?

Stan [00:17:28]: Exactly. That I don't know. But I think what I was surprised by is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org, and the mission was so clear that even with some divergence in some teams, some people leaving, the mission is still there. We have the compute. We have a site. So it just keeps going.

Swyx [00:17:50]: Very deep bench. Like, just a lot of talent. Yeah.

Alessio [00:17:53]: So that was the OpenAI part of the history.

Stan: Exactly.

Alessio: So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like, and why did you decide to start with a more developer-focused, AI-engineer kind of tool, rather than going back into research or something else?

Stan [00:18:15]: Yeah. First, I'm not a trained researcher, so going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds, and at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a formally trained researcher, and having a research career wasn't necessarily an ambition of mine. I felt the hardness of it; I also enjoyed it a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for trying to go there.
So that was kind of the core personal motivation at the beginning. And the motivation for starting a company was pretty simple. I had seen GPT-4 internally at the time (it was September 2022, so pre-ChatGPT, but GPT-4 had been ready internally for a few months). I was like, okay, it's obvious the capabilities are there to create an insane amount of value for the world, and yet the deployment is not there yet. The revenues of OpenAI at the time were ridiculously small compared to what they are today. So the thesis was: there's probably a lot to be done at the product level to unlock the usage.

Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of like the WebGPT-like thing: using the models to traverse the web and summarize things, with the browser really being the interface. Why did you start with the browser? Why was it important? And then you built XP1, which was the browser extension.

Stan [00:20:09]: So the starting point at the time was: if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and, to some extent, very early adopters, very early engineers. It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one on marketing, I don't remember its name... Jasper. But so the natural first intention was to go to the developers and try to create tooling for them to create products on top of those models. And that's what Dust was originally. It was quite different from LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.

Swyx [00:20:53]: You were cloud and closed source; they were open source.

Stan [00:20:56]: Yeah. So technically we were open source, and we still are open source, but I think that doesn't really matter.
I had the strong belief from my research time that you cannot create an LLM-based workflow on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have a messy stream of tokens going out, and it's very hard to observe what's going on there. And so the idea was to go with a UI, so that you could easily introspect the output of each interaction with the model and dig in there through a UI, which is...

Swyx [00:21:42]: Was that open source? I actually didn't come across it.

Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source even today. We're not going for an open source...

Swyx [00:21:48]: If it matters, I didn't know that.

Stan [00:21:49]: No, no, no, no, no. The reason is that we're open source, but we're not doing an open source strategy. It's not an open source go-to-market at all. We're open source because we can, and it's fun.

Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is that people can clone you.

Stan [00:22:03]: But I think that downside is a big fallacy. Okay, yes, anybody can clone Dust today, but the value of Dust is not its current state. The value of Dust is the number of eyeballs and hands of developers that will be creating with it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with a security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request. Exactly: oh, PR welcome.
That doesn't happen that much, but you can show the progress, and if the person that you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing it all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation, but the truth is that your vector of attack is facilitated by being open source. At the same time, it's a good thing, because if you're doing anything like bug bountying, you just give much more tools to the bug bounty hunters, so their output is much better. So there are many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market and the product and all of those things that are around the code base. Obviously, that's not true for every code base. If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.

Alessio [00:23:39]: I signed up for XP1, I was looking: January 2023. I think at the time you were on DaVinci 003. Given that you had seen GPT-4, how did you feel having to push out a product that was using this model that was so inferior? And you're like, please, just use it today, I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but where you're expecting the next model to be better?

Stan [00:24:03]: Yeah, so actually, XP1 was on an even smaller one, the small version of the post-GPT release, so it was...

Swyx: Ada, Babbage...

Stan: No, no, no, not that far away. But it was the small version, basically. I don't remember its name. Yes, you have a frustration there.
But at the same time, I think XP1 was an experiment, but it was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article from a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. There's a bit of frustration because you know what's out there and you know that you don't have access to it yet, but it's also interesting to try to find a product that works with the current capability.

Alessio [00:24:55]: And we highlighted XP1 in our Anatomy of Autonomy post in April of last year, which was, you know, where are all the agents, right? So now we've spent 30 minutes getting to what you're building now. You basically had a developer framework, then you had a browser extension, then you had all these things, and then you got to where Dust is today. So maybe just give people an overview of what Dust is today and the core thesis behind it.

Stan [00:25:20]: Yeah, of course. So with Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature, because we strongly believe in the emergence of use cases from the people who have access to creating an agent, who don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there are two focuses, which is interesting. The first one is an infrastructure focus: you have to build the pipes so that the agent has access to the data, and you have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play.
Maintaining connections to Notion, Slack, GitHub, all of them, is a lot of work. It is boring infrastructure work, but that's something that we know is extremely valuable, in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use. And there it's fascinating, because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization. We haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So our mission is really to create the product that lets people equip themselves to offload all the work that can be automated or assisted by LLMs.

Alessio [00:26:57]: And can you just comment on the different takes that people had? So maybe the most open is auto-GPT: it's just kind of trying to do anything, it's all magic, there's no way for you to do anything. Then you had Adept (you know, we had David on the podcast), who are very hands-on with each individual customer, building something super tailored. How do you decide where to draw the line between "this is magic" and "this is exposed to you", especially in a market where most people don't know how to build with AI at all? If you expect them to do the thing, they're probably not going to do it.

Stan [00:27:29]: Yeah, exactly. So the auto-GPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting where it works, same as with XP1. And where it works is pretty simple: simple workflows that involve a couple of tools, where you don't even need to have the model decide which tools to use, in the sense that you just want people to put it in the instructions.
It's like: take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right, in terms of orchestrating the tools. It's mostly using English for people to program a workflow, where you don't have the constraint of having compatible APIs between the two.

Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if this, then that, and then, you know, do this, then this. You're programming with English?

Stan [00:28:28]: You're programming with English. You're just saying, oh, do this and then that. You can even create some form of APIs. You say: when I give you the command X, do this; when I give you the command Y, do this. And you describe the workflow, but you don't have to create boxes and build the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying a structured database. The tool can be searching the web. And obviously, the interesting tools that we're only starting to scratch are the ones creating external actions, like reimbursing something on Stripe, sending an email, clicking on a button in an admin, or something like that.

Swyx [00:29:11]: Do you maintain all these integrations?

Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to custom-integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce, because Salesforce is basically a database and a UI, and they do the f**k they want with it. And so every company has different models and stuff like that.
So right now, we don't support it natively. And the type of real native support will be slightly more complex than just OAuthing into it, like is the case with Slack, as an example. Because it's probably going to be: oh, you want to connect your Salesforce to us? Give us the SOQL (that's the Salesforce query language), give us the queries you want us to run on it and inject into the context of Dust. So that's interesting: not only are integrations cool, but some of them require a bit of work from the user. And for some of them that are really valuable to our users but that we don't support yet, they can just build them internally and push the data to us.

Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify: are you using browser automation because there's no API for something?

Stan [00:30:24]: No, no, no, no. So we do have browser automation for all the use cases that apply to the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.

Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?

Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer looking at an agent clicking on stuff, then I'll hit my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. If the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world moves forward, that's disappearing. The core RPA value in the past has really been: oh, this old '90s product doesn't have an API, so I need to use the UI to automate. I think for most of the companies that are ICP for us, the scale-ups between 500 and 5,000 people, tech companies, most of the SaaS they use have APIs.
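The "programming with English" pattern Stan describes earlier, a scripted workflow over a couple of tools with no orchestration smartness, can be sketched as a toy. The tool names and the dispatcher below are hypothetical illustrations, not Dust's actual API.

```python
# Toy sketch of "programming with English": a workflow is an ordered list
# of scripted steps, each naming a registered tool. The tool names and the
# dispatch logic are hypothetical, not Dust's actual implementation.

def semantic_search(query: str) -> str:
    # Stand-in for a real retrieval tool (e.g. search over Slack/Notion).
    return f"top result for '{query}'"

def summarize(text: str) -> str:
    # Stand-in for an LLM summarization call.
    return f"summary of [{text}]"

TOOLS = {"search": semantic_search, "summarize": summarize}

def run_workflow(steps: list[tuple[str, str]]) -> str:
    """Execute scripted steps in order; "$prev" in a step's input is
    replaced by the previous tool's output."""
    result = ""
    for tool_name, arg in steps:
        result = TOOLS[tool_name](arg.replace("$prev", result))
    return result

# "Do that search, then give me a summary in the format I want":
out = run_workflow([("search", "Q3 incidents"), ("summarize", "$prev")])
```

The point of the sketch is the absence of smartness: the steps are fixed by the instructions, and the model (here, each stand-in tool) only fills in the content, which is why this works even when agentic capabilities are weak.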
Now there's an interesting question for the open web, because there is stuff that you want to do that involves websites that don't necessarily have APIs. And the current state of web integration, which is us and OpenAI and Anthropic (I don't even know if they have web navigation, but I don't think so), is really, really broken. Because what do you have? You have basically search and headless browsing. But for headless browsing, I think everybody's doing basically body.innerText and filling that into the model, right?

Swyx [00:31:56]: There's parsers into Markdown and stuff.

Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page in a way that is compatible with a model: being able to maintain the selector (so basically the place where to click in the page) through that process, exposing the actions to the model, having the model select an action in a way that is compatible with the model, which is not a big page of a full DOM that is very noisy, and then being able to decompress that back to the original page and take the action. That's something that is really exciting and that will change the level of things that agents can do on the web. That I find exciting, but I also feel that the bulk of the useful stuff that you can do within the company can be done through APIs: the data can be retrieved by API, the actions can be taken through API.

Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan.

Stan: Exactly, exactly. And we've seen since this summer: Adept is where it is, and Dust is where it is. So Dust is still standing.

Alessio [00:32:55]: Can we just quickly comment on function calling? You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just that you don't want to put the complexity in there?
Like, is there any room for improvement left in function calling? Or do you feel you usually, consistently get the right response, the right parameters, and all of that?

Stan [00:33:13]: So that's a tricky product question. Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. And the model will just look at the script and follow it and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of deduced from the state of the conversation: I'll just go with it. If you provide a very high-level, kind of AutoGPT-esque level of instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress to be made on the capabilities. But the interesting part is that there is already so much work that can be assisted, augmented, accelerated by just going with pretty simply scripted agents. What I'm excited about, by pushing our users to create rather simple agents, is that once you have those working really well, you can create meta agents that use the agents as actions. And all of a sudden, you can kind of have a hierarchy of responsibility that will probably get you almost to the point of the AutoGPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you an example. Our incidents are shared in Slack in a specific channel, and the stuff we ship is shared in Slack too. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just goes and finds the right data on Slack and creates the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table.
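The "scripted" function-calling pattern Stan describes can be sketched as a small dispatch table. Everything here is a hypothetical illustration, not Dust's implementation: in production the model, not hard-coded calls, chooses the tool name and fills the parameters from the conversation state.

```typescript
// Sketch of scripted tool dispatch: with precise instructions and a
// small tool set, "function calling" reduces to matching a step in the
// script to a registered action. Tool names and shapes are invented.

type ToolHandler = (params: Record<string, string>) => string;

interface Tool {
  description: string;
  handler: ToolHandler;
}

const tools: Record<string, Tool> = {
  fetch_slack_channel: {
    description: "Retrieve the last week of messages from a channel",
    handler: ({ channel }) => `messages from #${channel}`,
  },
  build_table: {
    description: "Summarize retrieved messages into a table",
    handler: ({ source }) => `table built from ${source}`,
  },
};

// Stand-in for the model's output: a tool name plus parameters it
// deduced from the conversation.
interface ToolCall {
  name: string;
  params: Record<string, string>;
}

function dispatch(call: ToolCall): string {
  const tool = tools[call.name];
  if (!tool) throw new Error(`unknown tool: ${call.name}`);
  return tool.handler(call.params);
}

// The scripted instructions name the steps, so the model just fills in
// the blanks, as in the weekly incident-table example above.
const step1 = dispatch({ name: "fetch_slack_channel", params: { channel: "incidents" } });
const step2 = dispatch({ name: "build_table", params: { source: step1 } });
```

With only a couple of well-described tools and a scripted order, there is very little room for the model to pick the wrong action.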
And then in that weekly meeting we obviously have some graphs and reporting about our financials and our progress and our ARR. And we've created assistants to generate those graphs directly. And those assistants work great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to a world where we'll have a weekly meeting assistant. We'll just call it. You don't need to prompt it. You don't need to say anything. It's going to run those different assistants and get that Notion page just ready. And by doing that, if you get there (and that's an objective for us, using Dust ourselves, to get there), you're saving an hour of company time every time you run it. Yeah.

Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?

Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. Have you discovered best practices for, let's say, a manager agent controlling a bunch of small agents? Is it two-way communication? I don't know if there should be a protocol format.

Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored yet the meta agents. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start. And it's also what the market understands. If you go to a company, a random SaaS B2B company, not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents.
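The meta-agent idea sketched above (simple agents exposed as actions to a parent agent) can be written down in a few lines. The interface and the assembly logic are hypothetical assumptions for illustration, not how Dust structures it.

```typescript
// Sketch of a meta agent: once the simple agents work, a parent agent
// can treat them as its own actions and assemble their outputs.
// All names and outputs here are invented for illustration.

interface Agent {
  name: string;
  run: (input: string) => string;
}

const incidentTable: Agent = {
  name: "incident_table",
  run: () => "| incident | status |",
};

const arrGraph: Agent = {
  name: "arr_graph",
  run: () => "[ARR chart]",
};

// The meta agent's "tools" are just the sub-agents; it sequences them
// and assembles the weekly-meeting page from their outputs.
function weeklyMeetingAgent(subAgents: Agent[]): string {
  return subAgents
    .map((a) => `## ${a.name}\n${a.run("this week")}`)
    .join("\n\n");
}

const weeklyPage = weeklyMeetingAgent([incidentTable, arrGraph]);
```

The hierarchy of responsibility falls out naturally: each sub-agent stays trivially simple, and the parent only sequences and formats.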
If you tell them, build AutoGPT, they'll be like, Auto-what?

Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention instruction instead of system prompt, right? That's very conscious.

Stan [00:36:41]: Yeah, it's very conscious. It's a mark of our designer, Ed, who kind of pushed us to create a friendly product. I was knee-deep into AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well. We started a company together that got acquired by Stripe 15 years ago. Then he was at Alan, a healthcare company in Paris. After that, he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that feels not too scary, and creating a wording and a language, as you say, that really tries to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.

Alessio [00:37:34]: And another big point that David had about Adept is that you need to build an environment for the agents to act in. And then if you have the environment, you can simulate what they do. How is that different when you're interacting with APIs and you're kind of touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.

Stan [00:37:52]: So I think that goes back to the DNA of the companies, which are very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model. And that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs.
We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, to answer your question, when you're interacting with the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations, so that you can get at least feedback and some signal about the performance of the assistants. But if you take an actual trace of interactions of humans with those agents, it is, even for us humans, extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So being extremely, extremely, extremely pragmatic here, it becomes a product issue. We have to build a product that incentivizes the end users to provide feedback, so that as a first step, the person that is building the agent can iterate on it. As a second step, maybe later when we start training models and post-training, et cetera, we can optimize around that for each of those companies. Yeah.

Alessio [00:39:17]: Do you see products in the future offering a simulation environment, the same way all SaaS now kind of offer APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulation environments so that you can then use agents to red-team, but I haven't really seen that.

Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much... because you need to simulate to generate data, and you need data to train models. And the question at the end is, are we going to be training models, or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion.
It might be the case that we'll be training models, because in all of those AI-first products, the model is so close to the product surface that as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training, that would be crazy. But at least having an internal post-training, realignment loop makes a lot of sense. And so if we see many companies going toward that over time, then there might be incentives for the SaaS's of the world to provide assistance in getting there. But at the same time, there's a tension, because those SaaS, they don't want to be interacted with by agents, they want the human to click on the button. Yeah, they've got to sell seats. Exactly.

Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?

Stan [00:40:53]: We've seen over the past two years kind of a bit of a race between models. And at times, it's the OpenAI model that is the best. At times, it's the Anthropic models that are the best. Our take on that is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...

Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?

Stan [00:41:20]: We have a sane default. So we move the default to the latest model that is cool. And we have a sane default, and it's actually not very visible. In our flow to create an agent, you would have to go into the advanced settings to pick your model. So this is something that the technical person will care about.
But that's something that obviously is a bit too complicated for the...

Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?

Stan [00:41:44]: I think we care most about function calling, because you want to... There's nothing worse than a function call including incorrect parameters or being a bit off, because it just drives the whole interaction off.

Swyx [00:41:56]: Yeah, so there's the Berkeley Function Calling Leaderboard.

Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling. I personally don't have proof, but I know many people, and I'm probably one of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They kind of innovated in an interesting way, which was never quite publicized. It's that they have that kind of chain-of-thought step whenever you use a Claude model, or a Sonnet model, with function calling. That chain-of-thought step doesn't exist when you just interact with it for answering questions. But when you use function calling, you get that step, and it really helps with getting better function calling.

Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.

Stan [00:42:49]: Yeah.

Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.

Stan [00:42:53]: Turbo is on top. Turbo is above 4o.

Swyx [00:42:54]: And then the third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.

Stan [00:43:01]: Yep.

Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.

Stan [00:43:05]: But arguably, o1-mini hasn't been aligned for that. Yeah.

Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals?
I mean, this is kind of intuitive, right? Like, using the newer model is better. I think most people just upgrade. Yeah. What's the eval process like?

Stan [00:43:19]: It's funny, because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have crazy penetration. The highest penetration we have is 88% daily active users across the entire employee base of the company. The kind of average penetration and activation we have in our current enterprise customers is something more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals, than getting the best model. Because there are so many places where you can create products or do stuff that will give you the 80% with the work you do, whereas deciding if it's GPT-4 or GPT-4 Turbo, et cetera, will just give you the 5% improvement. The reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something that we'll have to do eventually, because we still want to be serious people.

Swyx [00:44:24]: It's funny, because in some ways, the model labs are competing for you, right? You don't have to do any effort. You just switch models and then it'll grow. What are you really limited by? Is it additional sources?

Stan [00:44:36]: It's not models, right?

Swyx [00:44:37]: You're not really limited by quality of model.

Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability to connect easily, for users, to all the data they need to do the job they want to do.

Swyx [00:44:51]: Because you maintain all your own stuff.

Stan [00:44:53]: You know, there are companies out there

Swyx [00:44:54]: that are starting to provide integrations as a service, right?
I used to work in an integrations company. Yeah, I know.

Stan [00:44:59]: It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at one end of the spectrum, you could say, oh, I'm going to support Airbyte, and Airbyte has...

I used to work at Airbyte.

Swyx [00:45:12]: Oh, really?

Stan [00:45:13]: That makes sense.

Swyx [00:45:14]: They're the French founders as well.

Stan [00:45:15]: I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But that is not really usable when you actually want to make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases.

Swyx [00:45:35]: It's also for data scientists and not for AI.

Stan [00:45:38]: The reality of Notion is that when you have a page, there's a lot of structure in it, and you want to capture the structure and chunk the information in a way that respects that structure. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.

Swyx [00:46:15]: That's why I don't invest in... there's Composio, there's All Hands from Graham Neubig. There's all these other companies that are like, we will do the integrations for you. You just... we have the open source community. We'll do off-the-shelf.
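The tabular-versus-text distinction Stan draws for Notion databases can be sketched with a simple heuristic. The classification rule and the thresholds here are invented for illustration; a real connector would use much richer signals than cell length.

```typescript
// Sketch: decide whether a Notion-style database is real tabular data
// (keep rows whole, so quantitative lookups stay intact) or a bag of
// long-form text (chunk each cell on its own for retrieval).
// The heuristic and the 80% threshold are illustrative assumptions.

interface NotionDatabase {
  rows: string[][]; // each row is a list of cell strings
}

// If most cells are short, treat the database as tabular.
function classify(db: NotionDatabase): "tabular" | "text" {
  const cells = db.rows.flat();
  if (cells.length === 0) return "text";
  const shortCells = cells.filter((c) => c.length <= 40).length;
  return shortCells / cells.length > 0.8 ? "tabular" : "text";
}

function chunk(db: NotionDatabase): string[] {
  if (classify(db) === "tabular") {
    // Preserve row structure so the model sees each record whole.
    return db.rows.map((row) => row.join(" | "));
  }
  // Text-heavy: each cell becomes its own chunk for retrieval.
  return db.rows.flat();
}

const quarterly: NotionDatabase = {
  rows: [
    ["Q1", "1.2M", "up"],
    ["Q2", "1.5M", "up"],
  ],
};
```

The payoff is exactly the distinction described above: the chunking strategy, not just the extraction, has to respect what the database actually is.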
But then you are so specific in your needs that you want to own it.

Swyx [00:46:28]: Yeah, exactly.

Stan [00:46:29]: You can talk to Michel about that.

Swyx [00:46:30]: You know, he wants to put the AI in there, but you know. Yeah, I will. I will.

Stan [00:46:35]: Cool. What are we missing?

Alessio [00:46:36]: You know, what are the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?

Stan [00:46:43]: The real part, as we kind of touched on throughout the conversation, is really building the infra that works for those agents, because it's a tenuous walk. It's an evergreen piece of work, because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, we have the firehose of information of those companies, and there are not going to be that many companies that capture the firehose of information. When you have the firehose of information, you can do a ton of stuff with models that is not just accelerating people, but giving them superhuman capability, even with the current model capabilities, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, then if somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is now possible. You get an email saying, oh, look at that Slack message. It says the opposite of what you have in that paragraph. Maybe you want to update, or just ping that person. I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models.
And that's a problem that's extremely hard and extremely exciting.

Swyx [00:48:00]: One thing you keep mentioning about infra work: obviously, Dust is building that infra and serving it in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things, where you're doing asynchronous work. For example, the simplest one is a cron job: you just schedule things. But also, for if-this-then-that, you have to wait for something to be executed and then proceed to the next task. I used to work on an orchestrator as well, Temporal.

Stan [00:48:31]: We used Temporal. Oh, you used Temporal? Yeah. Oh, how was the experience?

Swyx [00:48:34]: I need the NPS.

Stan [00:48:36]: We're doing a self-discovery call now.

Swyx [00:48:39]: But you can also complain to me, because I don't work there anymore.

Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough. And you would say, why is it so complicated?

Swyx [00:48:49]: It's always versioning.

Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said: managing the entire set of stuff that needs to happen so that in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, we maybe trigger workflows to run agents, because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.

Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...

Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal is so... I mean, it, or any other competitive product, is very general. If it's there... there's an interesting theory about buy versus build.
I think in that case, when you're a high-growth company, your buy-versus-build trade-off is very much on the side of buy. Because if you have the capability, you're just going to be saving time, you can focus on your core competency, et cetera. And it's funny, because we're starting to see the post-high-growth companies going back on that trade-off, interestingly. So that's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?

Alessio [00:49:56]: Yeah, I did a podcast with them.

Stan [00:49:58]: Oh, yeah?

Alessio [00:49:58]: It's true.

Swyx [00:49:59]: No, no, I know.

Stan [00:50:00]: Of course they say it's true,

Swyx [00:50:00]: but also how well is it going to go?

Stan [00:50:02]: So I'm not talking about deflecting the customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller, because you're interacting with an AI system that will take some actions. And so all of a sudden, you don't need the product layer anymore. And you realize that, oh, those things are just databases for which I pay a hundred times the price, right? Because you're a post-high-growth company and you have tech capabilities, you are incentivized to reduce your costs, and you have the capability to do so. And then it makes sense to just scrap the SaaS away. So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. It's still a big market, but it's not that big, because if you're not a tech company, you don't have the capabilities to reduce that cost. If you're a high-growth company, you're always going to be buying, because you go faster that way. But that's an interesting new space, a new category of companies that might remove some SaaS.

Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.

Alessio [00:51:05]: Services as software, we call it.
It's basically like, well, the most extreme version is: why is there any software at all? Ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent, or whatnot.

Stan [00:51:17]: Yeah, yeah, that's interesting. I have to ask.

Swyx [00:51:19]: Are you paying for Temporal Cloud or are you self-hosting?

Stan [00:51:22]: Oh, no, no, we're paying, we're paying. Oh, okay, interesting.

Swyx [00:51:24]: We're paying way too much.

Stan [00:51:26]: It's crazy expensive, but it makes us...

Swyx [00:51:28]: That's why, as a shareholder, I like to hear that.

Stan [00:51:31]: It makes us go faster, so we're happy to pay.

Swyx [00:51:33]: Other things in the infra stack: I just want a list for other founders to think about. Ops, API gateway, evals, you know, anything interesting there that you build or buy?

Stan [00:51:41]: I mean, there's always an interesting question there. We've been building a lot around the interface to models, because Dust, the original version, was an orchestration platform, and we basically provide a unified interface to every model provider.

Swyx [00:51:56]: That's what I call a gateway.

Stan [00:51:57]: We had that because Dust was that, and so we continued building upon it, and we own it. But that's an interesting question: do you want to build that or buy it?

Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.

Stan [00:52:09]: Exactly, yeah. There's an interesting question there.

Swyx [00:52:12]: Ops? Datadog, just tracking?

Stan [00:52:14]: Oh yeah, so Datadog is an obvious one... What are the mistakes that I regret? I started with pure JavaScript, not TypeScript. And I think, if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript... no, don't. Just start with TypeScript.
I see, okay.

Swyx [00:52:30]: So interesting. You are a research engineer that came out of OpenAI that bet on TypeScript.

Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next... we're using Next as an example. It's

柠檬变成柠檬水
Episode 80: What's behind the recent wave of executive departures at OpenAI?

柠檬变成柠檬水

Play Episode Listen Later Oct 25, 2024 34:56 Transcription Available


Send us a text. As a global leader in artificial intelligence, OpenAI's every move draws close attention. Recently, a wave of executive departures has swept the company, with several heavyweight figures announcing their exits, including Chief Technology Officer Mira Murati, co-founder Andrej Karpathy, Chief Scientist Ilya Sutskever, and safety lead Jan Leike. In the latest episode, hosts Yu Hua and Poy Zhong dig into the reasons behind the news. Tune in for the full analysis! You can find "Turn Lemons Into Lemonade" on Apple Podcasts, Xiaoyuzhou, Spotify, iHeartRadio, YouTube, Amazon Music, and more.Support the showThank you for listening to our podcasts. We also welcome you to join the "Turn Lemons Into Lemonade" LinkedIn page!

The Cloud Pod
279: The Cloud Pod Glows With Excitement Over Google Nuclear Deal

The Cloud Pod

Play Episode Listen Later Oct 23, 2024 54:48


Welcome to episode 279 of The Cloud Pod, where the forecast is always cloudy! This week Justin, Jonathan and Matthew are your guide through the Cloud. We're talking about everything from BigQuery to Google Nuclear power plans, and everything in between! Welcome to episode 279!  Titles we almost went with this week: AWS SKYNET (Q) now controls the supply chain AWS Supply Chain: Where skynet meets your shopping list Digital Ocean follows Azure with the Premium everything EKS mounts S3  GCP now a nuclear Big query don't hit that iceberg  Big Query Yells: "ICEBERG AHEAD"  The Cloud Pod: Now with 50% more meltdown protection The Cloud Pod radiates excitement over Google's nuclear deal A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our slack channel for more info.  Follow Up 00:46 OpenAI's Newest Possible Threat: Ex-CTO Murati Apologies listeners - paywall article.  Given the recent departure of ex-CTO Mira Murati from OpenAI, we speculated that she might be starting something new... and the rumors are rumorin'.  Rumors have been running wild since her last day on October 4th, with several people reporting that there has been a lot of churn.  Speculation is that Murati may join former OpenAI VP Barret Zoph at his new startup.   It may be easy to steal some people, as the research organization at OpenAI is reportedly in upheaval after Liam Fedus's promotion to lead post-training; several researchers have asked to switch teams.  In addition, Ilya Sutskever, an OpenAI co-founder and former chief scientist, also has a new startup.   We'll definitely be keeping an eye on this particular soap opera.  2:00 Jonathan - "I kind of wonder what will these other startups bring that's different than what OpenAI are doing or Anthropic or anybody else.
I mean, they're all going to be taking the same training data sets because that's what's available. It's not like they're going to invent some data from somewhere else and have an edge. I mean, I guess they could do different things, like be mindful about licensing." General News 4:41 Introducing New 48vCPU and 60vCPU Optimized Premium Droplets on DigitalOcean Those raindrops are getting pretty heavy as Digital Ocean announces their new 48vCPU memory- and storage-optimized premium droplets, and 60vCPU general purpose and CPU-optimized premium droplets.  Droplets are DO's Linux-based virtual machines.   Premium Optimized Droplets are dedicated CPU instances with access to the full hyperthread, as well as 10 Gbps of outbound data transfer. The 48vCPU boxes have 384GB of memory, and the 60vCPU boxes have 160GB. 6:02 Justin - "I've been watchi

Behind the Numbers: eMarketer Podcast
The Daily: AI Models That Can “Reason”, The Push to Invent a New AI Device, and The Possibility of “Safe SuperIntelligence” | Sep 26, 2024

Behind the Numbers: eMarketer Podcast

Play Episode Listen Later Sep 26, 2024 24:31


On today's podcast episode, we discuss what to make of former Apple Chief Design Officer Jony Ive working on a new AI device, what an AI model with “reasoning abilities” can actually do, and whether Ilya Sutskever's new AI startup can create safe superintelligence. Join host Marcus Johnson, along with analysts Jacob Bourne and Grace Harmon, for the conversation.   Follow us on Instagram at:  https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit:  https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com  For a transcript of this episode click here:  https://www.emarketer.com/content/   © 2024 EMARKETER

Danielle Newnham Podcast
Anil Ananthaswamy: The Elegant Math Behind Modern AI

Danielle Newnham Podcast

Play Episode Listen Later Sep 19, 2024 52:58


Today's guest is Anil Ananthaswamy - an award-winning science writer and former staff writer and deputy news editor for New Scientist magazine. He is a 2019-20 MIT Knight Science Journalism Fellow and has been a guest editor for the science writing program at the University of California, Santa Cruz, and organizes and teaches an annual science writing workshop at the National Centre for Biological Sciences in Bengaluru, India. He is a freelance feature editor for PNAS Front Matter. He writes regularly for New Scientist, Quanta, Scientific American, PNAS Front Matter and Nature, and has contributed to Nautilus, Matter, The Wall Street Journal, Discover and the UK's Literary Review, among others. He has written four award-winning books: The Edge of Physics: Dispatches from the Frontiers of Cosmology - voted book of the year in 2010 by UK's Physics World; The Man Who Wasn't There: Tales from the Edge of the Self - long-listed for the 2016 Pen/E. O. Wilson Literary Science Writing Award; Through Two Doors at Once: The Enigmatic Story of our Quantum Reality - named one of Smithsonian's Favorite Books of 2018 and one of Forbes's 2018 Best Books About Astronomy, Physics and Mathematics; and his latest book, Why Machines Learn: The Elegant Math Behind Modern AI, which Geoffrey Hinton labelled "A masterpiece." In this episode, we discuss his start in life, why he went from a career in software to writing, and dig deeper into Why Machines Learn, including a history of neural networks. But, before we get into today's episode, a quick word from our sponsor, Paddle - and this is especially for all the mobile devs in my audience. Paddle has produced an invaluable web monetisation guide (for FREE)!
As they say, selling your app on the web isn't just about avoiding hefty app store fees, it actually gives you the freedom and opportunity to leverage a direct-to-consumer model where you can reach a bigger audience, enhance your marketing efforts, and experiment with different ways to monetize and grow your app. So, if you are interested in learning more, then do head here to get your FREE web monetisation guide from Paddle.Please enjoy my conversation with Anil Ananthaswamy.Anil website / TwitterWhy Machines Learn: The Elegant Math Behind Modern AIDanielle Twitter / Instagram / Substack Newsletter / YouTubeEpisode image: Rajesh Krishnan

Let's Talk AI
# 182 - Alexa 2.0, MiniMax, Sutskever raises $1B, SB 1047 approved

Let's Talk AI

Play Episode Listen Later Sep 17, 2024 98:47 Transcription Available


Our 182nd episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov and Jeremie Harris. Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Sponsors: - Agent.ai is the global marketplace and network for AI builders and fans. Hire AI agents to run routine tasks, discover new insights, and drive better results. Don't just keep up with the competition—outsmart them. And leave the boring stuff to the robots

Leveraging AI
121 | $2,000 per month for the next version of ChatGPT, Ilya Sutskever raises $1B just a few months after leaving OpenAI, Salesforce pivots to AgentForce to focus on AI agents, and many more important AI news from the week ending on September 6, 2024

Leveraging AI

Play Episode Listen Later Sep 7, 2024 39:32 Transcription Available


Is OpenAI About to Unleash GPT-5? The Billion-Dollar Race for Superintelligence

What happens when one of AI's most powerful minds leaves to form his own company with a billion-dollar seed round? And why is the next version of GPT poised to be 100 times more powerful than its predecessor? In this episode of Leveraging AI, we dive into the game-changing news around OpenAI co-founder Ilya Sutskever's new venture and the jaw-dropping advancements in AI expected in 2024. From billion-dollar funding rounds to the secrets of superintelligence, we break down the latest developments that are reshaping the future of AI and business.

Get ready: the AI revolution is picking up speed, and staying ahead means understanding these new dynamics in the AI landscape.

In this episode, you'll discover:
How Ilya Sutskever's new company, Safe Superintelligence, raised $1 billion and its goal to develop safe AI.
What makes GPT-Next (or GPT-5) *100 times more powerful* than GPT-4.
Why OpenAI's potential $2,000/month subscription tier could change the enterprise AI market.
How smaller, specialized models are revolutionizing data generation and training for larger AI systems.
Key developments from competitors like Anthropic, Nvidia, and even Elon Musk's xAI.

Whether you're intrigued by billion-dollar valuations, groundbreaking AI models, or how to position your company in the AI arms race, this episode is packed with insights that matter to leaders navigating the future of business.

About Leveraging AI
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Free AI Consultation: https://multiplai.ai/book-a-call/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Big Technology Podcast
YouTubers on Russia's Payroll, Ilya Raises $1 Billion, Founder Mode

Big Technology Podcast

Play Episode Listen Later Sep 6, 2024 56:28


Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Russia funding the political YouTube network Tenet Media 2) Should the commentators have known better? 3) How big does the influence operation go? 4) Is Ranjan being paid off by a foreign government? 5) Twitter suspended in Brazil 6) Was Elon Musk right in standing up to Brazil? 7) More on the amorphous nature of online popularity 8) Talk Tuah 9) Ilya Sutskever raises $1 billion from a16z and others 10) Founder Mode vs. Manager Mode --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Discover Daily by Perplexity
Sutskever's SSI Raises $1B, Volkswagen Integrates ChatGPT, and a Biohybrid Mushroom Robot

Discover Daily by Perplexity

Play Episode Listen Later Sep 6, 2024 8:30 Transcription Available


We'd love to hear from you! Send us a text message.

This episode of 'Discover Daily' by Perplexity kicks off with news of Safe Superintelligence Inc. (SSI), a startup founded by former OpenAI chief scientist Ilya Sutskever, which has secured a staggering $1 billion in funding. SSI's mission to develop safe superintelligence through long-term research and development is explored, along with the challenges and controversies surrounding this ambitious venture.

The episode then shifts gears to discuss Volkswagen's integration of ChatGPT into its IDA voice assistant, promising a more interactive and capable driving experience. This innovative feature will be available in various 2024 and 2025 models, starting with the ID.4 electric vehicle. We highlight the system's ability to handle complex queries while maintaining user privacy and data security.

The highlight of the episode is an exciting development in biohybrid robotics: a robot controlled by a living mushroom. Created by scientists at Cornell University, this innovative robot uses the mycelium network of a king oyster mushroom as both a sensor and control system. We discuss the potential applications of this technology in agriculture and environmental monitoring, as well as the advantages of using fungi as biological controllers in robotics.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/sutskever-s-ssi-raises-1b-SEwLYlvgTgG7hby4csYrdA
https://www.perplexity.ai/page/volkswagen-integrates-chatgpt-QIqirME0QH68oVvYfeVLww
https://www.perplexity.ai/page/mushroom-crawls-using-robot-bo-LVa.TfnOTb6h7..mmCiIYw

**Redeem a free year of Perplexity Pro through Xfinity Rewards!** Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn

Digital Currents
Building Safe AI and the Future of Digital Transformation: A Billion-Dollar Bet on Superintelligence

Digital Currents

Play Episode Listen Later Sep 6, 2024 42:56


Join us as we explore Ilya Sutskever's $1 billion raise for his venture, Safe Superintelligence (SSI), aimed at creating AI models that align with human values, Applied Digital's stock movement following a $160 million funding deal with Nvidia, and Bitcoin's continued struggle amid macroeconomic uncertainties. We then delve into Nvidia's groundbreaking GH200 chips, which are setting new performance benchmarks in European supercomputing. Finally, catch the chart of the week, highlighting the market potential of tokenizing assets.   Remember to Stay Current!  To learn more, visit us on the web at https://www.morgancreekcap.com/morgan-creek-digital. To speak to a member of our team or sign up for other content, please email mcdigital@morgancreekcap.com. 

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Ilya Sutskever Raises $1B for Safe Superintelligence

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Sep 5, 2024 15:23


OpenAI co-founder Ilya Sutskever recently announced his new company Safe Superintelligence. Now he's announced a $1B pre-product raise.

Concerned about being spied on? Tired of censored responses? AI Daily Brief listeners receive a 20% discount on Venice Pro. Visit https://venice.ai/nlw and enter the discount code NLWDAILYBRIEF.

Learn how to use AI with the world's biggest library of fun and useful tutorials: https://besuper.ai/ Use code 'podcast' for 50% off your first month.

The AI Daily Brief helps you understand the most important news and discussions in AI.

Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown

WSJ Minute Briefing
Georgia School Shooting Suspect Was Previously Interviewed by Police

WSJ Minute Briefing

Play Episode Listen Later Sep 5, 2024 2:51


Plus: OpenAI co-founder Ilya Sutskever's new firm raises $1 billion as it aims to build ‘safe' AI models. And, Stellantis presses pause on production for two of its top-selling U.S. models. Kate Bullivant hosts. Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

AI For Humans
New Billion Dollar Start-up From Ex-OpenAI Founder, GPT-Next Is Coming (We Think) & More AI News

AI For Humans

Play Episode Listen Later Sep 5, 2024 45:05


Join our Patreon: https://www.patreon.com/AIForHumansShow

AI NEWS: OpenAI co-founder Ilya Sutskever is back! Learn how SSI (Safe Superintelligence) landed a billion dollars in seed funding for the next gen of AI. Plus, OpenAI teases GPT-Next (again) but something might *actually* be on the horizon. Plus, xAI brings a massive AI cluster of H100s online, Amazon's Alexa is getting Claude, MiniMax is a cool new AI video model and we talk about that New Yorker piece. YES, THAT ONE.

Join us in our Discord: https://discord.gg/muD2TYgC8f
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
And to contact or book us for speaking/consultation, please visit our website: https://www.aiforhumans.show/

// SHOW LINKS //
One Billion Dollars For SSI AGI: https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/
Ilya's Mountain Identified: https://x.com/ilyasut/status/1831341857714119024
Japanese GPT-Next Presentation: https://x.com/kimmonismus/status/1830944806622695427
OpenAI Employee Apologizes For Not Launching Stuff: https://x.com/BorisMPower/status/1830714579116323004
xAI Brings Colossus 100k H100 Training Cluster Online: https://x.com/elonmusk/status/1830650370336473253
Another $125B Computer Center In Development: https://www.theinformation.com/articles/two-ai-developers-are-plotting-125-billion-supercomputers?rc=c3oojq
Alexa Getting Claude: https://www.theverge.com/2024/8/30/24232123/amazon-new-alexa-voice-assistant-claude-ai-mode
1X Neo Beta Robot: https://www.1x.tech/discover/announcement-1x-unveils-neo-beta-a-humanoid-robot-for-the-home
MiniMax Chinese AI Video Model: https://x.com/RyanMorrisonJer/status/1830021533894348831
Darth Vader Saber Fight: https://x.com/Diesol/status/1830307056517308474
Dream Machine Camera Controls: https://x.com/LumaLabsAI/status/1831027696870269188
Ted Chiang AI Art New Yorker Piece: https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art
Project Sid - AI Agents in Minecraft: https://x.com/GuangyuRobert/status/1831006762184646829
Fighting Health Insurance Claim Denials with AI: https://sfstandard.com/2024/08/23/holden-karau-fight-health-insurance-appeal-claims-denials/
How Many Strawberries Inside This ‘R'?: https://x.com/goodside/status/1830960952025456975
Replacing All Home Screen Apps With Kermit: https://x.com/dlberes/status/1830719320457879898
Runway Extensions: https://x.com/runwayml/status/1829591480664768993

Reuters World News
Olympian death, Georgia shooting, Canada's border, VW closures and ‘safe AI'

Reuters World News

Play Episode Listen Later Sep 5, 2024 12:54


Ugandan marathon runner and Paris Olympian Rebecca Cheptegei dies after being set on fire in a weekend attack. A 14-year-old Georgia high school student kills four and injures nine in campus shooting. Canada has been rejecting visa applications and turning away visitors in a border crackdown. Volkswagen's announcement that the car maker is mulling German factory closures for the first time in its history could prove to be a risky move. And OpenAI co-founder Ilya Sutskever's new startup pledges to focus on safe AI and has already raised $1 billion. Sign up for the Reuters Econ World newsletter here. Listen to the Reuters Econ World podcast here. Find the Recommended Read here. Visit the Thomson Reuters Privacy Statement for information on our privacy and data protection practices. You may also visit megaphone.fm/adchoices to opt out of targeted advertising. Learn more about your ad choices. Visit megaphone.fm/adchoices

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 330: OpenAI Drama: Co-Founder Leaves and President Takes Leave

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Aug 6, 2024 46:13


Send Everyday AI and Jordan a text message.

Win a free year of ChatGPT or other prizes! Find out how.

What the heck is happening at OpenAI? In a somewhat shocking development, an OpenAI co-founder has left OpenAI for rival Anthropic. And President Greg Brockman is taking an 'extended leave of absence.' What's it all mean?

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on OpenAI
Related Episodes:
Ep 318: GPT-4o Mini: What you need to know and what no one's talking about
Ep 149: Sam Altman leaving and the future of OpenAI – 7 things you need to know
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Major Changes at OpenAI
2. Legal Trouble for OpenAI
3. OpenAI's Technology and Impact
4. Future of OpenAI

Timestamps:
02:00 Daily AI news
06:15 Multiple high-level departures at OpenAI, significant impact.
12:47 GPT technology widely used by large companies.
16:08 Employees threatened to leave if demands not met.
18:22 Key OpenAI figures change, raising concerns.
21:05 Economic chaos and political instability in 72 hours.
25:22 Apple rebranding AI as 'Apple Intelligence.' GPT technology used.
27:16 Microsoft's early commitment to AI pays off.
30:32 NVIDIA is least reliant on OpenAI.
35:08 AI advancements raise immense safety concerns and risks.
40:16 Ilya Sutskever left OpenAI to start SSI.
41:16 OpenAI's new model amidst reporting and rumors.
44:20 OpenAI's incredible capabilities are beyond imagination.

Keywords: OpenAI, Jordan Wilson, Everyday AI, OpenAI drama, co-founder departure, OpenAI president, extended leave, AI news, Figure humanoid AI robot, NVIDIA, copyright violations, Elon Musk, Sam Altman, lawsuit, Peter Deng, John Schulman, Greg Brockman, OpenAI leadership changes, Andrej Karpathy, Ilya Sutskever, Microsoft, artificial intelligence, AGI, Jan Leike, Anthropic, GPT-5, GPT-Next, Apple Intelligence, US economy, global economic turmoil.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

Go To Market Grit
#200 CEO and Co-Founder Together AI, Vipul Ved Prakash w/ Bucky Moore: Super Cycle

Go To Market Grit

Play Episode Listen Later Jul 22, 2024 55:29


Guests: Vipul Ved Prakash, CEO and co-founder of Together AI; and Bucky Moore, partner at Kleiner Perkins

No one knows for sure whether the future of AI will be driven more by research labs and AI-native companies, or by enterprises applying the technology to their own data sets. But one thing is for sure, says Together AI CEO and co-founder Vipul Ved Prakash: It's going to be a lot bigger. "If you look at the next 10 years or the next 20 years, we are doing maybe 0.1 percent of [the] AI that we'll be doing 10 years from now." In this episode, Vipul, Bucky, and Joubin discuss startup table stakes, Tri Dao, tentpole features, open-source AI, non-financial investors, Meta Llama, deep learning researchers, WeWork, "Attention Is All You Need," create vs. capture, Databricks, Docker, scaling laws, Ilya Sutskever, IRC, and Jordan Ritter and Napster.

Chapters:
(00:53) - Executive hiring
(04:40) - How Vipul and Bucky met
(06:54) - Six years at Apple
(08:19) - Together and the AI landscape
(12:47) - Apple's deal with OpenAI
(14:27) - Open vs. closed AI
(17:32) - Nvidia GPUs and capital expenditures
(22:48) - Fame and reputation
(24:17) - Planning for an uncertain future
(27:00) - Stress and attention
(30:18) - AI research
(34:58) - Challenges for AI businesses
(39:02) - Frequent disagreements
(43:05) - Vipul's first startups, Cloudmark and Topsy
(47:55) - Taking time off
(50:09) - The crypto-AI connection
(53:20) - Who Together AI is hiring
(54:37) - What "grit" means to Vipul

Links:
Connect with Vipul: Twitter, LinkedIn
Connect with Bucky: Twitter, LinkedIn
Connect with Joubin: Twitter, LinkedIn, Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

This episode was edited by Eric Johnson from LightningPod.fm

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 300: AI News That Matters - June 24th, 2024

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jun 24, 2024 34:40


Send Everyday AI and Jordan a text message.

NVIDIA became the world's most valuable company. Then they lost that title. Why is ChatGPT no longer the king of the LLM hill? Did you see Runway Gen-3? We'll explain all of that in this week's AI News That Matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Competition in development of LLMs
2. NVIDIA's rise as the most valuable company
3. OpenAI drama and partnership
4. Runway's new AI model

Timestamps:
02:05 Runway's Gen-3 Alpha promises faster AI video generation.
06:40 Claude 3.5 Sonnet AI chatbot offers innovation.
08:39 Benchmarks show Sonnet 3.5 ahead of GPT-4.
12:14 Live preview and rendering change language models.
15:16 NVIDIA shares slump, Microsoft regains top spot.
19:57 OpenAI and Color Health collaborate for cancer care.
21:22 OpenAI acquires Rockset to enhance enterprise products.
26:58 OpenAI accused of prioritizing product over safety.
29:12 Concern about AI safety and partnerships with OpenAI.
31:41 AI companies jostle for dominance in tech.

Keywords: OpenAI, Meta, Anthropic, language models, Claude 3.5 Sonnet, artifacts, NVIDIA, market cap, GPU chips, artificial intelligence, Hour One, live AI clone, human interview, Jensen Huang, AI safety, Safe Superintelligence Inc., Ilya Sutskever, Microsoft, Apple, Color Health, cancer research, Rockset, enterprise products, Runway, Gen-3 Alpha, AI video space, AI avatars.

Enter to win a FREE Custom Avatar from Hour One as part of their #HourOneChallenge. Go find out more here: https://www.youreverydayai.com/creating-an-ai-clone-with-hour-one/

Business Casual
Meet the New OpenAI Challenger & GenZ Investors LOVE Crypto

Business Casual

Play Episode Listen Later Jun 21, 2024 27:14


Episode 349: Neal and Toby discuss ex-OpenAI co-founder Ilya Sutskever starting his own AI company that will prioritize safety vs. profits. Should OpenAI be concerned about a rivalry? Then, climate activists just painted Stonehenge with orange paint in the latest episode of defacing historical landmarks across Europe. Next, US car dealerships are scrambling as a widely used software for transactions has been hacked, not once, but twice. Meanwhile, GenZers and millennials who have money to spend are spending it on luxury items vs. traditional assets. Also, Amazon is ditching its plastic air pillows in boxes and replacing them with loads of recyclable paper. Lastly, a harrowing escape from war-torn Ukraine featuring 2 beluga whales traveling by…car? Download the Yahoo Finance App (on the Play and App store) for real-time alerts on news and insights tailored to your portfolio and stock watchlists. Get your Morning Brew Daily Mug HERE: https://shop.morningbrew.com/products/morning-brew-daily-mug?utm_medium=youtube&utm_source=mbd&utm_campaign=mug Listen to Morning Brew Daily Here: https://link.chtbl.com/MBD Watch Morning Brew Daily Here: https://www.youtube.com/@MorningBrewDailyShow Learn more about your ad choices. Visit megaphone.fm/adchoices

Techmeme Ride Home
Thu. 06/20 - Ilya Sutskever's New AI Startup

Techmeme Ride Home

Play Episode Listen Later Jun 20, 2024 17:02


Ilya Sutskever wants to go straight to Safe Superintelligence: do not pass go, but do probably collect hundreds of millions of dollars. Is Perplexity ignoring robots.txt files? Xreal's hybrid AR glasses play. And how many apps did Apple sherlock at WWDC last week?

Sponsors:
GreenLight.com/ride

Links:
Ilya Sutskever Has a New Plan for Safe Superintelligence (Bloomberg)
Perplexity Is a Bullshit Machine (Wired)
For Apple's AI Push, China Is a Missing Piece (WSJ)
Xreal's new gadget is a phone-sized Android tablet just for your AR glasses (The Verge)
iOS 18 could ‘sherlock' $400M in app revenue (TechCrunch)

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 298: Going from Everyday AI to Game-Changing AI

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jun 20, 2024 32:54


Send Everyday AI and Jordan a text message.

Enter to win a FREE Custom Avatar from Hour One as part of their #HourOneChallenge - Go find out more here.

How does AI go from 'cool business tool' to changing everything? It's not an overnight process. It takes intentional steps. Rehgan Bleile, CEO of AlignAI, walks us through those steps.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan and Rehgan questions on AI
Related Episodes:
Ep 238: WWT's Jim Kavanaugh Gives GenAI Blueprint for Businesses
Ep 232: Creating and Capturing Business Value with GenAI – Insights From HPE
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Current AI Trends
2. AI Implementation and Barriers
3. Security Concerns of AI Implementation
4. Applications of Generative AI

Timestamps:
01:35 Daily AI news
04:15 About Rehgan and AlignAI
08:33 Enhancing customer service with AI for industries.
12:48 Identifying core problems and solutions in industries.
15:14 Leveraging AI across company operations to improve.
18:52 AI governance and training critical for user adoption.
20:59 Reliance on systems requires comfort and oversight.
26:33 Create AI policies, buy point solutions. Ask vendors.
27:20 Implement AI strategy methodically and educate employees.

Keywords: small AI models, open-source AI, AI policies, steering committee, AI implementation timeline, workforce transformation, risk mitigation, Jordan Wilson, generative AI, AI dependency, digital literacy, AI literacy, data privacy, data ownership, copyright IP protection, data storage, emerging AI trends, generative AI impact, AI measurement, AI internal usage, sales, marketing, Ilya Sutskever, Safe Superintelligence Inc., Accenture, cloud migration, AlignAI, regulated environments, AI in customer service.

Enter to win a FREE Custom Avatar from Hour One as part of their #HourOneChallenge. Go find out more here: https://www.youreverydayai.com/creating-an-ai-clone-with-hour-one/

Your Undivided Attention
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Your Undivided Attention

Play Episode Listen Later Jun 7, 2024 37:47


This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers. The writers of the open letter argue that researchers have a "right to warn" the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

RECOMMENDED MEDIA
The Right to Warn Open Letter
My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter
Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI's policy of non-disparagement.

RECOMMENDED YUA EPISODES
A First Step Toward AI Regulation with Tom Wheeler
Spotlight on AI: What Would It Take For This to Go Well?
Big Food, Big Tech and Big AI with Michael Moss
Can We Govern AI? with Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

The Vergecast
Microsoft is in its AI PC era

The Vergecast

Play Episode Listen Later May 21, 2024 70:24


Today on the flagship podcast of Arm-based chipsets:

03:08 - The Verge's Tom Warren and David Pierce discuss the announcements from Microsoft's Surface event, including the new Arm-powered Surface Laptop and Copilot Plus PCs.
Microsoft's Surface AI event: news, rumors, and lots of Qualcomm laptops
Microsoft announces an Arm-powered Surface Laptop
Microsoft's new Surface Pro gets an OLED display for the first time
Microsoft announces Copilot Plus PCs with built-in AI hardware
The new, faster Surface Pro is Microsoft's all-purpose AI PC
Recall is Microsoft's key to unlocking the future of PCs

27:29 - Verge senior AI reporter Kylie Robison joins the show to chat about OpenAI's GPT-4o demo and where we're headed in the next few years of AI.
ChatGPT is getting a Mac app
OpenAI's custom GPT Store is now open to all for free
OpenAI releases GPT-4o, a faster model that's free for all ChatGPT users
ChatGPT will be able to talk to you like Scarlett Johansson in Her
OpenAI pulls its Scarlett Johansson-like voice for ChatGPT
OpenAI chief scientist Ilya Sutskever is officially leaving
OpenAI researcher resigns, claiming safety has taken ‘a backseat to shiny products'
We tried out the Project Astra demo at Google I/O which worked well un... | tech | TikTok

57:40 - Nilay Patel answers a question about iPads for this week's Vergecast Hotline.
Apple iPad Pro (2024) review: the best tablet money can buy

Email us at vergecast@theverge.com or call us at 866-VERGE11, we love hearing from you. Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Vergecast
AI assistants are so back

The Vergecast

Play Episode Listen Later May 17, 2024 94:58


The Verge's Nilay Patel, Alex Cranz, and David Pierce discuss announcements from Google I/O and OpenAI's GPT-4o event.

Further reading:
Google and OpenAI race to build the future of search
OpenAI releases GPT-4o, a faster model that's free for all ChatGPT users
ChatGPT will be able to talk to you like Scarlett Johansson in Her
ChatGPT is getting a Mac app
OpenAI's custom GPT Store is now open to all for free
OpenAI's "ChatGPT and GPT-4" Spring Update stream starts in 20 minutes
OpenAI chief scientist Ilya Sutskever is officially leaving
Project Astra: the future of AI at Google is fast, multi-modal assistants like Gemini Live
Google's Gemini AI is getting a chatty new voice mode
Google will let you create personalized AI chatbots
Google's Gemini can build an entire vacation itinerary ‘in a matter of seconds'
Google's Circle to Search will help you with your math homework
Google's Gemini video search makes factual error in demo
We have to stop ignoring AI's hallucination problem
Google I/O 2024: everything announced
Google is redesigning its search engine — and it's AI all the way down
Google now offers ‘web' search — and an AI opt-out button
Gemini is about to get better at understanding what's on your phone screen
Google is building Gemini Nano AI right into Chrome
Google makes its AI way faster with Gemini Flash
Google's new LearnLM AI model focuses on education
Android apps will soon let you use your face to control your cursor
Android is getting an AI-powered scam call detection feature
Google targets filmmakers with Veo, its new generative AI video model
Google's invisible AI watermark will help identify generative text and video
Google Photos is getting its own ‘Ask Photos' assistant this summer
Blink and you missed it: Google has a new pair of prototype AR glasses
Google launches new Home APIs and turns Google TVs into smart home hubs

Email us at vergecast@theverge.com or call us at 866-VERGE11, we love hearing from you. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Daily Tech News Show
Slot-GPT - DTNS 4770

Daily Tech News Show

Play Episode Listen Later May 15, 2024 31:36


Predictive AI is making its mark in the world of gambling. How does AI integration into slot machines and sports betting benefit everyone involved? Plus, Apple announces new accessibility features coming to iOS and iPadOS. And OpenAI co-founder Ilya Sutskever announces he will be leaving the company.

Starring Tom Merritt, Sarah Lane, Scott Johnson, Roger Chang, Joe.

Link to the Show Notes.

Business Wars
Sam Altman & the Battle for OpenAI | If You Come At the King… | 2

Business Wars

Play Episode Listen Later Apr 3, 2024 37:22


OpenAI co-founder Ilya Sutskever's move to fire Sam Altman with no warning kicks off five days of chaos for the tech company. After he gets over the shock, Altman uses all his corporate savvy to fight back. For days, over McDonald's and boba tea, Altman and the board feud over the path forward for Altman and OpenAI, while the business community waits on tenterhooks to find out the future of one of the most important AI companies in the world.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Lex Fridman Podcast
#419 – Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI

Lex Fridman Podcast

Play Episode Listen Later Mar 18, 2024 122:04


Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.

Please support this podcast by checking out our sponsors:
- Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off
- Shopify: https://shopify.com/lex to get $1 per month trial
- BetterHelp: https://betterhelp.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

Transcript: https://lexfridman.com/sam-altman-2-transcript

EPISODE LINKS:
Sam's X: https://x.com/sama
Sam's Blog: https://blog.samaltman.com/
OpenAI's X: https://x.com/OpenAI
OpenAI's Website: https://openai.com
ChatGPT Website: https://chat.openai.com/
Sora Website: https://openai.com/sora
GPT-4 Website: https://openai.com/research/gpt-4

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(07:51) - OpenAI board saga
(25:17) - Ilya Sutskever
(31:26) - Elon Musk lawsuit
(41:18) - Sora
(51:09) - GPT-4
(1:02:18) - Memory & privacy
(1:09:22) - Q*
(1:12:58) - GPT-5
(1:16:13) - $7 trillion of compute
(1:24:22) - Google and Gemini
(1:35:26) - Leap to GPT-5
(1:39:10) - AGI
(1:57:44) - Aliens