Podcasts about foundation models

  • 109PODCASTS
  • 187EPISODES
  • 44mAVG DURATION
  • 1WEEKLY EPISODE
  • May 25, 2025LATEST

POPULARITY

20172018201920202021202220232024


Best podcasts about foundation models

Latest podcast episodes about foundation models

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC Exclusive: Why Mega Platforms Will Win in VC | Why You Cannot Do VC If You Do Not Do Pre-Seed | Why Market Sizing is BS | Where Will Foundation Models Build/Buy Apps vs Where Will They Not with Bucky Moore

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later May 5, 2025 65:07


Bucky Moore is a Partner @ Lightspeed Venture Partners, announced exclusively in the show today on 20VC. Prior to Lightspeed, Bucky spent an incredibly successful 7 years at Kleiner Perkins working with Mamoon Hamid to build one of the most successful early stage firms of the last decade. Bucky has made investments in the likes of Prisma, Netlify, Browserbase and more.  In Today's Episode We Discuss: 03:07 Big News: Joining Lightspeed Venture Partners 04:09 Why Mega Platforms Will Win the Next 10 Years of VC 09:33 Are Foundation Model Companies Good Venture Investments 16:04 What Applications Will Model Providers Buy/Build? What Will They Not? 22:03 How to Approach Price Sensitivity in a World of AI 28:25 Why is it BS to do Market Sizing When Making Investments in AI 34:03 Is the Future of VC Domain Specialization 38:38 How to Know What Company Wins in Super Competitive Markets 41:06 Why Every Firm Has to do Pre-Seed To Win in VC Today? 44:43 The Risks of Multi-Stage Investing: Is Signalling Risk Real? 48:53 Investing Lessons from Leading Rounds in Glean and Windsurf 56:54 Quick Fire Round: Lessons from Mamoon, Fave CEO, Next 10 Years  

From Our Neurons to Yours
Building AI simulations of the human brain | Dan Yamins

From Our Neurons to Yours

Play Episode Listen Later May 1, 2025 32:56 Transcription Available


This week on the show: Are we ready to create digital models of the human brain? Last month, Stanford researcher Andreas Tolias and colleagues created a "digital twin" of the mouse visual cortex. The researchers used the same foundation model approach that powers ChatGPT, but instead of training the model on text, the team trained in on brain activity recorded while mice watched action movies. The result? A digital model that can predict how neurons would respond to entirely new visual inputs. This landmark study is a preview of the unprecedented research possibilities made possible by foundation models of the brain—models which replicate the fundamental algorithms of brain activity, but can be studied with complete control and replicated across hundreds of laboratories.But it raises a profound question: Are we ready to create digital models of the human brain? This week we talk with Wu Tsai Neuro Faculty Scholar Dan Yamins, who has been exploring just this question with a broad range of Stanford colleagues and collaborators. We talk about what such human brain simulations might look like, how they would work, and what they might teach us about the fundamental algorithms of perception and cognition.Learn moreAI models of the brain could serve as 'digital twins' in research (Stanford Medicine, 2025)An Advance in Brain Research That Was Once Considered Impossible (New York Times, 2025)The co-evolution of neuroscience and AI (Wu Tsai Neuro, 2024)Neuroscientists use AI to simulate how the brain makes sense of the visual world (Wu Tsai Neuro, 2024)How Artificial Neural Networks Help Us Understand Neural Networks in the Human Brain (Stanford Institute for Human-Centered AI (HAI), 2021)Related researchA Task-Optimized Neural Network Replicates Human Auditory Behavior... 
(PNAS, 2014)Vector-based navigation using grid-like representations in artificial agents (Nature, 2018)The neural architecture of language: Integrative modeling converges on predictive processing (PNAS, 2021)Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations... (Neuron, 2021) We want to hear from your neurons! Email us at at neuronspodcast@stanford.edu. Send us a text!Thanks for listening! If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience. Learn more about the Wu Tsai Neurosciences Institute at Stanford and follow us on Twitter, Facebook, and LinkedIn.

The Art Of Programming
331 Второй пилот Ai — The Art Of Programming [ Development ]

The Art Of Programming

Play Episode Listen Later Apr 29, 2025 46:06


Вместе с Игорем Лученковым из Clarify https://www.clarify.ai/ обсудили Vibe coding. Вспомнили Андрея Карпати, Тоби (Тобиас) Лютке, Чип Хьюен, затронули тему PRD https://miro.com/product-development/what-is-a-prd/ . Хвалили и ругали разные Ai. Короче развлекались как могли. Горим в подкасте на тему вторых пилотов, Ai и всего что нас окружает уже сейчас. Слушайте 331-й подкаст The Art of Programming — «Второй пилот Ai». Андрей Карпати о vibe-кодинге Тоби Лютке о сотрудниках What is a product requirements document (PRD)? Chip Huyen — AI Engineering: Building Applications with Foundation Models Участники @golodnyj Игорь Лученков, Clarify Telegram канал VK группа Яндекс Музыка iTunes подкаст Поддержи подкаст

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Foundation Models: Who Wins & Who Loses | How Economies and Labour Markets Need to Change in a World of AI | China vs the US in an AI Race: What You Need to Know | Rich Socher, Founder @ You.com

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Apr 18, 2025 63:57


Rich Socher is the Founder and CEO of You.com. Richard previously served as the Chief Scientist and EVP at Salesforce. Before that, Richard was the CEO/CTO of the AI startup MetaMind, which Salesforce acquired in 2016. He is widely recognised as having brought neural networks into the field of natural language processing, inventing the most widely used word vectors, contextual vectors and prompt engineering. He has over 150,000 citations and served as an adjunct professor in the computer science department at Stanford. In Today's Episode We Discuss: 04:10 Winners & Losers: OpenAI, Gemini, Claude 08:59 How Partnerships Could Decide the Winners in AI 12:42 China vs US: Who Wins the War for AI 25:50 How Society and Economics Needs to Change in a World of AI 34:04 What Jobs Will Be Replaced, What Will Not 36:04 How Europe Needs to Change It's Approach to AI 41:06 How AI Will Change Health and Longevity 43:10 AI in Consumer and Enterprise Markets 49:30 Quantum Computing and AI Misconceptions 56:57 Longevity, Personal Reflections, and Future Outlook  

From Our Neurons to Yours
What ChatGPT understands: Large language models and the neuroscience of meaning | Laura Gwilliams

From Our Neurons to Yours

Play Episode Listen Later Apr 17, 2025 42:31 Transcription Available


If you spend any time chatting with a modern AI chatbot, you've probably been amazed at just how human it sounds, how much it feels like you're talking to a real person. Much ink has been spilled explaining how these systems are not actually conversing, not actually understanding — they're statistical algorithms trained to predict the next likely word. But today on the show, let's flip our perspective on this. What if instead of thinking about how these algorithms are not like the human brain, we talked about how similar they are? What if we could use these large language models to help us understand how our own brains process language to extract meaning? There's no one better positioned to take us through this than returning guest Laura Gwilliams, a faculty scholar at the Wu Tsai Neurosciences Institute and Stanford Data Science Institute, and a member of the department of psychology here at Stanford.Learn more:Gwilliams' Laboratory of Speech NeuroscienceFireside chat on AI and Neuroscience at Wu Tsai Neuro's 2024 Symposium (video)The co-evolution of neuroscience and AI (Wu Tsai Neuro, 2024)How we understand each other (From Our Neurons to Yours, 2023)Q&A: On the frontiers of speech science (Wu Tsai Neuro, 2023)Computational Architecture of Speech Comprehension in the Human Brain (Annual Review of Linguistics, 2025)Hierarchical dynamic coding coordinates speech comprehension in the human brain (PMC Preprint, 2025)Behind the Scenes segment:By re-creating neural pathway in dish, Sergiu Pasca's research may speed pain treatment (Stanford Medicine, 2025)Bridging nature and nurture: The brain's flexible foundation from birth (Wu Tsai Neuro, 2025)Get in touchWe want to hear from your neurons! Email us at at neuronspodcast@stanford.edu if you'd be willing to help out with some listener research, and we'll be in touch with some follow-up questions.Episode CreditsThis episode was produced by Michael Osborne at 14th Street Studios, with sound design by Morgan Honaker. 
Our logo is by Aimee Garza. The show is hosted by Nicholas Weiler at Stanford's Send us a text!Thanks for listening! If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience. Learn more about the Wu Tsai Neurosciences Institute at Stanford and follow us on Twitter, Facebook, and LinkedIn.

The Knowledge Project with Shane Parrish
#224 Bret Taylor: A Vision for AI's Next Frontier

The Knowledge Project with Shane Parrish

Play Episode Listen Later Apr 15, 2025 131:14


What happens when one of the most legendary minds in tech delves deep into the real workings of modern AI? A 2-hour long masterclass that you don't want to miss.   Bret Taylor, current chairman of OpenAI, unpacks why AI is transforming software engineering forever, how founders can survive acquisition (he's done it twice), and why the true bottlenecks in AI aren't what most think. Drawing on his extensive experiences at Facebook, Google, Twitter and more, he explains why the next phase of AI won't just be about building better models—it's about creating entirely new ways for us to work with them. Bret exposes the reality gap between what AI insiders understand and what everyone else believes. Listen now to recalibrate your thinking before your competitors do.  (00:02:46) Aha Moments with AI (00:04:43) Founders Working for Founders (00:07:59) Acquisition Process (00:14:14) The Role of a Board (00:17:05) Founder Mode (00:20:29) Engineers as Leaders (00:24:54) Applying First Principles in Business (00:28:43) The Future of Software Engineering (00:35:11) Efficiency and Verification of AI-Generated Code (00:36:46) The Future of Software Development (00:37:24) Defining AGI (00:47:03) AI Self-Improvement? 
(00:47:58) Safety Measures and Supervision in AI (00:49:47) Benefiting Humanity and AI Safety (00:54:06) Regulation and Geopolitical Landscape in AI (00:55:58) Foundation Models and Frontier Models (01:01:06) Economics and Open Source Models (01:05:18) AI and AGI Accessibility (01:07:42) Optimizing AI Prompts (01:11:18) Creating an AI Superpower (01:14:12) Future of Education and AI (01:19:34) The Impact of AI on Job Roles (01:21:58) AI in Problem-Solving and Research (01:25:24) Importance of AI Context Window (01:27:37) AI Output and Intellectual Property (01:30:09) Google Maps Launch and Challenges (01:37:57) Long-Term Investment in AI (01:43:02) Balancing Work and Family Life (01:44:25) Building Sierra as an Enduring Company (01:45:38) Lessons from Tech Company Lifecycles (01:48:31) Definition and Applications of AI Agents (01:53:56) Challenges and Importance of Branded AI Agents (01:56:28) Fending Off Complacency in Companies (02:01:21) Customer Obsession and Leadership in Companies Bret Taylor is currently the Chairman of OpenAI and CEO of Sierra. Previously, he was the CTO of Facebook, Chairman of the board for X, and the Co-CEO of Salesforce.  Newsletter - The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter Upgrade — If you want to hear my thoughts and reflections at the end of the episode, join our membership: ⁠⁠⁠⁠⁠⁠⁠fs.blog/membership⁠⁠ and get your own private feed. Watch on YouTube: @tkppodcast Learn more about your ad choices. Visit megaphone.fm/adchoices

Unsupervised Learning
Ep 62: CEO of Cohere Aidan Gomez on Scaling Limits Emerging, AI Use-cases with PMF & Life After Transformers

Unsupervised Learning

Play Episode Listen Later Apr 15, 2025 50:44


Aidan joined this week's Unsupervised Learning for a wide-ranging conversation on model architectures, enterprise adoption, and what's breaking in the foundation model stack. If you're building or investing in AI infrastructure, Aidan is worth listening to. He co-authored the original Transformer paper, leads one of the most advanced model labs outside of the hyperscalers, and is now building for real-world enterprise deployment with Cohere's agent platform, North. Cohere serves thousands of customers across sectors like finance, telco, and healthcare — and they've made a name for themselves by staying model-agnostic, privacy-forward, and deeply international (with major bets in Japan and Korea) (0:00) Intro(0:32) Enterprise AI(3:23) Custom Integrations and Future of AI Agents(4:33) Enterprise Use Cases for Gen AI(7:02) The Importance of Reasoning in AI Models(10:38) Custom Models and Synthetic Data(17:48) Cohere's Approach to AI Applications(23:24) Future Use Cases and Market Fit(27:11) Building a Unified Automation Platform(27:34) Strategic Decisions in the AI Journey(29:19) International Partnerships and Language Models(31:05) Future of Foundation Models(32:27) AI in Specialized Domains(34:40) Challenges in Data Integration(35:06) Emerging Foundation Model Companies(35:31) Technological Frontiers and Architectures(37:29) Scaling Hypothesis and Model Capabilities(42:26) AI Research Culture and Team Building(44:39) Future of AI and Societal Impact(48:31) Addressing AI Risks With your co-hosts:  @jacobeffron  - Partner at Redpoint, Former PM Flatiron Health  @patrickachase  - Partner at Redpoint, Former ML Engineer LinkedIn  @ericabrescia  - Former COO Github, Founder Bitnami (acq'd by VMWare)  @jordan_segall  - Partner at Redpoint

Machine Learning Street Talk
Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!

Machine Learning Street Talk

Play Episode Listen Later Apr 2, 2025 96:28


Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, outlining poolside's vision to dramatically increase software development productivity and accessibility. SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***Eiso Kant:https://x.com/eisokanthttps://poolside.ai/TRANSCRIPT:https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0TOC:1. Foundation Models and AI Strategy [00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development [00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision [00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs2. Reinforcement Learning and Model Economics [00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches [00:22:06] 2.2 Model Economics and Experimental Optimization3. Enterprise AI Implementation [00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure [00:26:00] 3.2 Enterprise-First Business Model and Market Focus [00:27:05] 3.3 Foundation Models and AGI Development Approach [00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements4. 
LLM Architecture and Performance [00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization [00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs [00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons [00:43:26] 4.4 Balancing Creativity and Determinism in AI Models [00:50:01] 4.5 AI-Assisted Software Development Evolution5. AI Systems Engineering and Scalability [00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges [00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends [01:01:25] 5.3 Distributed Systems and Engineering Complexity [01:01:50] 5.4 GenAI Architecture and Scalability Patterns [01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation6. AI Safety and Future Capabilities [01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches [01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems [01:16:27] 6.3 AI vs Human Capabilities in Software Development [01:33:45] 6.4 Enterprise Deployment and Security ArchitectureCORE REFS (see shownotes for URLs/more refs):[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk)[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique)[00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement)[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy)[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model)[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. 
(Key paper on LLM scaling)[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)

Microsoft Research Podcast
Ideas: Accelerating Foundation Models Research: AI for all

Microsoft Research Podcast

Play Episode Listen Later Mar 31, 2025 64:20 Transcription Available


Innovative AI research often depends on access to resources. Microsoft wants to help. Technical Advisor Evelyne Viegas and distinguished faculty from two Minority Serving Institutions discuss the benefits of Microsoft's Accelerating Foundation Models Research program in their lives and research.

Radiology Podcasts | RSNA
Foundation Models in Radiology: The Future of AI in Imaging

Radiology Podcasts | RSNA

Play Episode Listen Later Mar 25, 2025 25:56


Join Dr. Linda Chu as she chats with Dr. Magdalini Paschali and Dr. Akshay Chaudhari about foundation models in radiology. Learn how these AI models are shaping the future of medical imaging and clinical practice. Foundation Models in Radiology: What, How, Why, and Why Not. Paschali et al. Radiology 2025; 314(2):e240597.

Earthquake Science Center Seminars
Applying AI foundation models to continuous seismic waveforms

Earthquake Science Center Seminars

Play Episode Listen Later Mar 12, 2025 60:00


Chris Johnson, Los Alamos National Lab Significant progress has been made in probing the state of an earthquake fault by applying machine learning to continuous seismic waveforms. The breakthroughs were originally obtained from laboratory shear experiments and numerical simulations of fault shear, then successfully extended to slow-slipping faults. Applying these machine learning models typically require task-specific labeled data for training and tuning for experimental results or a region of interest, thus limiting the generalization and robustness when broadly applied. Foundation models diverge from labeled data training procedures and are widely used in natural language processing and computer vision. The primary different is these models learn a generalized representation of the data, thus allowing several downstream tasks performed in a unified framework. Here we apply the Wav2Vec 2.0 self-supervised framework for automatic speech recognition to continuous seismic signals emanating from a sequence of moderate magnitude earthquakes during the 2018 caldera collapse at the Kilauea volcano on the island of Hawai'i. We pre-train the Wav2Vec 2.0 model using caldera seismic waveforms and augment the model architecture to predict contemporaneous surface displacement during the caldera collapse sequence, a proxy for fault displacement. We find the model displacement predictions to be excellent. The model is adapted for near-future prediction information and found hints of prediction capability, but the results are not robust. The results demonstrate that earthquake faults emit seismic signatures in a similar manner to laboratory and numerical simulation faults, and artificial intelligence models developed for encoding audio of speech may have important applications in studying active fault zones.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Anthropic CPO Mike Krieger: Where Will Value Be Created in a World of AI | Have Foundation Models Commoditized | When Do Model Providers Become Application Providers | What Anthropic Learned from Deepseek

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Mar 3, 2025 66:06


Mike Krieger is the Co-Founder of Instagram and now CPO @ Anthropic.  In Today's Episode with Mike Krieger We Discuss: 03:07 Where Will Value Be Created and Sustained in a World of AI? 04:59 Are Foundation Models Commoditised Today? 08:36 Should Founders Build for the Models of Today or Build for Models of the Future 12:19: Why Will Models Become More Different Than More Similar 16:38: Will Human or Synthetic Data Be More Prominent in the Future  19:28 Model Quality vs. Product UX 23:36 The Competitive Landscape of AI 32:27 Do We Underestimate China's AI Capabilities 33:59 What Did Anthropic Learn from Deepseek 34:07 Is Deepseek a Sustaining and Credible Threat? 37:04 Transitioning from Model Provider to Application Provider 38:26 Where Has Anthropic Chronically Under-Invested 39:08 Why Has Anthropic Been Slow On Consumer Product Development 43:50 What is the Role of a Software Developer in the Future 48:29 Balancing API and Consumer Products 51:09 Is Europe Stronger or Weaker in a World of AI 52:40 Quickfire Round: Insights and Reflections  

The Data Exchange with Ben Lorica
The Future of AI: Regulation, Foundation Models & User Experience

The Data Exchange with Ben Lorica

Play Episode Listen Later Feb 27, 2025 47:42


This is our semi-regular conversation on topics in AI and Technology with Paco Nathan, the founder of Derwen, a boutique consultancy focused on Data and AI. Subscribe to the Gradient Flow Newsletter:  https://gradientflow.substack.com/Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon •  RSS.Detailed show notes - with links to many references - can be found on The Data Exchange web site.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC Exclusive: Mercor Raises $100M at a $2BN Valuation: Scaling to $70M in ARR in 24 Months | 9-9-6: 9AM-9PM - 6 Days Per Week: The Most Intense Culture in Silicon Valley | The Future of Programming, Models and Data with Adarsh Hiremath

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Feb 20, 2025 45:55


Adarsh Hiremath is the Co-Founder and CTO @ Mercor, an AI recruitment platform and one of the fastest-growing companies in technology. They have scaled to $70M in ARR in just 24 months. They are famed for working 6 days per week, 9AM to 9PM. All of their founders are Thiel fellows, they are also the youngest unicorn founders ever with the fundraise announced today raising $100M led by Felicis at a $2BN valuation.  In Today's Episode We Discuss: 04:36 How Debating Makes The Best Founders 06:05 Do People Treat You Differently When a Unicorn Founder 10:58 Scaling to $70M ARR in 24 Months 13:42 How Culture Breaks When Scaling So Fast  23:49 The Future of Foundation Models 24:05 OpenAI vs Anthropic 24:32 Data: Synthetic vs Human 27:10 The Future of Programming and AI 28:15 The Impact of AI Tools on Software Development 28:51 Why Software Will Become Commoditised 29:55 Network Effects and Marketplaces 33:13 Raising From Benchmark After a Helicopter Ride 37:30 Quickfire Round: Insights and Reflections  

Eyes on Earth
Eyes on Earth Episode 131 – Using AI in Geospatial Work

Eyes on Earth

Play Episode Listen Later Feb 20, 2025 30:58 Transcription Available


Eyes on Earth tackles artificial intelligence (AI) in a 2-part episode. AI is quickly becoming a necessary part of geospatial work at EROS, helping us efficiently do science to better manage our world. In Part 1, EROS Director Pete Doucette discusses AI and its current and upcoming impact on our work at EROS. To help clarify AI terminology such as machine learning, deep learning, neural networks, transformers, and foundation models, we also talk to scientists who are using AI. And we learn about how AI enabled the National Land Cover Database (NLCD) to become an annual product.Part 2 will discuss one more potential application of AI—keeping Landsat satellites safe and healthy in orbit. We also have all of our guests comment on AI's challenges and benefits.

Theoretical Neuroscience Podcast
On neuroscience foundation models - with Andreas Tolias - #24

Theoretical Neuroscience Podcast

Play Episode Listen Later Feb 1, 2025 91:43


The term “foundation model” refers to machine learning models that are trained on vast datasets and can be applied to a wide range of situations. The large language model GPT-4 is an example. The group of the guest has recently presented a foundation model for optophysiological responses in mouse visual cortex trained on recordings from 135.000 neurons in mice watching movies. We discuss the design, validation, use of this and future neuroscience foundation models.  

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Deepseek Special: Is Deepseek a Weapon of the CCP | How Should OpenAI and the US Government Respond | Why $500BN for Stargate is Not Enough | The Future of Inference, NVIDIA and Foundation Models with Jonathan Ross @ Groq

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jan 29, 2025 55:08


Jonathan Ross is the Co-Founder and CEO of Groq, providing fast AI inference. Prior to founding Groq, Jonathan started Google's TPU effort where he designed and implemented the core elements of the original chip. Jonathan then joined Google X's Rapid Eval Team, the initial stage of the famed “Moonshots factory,” where he devised and incubated new Bets (Units) for Alphabet.  The 10 Most Important Questions on Deepseek: How did Deepseek innovate in a way that no other model provider has done? Do we believe that they only spent $6M to train R1? Should we doubt their claims on limited H100 usage? Is Josh Kushner right that this is a potential violation of US export laws? Is Deepseek an instrument used by the CCP to acquire US consumer data? How does Deepseek being open-source change the nature of this discussion? What should OpenAI do now? What should they not do? Does Deepseek hurt or help Meta who already have their open-source efforts with Lama? Will this market follow Satya Nadella's suggestion of Jevon's Paradox? How much more efficient will foundation models become? What does this mean for the $500BN Stargate project announced last week?  

5 Year Frontier
#29: Floating Data Centers, Decarbonized Compute, New AGI Infrastructure, Foundation Models for Molecules, and the Future of Materials w/ Orbital Materials CEO Jonathan Godwin

5 Year Frontier

Play Episode Listen Later Jan 29, 2025 27:44


The future of material science. In it we talk about discovering new materials with AI, carbon neutral compute, data centers in space, and the convergence of biology and materials. Jonathan Godwin CEO of Orbital Materials, a material sciences company that leverages AI to discovery and design of advanced materials, particularly for clean energy, carbon capture, and data center applications. Founded in 2022 out of London, Orbital have raised over $20M in funding from the likes of Radical Ventures, Nvidia, and Toyota. They have already developed the world fastest and most accurate AI model for simulating advanced materials, including their own proprietary foundation model. The startup has also entered a multi-year collaboration with Amazon Web Services (AWS) to develop new data center decarbonization and efficiency technologies. Jonathan’s impressive background includes a tenure as a senior researcher at Google DeepMind, where he honed his expertise in artificial intelligence. He holds advanced degrees in physics and computational sciences, with a degree from University College London. Sign up for new podcasts and our newsletter, and email me on danieldarling@focal.vcSee omnystudio.com/listener for privacy information.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Why All AI Companies Are Under-Valued | The Future of Foundation Models: Scaling Laws, Generalised vs Specialised, Commoditised? | From Unable to Afford Rent to Raising $130M From Index and Peter Thiel with George Sivulka @ Hebbia

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jan 22, 2025 66:26


George Sivulka is the founder and CEO of Hebbia, is one of the fastest-growing gen AI companies and they recently raised a $130M series B. Investors include the company include hailed names such as a16z, Peter Thiel, Index, GV and others.  In Today's Episode with George Sivulka We Discuss: 04:47 Three Traits The Best Founders All Share? 08:11 How Cold Calling NASA Changed My Life 12:01 From Stealing Food From Stanford to Pitching Peter Thiel 17:22 Lessons working with Peter Thiel 26:39 The Future of AI and Business Applications 33:03 The Future of Employment with AI 33:45 Debunking the Myths of AI Job Displacement 35:09 The Future of Models: Many specialised or few generalised? 35:56 Scaling at Inference: A New Frontier 38:10 The Impact of Scaling Laws on Foundation Models 40:40 The Future of AI and Enterprise Value 43:43 The Geopolitical Influence on AI 45:03 The Commoditization of AI Models 47:47 Why Foundation Models Will Not Follow the Same Path of Cloud 52:53 Why All Companies, Both AI and Non-AI Are Undervalued  

The MAD Podcast with Matt Turck
What You MUST Know About AI Engineering in 2025 | Chip Huyen, Author of “AI Engineering”

The MAD Podcast with Matt Turck

Play Episode Listen Later Jan 16, 2025 72:35


In this episode, we dive deep into the world of AI engineering with Chip Huyen, author of the excellent, newly released book "AI Engineering: Building Applications with Foundation Models". We explore the nuances of AI engineering, distinguishing it from traditional machine learning, discuss how foundational models make it possible for anyone to build AI applications and cover many other topics including the challenges of AI evaluation, the intricacies of the generative AI stack, why prompt engineering is underrated, why the rumors of the death of RAG are greatly exaggerated, and the latest progress in AI agents. Book: https://www.oreilly.com/library/view/ai-engineering/9781098166298/ Chip Huyen Website - https://huyenchip.com LinkedIn - https://www.linkedin.com/in/chiphuyen Twitter/X - https://x.com/chipro FIRSTMARK Website - https://firstmark.com Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ Twitter - https://twitter.com/mattturck (00:00) Intro (02:45) What is new about AI engineering? (06:11) The product-first approach to building AI applications (07:38) Are AI engineering and ML engineering two separate professions? (11:00) The Generative AI stack (13:00) Why are language models able to scale? (14:45) Auto-regressive vs. masked models (16:46) Supervised vs. unsupervised vs. self-supervised (18:56) Why does model scale matter? (20:40) Mixture of Experts (24:20) Pre-training vs. post-training (28:43) Sampling (32:14) Evaluation as a key to AI adoption (36:03) Entropy (40:05) Evaluating AI systems (43:21) AI as a judge (46:49) Why prompt engineering is underrated (49:38) In-context learning (51:46) Few-shot learning and zero-shot learning (52:57) Defensive prompt engineering (55:29) User prompt vs. system prompt (57:07) Why RAG is here to stay (01:00:31) Defining AI agents (01:04:04) AI agent planning (01:08:32) Training data as a bottleneck to agent planning

Startup Project
From Zero to $50 Million Read.AI's Explosive Growth | David Shim Founder of Read AI

Startup Project

Play Episode Listen Later Jan 13, 2025 54:02


Join host Nataraj as he sits down with David Shim, CEO of Read.ai, a company that just raised $50 million in Series B funding to revolutionize how we handle meetings and work in general. About the Episode: This insightful conversation dives deep into Read.ai's innovative AI-powered meeting summarizer. David, a repeat founder, trader, investor, and former CEO of Foursquare, shares his journey, from the initial spark of an idea during a particularly unproductive Zoom call to Read.ai's current position as a top meeting note-taker. He reveals the secrets behind their rapid growth, achieving a remarkable climb from #20 to #2 in the world in less than 18 months – all with zero dollars spent on media marketing! David discusses his unique approach, combining video analysis with text, to create meeting summaries that go beyond simple transcriptions. Learn how Read.ai analyzes sentiment, engagement, and even subtle cues like speaking pace to deliver truly insightful summaries. The conversation also touches upon the future of work, with David advocating for hybrid work models and sharing his perspective on the current AI hype cycle. He provides a compelling vision for Read.ai's evolution into a true "co-pilot" for everyone, everywhere, seamlessly integrating with various platforms to streamline workflows and boost productivity. The discussion extends to David's own investment strategies, offering valuable insights into his approach to venture capital and angel investing. About the Guest and Host: David Shim: CEO of Read.ai, repeat founder, trader, investor, and former CEO of Foursquare. Connect with David: → LinkedIn: https://www.linkedin.com/in/davidshim/. → Website: https://www.read.ai/ Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor.
→ LinkedIn: https://www.linkedin.com/in/natarajsindam/ → Twitter: https://x.com/natarajsindam → Email updates: ⁠https://startupproject.substack.com/⁠ → Website: ⁠⁠⁠https://thestartupproject.io⁠⁠⁠ Timestamps: 00:02 - Introduction and Guest Introduction 00:54 - David's Location and Work Model 01:17 - Remote vs. Hybrid Work Models 03:37 - The Origin Story of Read.ai 05:50 - Read.ai's Initial Idea and Technological Challenges 08:42 - Analyzing Video and Text Data 11:33 - The Evolution of Read.ai's Product and Value Proposition 12:41 - Overcoming Challenges in Product Adoption and User Experience 14:50 - Customer Acquisition Strategy and Product-Led Growth 16:58 - Word-of-Mouth Marketing and Inherent Virality 19:43 - Read.ai's Expansion Beyond Meeting Notes: "Copilot Everywhere" 20:16 - Vision for Read.ai as a Universal Work Copilot 22:57 - Targeting Specific Verticals While Maintaining a Horizontal Approach 24:55 - Foundation Models and Pricing Strategies 29:15 - David's Perspective on the Current LLM Landscape 30:37 - Using AI to Optimize Internal Operations at Read.ai 34:09 - Thoughts on AI Agents and Their Current State 36:44 - Read.ai's Vision for the Next Few Years 41:11 - David's Investment Thesis and Strategies 46:21 - David's Early Career in Trading 50:59 - Current Consumption (Books, Podcasts, etc.) 52:38 - Mentors and Their Influence 54:13 - Lessons Learned as a Founder and Investor Subscribe to Startup Project for more engaging conversations with leading entrepreneurs! → Email updates: ⁠https://startupproject.substack.com/⁠ #StartupProject #ReadAI #AI #ArtificialIntelligence #Meetings #Productivity #HybridWork #RemoteWork #VentureCapital #AngelInvesting #SeriesB #Funding #Entrepreneurship #Podcast #YouTube #Tech #Innovation #AIEverywhere #Copilot

Fluidity
Better Text Generation With Science And Engineering

Fluidity

Play Episode Listen Later Jan 12, 2025 38:20


Current text generators, such as ChatGPT, are highly unreliable, difficult to use effectively, unable to do many things we might want them to, and extremely expensive to develop and run. These defects are inherent in their underlying technology. Quite different methods could plausibly remedy all these defects. Would that be good, or bad? https://betterwithout.ai/better-text-generators John McCarthy's paper “Programs with common sense”: http://www-formal.stanford.edu/jmc/mcc59/mcc59.html Harry Frankfurt, "On Bullshit": https://www.amazon.com/dp/B001EQ4OJW/?tag=meaningness-20 Petroni et al., “Language Models as Knowledge Bases?": https://aclanthology.org/D19-1250/ Gwern Branwen, “The Scaling Hypothesis”: gwern.net/scaling-hypothesis Rich Sutton's “Bitter Lesson”: www.incompleteideas.net/IncIdeas/BitterLesson.html Guu et al.'s “Retrieval augmented language model pre-training” (REALM): http://proceedings.mlr.press/v119/guu20a/guu20a.pdf Borgeaud et al.'s “Improving language models by retrieving from trillions of tokens” (RETRO): https://arxiv.org/pdf/2112.04426.pdf Izacard et al., “Few-shot Learning with Retrieval Augmented Language Models”: https://arxiv.org/pdf/2208.03299.pdf Chirag Shah and Emily M. Bender, “Situating Search”: https://dl.acm.org/doi/10.1145/3498366.3505816 David Chapman's original version of the proposal he puts forth in this episode: twitter.com/Meaningness/status/1576195630891819008 Lan et al. “Copy Is All You Need”: https://arxiv.org/abs/2307.06962 Mitchell A. Gordon's “RETRO Is Blazingly Fast”: https://mitchgordon.me/ml/2022/07/01/retro-is-blazing.html Min et al.'s “Silo Language Models”: https://arxiv.org/pdf/2308.04430.pdf W. 
Daniel Hillis, The Connection Machine, 1986: https://www.amazon.com/dp/0262081571/?tag=meaningness-20 Ouyang et al., “Training language models to follow instructions with human feedback”: https://arxiv.org/abs/2203.02155 Ronen Eldan and Yuanzhi Li, “TinyStories: How Small Can Language Models Be and Still Speak Coherent English?”: https://arxiv.org/pdf/2305.07759.pdf Li et al., “Textbooks Are All You Need II: phi-1.5 technical report”: https://arxiv.org/abs/2309.05463 Henderson et al., “Foundation Models and Fair Use”: https://arxiv.org/abs/2303.15715 Authors Guild v. Google: https://en.wikipedia.org/wiki/Authors_Guild%2C_Inc._v._Google%2C_Inc. Abhishek Nagaraj and Imke Reimers, “Digitization and the Market for Physical Works: Evidence from the Google Books Project”: https://www.aeaweb.org/articles?id=10.1257/pol.20210702 You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: The Future of Foundation Models | The Future of AI Consumer Apps and Why OpenAI Did a Disservice to Them | The Future of Music: Spotify vs YouTube & Spotify vs TikTok: What Happens with Mikey Shulman @ Suno

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jan 10, 2025 57:50


Mikey Shulman is the Co-Founder and CEO of Suno, the leading music AI company. Suno lets everyone make and share music. Mikey has raised over $125M for the company from the likes of Lightspeed, Founder Collective, Nat Friedman, and Daniel Gross. Prior to founding Suno, Mikey was the first machine learning engineer and head of machine learning at Kensho Technologies, which was acquired by S&P Global for over $500 million. In Today's Episode with Mikey Shulman: 1. The Future of Models: Who wins the future of models: Anthropic, OpenAI or X? Will we live in a world of many smaller models? When does it make sense for specialised vs generalised models? Does Mikey believe we will continue to see the benefits of scaling laws? 2. The Future of UI and Consumer Apps: Why does Mikey believe that OpenAI did AI consumer companies a massive disservice? Why does Mikey believe consumers will not choose their model or pay for a superior model in the future? Why does Mikey believe that good taste is more important than good skills? Why does Mikey argue physicists and economists make the best ML engineers? 3. The Future of Music: What is going on with Suno's lawsuit against some of the biggest labels in music? How does Mikey see the future of music discovery? How does Mikey see the battle between Spotify and YouTube playing out? How does Mikey see the battle between TikTok and Spotify playing out?

The AI Podcast
NVIDIA's Ming-Yu Liu on How World Foundation Models Will Advance Physical AI - Episode 240

The AI Podcast

Play Episode Listen Later Jan 7, 2025 20:31


As AI continues to evolve rapidly, it is becoming more important to create models that can effectively simulate and predict outcomes in real-world environments. World foundation models are powerful neural networks that can simulate physical environments, enabling teams to enhance AI workflows and development. Ming-Yu Liu, vice president of research at NVIDIA and an IEEE Fellow, joined the NVIDIA AI Podcast to talk about world foundation models and how they will impact various industries. https://blogs.nvidia.com/blog/world-foundation-models-advance-physical-ai/ https://www.nvidia.com/cosmos/

The Neil Ashton Podcast
S2, EP7 - Prof. Michael Mahoney - Perspectives on AI4Science

The Neil Ashton Podcast

Play Episode Listen Later Dec 26, 2024 76:44


In this episode of the Neil Ashton podcast, Professor Michael Mahoney discusses the intersection of machine learning, mathematics, and computer science. The conversation covers topics such as randomized linear algebra, foundational models for science, and the debate between physics-informed and data-driven approaches. Prof. Mahoney shares insights on the relevance of his research, the potential of using randomness in algorithms, and the evolving landscape of machine learning in scientific disciplines. He also discusses the evolution and practical applications of randomized linear algebra in machine learning, emphasizing the importance of randomness and data availability. He explores the tension between traditional scientific methods and modern machine learning approaches, highlighting the need for collaboration across disciplines. Prof. Mahoney also addresses the challenges of data licensing and the commercial viability of machine learning solutions, offering insights for aspiring researchers in the field.
Prof. Mahoney website: https://www.stat.berkeley.edu/~mmahoney/
Google Scholar: https://scholar.google.com/citations?user=QXyvv94AAAAJ&hl=en
YouTube version: https://youtu.be/lk4lvKQsqWU
Chapters: 00:00 Introduction to the Podcast and Guest | 05:51 Understanding Randomized Linear Algebra | 19:09 Foundational Models for Science | 32:29 Physics-Informed vs Data-Driven Approaches | 38:36 The Practical Application of Randomized Linear Algebra | 39:32 Creative Destruction in Linear Algebra and Machine Learning | 40:32 The Role of Randomness in Scientific Machine Learning | 41:56 Identifying Commonalities Across Scientific Domains | 42:52 The Horizontal vs. Vertical Application of Machine Learning | 44:19 The Challenge of Common Architectures in Science | 46:31 Data Availability and Licensing Issues | 50:04 The Future of Foundation Models in Science | 54:21 The Commercial Viability of Machine Learning Solutions | 58:05 Emerging Opportunities in Scientific Machine Learning | 01:00:24 Navigating Academia and Industry in Machine Learning | 01:11:15 Advice for Aspiring Scientific Machine Learning Researchers
Keywords: machine learning, randomized linear algebra, foundational models, physics-informed neural networks, data-driven science, computational efficiency, academic advice, numerical methods, AI in science, engineering, scientific computing, data availability, foundation models, academia, industry, research, algorithms, innovation

HPE Tech Talk
Taking an AI strategy from dream to reality

HPE Tech Talk

Play Episode Listen Later Dec 19, 2024 19:35


In this episode we are looking at something rather practical: How to take an AI strategy from dream to reality. In many cases, when organizations decide they want an AI solution, they need help and guidance designing it, and getting the most out of it. So, what does an effective AI-building strategy look like? To find out, this week we're joined by Jimmy Whitaker, Chief Scientist of AI and Strategy at Hewlett Packard Enterprise. This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.
About this week's guest, Jimmy Whitaker: https://www.linkedin.com/in/jimmymwhitaker/
Sources cited in this week's episode:
UK stats on AI literacy: https://www.ons.gov.uk/businessindustryandtrade/business/businessservices/bulletins/businessinsightsandimpactontheukeconomy/4january2024
Grammarly research on AI literacy: https://thecuberesearch.com/ai-literacy-the-new-competitive-advantage-for-organizations-of-all-sizes/#:~:text=The%20AI%20Literacy%20Gap,workers%20have%20reached%20this%20level
NASA's underwater swarm robotics programme: https://www.jpl.nasa.gov/images/pia26425-nasas-swim-prototypes/

HPE Tech Talk, SMB
Taking an AI strategy from dream to reality

HPE Tech Talk, SMB

Play Episode Listen Later Dec 19, 2024 19:35


In this episode we are looking at something rather practical: How to take an AI strategy from dream to reality. In many cases, when organizations decide they want an AI solution, they need help and guidance designing it, and getting the most out of it. So, what does an effective AI-building strategy look like? To find out, this week we're joined by Jimmy Whitaker, Chief Scientist of AI and Strategy at Hewlett Packard Enterprise. This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.
About this week's guest, Jimmy Whitaker: https://www.linkedin.com/in/jimmymwhitaker/
Sources cited in this week's episode:
UK stats on AI literacy: https://www.ons.gov.uk/businessindustryandtrade/business/businessservices/bulletins/businessinsightsandimpactontheukeconomy/4january2024
Grammarly research on AI literacy: https://thecuberesearch.com/ai-literacy-the-new-competitive-advantage-for-organizations-of-all-sizes/#:~:text=The%20AI%20Literacy%20Gap,workers%20have%20reached%20this%20level
NASA's underwater swarm robotics programme: https://www.jpl.nasa.gov/images/pia26425-nasas-swim-prototypes/

Chit Chat Money
The Future of Artificial Intelligence (AI) and Cloud Computing With Shawn Wang (OpenAI, Google, + More)

Chit Chat Money

Play Episode Listen Later Dec 18, 2024 70:13


On this episode of Chit Chat Stocks, Ryan and Brett speak to Shawn Wang of Latent Space, an expert working on the ground in the AI start-up space. We discuss: (00:00) Introduction to AI and Cloud Computing (02:21) Expertise in AI and Foundation Models (04:43) Changes in Cloud Revenue Growth (06:45) The Cost of AI Infrastructure (10:36) The Landscape of AI Startups (14:09) Building Moats in AI (15:45) Evaluating Major AI Players (16:41) OpenAI's Current Standing (20:10) First Mover Advantage in AI (25:48) Google's AI Strategy and Challenges (32:51) Meta's AI Transformation (36:59) Amazon and Microsoft's AI Strategies (40:16) Startup Spending and Growth Justification (43:02) The Risks and Challenges of AI Investment (46:27) Cost Reduction and the Future of AI Training (49:22) The Rise of AI Engineers (54:46) Autonomous Agents and Their Impact (01:01:22) The Future of Robotics and AI Integration (01:05:40) Industries Affected by AI Advancements SUBSCRIBE TO LATENT SPACE: https://www.latent.space/ ***************************************************** JOIN OUR FREE CHAT COMMUNITY: https://chitchatstocks.substack.com/  ********************************************************************* Sign-up for a bond account at Public.com/chitchatstocks  A Bond Account is a self-directed brokerage account with Public Investing, member FINRA/SIPC. Deposits into this account are used to purchase 10 investment-grade and high-yield bonds. As of 9/26/24, the average, annualized yield to worst (YTW) across the Bond Account is greater than 6%. A bond's yield is a function of its market price, which can fluctuate; therefore, a bond's YTW is not “locked in” until the bond is purchased, and your yield at time of purchase may be different from the yield shown here. The “locked in” YTW is not guaranteed; you may receive less than the YTW of the bonds in the Bond Account if you sell any of the bonds before maturity or if the issuer defaults on the bond.
Public Investing charges a markup on each bond trade. See our Fee Schedule. Bond Accounts are not recommendations of individual bonds or default allocations. The bonds in the Bond Account have not been selected based on your needs or risk profile. See https://public.com/disclosures/bond-account to learn more. ********************************************************************* FinChat.io is The Complete Stock Research Platform for fundamental investors. With its beautiful design and institutional-quality data, FinChat is incredibly powerful and easy to use. Use our LINK and get 15% off any premium plan: ⁠finchat.io/chitchat  ********************************************************************* Sign up for YellowBrick Investing to track the best investing pitches across the internet: joinyellowbrick.com/chitchat ********************************************************************* Bluechippers Club is a tight-knit community of stock focused investors. Members share ideas, participate in weekly calls, and compete in portfolio competitions. To join, go to ⁠bluechippersclub.com⁠ and hit apply! ********************************************************************* Disclosure: Chit Chat Stocks hosts and guests are not financial advisors, and nothing they say on this show is formal advice or a recommendation.

Cover Up: Body Brokers
You Might Also Like: Smart Talks with IBM

Cover Up: Body Brokers

Play Episode Listen Later Dec 9, 2024


Introducing The Power of Granite in Business from Smart Talks with IBM. Follow the show: Smart Talks with IBM. As the scale of artificial intelligence continues to evolve, open technologies like IBM's Granite models are helping enhance transparency in AI and improve efficiency across businesses. In this episode of Smart Talks with IBM, Jacob Goldstein sat down with Maryam Ashoori, the Director of Product Management and Head of Product for IBM's watsonx.ai, where she spearheads the product strategy and delivery of IBM's watsonx Foundation Models. Together, they explored the shift from large general-purpose AI models to smaller, customizable models tailored to specific needs. This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions. Visit us at https://ibm.com/smarttalks
See omnystudio.com/listener for privacy information.
DISCLAIMER: Please note, this is an independent podcast episode not affiliated with, endorsed by, or produced in conjunction with the host podcast feed or any of its media entities. The views and opinions expressed in this episode are solely those of the creators and guests. For any concerns, please reach out to team@podroll.fm.

AI + a16z
REPLAY: Scoping the Enterprise LLM Market

AI + a16z

Play Episode Listen Later Nov 30, 2024 43:12


This is a replay of our first episode from April 12, featuring Databricks VP of AI Naveen Rao and a16z partner Matt Bornstein discussing enterprise LLM adoption, hardware platforms, and what it means for AI to be mainstream. If you're unfamiliar with Naveen, he has been in the AI space for more than a decade working on everything from custom hardware to LLMs, and has founded two successful startups — Nervana Systems and MosaicML. Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

People of Pathology Podcast
Episode 192: Julianna Ianni - A Crash Course In Foundation Models And Concentriq Embeddings

People of Pathology Podcast

Play Episode Listen Later Nov 25, 2024 43:12


Julianna Ianni from Proscia joins me today. What we discuss with Julianna: her background in biomedical engineering and biomedical imaging; her transition from radiology to pathology and the overlap between the two fields; her role as VP of AI Research and Development at Proscia, focusing on AI applications in digital pathology; an overview of foundation models; embeddings and their role in feature extraction from data; Concentriq Embeddings by Proscia; the importance of data diversity in training AI models; the importance of collaboration with other companies in developing AI solutions; and future possibilities for foundation models in pathology, including multimodal applications.
Links for this episode: Health Podcast Network | LabVine Learning | Dress A Med scrubs | Digital Pathology Club | Concentriq Embeddings Overview | "How Proscia's AI R&D Team Leveraged Foundation Models at Scale to Build 80 Breast Cancer Biomarker Prediction Models in Under 24 Hours" | Foundation Models For Pathology AI Development At Your Fingertips | Accelerating Tumor Segmentation Model Development with Concentriq Embeddings | The Hidden Costs of AI Development in Pathology and How Concentriq Embeddings Helps Life Sciences Organizations Mitigate Them
People of Pathology Podcast: Twitter | Instagram

Infinite Machine Learning
Voice-to-Voice Foundation Models

Infinite Machine Learning

Play Episode Listen Later Oct 30, 2024 39:08


Alan Cowen is the cofounder and CEO of Hume, a company building voice-to-voice foundation models. They recently raised their $50M Series B from Union Square Ventures, Nat Friedman, Daniel Gross, and others. Alan's favorite book: 1984 (Author: George Orwell)
(00:01) Introduction (00:06) Defining Voice-to-Voice Foundation Models (01:26) Historical Context: Handling Voice and Speech Understanding (03:54) Emotion Detection in Voice AI Models (04:33) Training Models to Recognize Human Emotion in Speech (07:19) Cultural Variations in Emotional Expressions (09:00) Semantic Space Theory in Emotion Recognition (12:11) Limitations of Basic Emotion Categories (15:50) Recognizing Blended Emotional States (20:15) Objectivity in Emotion Science (24:37) Practical Aspects of Deploying Voice AI Systems (28:17) Real-Time System Constraints and Latency (31:30) Advancements in Voice AI Models (32:54) Rapid-Fire Round
Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: https://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi

The Future of Everything presented by Stanford Engineering

Returning guest Marco Pavone is an expert in autonomous robotic systems, such as self-driving cars and autonomous space robots. He says that there have been major advances since his last appearance on the show seven years ago, mostly driven by leaps in artificial intelligence. He tells host Russ Altman all about the challenges and progress of autonomy on Earth and in space in this episode of Stanford Engineering's The Future of Everything podcast.
Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.
Episode Reference Links: Stanford Profile: Marco Pavone | Center for AEroSpace Autonomy Research (CAESAR)
Connect With Us: Episode Transcripts >>> The Future of Everything Website | Connect with Russ >>> Threads or Twitter/X | Connect with School of Engineering >>> Twitter/X
Chapters:
(00:00:00) Introduction: Russ Altman introduces guest Marco Pavone, a professor of aeronautics and astronautics at Stanford.
(00:02:37) Autonomous Systems in Everyday Life: Advancements in the real-world applications of autonomous systems.
(00:03:51) Evolution of Self-Driving Technologies: The shift from fully autonomous cars to advanced driver assistance systems.
(00:06:36) Public Perception of Autonomous Vehicles: How people react to and accept autonomous vehicles in everyday life.
(00:07:49) AI and Autonomous Driving: The impact of AI advancements on autonomous driving performance.
(00:09:52) Simulating Edge Cases for Safety: Using AI to simulate rare driving events to improve safety and training.
(00:12:04) Autonomous Vehicle Communication: Communication challenges between autonomous vehicles and infrastructure.
(00:15:24) Risk-Averse Planning in Autonomous Systems: How risk-averse planning ensures safety in autonomous vehicles.
(00:18:43) Autonomous Systems in Space: The role of autonomous robots in space exploration and lunar missions.
(00:22:47) Space Debris and Collision Avoidance: The challenges of space debris and collision avoidance with autonomous systems.
(00:24:39) Distributed Autonomous Systems for Space: Using distributed autonomous systems in space missions for better coordination.
(00:28:40) Conclusion

AI + a16z
How GPU Access Helps AI Startups Be Agile

AI + a16z

Play Episode Listen Later Oct 23, 2024 39:08


In this episode of AI + a16z, General Partner Anjney Midha explains the forces that lead to GPU shortages and price spikes, and how the firm mitigates these concerns for portfolio companies by supplying them with the GPUs they need through a program called Oxygen. The TL;DR version of the problem is that competition for GPU access favors large incumbents who can afford to outbid startups and commit to long contracts; when startups do buy or rent in bulk, they can be stuck with lots of GPUs and — absent training runs or ample customer demand for inference workloads — nothing to do with them. Here is an excerpt of Anjney explaining how training versus inference workloads affect what level of resources a company needs at any given time:"It comes down to whether the customer that's using them . . .  has a use that can really optimize the efficiency of those chips. As an example, if you happen to be an image model company or a video model company and you put a long-term contract on H100s this year, and you trained and put out a really good model and a product that a lot of people want to use, even though you're not training on the best and latest cluster next year, that's OK. Because you can essentially swap out your training workloads for your inference workloads on those H100s."The H100s are actually incredibly powerful chips that you can run really good inference workloads on. So as long as you have customers who want to run inference of your model on your infrastructure, then you can just redirect that capacity to them and then buy new [Nvidia] Blackwells for your training runs."Who it becomes really tricky for is people who bought a bunch, don't have demand from their customers for inference, and therefore are stuck doing training runs on that last-generation hardware. 
That's a tough place to be."
Learn more: Navigating the High Cost of GPU Compute | Chasing Silicon: The Race for GPUs | Remaking the UI for AI
Follow on X: Anjney Midha | Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Eye On A.I.
#212 Thomas Dietterich: The Future of Machine Learning, Deep Learning and Computer Vision

Eye On A.I.

Play Episode Listen Later Oct 9, 2024 56:27


This episode is sponsored by Speechmatics. Check it out at www.speechmatics.com/realtime     In this episode of the Eye on AI podcast, host Craig Smith sits down with Thomas G. Dietterich, a pioneer in the field of machine learning, to explore the evolving landscape of AI and its application in real-world problems.   Thomas shares his journey from the early days of AI, where rule-based systems dominated, to the breakthroughs in deep learning that have revolutionized computer vision. He delves into the challenges of detecting novelty in AI, emphasizing the importance of teaching machines to recognize "unknown unknowns."   The conversation highlights the growing field of computational sustainability, where AI is used to solve pressing environmental problems, from designing new materials to optimizing wildfire management. Thomas also provides insights into the role of transformers and generative AI, discussing their power and limitations, particularly in tasks like object recognition and problem formulation.   Join us for a deep dive into the future of AI, where Thomas explains why the development of novel materials and drugs may have the most transformative impact on our economy. Plus, hear about his latest work on multi-instance learning, weak supervision, and the role of reinforcement learning in real-world applications like wildfire management.   Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest trends and insights in AI and machine learning!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. 
Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction to Thomas Dietterich's Machine Learning Journey (02:34) The Early Days of Machine Learning and AI Systems (04:29) Tackling the Multiple Instance Problem in Drug Design (05:41) AI in Sustainability (07:17) The Challenge of Novelty Detection in AI Systems (08:00) Addressing the Open Set Problem in Cybersecurity and Computer Vision (09:11) The Evolution of Deep Learning in Computer Vision (11:21) How Deep Learning Handles Novel Representations (12:01) Foundation Models and Self-Supervised Learning (14:11) Vision Transformers vs. Convolutional Neural Networks (16:05) The Role of Multi-Instance Learning in Weakly Labeled Data (18:36) Ensemble Learning and Deep Networks in Machine Learning (20:33) The Future of AI: Large Language Models and Their Applications (23:51) Symbolic Regression and AI's Role in Scientific Discovery (34:44) AI in Wildfire Management: Using Reinforcement Learning (39:32) AI-Driven Problem Formulation and Optimization in Industry (41:30) The Future of AI Reasoning Systems and Problem Solving (45:03) The Limits of Large Language Models in Scientific Research (50:12) Closing Thoughts: Open Challenges and Opportunities in AI  

The AI Fundamentalists
New paths in AI: Rethinking LLMs and model risk strategies

The AI Fundamentalists

Play Episode Listen Later Oct 8, 2024 39:51 Transcription Available


Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year of what has changed and what hasn't changed in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The good news is that many decisions about adoption have forced businesses to discuss their future and impact in the face of emerging technology. You won't want to miss this discussion.

Intro and news: The veto of California's AI Safety Bill (00:00:03)
Can state-specific AI regulations really protect consumers, or do they risk stifling innovation? (Gov. Newsom's response)
The veto highlights the critical need for risk-based regulations that don't rely solely on the size and cost of language models
Arguments to be made for a cohesive national framework that ensures consistent AI regulation across the United States

Are businesses ready to embrace large language models, or are they underestimating the challenges? (00:08:35)
The myth that acquiring a foundation model is a quick fix for productivity woes
The essential role of robust risk management strategies, especially in sensitive sectors handling personal data
Review of model cards, OpenAI's system cards, and the importance of thorough testing, validation, and stricter regulations to prevent a false sense of security
Transparency alone is not enough; objective assessments are crucial for genuine progress in AI integration

From hallucinations in language models to ethical energy use, we tackle some of the most pressing problems in AI today (00:16:29)
Reinforcement learning with annotators and the controversial use of other models for review
Yann LeCun's energy-based systems and retrieval-augmented generation (RAG) offer intriguing alternatives that could reshape modeling approaches
The ethics of advancing AI technologies, considering the parallels with past monumental achievements and the responsible allocation of resources (00:26:49)

There is good news about developments and lessons learned from LLMs, but there is also a long way to go.
Our original prediction in episode 2 for LLMs still rings true: "Reasonable expectations of LLMs: Where truth matters and risk tolerance is low, LLMs will not be a good fit"
With increased hype and awareness from LLMs came varying levels of interest in how all model types and their impacts are governed in a business.

What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Software Defined Talk
Episode 487: WordPress Drama and Chicken Sandwiches

Software Defined Talk

Play Episode Listen Later Oct 4, 2024 68:16


This week, we recap the WordPress showdown between Automattic and WP Engine, and discuss the future of OpenAI. Plus, Coté has a lot (maybe too much) to say about Chick-fil-A coming to the UK. Watch the YouTube Live Recording of Episode 487 (https://www.youtube.com/live/KDlBdG0OlZ8?si=bg705q4AX0WmFOcr) Runner-up Titles Time change is a-comin'. Don't worry, I'm wearing clothes under this bathrobe. Where-ever there are chickens, there is fried chicken, I assume. Waffle Chips Free Refills and Ice The tyranny of no free refills Take the Five Guy approach You can't change the rules in the race Fill the square Rundown Chick-fil-A to open in the UK (https://www.chick-fil-a.com/press-room/chick-fil-a-launching-in-the-uk) WordPress Big chronological list of WordPress/WP Engine drama (https://mjtsai.com/blog/2024/09/24/automattic-vs-wp-engine/) The WordPress vs. WP Engine drama, explained (https://techcrunch.com/2024/09/26/wordpress-vs-wp-engine-drama-explained/) How much would it cost to fork WordPress? (https://www.threads.net/@hazardalexander/post/DAjbKhtTiiS?xmt) Charitable Contributions (https://ma.tt/2024/09/charitable-contributions/) OpenAI New funding to scale the benefits of AI (https://openai.com/index/scale-the-benefits-of-ai/) OpenAI Is A Bad Business (https://www.wheresyoured.at/oai-business/) Relevant to your Interests Announcing Upgraded Docker Plans (https://www.docker.com/blog/november-2024-updated-plans-announcement/) OpenAI Discusses Giving Altman 7% Stake in For-Profit Transition (https://www.bloomberg.com/news/articles/2024-09-25/openai-cto-mira-murati-says-she-will-leave-the-company) Intel's Transformation 2.0 (https://thechipletter.substack.com/p/intels-transformation-20) Intel begins reducing real estate holdings worldwide (https://www.tomshardware.com/tech-industry/intel-begins-reducing-real-estate-holdings-worldwide-european-hq-up-for-sale-leaving-purpose-built-swindon-site-after-more-than-40-years) Arm Is Rebuffed by Intel After Inquiring About Buying Product Unit (https://www.bloomberg.com/news/articles/2024-09-27/arm-rejected-by-intel-after-approaching-it-about-buying-product-unit) Hurricane Helene May Have Given Intel a Death Blow (https://www.tipranks.com/news/hurricane-helene-may-have-given-intel-nasdaqintc-a-death-blow) Pure Storage announces VM assessment service (https://www.itpro.com/cloud/cloud-storage/pure-storage-announces-vm-assessment-and-it-could-please-beleaguered-vmware-customers) Attacking UNIX Systems via CUPS, Part I
(https://www.evilsocket.net/2024/09/26/Attacking-UNIX-systems-via-CUPS-Part-I/) Clouded Judgement 9.27.24 - The Foundation of Foundation Models (https://cloudedjudgement.substack.com/p/clouded-judgement-92724-the-foundation?utm_source=post-email-title&publication_id=56878&post_id=149461824&utm_campaign=email-post-title&isFreemail=true&r=2l9&triedRedirect=true&utm_medium=email) The streaming wars are over. (https://www.businessinsider.com/netflix-stock-historic-high-chart-streaming-2024-9) Uber's next act: taking on Amazon (https://www.ft.com/content/7b503c54-ee5b-4413-ad06-580cdab531e5?accessToken=zwAGIx5DTf8Akc97UDxU7ltEE9OtBlgM2rUx5Q.MEUCIQCpieab4UZjnSSY9bGhGLZwigzHqcUUtL5cx0LMP8_17AIgQZD3nUAX-gsir_cRYv9AK2SdTf_JSVYiV-ZAGL5f7-Y&sharetype=gift&token=556e605c-1bb8-4882-a66d-9ce783d5103a) Trump Says He'll Prosecute Google If He Retakes Power (https://gizmodo.com/trump-says-hell-prosecute-google-if-he-retakes-power-2000504499) Epic is suing Google — again — and now Samsung, too (https://www.theverge.com/policy/2024/9/30/24256395/epic-sues-google-samsung-antitrust-auto-blocker) Windows 11 Patch Tuesday preview is a glitchy disaster (https://www.theregister.com/2024/09/30/windows_11_kb5043145/) Observe Inc. 
Introduces AI-Powered Observability, Closes Series B Funding Of $145M (https://www.prnewswire.com/news-releases/observe-inc-introduces-ai-powered-observability-closes-series-b-funding-of-145m-302259523.html) How North Korea Infiltrated the Crypto Industry (https://www.coindesk.com/tech/2024/10/02/how-north-korea-infiltrated-the-crypto-industry/) How to share your access to media with family and simultaneously sweep the annual nerdy nephew of the year awards (https://a.wholelottanothing.org/how-to-share-your-access-to-media-with-family-and-simultaneously-sweep-the-annual-nerdy-nephew-of-the-year-awards/) Nonsense A guide to 30 of Australia's iconic Big Things (https://www.australiantraveller.com/australia/most-iconic-big-things-of-australia/) Chick-fil-A to open in the UK (https://www.chick-fil-a.com/press-room/chick-fil-a-launching-in-the-uk) Listener Feedback Observability Architect (Remote, USA) (https://boards.greenhouse.io/grafanalabs/jobs/5336194004) Conferences Cloud Foundry Day EU (https://events.linuxfoundation.org/cloud-foundry-day-europe/), Karlsruhe, GER, Oct 9, 2024, 20% off with code CFEU24VMW. VMware Explore Barcelona (https://www.vmware.com/explore/eu), Nov 4-7, 2024. Coté speaking. GoTech World (https://www.gotech.world/), Coté speaking, Bucharest, Nov 12th and 13th. SREday Amsterdam (https://sreday.com/2024-amsterdam/), Nov 21, 2024. Coté speaking (https://sreday.com/2024-amsterdam/Michael_Cote_VMwarePivotal_We_Fear_Change), 20% off with code SRE20DAY. 
DevOpsDayLA (https://www.socallinuxexpo.org/scale/22x/events/devopsday-la) at SCALE22x (https://www.socallinuxexpo.org/scale/22x), March 6-9, 2025, discount code DEVOP SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: Balatro on the App Store (https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://apps.apple.com/us/app/balatro/id6502453075&ved=2ahUKEwiRjYOywfCIAxW74skDHTpRLeoQFnoECAoQAQ&usg=AOvVaw1yGgqh4oVBzK4ocblojsY6) Cloud News of the Month - September 2024 (https://www.thecloudcast.net/2024/10/cloud-news-of-month-september-2024.html) Matt: Some Like it Hot (https://www.imdb.com/title/tt0053291/?ref_=fn_al_tt_1) Coté: Creaks (https://apps.apple.com/us/app/creaks/id1466040723) Software Defined Interviews relaunch (https://www.softwaredefinedinterviews.com/83), with Whitney Lee. 
Photo Credits Header (https://unsplash.com/photos/a-fast-food-restaurant-with-a-large-menu-iHljlctatOE) Artwork (https://unsplash.com/photos/person-in-black-and-white-t-shirt-using-computer-Zk--Ydz2IAs)

UM HELLO?
You Might Also Like: Smart Talks with IBM

UM HELLO?

Play Episode Listen Later Oct 3, 2024


Introducing "The power of Granite in business" from Smart Talks with IBM. Follow the show: Smart Talks with IBM. As the scale of artificial intelligence continues to evolve, open technology like many of IBM's Granite models is helping enhance transparency in AI and improve efficiency across businesses. In this episode of Smart Talks with IBM, Jacob Goldstein sat down with Maryam Ashoori, the Director of Product Management and Head of Product for IBM's watsonx.ai, where she spearheads the product strategy and delivery of IBM's watsonx Foundation Models. Together, they explored the shift from large general-purpose AI models to smaller, customizable models tailored to specific needs. This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions. Visit us at https://ibm.com/smarttalks See omnystudio.com/listener for privacy information. DISCLAIMER: Please note, this is an independent podcast episode not affiliated with, endorsed by, or produced in conjunction with the host podcast feed or any of its media entities. The views and opinions expressed in this episode are solely those of the creators and guests. For any concerns, please reach out to team@podroll.fm.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Benchmark's Eric Vishria on Where is the Value in AI: Chips, Models or Apps | Why Nvidia Will Not Be The Only Game in Town | The Commoditisation of Foundation Models | Which AI Apps Have Sustaining Value vs Hype and Short Term Revenue

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Sep 25, 2024 62:11


Eric Vishria is a General Partner @ Benchmark Capital, one of the world's leading venture firms. At Benchmark, Eric has served on over 10 boards including Confluent (CFLT), Amplitude (AMPL), Benchling, Contentful, Cerebras and several other private companies. Prior to joining Benchmark, Eric was the Co-Founder and CEO of RockMelt, acquired by Yahoo in 2013. In Today's Episode with Eric Vishria We Discuss: 1. How to Make Money Investing in AI Today: How does Eric think through where value will accrue in the stack between chips, models and applications? Why does Eric believe foundation models are the fastest commoditising asset in history? Why does Eric believe that Nvidia will not be the only game in town in the next 3-5 years? 2. How to Invest in the AI Application Layer Successfully: How does Eric distinguish between a standalone, deep product and a product that foundation models will commoditise and incorporate into their feature sets? How does Eric differentiate between the 10 different players all going after customer service, sales tools, data analyst products etc.? How does Eric analyse the quality of revenue of these AI application layer companies? What does he mean when he describes their revenue as a "sugar high"? 3. How the Best VC Firm Makes Decisions: What is the decision-making process for all new deals at Benchmark? As specifically as possible, how does the voting process inside Benchmark work? Which deal was the most contentious that went through? What did the partnership learn? How has the Benchmark decision-making process changed over 10 years? 4. Does AI Break Venture Capital Models: Do the price of AI deals and the size of their rounds break the Benchmark model? Will foundation model companies all be acquired by the larger cloud providers? Unless multiples reflate in the public markets, does venture as an asset class have hope? Why does AI make paying ludicrously high prices potentially rational?  

TechStuff
Smart Talks with IBM: The power of Granite in business

TechStuff

Play Episode Listen Later Sep 24, 2024 32:28 Transcription Available


As the scale of artificial intelligence continues to evolve, open technology like many of IBM's Granite models are helping enhance transparency in AI and improve efficiency across businesses. In this episode of Smart Talks with IBM, Jacob Goldstein sat down with Maryam Ashoori, the Director of Product Management and Head of Product for IBM's watsonx.ai, where she spearheads the product strategy and delivery of IBM's watsonx Foundation Models. Together, they explored the shift from large general-purpose AI models to smaller, customizable models tailored to specific needs.   This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.   Visit us at https://ibm.com/smarttalks  See omnystudio.com/listener for privacy information.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: OpenAI's Newest Board Member, Zico Kolter on The Biggest Bottlenecks to the Performance of Foundation Models | The Biggest Questions and Concerns in AI Safety | How to Regulate an AI-Centric World

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Sep 4, 2024 60:23


Zico Kolter is a Professor and the Director of the Machine Learning Department at Carnegie Mellon University. His research spans several topics in AI and machine learning, including work in AI safety and robustness, LLM security, the impact of data on models, implicit models, and more. He also serves on the Board of OpenAI, as a Chief Expert for Bosch, and as Chief Technical Advisor to Gray Swan, a startup in the AI safety space. In Today's Episode with Zico Kolter We Discuss: 1. Model Performance: What are the Bottlenecks: Data: To what extent have we leveraged all available data? How can we get more value from the data that we have to improve model performance? Compute: Have we reached a stage of diminishing returns where more compute does not lead to an increased level of performance? Algorithms: What are the biggest problems with current algorithms? How will they change in the next 12 months to improve model performance? 2. Sam Altman, Sequoia and Frontier Models on Data Centres: Sam Altman: Does Zico agree with Sam Altman's statement that "compute will be the currency of the future?" Where is he right? Where is he wrong? David Cahn @ Sequoia: Does Zico agree with David's statement: "we will never train a frontier model on the same data centre twice?" 3. AI Safety: What People Think They Know But Do Not: What are people not concerned about today which is a massive concern with AI? What are people concerned about which is not a true concern for the future? Does Zico share Arvind Narayanan's concern, "the biggest danger is not that people will believe what they see, it is that they will not believe what they see"? Why does Zico believe the analogy of AI to nuclear weapons is wrong and inaccurate?  

TechStuff
Smart Talks with IBM - Transformations in AI: why foundation models are the future

TechStuff

Play Episode Listen Later Jul 16, 2024 40:50 Transcription Available


Major breakthroughs in artificial intelligence research often reshape the design and utility of AI in both business and society. In this special rebroadcast episode of Smart Talks with IBM, Malcolm Gladwell and Jacob Goldstein explore the conceptual underpinnings of modern AI with Dr. David Cox, VP of AI models at IBM Research. They talk foundation models, self-supervised machine learning, and the practical applications of AI and data platforms like watsonx in business and technology. When we first aired this episode last year, the concept of foundation models was just beginning to capture our attention. Since then, this technology has evolved and redefined the boundaries of what's possible. Businesses are becoming more savvy about selecting the right models and understanding how they can drive revenue and efficiency. This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions. Visit us at ibm.com/smarttalks See omnystudio.com/listener for privacy information.

Weather Geeks
Using AI to Study Weather & Climate

Weather Geeks

Play Episode Listen Later Jul 10, 2024 49:07


Guest: Campbell Watson, Senior Research Scientist at IBM Research
As artificial intelligence, or AI, continues to become more pervasive in our technology, it's only natural to wonder what it means for meteorology and climatology. Believe it or not, AI is already revolutionizing how we develop models in the Earth and Space sciences. Joining us today is Campbell Watson, a Senior Research Scientist at IBM Research, to discuss how we are creating these AI models, and the opportunities and advancements we hope to gain from using them.

Chapters
00:00 Introduction to AI in Meteorology and Climatology
01:35 Campbell Watson's Background and Interest in Weather
04:18 The Use of AI in Weather Forecasting
05:24 The Evolution of AI in Weather Forecasting
06:26 Understanding AI at its Core
08:09 Difference Between AI and Conventional Weather Model Forecasting
10:44 The Difference Between AI and Conventional Weather Model Forecasting
13:31 AI Models Learning from Weather Models
20:30 Foundation Models and Unsupervised Learning
25:12 Verification of AI in Weather Forecasting
27:57 The Robustness of AI Models
28:24 Learning from Observations
29:23 AI Models Learning from Observations
30:09 The Importance of Observations
30:38 Comparing Deep Learning and AI with Traditional Weather Forecasting Models
35:38 Trustworthiness in AI Forecasting
36:12 Adopting AI in Weather Forecasting
41:46 Nowcasting with AI
44:03 AI in Climate Forecasting
47:38 The Future of AI in Weather Forecasting

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Foundation Models are the Fastest Depreciating Asset in History, Lina Khan is a Threat to American Capitalism, PE is Not Coming to Save the M&A Market & How China Could Overtake the US in the AI Race with Michael Eisenberg

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jun 19, 2024 56:29


Michael Eisenberg is a Co-Founder and General Partner @ Aleph, one of Israel's leading venture firms with a portfolio including the likes of Wix, Lemonade, Empathy, Honeybook and more. Before leading Aleph, Michael was a General Partner @ Benchmark. In Today's Show with Michael Eisenberg We Discuss: 1. The State of AI Investing: Why does Michael believe that "foundation models are the fastest depreciating asset in history"? Are we in an AI bubble today? As an investor, what is the right way to approach this market? Who will be the biggest losers in this AI investing phase? Where will the biggest value accrual be? What lessons does Michael have from the dot com for this? 2. Where Is the Liquidity Coming From? Why does Michael believe that it is BS that private equity will come in and buy a load of software companies and be the primary exit destination? Why does Michael believe that IPO windows are always open? Should founders go out now? What are good enough revenue numbers to go out into the public markets? Why does Michael believe that Lina Khan is a threat to capitalism? How does Michael predict the next 12-24 months for the M&A market? 3. AI as a Weapon: Who Wins: China or the US: Does Michael agree with the notion that China is 2 years behind the US in AI development? Does Michael agree that AI could be a more dangerous weapon in wars than nuclear weapons? Why does Michael suggest that all founders in Europe should leave? US, China, Israel, Europe: how do they rank for innovating around data regulation for AI? 4. Venture 101: Reserves, Selling Positions and Fund Dying: Why does Michael only want to do reserves into his middle-performing companies? What framework does Michael use to determine whether he should sell a position? Which funds will be the first to die in this next wave of venture? Why does Michael not do sourcing anymore? Where is he weakest in venture? Why does Michael believe that no board meeting needs to be over 45 mins?    

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Reid Hoffman on Foundation Models: Who Wins & How Do Incumbents Respond | The Inflection AI Deal: How it Went Down | Why Trump is a Threat to Democracy | The Future of TikTok | Lessons from Sam Altman, Brian Chesky and the OpenAI Board

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jun 10, 2024 75:56


Reid Hoffman has been one of the most impactful people in technology over the last two decades. He is the Co-Founder of LinkedIn (acq by Microsoft for $26BN) and Co-Founder of Inflection.ai. As an investor, Reid has backed the likes of Facebook, Airbnb, Zynga and more. Reid is also a Board Member @ Microsoft and was on the board of OpenAI. In Today's Show with Reid Hoffman We Discuss: 1. Foundation Models: Commoditisation, Business Models, Incumbents: Does Reid believe we are seeing the commoditisation of foundation models? Is it too late for new foundation models to be born today? Are they VC backable? How will foundation models eventually make money? What will be the sustainable business model? Does Reid believe that foundation models will be acquired by large cloud providers? Who goes first? 2. Inflection & Microsoft: What Went Down: How did the Microsoft and Inflection deal go down? Did Satya call up one day and make it happen? With the decay rate of models, Microsoft did not do it for the models, so why did they do it? Was Inflection a sustainable business in its own right? Does this not prove that to win at this game, you have to be an incumbent with incumbent cash? 3. OpenAI: Board, Lessons and Management: What are 1-2 of Reid's biggest lessons from being on the OpenAI board with Sam? Why did Sam ask Reid in front of the whole company if Reid would fire him if he did not perform? Scarlett Johansson, the superalignment team quitting, NDAs tied to equity: this is a lot in a short amount of time, how does Reid analyse this? 4. Trump is the Biggest Threat to Democracy: What Lies Ahead? Why does Reid believe that Trump is a threat to democracy and evil? What were Reid's biggest takeaways from a two-hour lunch with Joe Biden? How does a Trump administration change the world of AI, technology and startups? 5. The Future of TikTok: Is TikTok a threat to US democracy? Should it be banned? What will be the outcome of the current judicial process? 
Will they sell to a US entity? How could Trump impact the future of TikTok in the US? 6. Reid Hoffman: AMA: What are Peter Thiel's biggest strengths and weaknesses? I believe Mark Zuckerberg is one of the most unappreciated public market CEOs, what are the core components that Reid believes makes Mark so special? How did Reid miss out on investing in SpaceX's first round? What did he not see that he should have seen? What do we think is crazy today but will be a no brainer and very normal in 10 years?  

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Perplexity's Aravind Srinivas on Will Foundation Models Commoditise, Diminishing Returns in Model Performance, OpenAI vs Anthropic: Who Wins & Why the Next Breakthrough in Model Performance will be in Reasoning

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jun 5, 2024 55:03


Aravind Srinivas is the Co-Founder & CEO of Perplexity, the conversational "answer engine" that provides precise, user-focused answers to queries. Aravind co-founded the company in 2022 after working as a research scientist at OpenAI, Google, and DeepMind. To date, Perplexity has raised over $100 million from investors including Jeff Bezos, Nat Friedman, Elad Gil, and Susan Wojcicki. In Today's Episode with Aravind Srinivas We Discuss: Biggest Lessons from DeepMind & OpenAI What was the best career advice Sam Altman @ OpenAI gave Aravind? What were Aravind's biggest takeaways at DeepMind? How did DeepMind shape how Aravind built Perplexity? What did Aravind mean by "competition is for losers"? What did he learn about talent assembly at DeepMind? The Next AI Breakthrough: Reasoning Does Aravind think we are experiencing diminishing returns on compute & model performance? Does Aravind agree reasoning will be the next big breakthrough for models? What are the reasons Aravind thinks models suck at reasoning today? What is the timeline for reasoning improvement according to Aravind? What does Aravind think are the biggest misconceptions about AI today? Will Foundation Models Commoditise? Does Aravind think foundation models will commoditise? What will the end state of foundation models look like? Why does Aravind think the second tier models will get commoditised? Why does Aravind think the subscription model will not work for AI models with true reasoning? Why does Aravind think the application layer companies will benefit from foundation models commoditising? Why does Aravind think foundation models will not verticalize? When does Aravind think is the right time to go enterprise? What is his strategy to differentiate Perplexity from its competitors? AI Arms Race: Who Will Win? Who does Aravind think will be the winners of foundation models? What do AI companies need to do to win the model arms race? 
How does Aravind think startups can compete against incumbents' infinite cash flow? What are the reasons Aravind thinks Perplexity's browsing is better than ChatGPT? What is Aravind's biggest challenge at Perplexity today?    

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: OpenAI's Sam Altman, Mistral's Arthur Mensch and more discuss: Will Foundation Models Be Commoditised | Which Startups Are Threatened vs Enabled by OpenAI | Is the Value in the Infrastructure or Application Layer?

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later May 24, 2024 21:51


Sam Altman is the CEO @ OpenAI, the company on a mission to ensure that artificial general intelligence benefits all of humanity. OpenAI is one of the fastest-scaling companies in history with a valuation of $90BN and $2BN+ in revenue. Brad Lightcap is the COO @ OpenAI and the man responsible for the incredible scaling of sales, GTM, partnerships and the business to over $2BN in revenue today. Arthur Mensch is the Co-Founder and CEO of Mistral AI. Since its inception in May 2023, Mistral has raised over $520M in funding from investors like Andreessen Horowitz, General Catalyst, Lightspeed Venture Partners, and Microsoft with a current valuation of $2 billion. Des Traynor is a Co-Founder of Intercom, and has built and led many teams within the company, including Product, Marketing, and Customer Support. Today Des leads all of Intercom's R&D efforts, and parts of Intercom's marketing. Tom Hulme is a Managing Partner of GV (Google Ventures), and leads the European team. Today, GV has over $10BN in AUM and Tom has led investments in Lemonade.com (IPO), Snyk, Secret Escapes, Blockchain.com, GoCardless, and Currency Cloud (exited to Visa). Tomasz Tunguz is the Founder and General Partner @ Theory Ventures; just announced last week, Theory is a $230M fund that invests $1-25M in early-stage companies that leverage technology discontinuities into go-to-market advantages. Sarah Tavel is a General Partner @ Benchmark, one of the most successful and renowned venture firms in the world. At Benchmark, Sarah has led rounds in Chainalysis, Hipcamp, Medely, Rekki, Glide, Cambly and more. In Today's Episode We Discuss: Will foundation models be commoditised? What is the end state for the foundation model landscape in 10 years? How will large cloud provider incumbents approach M&A with smaller foundation model providers? When will we see marginal revenue exceed marginal cost in the foundation model business model? 
Where is the value: the application layer or the infrastructure layer? How can startups know whether they will be threatened by OpenAI? What are good tests/questions to know if you are in the path of one of the large foundation models? How does the business model of SaaS fundamentally change in a world of AI? Will we see the end of per-seat pricing in a new world of AI? What is the right way to approach pricing in a world of AI? Consumption? Tokens?