POPULARITY
Berkeley professor Ion Stoica, co-founder of Databricks and Anyscale, transformed the open source projects Spark and Ray into successful AI infrastructure companies. He talks about what mattered most for Databricks' success -- the focus on making Spark win and making Databricks the best place to run Spark. He highlights the importance of striking key partnerships -- in particular the Microsoft partnership that accelerated Databricks' growth and contributed to Spark's dominance among data scientists and AI engineers. He also shares his perspective on finding new problems to work on, which holds lessons for aspiring founders and builders: 1) building systems in new areas that, if widely adopted, put you in the best position to understand the new problem space, and 2) focusing on a problem that is more important tomorrow than today. Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital.
Mentioned in this episode:
Spark: The open source platform for data engineering that Databricks was originally based on.
Ray: Open source framework to manage, execute, and optimize compute needs across AI workloads, now productized through Anyscale.
MosaicML: Generative AI startup founded by Naveen Rao that Databricks acquired in 2023.
Unity Catalog: Data and AI governance solution from Databricks.
CIB Berkeley: Multi-strategy hedge fund at UC Berkeley that commercializes research in the UC system.
Hadoop: A long-time leading platform for large-scale distributed computing.
vLLM and Chatbot Arena: Two of Ion's students' projects that he wanted to highlight.
Robert Nishihara is co-founder of Anyscale and co-creator of Ray, the open source project that has emerged as the AI Compute Engine.
Subscribe to the Gradient Flow Newsletter: https://gradientflow.substack.com/
Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.
Detailed show notes - with links to many references - can be found on The Data Exchange web site.
In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Kai-Hsun Chen, Software Engineer at Anyscale and maintainer of the KubeRay project. The discussion focuses on how the open source Ray project can help organizations use a single tool for data prep, model training, fine tuning and model serving workflows, both for their predictive AI and generative AI models. The discussion also dives into the KubeRay project and how it provides three different Kubernetes CRDs for Data Scientists to deploy Ray clusters on demand. Check out our website at https://kubernetesbytes.com/ Cloud Native News: https://azure.github.io/AKS/2024/08/23/fine-tuning-language-models-with-kaito https://orca.security/resources/blog/kubernetes-testing-environment/ https://www.redhat.com/en/about/press-releases/red-hat-openstack-services-openshift-now-generally-available Show links: Kai's LinkedIn: https://www.linkedin.com/in/kaihsun1996/ KubeRay doc: https://docs.ray.io/en/latest/cluster/kubernetes/index.html Ray Summit registration: https://raysummit.anyscale.com/flow/anyscale/raysummit2024/reg/createaccount (code: KaiHsunC15) KubeRay repository: https://github.com/ray-project/kuberay Ray repository: https://github.com/ray-project/ray Ray Slack workspace: https://docs.google.com/forms/d/e/1FAIpQLSfAcoiLCHOguOm8e7Jnn-JJdZaCxPGjgVCvFijHB5PLaQLeig/viewform Timestamps: 00:02:40 Cloud Native News 00:07:20 Interview with Kai 00:49:15 Key takeaways
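As a rough sketch of the "single tool for data prep, training, fine-tuning and serving" idea discussed in this episode, here is a minimal Ray example in Python. The task names and the toy pipeline are hypothetical illustrations (not code from the episode) and assume a recent Ray 2.x install (`pip install ray`):

```python
import ray

ray.init()  # starts a local Ray instance; on a KubeRay-managed cluster this connects to the cluster instead

@ray.remote
def preprocess(shard):
    # Toy "data prep": normalize one shard of numbers.
    total = sum(shard)
    return [x / total for x in shard] if total else shard

@ray.remote
def score(shard):
    # Toy stand-in for a training / fine-tuning / inference step.
    return sum(x * x for x in shard)

shards = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
prepped = [preprocess.remote(s) for s in shards]       # fan out data prep as parallel tasks
results = ray.get([score.remote(p) for p in prepped])  # chain tasks; Ray passes objects between them
print(sum(results))
```

The same task/actor API is the core that the higher-level Ray libraries (Ray Data, Ray Train, Ray Serve) build on, which is what lets one framework cover both the predictive and generative workflows described above.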
In this episode, guest host and AI correspondent Mofi Rahman interviews Richard Liaw and Kai-Hsun Chen from Anyscale about Ray and KubeRay. Ray is an open-source unified compute framework that makes it easy to scale AI and Python workloads, while KubeRay integrates Ray's capabilities into Kubernetes clusters. Do you have something cool to share? Some questions? Let us know: - web: kubernetespodcast.com - mail: kubernetespodcast@google.com - twitter: @kubernetespod News of the week CNCF Blog - LitmusChaos audit complete! Kubernetes Podcast from Google episode 234 - LitmusChaos, with Karthik Satchitanand Google Cloud Blog - Run your AI inference applications on Cloud Run with NVIDIA GPUs Diginomica article - KubeCon China - at 33-and-a-third, Linux is a long player. So, why does Linus Torvalds hate AI? CNCF-Hosted Co-Located Event Schedule for KubeCon NA 2024 Google Kubernetes Engine Release Notes - August 20, 2024 (1.31 available in Rapid Channel) Kubernetes Podcast from Google - Kubernetes v1.31: "Elli", with Angelos Kolaitis Red Hat Press Release - Red Hat OpenStack Services on OpenShift is Now Generally Available Red Hat Enables OpenStack to Run Natively on OpenShift Platform Broadcom Revamps Tanzu to Simplify Cloud-Native App Development and Deployment Tanzu Platform 10 Offers Cloud Foundry Users Deep Visibility and Productivity Enhancements VMware Explore Conference Website CNCF Blog - Announcing 500 Kubestronauts CNCF - Kubestronaut FAQ Dapr Day 2024 Virtual Event Website Links from the interview Kai-Hsun Chen on LinkedIn Richard Liaw on LinkedIn Ray from the RISE Lab at UC Berkeley Ray: A Distributed System for AI by Robert Nishihara and Philipp Moritz - Jan 9, 2018 KubeRay Docs KubeRay on GitHub PyTorch Apache Airflow Apache Spark Kubeflow Apache Submarine (retired) Jupyter Notebooks VS Code Examples of schedulers for Batch/AI workloads in Kubernetes Kueue Volcano Apache Yunikorn Examples of observability tools for Batch/AI workloads in Kubernetes Prometheus Grafana Fluentbit Examples of loadbalancers Nginx Istio Ray Data: Scalable Datasets for ML Dask Python - Parallel Python Ray Serve: Scalable and Programmable Serving HPA - Horizontal Pod Autoscaling in Kubernetes Karpenter - “Just-in-time nodes for any Kubernetes cluster” Lazy Computation Graphs with the Ray DAG API Types of hardware accelerators Google Cloud Tensor Processing Units (TPUs) AMD Instinct AMD Radeon AWS Trainium AWS Inferentia Pandas Numpy KubeCon EU 2024 - Accelerators(FPGA/GPU) Chaining to Efficiently Handle Large AI/ML Workloads in K8s - Sampath Priyankara, Nippon Telegraph and Telephone Corporation & Masataka Sonoda, Fujitsu Limited NVidia Megatron Links from the post-interview chat DRA - Dynamic Resource Allocation in Kubernetes Different ways of Running RayJob on Kubernetes Ray framework diagram in the docs
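Since Ray Serve and autoscaling come up in the links above, here is a minimal, hypothetical Ray Serve deployment in Python; the class name and echo logic are placeholders rather than code from the episode, and it assumes `pip install "ray[serve]"` on a recent Ray 2.x release (decorator options can differ between versions, so check the Ray Serve docs):

```python
import ray
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)  # replicas can also be autoscaled via an autoscaling config
class Echo:
    async def __call__(self, request: Request) -> dict:
        # A real deployment would run a model here; this just echoes a query parameter.
        return {"echo": request.query_params.get("text", "")}

ray.init()
serve.run(Echo.bind())  # serves HTTP, by default at http://127.0.0.1:8000/
```

Running the same application through KubeRay (for example via the RayService CRD) is what turns it into a long-running service managed by Kubernetes.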
In this episode of the AI + a16z podcast, Anyscale cofounder and CEO Robert Nishihara joins a16z's Jennifer Li and Derrick Harris to discuss the challenges of training and running AI models at scale; how a focus on video models — and the huge amount of data involved — will change generative AI models and infrastructure; and the unique experience of launching a company out of the UC Berkeley Sky Computing Lab (the successor to RISElab and AMPLab).

Here's a sample of the discussion, where Robert explains how generative AI has turbocharged the appetite for AI capabilities within enterprise customers:

"Two years ago, we would talk to companies, prospective customers, and AI just wasn't a priority. It certainly wasn't a company-level priority in the way that it is today. And generative AI is the reason a lot of companies now reach out to us . . . because they know that succeeding with AI is essential for their businesses, it's essential for their competitive advantage.

"And time to market matters for them. They don't want to spend a year hiring an AI infrastructure team, building up a 20-person team to build all of the internal infrastructure, just to be able to start to use generative AI. That's something they want to do today."

At another point in the discussion, he notes on this same topic:

"One dimension where we try to go really deep is on the developer experience and just enabling developers to be more productive. This is a complaint we hear all the time with machine learning teams or infrastructure teams: They'll say that they hired all these machine learning people, but then the machine learning people are spending all of their time managing clusters or working on the infrastructure. Or they'll say that it takes 6 weeks or 12 weeks to get a model to transition from development to production . . . Or moving from a laptop to the cloud, and to go from single machine to scaling — these are expensive handoffs that often involve rewriting a bunch of code."

Learn more:
Anyscale
Sky Computing Lab
Ray

Follow everyone on X:
Robert Nishihara
Jennifer Li
Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
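As a simplified illustration of the "laptop to the cloud" handoff Robert describes, the same Ray script can target a local machine or a remote cluster by changing only the address it connects to. The hostname below is a placeholder, and the sketch assumes the Ray Client port (10001) is reachable on the cluster head node:

```python
import os
import ray

# With no RAY_ADDRESS set, this starts a local Ray instance on the laptop.
# Pointing RAY_ADDRESS at a cluster head node, e.g. "ray://head-node.example.com:10001",
# runs the identical script against the cluster -- no rewrite required.
ray.init(address=os.environ.get("RAY_ADDRESS"))

@ray.remote
def square(x: int) -> int:
    return x * x

print(sum(ray.get([square.remote(i) for i in range(16)])))
```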
A reminder for new readers. That Was The Week includes a collection of my selected readings on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest to me. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are snippets sized to convey why they are of interest. Click on the headline, contents link, or the 'More' link at the bottom of each piece to go to the original. I express my point of view in the editorial and the weekly video below.

Hat Tip to this week's creators: @reidhoffman, @dougleone, @credistick, @rex_woodbury, @NathanLands, @ItsUrBoyEvan, @berber_jin1, @cityofthetown, @keachhagey, @pmarca, @bhorowitz, @signalrank, @steph_palazzolo, @julipuli, @MTemkin, @geneteare, @lorakolodny, @jasminewsun, @JBFlint, @asharma, @thesimonetti, @lessin

Contents
* Editorial
* Essays of the Week
* Crossing The Series A Chasm
* The Consumer Renaissance
* The Creator Economy on AI Steroids
* AI Is Transforming the Nature of the Firm
* The Opaque Investment Empire Making OpenAI's Sam Altman Rich
* Video of the Week
* The American Dream - Marc Andreessen and Ben Horowitz
* AI of the Week
* SignalRank Version 3 Improves Performance Again
* How Long Can OpenAI's First-Mover Advantage Last?
* OpenAI Employees Warn of Advanced AI Dangers
* A Right to Warn about Advanced Artificial Intelligence
* Nvidia hits $3tn and surpasses Apple as world's second-most valuable company
* VCs are selling shares of hot AI companies like Anthropic and xAI to small investors in a wild SPV market
* News Of the Week
* Crunchbase Monthly Recap May 2024: AI Leads Alongside An Uptick In Billion-Dollar Rounds
* Elon Musk ordered Nvidia to ship thousands of AI chips reserved for Tesla to X and xAI
* Introducing video to Substack Chat
* Instagram's Testing Video Ads That Stop You From Scrolling Further
* Startup of the Week
* NBA Nears $76 Billion TV Deal, a Defining Moment for Media and Sports
* X of the Week
* Doug Leone - I am supporting Trump.
* Reid Hoffman - I am supporting Biden

Editorial

I woke on Tuesday to Doug Leone of Sequoia Capital on X saying:

I have become increasingly concerned about the general direction of our country, the state of our broken immigration system, the ballooning deficit, and the foreign policy missteps, among other issues. Therefore, I am supporting former President Trump in this coming election.

Doug has the right to support Trump. It is also clear that the immigration system is broken, the deficit is ballooning, many things are wrong with foreign policy, and there are “other issues.” Trump as the solution is less obvious. But there it is—hot on the tails of Chamath Palihapitiya and David Sacks announcing a fund-raiser for Trump on the All-In podcast (they said they would do the same for Biden).

Reid Hoffman followed up a day later with:

On one level, this is a straightforward choice, but any literate attempt to analyze Leone's issues might arrive at the following conclusions:

* Like many Western nations, the USA is aging rapidly and has a shrinking working-age population across all skill sets. Immigrants are needed, and pro-immigration leadership is needed, creating a path to entry for large numbers of skilled and unskilled workers to fill empty jobs as we get close to full employment.

* The deficit is large, and there are many palliatives available. Selling more to China would help, but both party leaders are protectionist.
Taxes to reduce the divide between the 1% and the rest would help a bit. However, what would help the most is economic growth, which requires investment in technology and productivity. Neither leader seems too focused on innovation and investment.

* Foreign Policy - well, sheesh, it's a big issue. However, saber-rattling about Taiwan and provoking China seems to be a hobby shared by both parties and does not seem smart. Ukraine and the future of Europe are better in Biden's hands, but not by a lot. Europe looks very shaky. The US is increasingly isolationist. The appetite for world leadership is on the decline. Again, the solution would focus on economic growth, which seems absent.

Voting for Trump is a big no-no for me. But voting for Biden is, at best, a lesser evil instinct, not a belief system. The election will not be where the future is built, but it is important. Politicians are collectively disappointing.

This week's video of the week from Marc Andreessen and Ben Horowitz is called “The American Dream” and champions their view about American Dynamism. And I must confess that this comes closer to a vision of the future than either political outfit. Their vision requires political support, massive government financial commitment, and private capital investment. I see no evidence of those happening.

The real winning effort seems to be happening on the ground. This week, Nvidia hit $3 trillion, eclipsing Apple as the world's second-most valuable company. This is even though Apple has 7 times the revenue of Nvidia.

This week's first essays also focus on prospects for boom time. Rex Woodbury's 'The Consumer Renaissance' examines the impact of consumer spending on our lives. In 'The Creator Economy on AI Steroids,' Nathan Lands focuses on how emerging tools will transform creativity. But in 'AI Is Transforming the Nature of the Firm,' Evan Armstrong gets closest to a future vision:

"AI is the first universally flexible technology. It can interact with our digital environments in similar ways to humans, so it can have all the flexibility that we do. In that way, it may be the last technology we ever need."

This seems to be the crux of hope in a world where dreams and nightmares are strangely devoid of detail. What the world needs (not only America) is hope. And hope is born from optimism. Optimism is born from success. The most likely success of the next decades will result from specific uses of AI that improve human life.

I know and like Doug Leone. I know and like Reid Hoffman. Doug's bar for success needs to be higher. Voting for Trump is not right, and even if it were, it would not be sufficient. Reid also needs a higher bar. Voting for Biden will not be sufficient even if it is right.

Let's focus on where success can be found, grow optimism, and breed hope. There is a need for a broad technical revolution and the social rebirth it enables. Silicon Valley and its friends globally need to invent the next version of human existence to the benefit of all. The social rebirth requires a conscious effort; technology will not magically bring it about. More in this week's video.

Essays of the Week

Crossing The Series A Chasm
Dan Gray
Dan Gray, a frequent guest author for Crunchbase News, is the head of insights at Equidam, a startup valuation platform, and a venture partner at Social Impact Capital.
June 5, 2024

As we get deeper into 2024, there is increasing concern about the state of Series A fundraising.
The bar for investment appears much higher, and fewer startups are reaching it.

This is a problem for founders, and investors like Jenny Fielding, managing partner of Everywhere Ventures, who said, “Every Seed investor's dilemma: All my Series A buddies want to meet my companies early! All my companies are too early for my Series A buddies.”

To attach some data to this, we can see that the median step-up in valuation from seed to Series A has gone from $19.5 million in Q1 2022 to $28.7 million in Q1 2024. Series A firms seem to be looking for much stronger revenue performance, with targets of $2 million to $3 million in ARR, compared to $1 million to $2 million just a few years ago.

The outcome is that while 31.8% of Q1 2020 seed startups closed their Series A within two years, that fell to just 12% for Q1 2022 — which should worry everyone.

Why are Series A investors so much more demanding?

Today's Series A investors are looking at startups that raised their seed between 2021 and 2023, which identifies the root of the problem: it spans the Q2 2022 high-tide mark for venture capital.

For example, there were 1,695 seed rounds of more than $5 million in 2021, rising to 2,248 in 2022, then falling to 1,521 in 2023. As a comparison, there have been just 137 so far in 2024.

The result is two categories of startups that are looking to raise their Series A today:

* Pre-crunch startups that raised generous seed rounds and stretched the capital out as far as they could, to grow into inflated valuations.
* Post-crunch startups that raised modest seed rounds on more reasonable terms, with shorter runways and less demonstrable growth.

Strictly speaking, neither is more appealing than the other; the first group has less risk, the second offers more upside, and both are adapted to current market realities. It shouldn't cause a problem for investors, provided they can distinguish between the two.

The cost of market inefficiency

Venture investors have a market-based lens on investment decisions, which means looking fairly broadly at trends in revenue performance and round pricing to determine terms, e.g. a typical Series A is within certain bounds of revenue performance and valuation. While that approach may be serviceable and efficient under ideal conditions, the past few years have been far from ideal.

Without distinguishing between the two cohorts, investors are now looking at the performance of Series A candidates that spent more than $5 million on a war chest for two to three years of growth alongside the valuations of candidates that raised around $2 million to prove scalability. It just doesn't work as an average, and thus the unreasonable expectations...More

The Consumer Renaissance
From Predicting Consumer AI Applications to Analyzing Consumer Spend
REX WOODBURY, JUN 05, 2024

“Consumer” has become something of a bad word in venture capital circles.

We see this reflected in the early-stage markets: recent data from Carta showed that just 7.1% of Seed capital raised last year went to consumer startups. That's less than half the share from 2019 (14.3%).

But I think consumer is actually a great place to be building and investing. Whenever something is out of favor, that's a sign it's probably a good place to spend time: this is an industry built on being contrarian, not built on following the herd. We're entering a compelling few years for consumer entrepreneurship.

First, I'd argue that consumer is too narrowly defined.
When people think consumer, they often think consumer social (a tough category) or consumer brands (a tough fit for venture compared to internet and software businesses, with typically lower return profiles). But consumer is broader. Consumer encompasses businesses that sell to consumers and those that rely on consumer spending. This means the obvious names—apps on our phones like Uber, Instacart, Spotify—and the enablers: Shopify, for instance, powers online retail; Faire powers offline retail; Unity powers game development. Each of the latter three is B2B2C, in its own way, but I would categorize each as also a consumer technology business.

The wins in consumer can be massive. The biggest technology businesses in history began as consumer businesses—Google, Facebook, Apple, Amazon. The original companies comprising FAANG—with Microsoft conspicuously absent—were all consumer.

And some of the best returns of the last five years have stemmed from consumer tech IPOs. At Daybreak, we invest ~$1M at Pre-Seed and Seed. Here's how much a $1M investment in the Seed round of five recent consumer IPOs would yield:

Big consumer wins compare favorably to big enterprise wins—relative to Snowflake's market cap, Uber is ~3x in size, Airbnb is ~2x in size, and DoorDash is roughly equal. (Snowflake is the biggest enterprise IPO of the last decade.) The last few years produced a windfall of consumer outcomes, yet investors today almost write off the category.

At Daybreak, we don't focus exclusively on consumer; my view is that you need to balance more binary consumer outcomes with B2B SaaS and B2B marketplaces. But we do approach investing through the lens of the consumer—how people make decisions. The buyers of products like Figma and Ramp, after all, are people, and software companies are increasingly selling bottom-up into organizations. The line between consumer and enterprise has been blurring for years.

This week's Digital Native makes the argument that consumer tech is a compelling place to build and invest. We'll look at the data to back up this argument, then delve into three categories of consumer that I'm particularly interested in right now:

* Checking in on Consumer Spend
* Consumer Tech: The Data Doesn't Lie
* What to Watch: AI Applications
* What to Watch: Shopping
* What to Watch: Consumer Health
* Rule of Thumb: Follow the Spend

This week we'll cover #1-3, and next week in Part II we'll tackle #4-6.

Let's dive in
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Matthew McClean is a Machine Learning Technology Leader with the leading Amazon Web Services (AWS) cloud platform. He leads the customer engineering teams at Annapurna ML helping customers adopt AWS Trainium and Inferentia for their Gen AI workloads. Kamran Khan, Sr Technical Business Development Manager for AWS Inferentia/Trainium at AWS. He has over a decade of experience helping customers deploy and optimize deep learning training and inference workloads using AWS Inferentia and AWS Trainium. AWS Trainium and Inferentia // MLOps podcast #238 with Kamran Khan, BD, Annapurna ML and Matthew McClean, Annapurna Labs Lead Solution Architecture at AWS. Huge thank you to AWS for sponsoring this episode. AWS - https://aws.amazon.com/ // Abstract Unlock unparalleled performance and cost savings with AWS Trainium and Inferentia! These powerful AI accelerators offer MLOps community members enhanced availability, compute elasticity, and energy efficiency. Seamlessly integrate with PyTorch, JAX, and Hugging Face, and enjoy robust support from industry leaders like W&B, Anyscale, and Outerbounds. Perfectly compatible with AWS services like Amazon SageMaker, getting started has never been easier. Elevate your AI game with AWS Trainium and Inferentia! // Bio Kamran Khan Helping developers and users achieve their AI performance and cost goals for almost 2 decades. Matthew McClean Leads the Annapurna Labs Solution Architecture and Prototyping teams helping customers train and deploy their Generative AI models with AWS Trainium and AWS Inferentia // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links AWS Trainium: https://aws.amazon.com/machine-learning/trainium/ AWS Inferentia: https://aws.amazon.com/machine-learning/inferentia/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Kamran on LinkedIn: https://www.linkedin.com/in/kamranjk/ Connect with Matt on LinkedIn: https://www.linkedin.com/in/matthewmcclean/ Timestamps: [00:00] Matt's & Kamran's preferred coffee [00:53] Takeaways [01:57] Please like, share, leave a review, and subscribe to our MLOps channels! [02:22] AWS Trainium and Inferentia rundown [06:04] Inferentia vs GPUs: Comparison [11:20] Using Neuron for ML [15:54] Should Trainium and Inferentia go together? [18:15] ML Workflow Integration Overview [23:10] The Ec2 instance [24:55] Bedrock vs SageMaker [31:16] Shifting mindset toward open source in enterprise [35:50] Fine-tuning open-source models, reducing costs significantly [39:43] Model deployment cost can be reduced innovatively [43:49] Benefits of using Inferentia and Trainium [45:03] Wrap up
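As a rough sketch of the PyTorch integration path discussed in this episode: on Trainium/Inferentia instances, the AWS Neuron SDK (the torch-neuronx package) compiles a model ahead of time via a tracing API. This outline is based on the public Neuron documentation rather than the episode itself, and module names and options vary by Neuron release, so treat it as an assumption to verify against the current AWS docs:

```python
import torch
import torch_neuronx  # AWS Neuron SDK package; only available on Neuron-enabled (Inf/Trn) instances

# Small stand-in model; in practice this would be a Hugging Face or custom PyTorch model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
).eval()

example = torch.rand(1, 128)

# Compile for the NeuronCores, then save/load and call it like a normal TorchScript module.
neuron_model = torch_neuronx.trace(model, example)
torch.jit.save(neuron_model, "model_neuron.pt")

restored = torch.jit.load("model_neuron.pt")
print(restored(example).shape)
```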
Register here for The AWS for Software Companies "Generative AI and Business Applications Executive Forum", Wednesday May 15, The Sofitel, New York City
---------
Ep035: Today's podcast features a panel discussion from the "AWS for Software Companies Generative AI and Data Executive Forum" featuring leaders from Anyscale, Qlik and Rapid7 sharing insights on data models, policies and optimization for Generative AI applications.
Panelists:
Zhe Zang, Head of Open Source Engineering, Anyscale
Brendan Grady, EVP & GM, Analytics Business Unit, Qlik
Laura Ellis, VP - Data Engineering and Platform Analytics, Rapid7
Sherry Marcus Ph.D., Director, Applied Science, AWS
Topics Include:
Introductions of the panel
What big bets are you making with your companies and why?
What have the learnings been with data preparation?
Balancing present needs with the future of AI – growing data literacy
Cost optimization for AI and Data platforms
Constraints of hardware and growing expense
What models and solutions have customers been asking for?
High stakes risks for AI and Data
How customers are reacting to different compute models
Guardrails, filters and risk-mitigation best practices
Data and AI pain points
New global policies challenging data and AI development
Challenges with data quality
How has AI changed and evolved with digital natives?
Open and closed models in the market
Metrics and selection for models
Biggest lessons learned in the AI and data space
Session conclusion
On this episode of The Six Five - On the Road, host Patrick Moorhead is joined by Dr. Dario Gil, SVP and Director of IBM Research, and Ion Stoica, Executive Chairman of Anyscale and Databricks and Professor at UC Berkeley. They engage in a captivating conversation about the significance of an open future for Artificial Intelligence and introduce us to the AI Alliance's goals and initiatives. Our discussion covers:
The mission and objectives of the AI Alliance within the AI industry
Historical context and motivations behind the formation of the AI Alliance and its unique positioning
The diverse makeup of the AI Alliance, including the pivotal roles of IBM and Anyscale, in promoting AI accessibility
Strategies for fostering open and responsible AI development through the Alliance
Overview of current projects and future endeavors of the AI Alliance
Learn more at IBM Research and Anyscale.
Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta.Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.Life in the GPU-Rich LaneBack in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.Meta AI's Epic LLM RunBefore Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how “open” it was - right down to the logbook! They used only 16 NVIDIA V100 GPUs and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours for all pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B size, all trained with 500B extra tokens of code and code-related data, except for 70B which is trained on 1T.All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless, and in one year, Meta transformed from a company people made fun of for its “metaverse” investments to one of the key players in the AI landscape and its stock has almost tripled since (about $830B in market value created in the past year).Why Open Source AIThe obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements:But for Soumith, the motivation is even more personal:“I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. 
And I think that was a strong reason why I ended up where I am. So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side……I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me……Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.We like the way Soumith put it last year: Closed AI “rate-limits against people's imaginations and needs”!What It Takes For Open Source AI to WinHowever Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of the open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.“Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. 
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI, I think like there's a clear chance we can take at truly winning open source.”If you're working on solving open source coordination, please get in touch!Show Notes* Soumith Chintala Twitter* History of PyTorch episode on Gradient Podcast* The Llama Ecosystem* Apple's MLX* Neural ODEs (Ordinary Differential Equations)* AlphaGo* LMSys arena* Dan Pink's "Drive"* Robotics projects:* Dobb-E* OK Robot* Yann LeCun* Yangqing Jia of Lepton AI* Ed Catmull* George Hotz on Latent Space* Chris Lattner on Latent Space* Guillaume Lample* Yannic Kilcher of OpenAssistant* LMSys* Alex Atallah of OpenRouter* Carlo Sferrazza's 3D tactile research* Alex Wiltschko of Osmo* Tangent by Alex Wiltschko* Lerrel Pinto - RoboticsTimestamps* [00:00:00] Introductions* [00:00:51] Extrinsic vs Intrinsic Success* [00:02:40] Importance of Open Source and Its Impact* [00:03:46] PyTorch vs TinyGrad* [00:08:33] Why PyTorch is the Switzerland of frameworks* [00:10:27] Modular's Mojo + PyTorch?* [00:13:32] PyTorch vs Apple's MLX* [00:16:27] FAIR / PyTorch Alumni* [00:18:50] How can AI inference providers differentiate?* [00:21:41] How to build good benchmarks and learnings from AnyScale's* [00:25:28] Most interesting unexplored ideas* [00:28:18] What people get wrong about synthetic data* [00:35:57] Meta AI's evolution* [00:38:42] How do you allocate 600,000 GPUs?* [00:42:05] Even the GPU Rich are GPU Poor* [00:47:31] Meta's MTIA silicon* [00:50:09] Why we need open source* [00:59:00] Open source's coordination problem for feedback gathering* [01:08:59] Beyond text generation* [01:15:37] Osmo and the Future of Smell Recognition TechnologyTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.Soumith [00:00:17]: Thanks for having me.Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I just was marveling at your luck, or I don't know if it's your luck or your drive to find AI early and then find the right quality mentor because I guess Yan really sort of introduced you to that world.Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think is fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But if I happen, the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy. 
It's just like maybe like the perception of, oh, is this person successful or not might be different. I think like after a baseline, like your happiness is probably more correlated with your intrinsic stuff.Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.Soumith [00:02:01]: I mean, in a very convoluted way.Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed or didn't get accepted by Disney and then went and created Pixar and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the gradient. I think maybe people don't know that you also involved in more sort of hardware and cluster decision affair. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.PyTorch vs Tinygrad tradeoffsAlessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right? 
And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep it in the easy, happy path and where the more niche advanced use cases, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be, like we would be limited to a thousand lines of code and I think now it's at 5,000. So I think there is no real magic to which why PyTorch has the kind of complexity. I think it's probably partly necessitated and partly because we built with the technology available under us at that time, PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure. But a lot of that complexity comes from the fact that in a very simple, explainable way, you have memory hierarchies. You have CPU has three levels of caches and then you have DRAM and SSD and then you have network. Similarly, GPU has several levels of memory and then you have different levels of network hierarchies, NVLink plus InfiniBand or Rocky or something like that, right? And the way the flops are available on your hardware, they are available in a certain way and your computation is in a certain way and you have to retrofit your computation onto both the memory hierarchy and like the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, like you find the optimal thing. And finding the optimal thing is, what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a Sparstar product or something like that. The shape of each of these input tensors will vastly change how you do this optimally placing this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time. So if searching for that optimal performance at runtime, but that's the trade off. There's no, like, I don't think unless we have great breakthroughs George's vision is achievable, he should be thinking about a narrower problem such as I'm only going to make this for work for self-driving car connets or I'm only going to make this work for LLM transformers of the llama style. Like if you start narrowing the problem down, you can make a vastly simpler framework. 
But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and in all these other factors, I think it's not easy to avoid the complexity.Pytorch vs MojoAlessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target. They have the hardware target. They have different things to think about. He mentioned when he was at Google, TensorFlow trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially AMD stack be better than ROCm. How come PyTorch has been such as Switzerland versus just making Meta hardware go brr?Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is we're funding it because it's net good for Meta to fund PyTorch because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. We actually the way we articulate it to all hardware vendors and software vendors and all who come to us being we want to build a backend in core for PyTorch and ship it by default is we just only look at our user side of things. Like if users are using a particular piece of hardware, then we want to support it. We very much don't want to king make the hardware side of things. So as the MacBooks have GPUs and as that stuff started getting increasingly interesting, we pushed Apple to push some engineers and work on the NPS support and we spend significant time from Meta funded engineers on that as well because a lot of people are using the Apple GPUs and there's demand. So we kind of mostly look at it from the demand side. We never look at it from like oh which hardware should we start taking opinions on.Swyx [00:10:27]: Is there a future in which, because Mojo or Modular Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that just is low friction, we would definitely look into that. Like in the same way PyTorch now depends on Triton, OpenAI Triton, and we never had a conversation that was like huh, that's like a dependency. Should we just build a Triton of our own or should we use Triton? It almost doesn't, like those conversations don't really come up for us. The conversations are more well does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view, like is it easy to install? Is it smoothly integrated and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but like maybe I phrased it wrongly. 
Maybe it's more like what problems would you look to solve that you have right now?Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was like, we're going to be performant even if you have a lot of custom stuff, you're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how do we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for say Torch.compile to smoothly also consume Mojo subgraphs and like, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.Pytorch vs MLXSwyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?Soumith [00:13:32]: I mean, MLX is early and I know the folks well, Ani used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now. It has a happy path because it's defined its product in a narrow way. At some point MLX either says we will only be supporting Apple and we will just focus on enabling, you know, there's a framework if you use your MacBook, but once you like go server side or whatever, that's not my problem and I don't care. For MLS, it enters like the server side set of things as well. Like one of these two things will happen, right? If the first thing will happen, like MLX's overall addressable market will be small, but it probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.Swyx [00:14:44]: Like having to deal with distributed compute?Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, like just like having a generalization of the concept of a backend, how they treat compilation with plus overheads. Right now they're deeply assumed like the whole NPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side and they'll probably build something like PyTorch as well, right? Like eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. Like it wouldn't be obvious to people why they would want to use it.Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. 
I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, like then it can become a thing.Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel is that they have the interconnect that no one else has, like AMD GPUs are pretty good. I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink is uniquely awesome. I'm sure the other hardware providers are working on it, but-Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just like, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.PyTorch MafiaAlessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, you knew since college when he was building Coffee?Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, PyTorch, I mean, we were all a very small close-knit community back then. Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And I would actually like, you know, one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remembered who was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called ConNet benchmarks. They're just benchmarking all the convolution kernels that are available at that time. It hilariously became big enough that at that time AI was getting important, but not important enough that industrial strength players came in to do these kinds of benchmarking and standardization. Like we have MLPerf today. So a lot of the startups were using ConNet benchmarks in their pitch decks as like, oh, you know, on ConNet benchmarks, this is how we fare, so you should fund us. I remember Nirvana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton, Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch Cafe2 cohort of things and now end up at various other places.Swyx [00:18:50]: I think as a, both as an investor and a people looking to build on top of their services, it's a uncomfortable slash like, I don't know what I don't know pitch. Because I've met Yang Tsing and I've met Lin Chao. Yeah, I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great. 
Like, what should I be wary of or cautious of when these things happen? Because I'm like, obviously this experience is extremely powerful and valuable. I just don't know what I don't know. Like, what should people know about like these sort of new inference as a service companies?Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as what these people bring to the table is that they're really good at like GPU programming or understanding the complexity of serving models once it hits a certain scale. You know, various expertise like from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money or, you know, various things like that.Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. Like, it's more okay, you know, you have PyTorch gods, of course. Like, what else should I know?Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...Swyx [00:20:44]: Benchmarks.Soumith [00:20:44]: Yeah, I use it and it's usability and reliability and speed, right?Swyx [00:20:51]: Quality as well.Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and say, user stuff is great. Like, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, like I'll just use it.Benchmark dramaSwyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up Confident Benchmarks. There was some recent drama around AnyScale. AnyScale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One, which is they didn't test some apples for apples on the kind of endpoints that the other providers, that they are competitors with, on their benchmarks and that is due diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was AnyScale built these benchmarks for end users to just understand what they should pick, right? And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. Like they just gave them a very narrow slice of understanding. I think they just gave them latency numbers and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not oh, one API call is one cent, but a thousand API calls are 10 cents. Like people can misprice to cheat on those benchmarks. So you want to understand, okay, like how much is it going to cost me if I actually subscribe to you and do like a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over several various times of the day and times of the week. 
And the nature of the workloads, is it just some generic single paragraph that you're sending that is cashable? Or is it like testing of real world workload? I think that kind of rigor, like in presenting that benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if before they released it, they showed it to their other stakeholders who would be caring about this benchmark because they are present in it, they would have easily just pointed out these gaps. And I think they didn't do that and they just released it. So I think those were the two main criticisms. I think they were fair and Robert took it well.Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important for, I think the market being maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is because otherwise everyone's going to play dirty.Soumith [00:23:55]: Yeah, absolutely. My view of the LLM inference market in general is that it's the laundromat model. Like the margins are going to drive down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for and then how much you sell the API and how much latency your customers are willing to let go. You need to figure out how to squeeze your margins. Like what is your unique thing here? Like I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster, you know, hardware kernels in general. But those modes only last for a month or two. These ideas quickly propagate.Swyx [00:24:38]: Even if they're not published?Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama style LLM models. And we're going to beat those to death on a few different hardware SKUs, right? Like it's not even we have a huge diversity of hardware you're going to aim to run it on. Now when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.Alessio [00:25:28]: Any ideas instead of things that are not being beaten to death that people should be paying more attention to?Novel PyTorch ApplicationsSwyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right? Like what's the most interesting usage of PyTorch that you're seeing maybe outside of this little bubble?Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways, like from the ML angle, what kind of models are being built? And you get all the way from state-based models and all of these things to stuff nth order differentiable models, like neural ODEs and stuff like that. I think there's one set of interestingness factor from the ML side of things. And then there's the other set of interesting factor from the applications point of view. 
It's used in Mars Rover simulations, to drug discovery, to Tesla cars. And there's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many interesting things that are also very critical and really important it is used in. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting. How many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. Like the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models, like the whole AlphaGo style models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole alpha geometry thing was interesting because again, it's an example of combining the symbolic models with the gradient based ones. But there are stuff like alpha geometry that PyTorch is used at, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.Swyx [00:28:03]: Yeah. People are very excited about the alpha geometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympia questions. I'm not sure how to make that bridge over into the real world applications, but I'm sure people smarter than me will figure it out.Synthetic Data vs Symbolic ModelsSoumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data will be the next rage in LLMs is a thing?Swyx [00:28:27]: Already is a rage.Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30 parameter model. And it's just very hard to compute as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees and just understanding how language can be broken down and into a formal symbolism is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects, either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly. The only way we have to teach them is generating a bunch of inputs and outputs and gradient dissenting over them. 
So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is we're generating a bunch of synthetic data, a bunch of input output pairs, and then giving that to the neural network and asking it to learn the same thing that we already have a better low rank model of in gradient descent in a much more over-parameterized way. Outside of this, like where we don't have good symbolic models, like synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand where it'll work in all cases in every case or whatever. It's just where we as humans already have good symbolic models off. We need to impart that knowledge to neural networks and we figured out the synthetic data is a vehicle to impart this knowledge to. So, but people, because maybe they don't know enough about synthetic data as a notion, but they hear, you know, the next wave of data revolution is synthetic data. They think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond. One is, you know, I have this joke that it's, you know, it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what news research is doing. Like they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2 and then fine tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else? I don't know.Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is you're creating synthetic data with the goal of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels like disingenuous because your goal is I need to copy the behavior of GPT-4 and-Swyx [00:32:25]: It's also not just behavior, but data set. So I've often thought of this as data set washing. Like you need one model at the top of the chain, you know, unnamed French company that has that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic.Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as society have ever been in this conundrum where we have to be like, where is the boundary of copyright or where is the boundary of socially accepted understanding of copying someone else? We haven't been tested this mathematically before,Swyx [00:33:38]: in my opinion. Whether it's transformative use. Yes. 
So yeah, I think this New York Times opening eye case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before. And then finally, for synthetic data, the thing that I'm personally exploring is solving this great stark paradigm difference between rag and fine tuning, where you can kind of create synthetic data off of your retrieved documents and then fine tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine tune on. And then you can fine tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.Soumith [00:34:13]: I think you're basically trying to, what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model or like some implicit symbolic model of language.Swyx [00:34:48]: Okay.Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math heavy questions versus not. I don't have a good calibration of whether I know the answer or not. I, you know, there's common criticisms that are, you know, transformers will just fail at X. But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers called like the science of deep learning or something. So we'll get to know more. I don't know the answer.Meta AI and Llama 2/3Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest with Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering on this, in this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling models for open source models or smaller models or whatever that design decision was when you guys were doing it?Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really in gory detail published.Swyx [00:36:50]: The logs.Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. 
I think OPT was cool.Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time or?Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team Guillaume is fantastic and went on to build Mistral. I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling loss and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. And we needed more data and we needed more to train the models for longer. And we made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author is fantastic. And I think he did play a reasonable big role in Llama 1 as well.Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34 and 70B parameter, they still seem kind of steep. Like they could go lower. How, from an infrastructure level, how do you allocate resources? Could they have just gone longer or were you just, hey, this is all the GPUs that we can burn and let's just move on to Llama 3 and then make that one better?Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, we're, I mean, Mark really is some numbers, right?Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.Soumith [00:39:24]: That is by the end of this year and 600K H100 equivalents. With 250K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.Swyx [00:39:38]: That's a lot of GPUs.Soumith [00:39:39]: We'll talk about that separately. But the way we think about it is we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one and when do you start training the next one? And how do you make those decisions? The data, do you have net new data, better clean data for the next one in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.Alessio [00:40:31]: So one of the things with the scaling loss, like Chinchilla is optimal to balance training and inference costs. 
I think at Meta's scale, you would rather pay a lot more maybe at training and then save on inference. How do you think about that from infrastructure perspective? I think in your tweet, you say you can try and guess on like how we're using these GPUs. Can you just give people a bit of understanding? It's like, because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs and that's obviously not true, I'm sure. How do you allocate between the research, FAIR and the Llama training, the inference on Instagram suggestions that get me to scroll, like AI-generated stickers on WhatsApp and all of that?Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kinds at any company. You run a VC portfolio, how do you allocate your investments between different companies or whatever? You kind of make various trade-offs and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project? It's very much a zero sum of trade-offs. And it also comes into play, how are your clusters configured, like overall, what you can fit of what size and what cluster and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.Soumith [00:42:11]: Yeah, I mean, at some point I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, like we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.EleutherSwyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.Soumith [00:42:57]: I mean, that's a cool, high conviction opinion that might pay out.Swyx [00:43:01]: Why?Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take about in this climate and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not because, oh yeah, someone else did the same thing you did. It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.Making BetsAlessio [00:43:53]: And how do you reconcile that with how we started the discussion about intrinsic versus extrinsic kind of like accomplishment or success? How should people think about that especially when they're doing a PhD or early in their career? 
I think in Europe, I walked through a lot of the posters and whatnot, there seems to be mode collapse in a way in the research, a lot of people working on the same things. Is it worth for a PhD to not take a bet on something that is maybe not as interesting just because of funding and visibility and whatnot? Or yeah, what suggestions would you give?Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like whatever reasonable normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like you wouldn't wanna be doing something so obscure that people are like, I don't know, like you can work on it.Swyx [00:44:59]: Would a limit on fundability, I'm just observing something like three months of compute, right? That's the top line, that's the like max that you can spend on any one project.Soumith [00:45:09]: But like, I think that's very ill specified, like how much compute, right? I think that the notion of fundability is broader. It's more like, hey, are these family of models within the acceptable set of, you're not crazy or something, right? Even something like neural or DS, which is a very boundary pushing thing or states-based models or whatever. Like all of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable. Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be in the baseline level of fundability just so that you can just live. And then after that, really focus on intrinsic motivation and depends on your strengths, like how you can play to your strengths and your interests at the same time. Like I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like, and that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, I think is the way I think about it is like somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion, that's for all Meta. So including your own inference needs, right? It's not just about training.Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in like your own hardware?MTIASoumith [00:47:31]: We already talked about our own hardware. It's called MTIA. 
Our own silicon, I think we've even showed the standard photograph of you holding the chip that doesn't work. Like as in the chip that you basically just get like-Swyx [00:47:51]: As a test, right?Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon and we'll probably talk more about it when the time is right, but-Swyx [00:48:00]: Like what gaps do you have that the market doesn't offer?Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and like sweet spots and all of that? Fundamentally, when you build a hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware efficient it's going to be, the more power efficient it's gonna be, the more easier it's going to be to find the software, like the kernel's right to just map that one or two workloads to that hardware and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about everyone building, every large company building silicon, I think a bunch of the other large companies are building their own silicon as well, is they, each large company has a sufficient enough set of verticalized workloads that can be specialized that have a pattern to them that say a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot. Like obviously something like this is only useful if you hit a certain scale and that your forecasted prediction of those kind of workloads being in the same kind of specializable exploitable way is true. So yeah, that's why we're building our own chips.Swyx [00:50:08]: Awesome.Open Source AIAlessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been doing and maybe directions of the whole open source AI space?Soumith [00:50:32]: Yeah, in general, I think first, I think it's worth talking about this in terms of open and not just open source, because like with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume it's just I'm talking about open. And then there's the whole notion of licensing and all that, commercial, non-commercial, commercial with clauses and all that. I think at a fundamental level, the most benefited value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. 
But like the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for it. So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible by say a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and those family of models and several other fairly transformative projects. FICE is one, Segment Anything, Detectron, Detectron 2. Dense Pose. I mean, it's-Swyx [00:52:52]: Seamless. Yeah, seamless.Soumith [00:52:53]: Like it's just the list is so long that we're not gonna cover. So I think Meta comes into that category where we spend a lot of CapEx and OpEx and we have a high talent density of great AI people and we open our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start a open AI lab? Like what exactly is a benefit from a commercial perspective? And for then the thesis was very simple. It was AI is currently rate limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. Like AI was the limiting factor and we just wanted AI to advance more and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab and we said, if this helps accelerate the progress of AI, that's strictly great for us. But very easy, rational, right? Still the same to a large extent with the Llama stuff. And it's the same values, but the argument, it's a bit more nuanced. And then there's a second kind of open source, which is, oh, we built this project, nights and weekends and we're very smart people and we open sourced it and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source, like both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, like a band of volunteers can just coordinate online and do something and then make that happen. And that's great.Open Source LLMsI wanna cover a little bit about open source LLMs maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something. Like where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted. 
And then the LLMs revolution kind of took the opposite effect OpenAI stopped open sourcing their stuff and DeepMind kind of didn't, like all the other cloud and all these other providers, they didn't open source their stuff. And it was not good in the sense that first science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem which was the accessibility part. Like, okay, I again always go back to I'm a student in India with no money. What is my accessibility to any of these closers models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like all the friends I hang out with talk about some random thing like Dyson Spheres or whatever, that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think open source, the distribution of open source powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, like it's going great in the fact that LoRa I think came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic open source side of things. So do any of the closed source labs, did any of them already have LoRa or DPO internally? Maybe, but that does not advance humanity in any way. It advances some companies probability of doing the winner takes all that I talked about earlier in the podcast.Open Source and TrustI don't know, it just feels fundamentally good. Like when people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed source models are safer or open source models are safer very much related to what kind of culture they grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, obviously it's corrupt or whatever, then I think the open source argument is what they take. I think there's a deep connection to like people's innate biases from their childhood and their trust in society and governmental aspects that push them towards one opinion or the other. And I'm definitely in the camp of open source is definitely going to actually have better outcomes for society. Closed source to me just means that centralization of power, which, you know, is really hard to trust. So I think it's going well
Our first ever demo day aimed for 15-20 people and ended up ballooning to >200 and covered in the news. We are now running the 2024 edition in SF on Feb 23: Latent Space Final Frontiers, a startup and research competition in “The Autonomous Workforce”, ”Beyond Transformers & GPUs”, and “Embodied AI”. RSVP here! You can find all LS online/IRL events on our new calendar. Super Early Bird tickets have just gone on sale for AI Engineer World's Fair, June 25-27!Today we have the honor of hosting two of Together AI's co-founders: Ce Zhang (CTO) and Vipul Ved Prakash (CEO). This is a rare opportunity to recap the history of the company since our last check-in with Tri Dao (Chief Scientist), some of their big releases, and do a deep dive into the state of the AI inference market. Together has emerged as one of the most consequential new startups in the new AI summer, last announcing a ~$100m Series A raise in November (at a ~$360-565m valuation). But there are at least three Togethers - Together the Research Lab, Together the Fine Tuning & Inference platform, and Together the custom models service. As we clarify on the pod, the overarching philosophy of Together is the ability to improve on all these fronts simultaneously by being “full stack”, from the lowest level kernel and systems programming to the highest level mathematical abstractions driving new model architectures and inference algorithms.Bringing Research and Industry TogetherIn just one year, Together has been behind some of the most exciting research in AI:* RedPajama, a fully open source dataset for model pre-training which mirrored the Llama1 recipe. Then followed by RedPajama2, a 30T tokens dataset of filtered and de-duplicated tokens. * RedPajama-INCITE-3B and 7B, which were SOTA in a few benchmarks at the time of release. * FlashAttention-2, developed by Together's Chief Scientist Tri Dao. We covered FA-2 in a previous episode with him.* Mamba-3B, the most promising transformer-alternative model that they released in collaboration with Cartesia. * StripedHyena, a SOTA graft of Hyena state space models and transformer models together* Medusa, an alternative to speculative decoding that lets you use multiple decoding heads instead of a draft model. * MonarchMixer, which was one of the most popular orals at NeurIPS 2023. It's an approach to transformers that replaces many of its core parts with Monarch matrices for better computational efficiency. And I'm sure we missed something! As Vipul reveals, almost 50% of Together staff is researchers, and two of their co-founders (Chris Ré and Percy Liang) are professors at Stanford, so we can expect a lot more here.Bringing “Disaggregated” GPUs TogetherOn their cloud, they offer inference as a service, fine-tuning, pre-training, etc, but unlike other providers they think of themselves as a disaggregated cloud. Today, they have ~8,000 A100 and H100 GPUs on their platform (an exclusive revealed on the pod!) totaling over 20 exaflops of compute, but instead of just buying more and putting them in a cluster and then exposing a `us-east-1` option for customers, they are taking heterogenous compute sources and adding a unified layer on top of it for developers to consume. Building on Ce's research, Together's GPU Clusters are taking on comparable AWS and GCP offerings in both cost and speed:Take the Hessian AI center in Germany or the DoE's INCITE; they have GPUs that they want to share with researchers, but they lack the cloud layer over it. 
Similarly, there's starting to be more and more differentiation amongst types of GPUs: H100s, A100s, MI3000s, etc. Each of them has different availability and performance based on task, and the end user shouldn't have to be an hardware expert to run inference on a model, so Together abstracts a lot of that away.A big theme of the Together inference stack, a “bag of 50 tricks” that we discuss on the pod, is also “hardware-aware” algorithms like FlashAttention and Mamba, which further emphasize the benefits of co-developing everything together:Special Focus: Transformer AlternativesAs we mentioned above, they are also funding a lot of research in Transformer alternatives. To reiterate a few points on why they matter:* Longer context is not the motivation for sub-quadratic architectures: Transformers don't inherently have hard limitations on context size, but they just get extremely expensive. When developing sub-quadratic alternatives, you easily enable very long context, but that's now how you should compare them. Even at same context size, inference and training is much cheaper on sub-quadratic architectures like Hyena.* Emergence of hybrid architectures: a lot of early conversations have been around the “post-Transformers” era, but it might be more like “half-Transformers”. Hybrid architectures could have split layers with some transformer-based and some state-space ones. One of the challenges is that a lot of hardware kernels are optimized for transformer operations, so you'd lose a lot by moving away completely.* Higher speed = higher GPU throughput: if we could reach the same benchmark performance on subquadratic architectures, it'd solve a lot of the GPU crunch. Today we peak at ~170 tok/s on inference in some open models; if we could reach 5,000 tok/s on the same card, you'd be able to serve 30x more customers on the same hardware. As a cloud provider, you're obviously incentivized to get there.We had a lot of fun chatting with the Together guys and we covered a lot of ground, so enjoy the conversation!Note: This is the first episode of a “cloud providers mini-series”. 
We have Erik from Modal and Ben from Replicate coming up next!Video PodcastJoin us to watching the video version of this pod on our snazzy YouTube!Show Notes* Together AI* RedPajama Dataset v1 Announcement* RedPajama Models v1 Announcement* Together Embeddings* StripedHyena-7B* Mamba-3B-SlimPJ* Vipul's X thread on Anyscale* Vipul's Razor* SemiAnalysis' "Inference Race to the Bottom" post* Chris Ré* Mike Conover's episode* Slim Pajama by Cerebras* Dolma by AI2* Jina AI* Tengyu's Voyage AITimestamps* [00:00:00] Introductions* [00:00:43] Origin and current state of Together.ai* [00:02:15] Transition from Apple to Together and the vision for open AI* [00:04:54] How Chris Ré introduced Ce and Vipul* [00:08:43] How RedPajama came to be* [00:13:34] Model training and Transformer alternatives* [00:15:37] DSIR and the importance of data in LLMs* [00:21:19] Inference vs Fine-tuning vs Pre-training usage on Together* [00:23:20] Together's GPU stash* [00:27:02] Why standardization of inference metrics is important* [00:29:26] Building moats in AI inference* [00:31:49] Federated vs disaggregated cloud computing* [00:34:57] Opportunities for improvement in the inference stack* [00:36:13] Anyscale benchmarking drama* [00:41:27] Not just an inference platform* [00:43:50] Together Embeddings and the future of embedding models* [00:45:53] State space models and hybrid architectures* [00:53:52] The need for 5,000 tokens/s speed in AI inference* [01:00:23] What's the most interesting unsolved question in AI?TranscriptAlessio [00:00:00]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:14]: Hey, and today we're together with Together. Welcome to the studio, guys.Ce / Vipul [00:00:20]: Thank you.Swyx [00:00:21]: I don't know how you typically give self intros, but does anyone want to go first? How do we get our audience acquainted, especially to who's speaking, because it's unusual for us to do a four-person pod. Yeah.Ce [00:00:33]: Hi, everyone. I'm Ce. I'm one of the co-founders of Together and the CTO, working with the team on technical things.Vipul [00:00:40]: I'm Vipul Ved Prakash, co-founder and CEO of Together.Swyx [00:00:43]: I always consider you guys as one of the sort of all-in-one companies. I always want to say labs, but I feel like you're not a lab. What is the sort of origin of Together, and then what is it today? I feel like it used to be Together.xyz, and then now you're Together.ai.Vipul [00:01:00]: I think fundamentally, Together is about open and independent AI systems. We think this is one of the most consequential technologies of our time, and when we started the company in June 2022, our focus was to build a platform for open source, independent, user-owned AI systems. One way to think about it is big labs, frontier model labs, have built their own platforms for developer platforms for their models. We think of Together as a platform for everything else, whether these are open models, whether these are models being built by companies that are owned by them. Our sort of XYZ roots, we have a fairly deep decentralization and open ethos that kind of reflects in all our platform and strategy and business. 
And we also, the way we structure our cloud is by combining data centers around the world instead of, you know, we are today not located in hyperscalers, we have built a footprint of AI supercomputers in this sort of very disaggregated, decentralized manner.Alessio [00:02:15]: I know before Together, you were at Apple, so you go from like the most walled garden, private, we don't say anything company, to we want everything to be open and everybody to know somebody. What maybe did you learn from like the Apple way of being super close and polished and maybe what are you taking now to Together to make it open, but also a very nice developer experience?Vipul [00:02:37]: Yeah, I would say, you know, one sort of my, you know, background has been in open source for a long time. One of the first things I created was a collaborative spam filter, you know, this was back in the day. It's called Vipul's Razor. And it became quite popular. And the first company I founded called CloudMark was built around, you know, taking open source and building both an open side of it and a commercial product around it. I think Apple is sort of very focused on providing this amazing experience to its customers with, you know, most of the technology sort of hidden behind the product. And certainly the focus on fluidity and applying complex technology to make everyday things simple is something that Apple does really well. And, you know, that's been a sort of big part of how we think about our developer platforms. I think it informs it. The other thing is that during my years at Apple, we, you know, worked a lot on deep learning. And one of the things that was sort of very viscerally accessible to me was how well these systems worked. We, you know, we built an open domain Q&A system. This was based on Facebook's LSTM paper in 2016. And it was remarkable because we had a parallel system based on sort of information retrieval techniques, which is extremely complicated, didn't work that well. And you know, this thing we wrote in a week was just incredible performance. So I think some of those experiences, at least for me personally, sort of were creating this roadmap of how important and powerful this technology is. And you know, when the scaling loss paper was published, I was very clear, like it was in some ways something very profound. We've never had algorithms that improve in capabilities with scale out. So this is almost a new era of computing. So that's been, I think, the influence of Apple, my years at Apple, really for me, like crystallized the value of what we are doing together.Alessio [00:04:54]: And how did you decide to join forces? Because you did a postdoc with Chris Ré at Stanford. You know, we already had Tri Dao from Together and we talked about Hazy. What was like the meeting of the mind of, hey, I come from like the more technical postdoc assistant professor background and we've got yet a more product thing. What got you excited to like build this now?Ce [00:05:15]: So we have been working on this together, Chris, in the essentially last like 10 years, right? So it was like a machine learning system 10 years ago was like Power BI's graphic model, right? And then convolutional neural network and then all the foundation model that we see today. But if you look at this, I think that fundamentally the thing we are actually optimizing is actually not that different. It's always about data movement across essentially all the stacks, right? 
So when you do distributed like computing, it's about communication across different machines. When you do, for example, flash attention, it's about data movement at a different essentially memory hierarchy, right? So we have been doing this in the last 10 years and seeing the field start grow, grow, grow. So we kind of feel the current kind of this like wave of technology is actually the perfect time to actually bring all the research essentially into something real. And we are super lucky that we got introduced to Weibo, right? And then we hope to join forces and bring this to real world.Swyx [00:06:10]: It's an unusual team of like sort of research and industry. Like you've been like a third or fourth time founder now. Third time founder, yeah. And so like what is your first order of business when you like set up together? Like how do you sort of put something like this together? Oh my God, I'm going to use this word so much.Vipul [00:06:27]: I feel AI companies are really kind of driven by research. And Chris and I had been talking about how to reduce the cost of building models. We felt that there aren't really big data modes around foundation models. They are built from a subset of the web. What is difficult is the cost of capital to build these. And one of the ways in which you can reduce this cost is by making more efficient systems. With that, it was really about finding the right set of co-founders and team. In fact, when Chris introduced me to Ce, and I think within the first five minutes of talking to Ce, I was like, we are starting this company. And our early focus was thinking about this more sort of disparate set of resources, you know, GPUs around the internet. Can we use those to build? And we really have to compress communication for, you know, when we do gradient averaging, there's just a lot of traffic. And if you can reduce that somehow, you sort of open up the possibility of using cheaper compute, you know, across the network. And Ce's research for a decade has been in that subject. You know, and from there, finding, you know, other folks in the network, I think there is generally a lot of excitement and philosophical alignment around what we are doing, which, you know, we publish papers, we publish open source libraries and code, we build open models. And I think the people in academia in, you know, machine learning and NLP, that's really what they want to do. So I think that's been really a kind of kernel for, you know, composition of the company. And we're lucky to have, you know, at this point, attracted some of the best researchers in the field. So I think that's the most important thing. And, you know, the rest of it is sort of driven by us. A couple of these philosophies around independent systems and decentralization and good developer interfaces, you want to make it accessible. That's, you know, just as important. And the rest follows from there, I think.Alessio [00:08:43]: I want to try and fill in some of the blanks in the history of Together. I think people come on your website today and they say, you raised a hundred million dollars Series A. They're like, wow, these guys are like super legit company. But it feels like Red Pajama just came out a year ago. I remember we had Mike Conover in the studio, who had built Dolly at Databricks. And you announced it literally the morning we were recording. So we're like in the studio on our phones, looking at it. 
And it's like, wow, this is like the first time now there's like a good curated dataset to do open pre-training. So maybe let's start from there. Like, what was the motivation behind it? Why did you decide to do that? It's, datasets are one of the things that most people don't want to work on. They just want to do models, not datasets.Ce [00:09:27]: Yeah. So, yeah, first one is not the first, right? So I think it's actually built on a whole bunch of amazing effort the community already have. For example, Eleuther have the pile, right? There's a whole bunch of amazing datasets they have, like C4, right, from Google, right? So I think really get inspired by the impact those like datasets have on the community, right? So I think when we did Red Pajama, it was a time that people are really fascinated by Lama, the model, like Lama 1, right? Which I feel like decades ago, right? But it's kind of, people are really excited about the quality, right? So that's really like a big shift in people how to think about open model. People start to see hope, right? So, but the one problem of Lama is the data recipe is being described in a pretty detailed way in the paper, but the data is actually not there. So, and our original thinking is how about we take the recipe and we try to do our best effort reproduction and try to put it out, such that we can learn from our mistakes in the reproduction together, right? So that's essentially the original thinking behind Red Pajama. And we have been pretty happy and excited about what community have been kind of build on it. For example, there's a dataset called Slim Pajama, right? Which do deduplication over our data, right?Swyx [00:10:38]: From Cerebras, did they talk to you before?Ce [00:10:39]: Oh, yeah, yeah, yeah, yeah. So, yeah, so we are very good friends so we can discuss about technical perspective. We are pretty excited because I think it's kind of why we do Red Pajama in the first place is that people can actually build not only models, but also datasets essentially over that piece of artifact, right? So that's actually what inspired us to do the first version of Red Pajama dataset.Swyx [00:11:01]: Yeah, and then you released V2 maybe two months ago.Ce [00:11:04]: Yeah.Swyx [00:11:05]: 30 trillion tokens.Ce [00:11:06]: Yeah, 30 trillion tokens. So I think what's exciting about Red Pajama V2 is not only the number of tokens, but we start to kind of learn from Red Pajama V1. So one thing that we learned was that data quality is really the core, right? So you want to take this couple trillion token dataset and try to bring them down maybe to one trillion or two trillion, right? The way that you actually filter them, deduplicate them is not something that kind of pre-decided before you see the application, right? So you kind of want to have a modular framework to think about data quality, right? So like given application, let's automatically or maybe semi-automatically try to come up with a way to filter it down. So that's why in Red Pajama V2, we kind of overlay the dataset with like 40 different pre-computed quality signal, right? If you want to reproduce your best effort, like C4 filter, it's kind of like 20 lines of code, right? And this open up this opportunity you can actually put different filter together, learn the combination of filter. We are very excited to see what community actually come up with using Red Pajama V2.Swyx [00:12:11]: It was retrospectively so obvious that this is a good idea that I wonder how come more datasets don't do this. 
You release the dataset with all these toggles that you can turn on and off, right? And you can sort of tune up and down the quality in ways that you believe is important to you. Yeah, I just, it makes so much sense now in retrospect. Because everyone just publishes like their pipeline and then the end result. But what about all the intermediate stages? Yeah.Ce [00:12:35]: Yeah, so I think, so there are multiple things there. I don't think we are the only one like doing that. For example, like Doma from AI2, right? They have this very flexible format to actually put in those quality signals, right? Think like, we are actually calling them some, right? So you can actually load Red Pajama using their tool. That whole thing should work, right? So I think one fundamental thing that changed in the last year, essentially, in the beginning when people think about data, it's always like a byproduct of the model, right? You release the model, you also release the data, right? The data side is there essentially to show people, ah, if you train on this data, you'll get a good model. But I think what started to change is when people started building more and more of those models, people started to realize like different subset of data side is kind of valuable for different applications, right? The data becomes something to play with, right? So I think we are kind of lucky that we happen to release Red Pajama right at that point that we get this opportunity to actually learn from that.Alessio [00:13:34]: And you guys have a custom model training platform on Together 2. You have a bunch of stuff in there for data selection, like the DSIR and things like that. How did you decide to work on that versus, because you first started with like some of the fine tunes on LLAMA. Do you see a lot of interest there? And I know you've been doing a lot of research on state space models and other transformer alternatives. Like, do you also see that as something you'll keep working on this year and push more people towards?Vipul [00:14:02]: Yeah, I mean, we, you know, we think of how to make training more efficient and building models more efficient. Part of that is being able to select the right dataset. This is why you have signals, DSIR. You can start with a small dataset and find similar documents, build models with that. So we think it's an important part of the kind of model build tooling that, you know, sort of widely useful for people building different kinds of models. Similarly, you know, we are running into the limits of how fast you can make transformers. And we want inference at 5,000 tokens per second. I don't think we will get there with transformers and we need to learn longer sequences. Data, again, becomes very, very expensive with transformers. So I work on space state models and all the research that we are doing there. And hopefully other labs will pick up on this and make it a kind of important target for optimization. But we think that, you know, open source is a great place for this. We can provide these recipes for data and for training to our customers who are building, you know, custom models themselves. And, you know, we are quite excited about the sort of progress we are seeing there.Alessio [00:15:18]: Do you have some of these models available for inference on Together? 
Can people play around with them directly, you know?
Swyx [00:15:25]: Yeah.
Vipul [00:15:25]: Yeah, they're available for inference on our serverless platform.
Swyx [00:15:29]: I always try to be the person who asks about acronyms in case, you know, people want to understand. Should we explain importance resampling, you know, that kind of stuff?
Ce [00:15:37]: Oh, yeah. So DSIR, essentially, is a fundamental idea. It's one of the papers from Percy, right? Essentially, if you know what you are doing, you can actually use that as a very strong signal about what data to put into the training process, right? That's the fundamental idea. More concretely, there are different versions of DSIR, right? One version is: if you have a validation set, you can actually measure the similarity between the validation set and your pre-training corpus and essentially select a subset. And there's also a less targeted version of DSIR where you say, yeah, maybe Wikipedia is actually a very good corpus, let's try to find more Wikipedia, right? And you can think about it in two ways: either as a way to come up with different weights for different data slices, so as a filter type of step for a dataset, or as data augmentation. So that's how we think about DSIR.
Swyx [00:16:33]: That makes sense. I will have to read the paper to understand a little bit more. Because when you say things like, we have to know in advance what we are trying to do with the model and then we do importance resampling, that is against the principle of general intelligence, right? Like, the point is to train AGI.
Ce [00:16:48]: Yeah, so it depends on what you mean by being general or generic, right? I mean, you can always take a meta-learning perspective: we know the distribution of tasks that we care about, right? So you can always go up the ladder of how general the whole thing is. But also, for many of the customers that we are actually talking to, they have very targeted applications, right? The benefit you can get out of that is you could build a better open model, often smaller, often easier to do inference on, if you know what you want, right? So for the whole trade-off, the x-axis would be how generic the whole thing will be, and the y-axis would be not only the top accuracy, but also a whole bunch of deployment costs, right? The size of the model, the robustness of the model. So I think different people will navigate the space in different ways. And we want to be the platform where, essentially, whatever point you want, we have a solution for you.
Swyx [00:17:43]: One more thing on data before we go deeper on state space models. Are we running out of data? Can we go an order of magnitude further? Can we go five orders of magnitude? How do both of you think about how much data we have and how much we need?
Ce [00:17:55]: Yeah, so I think that's a very, very good question. So I don't think we are running out of data on Earth.
Swyx [00:18:02]: Right, so think about it globally. Training data, training-class data.
Ce [00:18:05]: Yeah, yeah, so I think, I mean, some of it is not accessible, right? But I do think there are many organizations in the world that have enough data to actually train very, very good models, right? I mean, they are not publicly available, right?
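Returning to DSIR for a moment: below is a toy sketch of the importance-resampling idea Ce explains above, using simple unigram models in place of the hashed n-gram features the actual paper uses. Everything here is illustrative, not the paper's implementation.

```python
# Toy data selection with importance resampling (DSIR-style):
# fit a unigram model on a small target set and on the raw pool,
# weight each raw document by p_target(doc) / p_raw(doc), then resample.
import math
import random
from collections import Counter

def avg_logprob(text: str, counts: Counter, total: int, vocab: int) -> float:
    # Add-one smoothed average log-probability of the words in `text`.
    words = text.lower().split()
    return sum(math.log((counts[w] + 1) / (total + vocab)) for w in words) / max(len(words), 1)

def dsir_select(raw_docs, target_docs, k, seed=0):
    tgt = Counter(w for d in target_docs for w in d.lower().split())
    raw = Counter(w for d in raw_docs for w in d.lower().split())
    vocab = len(tgt | raw)
    t_total, r_total = sum(tgt.values()), sum(raw.values())
    # Importance weight: how much more target-like than pool-like a doc is.
    weights = [
        math.exp(avg_logprob(d, tgt, t_total, vocab) - avg_logprob(d, raw, r_total, vocab))
        for d in raw_docs
    ]
    random.seed(seed)
    return random.choices(raw_docs, weights=weights, k=k)

raw_pool = ["the cat sat on the mat", "buy cheap pills now",
            "wikipedia describes the history of rome", "click here to win"]
target = ["rome was founded according to legend", "the history of the empire"]
print(dsir_select(raw_pool, target, k=2))
```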
But there are people who actually have access to those, right? So I think in general, if you think about data in the open space, which I guess is specifically what you mean when you ask whether we are running out of data, I do think there needs to be some way for people who are training open models to get connected with data that's not internet data, essentially. So that channel needs to be opened up for open models to get more data, right? But I'm on the optimistic side: society will figure out a way that we can train open models on data beyond this internet data.
Swyx [00:18:57]: Beyond internet, meaning books?
Ce [00:19:00]: I mean, there are a lot of those, right?
Swyx [00:19:02]: Books, right?
Ce [00:19:02]: Transcripts, right? Videos, audio, right? So there are a whole bunch of data sources that we are not integrating into the open data side, right? And maybe they shouldn't be open, right? So I think the community needs to figure out the best balance, such that we can have open models, but on the other hand also have a reasonable collection of data that we can actually use.
Swyx [00:19:29]: I think a lot of people think that... there's a theory that Whisper was released so that you could transcribe YouTube and then use that as a source of tokens. Then I talked to other researchers who are like, you know, YouTube has very low-quality tokens. You know, do you want your model to talk like a livestreamer from YouTube? Because that's what they're going to do. So it's not clear what the quality of this data could be.
Ce [00:19:53]: Yeah, I guess that depends on your application, right? So as a platform, our goal is: whatever application you have, we have a platform where you can actually achieve your goal, right? There are definitely applications where it makes sense to speak like YouTube, right? But there are probably also other applications that are more on the formal side, right? So I think there are going to be a diverse collection of models, both open and closed, right? And we want to be the engine that powers that.
Swyx [00:20:21]: There's a lot of people who own data sources who are doing the locally optimal thing, and humanity as a whole is losing out. So, like, the New York Times is suing OpenAI, you know, Stack Overflow shut down their API, Reddit shut down their API, X, you know, made their own model on Twitter data, right? We're just going to have all these tiny little gardens of data that would be useful in a general model, but everyone's just trying to make their own model. And it seems globally suboptimal.
Vipul [00:20:47]: I think you need to have some kind of a marketplace for figuring out how to get this data into models, and I think we'll increasingly see more of that. You know, I think there's a positive aspect to it too. There is an incentive for creators to participate in a system which is more fair, relative to, you know, the capture of value by an AI company that's taking their data. But I agree, I think this is a big open problem that needs to be solved. And I hope there will be serious efforts around it.
Alessio [00:21:19]: Let's talk about the most precious resource on planet Earth: GPUs. You have a lot of compute, obviously, but you also have a lot of product pieces. You have inference, you have fine-tuning, you have pre-training. What's the split in terms of usage?
Do you see most people just running inference on off-the-shelf models? Do you see maybe some last-mile fine-tuning?
Vipul [00:21:40]: I would say right now, the top five models on our inference stack are probably all fine-tuned versions of open models. And we've seen...
Swyx [00:21:51]: Who fine-tuned them? You fine-tuned them?
Vipul [00:21:52]: They were fine-tuned by our customers.
Swyx [00:21:54]: By your customers.
Vipul [00:21:55]: You know, either on our platform or off our platform. And we are generally seeing that, you know, that is the trend: you can get better quality on your task by now easily adapting these models to your data. We also have, I would say, over 20 big model builds happening on the platform, which are customer builds. We see a lot of training, and it's also, somewhat surprisingly, a more continuous kind of workload. We sort of imagined that this would be more episodic: you train a model and then you do inference. But what we find is, you know, they train a model and then they train the next version, and then the next version, which sort of grows in scale. I would say training is still the bigger portion. In some ways, inference is super-linear to model quality, and as the models are getting better, there's more and more inference.
Swyx [00:22:48]: Oh, because they're more useful. Yeah, they're more useful, yeah. So, okay, so training is bigger. This is actually consistent with what we've heard from Mosaic: that, you know, people think that training is sort of a one-time deal, you do one big run and then you're done. It's never true. And so I'm interested in putting some numbers to it, and I don't know what you have disclosed or what you want to disclose, but, like, how many GPUs do you have? What is the equivalent amount of compute that you have? Because I understand that your GPU setup is different than what people typically think of, like a giant data center somewhere, right?
Vipul [00:23:20]: I don't think we have shared this number publicly. So this will be the first time, I guess. We have close to 7,000 to 8,000 GPUs today, and it's growing monthly.
Swyx [00:23:31]: What class of GPU are they?
Vipul [00:23:32]: They're mostly A100s and H100s.
Swyx [00:23:35]: Okay.
Vipul [00:23:36]: And probably more, I think, split towards H100s now. You know, we'll be building out this best-of-class hardware. So as there are other versions of these coming out later this year, we plan to have those in the fleet as well.
Alessio [00:23:53]: I know when we talked last year, you were also using some of the supercomputers from the Department of Energy. There was kind of a lot of random GPU compute in the world. Have you seen that kind of getting timed out? I think maybe a year ago, people were like, oh yeah, you can use this GPU computer that is going to be end-of-life. Has the bar changed to give access to those resources?
Ce [00:24:13]: From our perspective, it's actually getting better. Yeah, so from the community perspective, many of the institutions in the world are actually investing in hardware, right? For example, we are working with one of the institutes in Germany called Hessian AI, right, which gives us a lot of help on the compute side. They have started to build this very big GPU cluster, and they're actually sharing it with the community, right? And it's not super big, but also not a small one, right? So you start to see these different sites start to pop up, right?
And because of the power of the community, they start to actually share that. So we actually find that, for a researcher today, it's probably easier to actually get a GPU than last year.
Swyx [00:24:56]: Interesting.
Alessio [00:24:56]: And then for you to buy them, what's the state of the market right now? Is it still extremely hard to get any? Do you have Jensen's phone number? Do you have, like, the GM's phone number? Do you guys get, like, the SDR because you're, like, under 10,000?
Vipul [00:25:12]: NVIDIA is obviously motivated to help us, both as an investor and because we are their customers. I would say the market is very tight still, and it's likely going to be this way for a while. My sense is that the demand for AI computing has just ramped up very, very quickly, and it will take a while for supply to catch up.
Swyx [00:25:37]: So how tight is it? Let's say compared to a year ago, two years ago, what do you mean when you say tight? The things you want, you can't get?
Vipul [00:25:42]: You can't get them immediately. They're, you know, minimally two to three months out. Any inventory that shows up tends to clear very, very rapidly. And, you know, we obviously look at this in a very detailed and analytical way. There are four to five million GPUs that will be sold this year from NVIDIA and others. And if you think about a 512-to-1,000-GPU cluster for a company, that's 4,000 to 8,000 companies, right? So in some ways it's a very small number. In other ways, the cost of the GPUs will be 80 to 100 billion dollars, and then you layer servers and data center space and electricity on top of that, and that's close to $250 billion worth of compute. When you compare it to the cloud computing of today, you know, AWS last year was $88 billion in revenue. So this is really a build-out of AI hyperscalers happening. It is much more disaggregated, and it's very, very global. So, you know, we think that GPUs are going to be a precious resource for a long time, and using them optimally is very valuable.
Swyx [00:27:02]: Yeah.
Alessio [00:27:02]: Our friend Dylan Patel from SemiAnalysis wrote a post about the inference market recently and obviously mentioned you guys. In his post, he said: our model indicates that Together is better off using two A100 80GB systems rather than an H100-based system. The temperature and performance testing also point to Together utilizing speculative decoding. Any thoughts? Is Dylan right? I don't know, what's...
Swyx [00:27:26]: What is his model, man? What does he know that they don't know? Yeah, exactly.
Alessio [00:27:30]: I want to know. I guess, from the outside, and sometimes we even do it ourselves, we try to speculate on what people are actually doing. So for the first time, now we have a former guest writing about a current guest. So we want to know what you guys thought, and maybe what are some of the misconceptions that people on the outside have about what it takes to run a GPU cloud today?
Vipul [00:27:50]: Yeah, big fan of Dylan's, by the way. I religiously read SemiAnalysis. I think there were some errors in that analysis. In particular, we were trying to decode it, and one of the things we noticed is that it assumed that input tokens weren't being priced. So I think that may have been an error in the model. I also don't agree with the assumption that people are running this at a loss. I think it's very expensive; you can't do that for very long.
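A quick back-of-envelope check of the build-out numbers Vipul quotes above; all inputs are the figures mentioned in the conversation, not independent estimates.

```python
# Back-of-envelope check of the GPU build-out math quoted above.
gpus_sold = 4_000_000          # low end of the "four to five million" figure
cluster_sizes = (512, 1_000)   # GPUs in a "company-scale" cluster
gpu_capex_b = (80, 100)        # billions of dollars spent on GPUs alone

clusters = [gpus_sold // c for c in cluster_sizes]
print(f"enough for roughly {clusters[1]:,} to {clusters[0]:,} company-scale clusters")
# -> roughly 4,000 to 7,812, i.e. the "4,000 to 8,000 companies" figure

# Servers, data center space and electricity bring the total to ~$250B,
# so the GPUs themselves are only part of the spend.
print(f"GPUs: ${gpu_capex_b[0]}B-${gpu_capex_b[1]}B of a ~$250B build-out")
```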
And there are trade-offs in terms of the batch sizes you use and the kind of tokens-per-second performance, which are system trade-offs. We've done a lot of work here; this is one of the key areas of research for us. Our inference stack is a combination of 50 different tricks and techniques, and we think there's a lot of room for optimization. So whichever hardware provides better performance, whether it's H100s or A100s or L40s, we can measure price-performance on particular hardware, and we tend to use that for that model. Or, in some cases, certain customers have data streams which can then be optimized for a particular configuration regime. So we do fairly detailed work on how to make this more efficient, and so it's hard, from the outside, to look at memory bandwidth and estimate what's actually happening.
Alessio [00:29:26]: How many of these 50 tricks are you keeping to yourself, and how many are you going to open up? Because you have Tri now; obviously Flash Attention 2 is open source, and he mentioned he'd love to come work at Together because of how much you care about open source. How do you weigh that as a CEO and CTO?
Vipul [00:29:43]: A lot of it is open, right? Flash Attention, Flash Decoding, et cetera. When we build something that's very generally, universally useful and is going to produce better open source AI, we tend to publish it as open source. On the inference stack, there are open source inference stacks which are pretty good, and definitely, today, it gives us a competitive advantage to have the best one. So we are not rushing out to release everything about it. It's not overall that additive to the open source that's out there, and it is particularly useful as a business for us to provide the best price-performance. Yeah, we make these decisions, we have discussions. Anything that we keep closed, we generally talk about quite a bit and decide: this is the piece that is closed for today, and it may not be the case six months from now; it may not matter as much.
Ce [00:30:40]: Yeah, so I think being open is very important, right? The whole company is actually built on the idea that there's going to be an ecosystem built on our open models, right? And that's also how we've been really lucky to attract this top group of talent to actually join us, because of the dream and the mission that we have to really facilitate the open ecosystem, right? So in general, I think all the ideas should be open. That's why we publish papers, right? We actually talk about ideas, right? I don't think it makes any sense to keep ideas closed, right? But there are some software artifacts that are really deeply embedded into our own stack, which are kind of only useful when you're trying to build a disaggregated cloud, right? Maybe at some point we're going to open those up, as people have said, right? But at this moment, we are busy actually building it, right? So that's probably the picture of when that piece is going to be open, right? But on the research side, the ideas, and for our people to publish things, I think that's really, really important, right? That's how we get talent, and that's how I think we as a company are going to move the field forward.
Swyx [00:31:49]: I noticed that you never used the words federated learning or inference.
Is there a distinction that you draw?
Ce [00:31:55]: So, I mean, it's definitely not intentional, but I think federated learning has been used in so many different ways by so many different people that it starts to lose a very precise meaning of what it really means, right? If you go back to the original Google paper on federated learning, I think that's very different from what people are talking about today when they say federated. Yeah, we want to be really precise about it.
Swyx [00:32:18]: And so your term is disaggregated.
Ce [00:32:19]: Yeah, so as an infrastructure, right? So that's disaggregated.
Swyx [00:32:22]: Aren't most clouds disaggregated? Like, what's different about it?
Ce [00:32:27]: So one way to see it is that most of the clouds are disaggregated, but some of that is actually being exposed to the user, right? If you go to AWS, you do know which region you are in, right? So one thing that we are trying to do is: you have this disaggregated cloud, not only in terms of location, or where it is geographically, but in terms of the reliability and also the diversity of this infrastructure. And if we want to build a reliable, high-quality layer over that, the user actually doesn't need to know, right, what's actually happening under the cover, right? So I think that's one of the differences in the way that we are thinking about infrastructure.
Swyx [00:33:06]: Yeah, a bit closer to Cloudflare than AWS. Yeah. Yeah. We have one question here, which we'll just throw out, it's kind of fun. So, going back to this inference stack piece: maybe if you had to put out a call for researchers, or just point out interesting areas of work that you're interested in, what pieces of the stack have the most opportunity for improvement?
Ce [00:33:27]: Yeah, so I think the way we are thinking about the inference stack is that there are multiple things that can happen, right? You can do better algorithms, like speculative decoding; you can change the model architecture; you can go really crazy on the system side, right? And you can also co-design it with the hardware, right? So it's not really clear that innovation on a single dimension will get you there. The key thesis on our side is: if you only push in one direction, you are going to reach diminishing returns really, really quickly. Yeah, there's only so much you can do on the system side, only so much you can do on the algorithm side. I think the only big thing that's going to happen is when you have all those dimensions actually compound, right? To have algorithm, model, and system all come together, I think that's how we reach the next 10x improvement in inference, right? So I don't think there's a single dimension that is particularly important; it's about looking at this space in a joint way, right? Trying to co-optimize multiple dimensions jointly, I think that's going to be really important for the community to look at.
Vipul [00:34:28]: Yeah, we often see... I see numbers from the team, and you have these multiple methods, and not all of them compound. So you mix some of these together and it's still similar results, and then some combination of them will have this incredible effect that is really, really super interesting. So it's very much a broad systems approach that's the most effective.
Swyx [00:34:51]: I think I finally get the name of the company, like, bring it together, yeah.
Everything needs to be automated together.
Alessio [00:34:57]: All right, just quickly, how does all this work change as some of the architectures change? I know that with a mixture of experts, speculative decoding is a little less efficient because of memory bandwidth. How much do you invest when it's maybe a model-specific improvement versus a more horizontal thing? Also, you're researching different architectures, so how much do you want to spend optimizing what's state of the art today versus what's coming next?
Vipul [00:35:24]: We do spend time on what's state of the art today as well as what's next. You know, the value we get from doing specific optimization, even for, you know, what works well for a particular model on A100s with a particular bus versus H100s, is a worthwhile investment for us. So we will go down fairly deep into a specific architecture and specific hardware. It also informs what works better where, and you don't have to take the same approach for, you know, every model and every hardware setup. We can take these different approaches, and we do have these multiple systems now. We know that system B is better for Mixtral and system C is going to be better for Stripe Hyena or Mamba.
Alessio [00:36:13]: Before we move on from inference, we need to talk about the Anyscale drama. So we're actually having Sumit on the podcast tomorrow, who also talked about it and kind of came to your guys' support about how important it is that it's not just, oh, Together saying this benchmark's not good because they look bad in it. I guess it's a hard question to ask, but why did you decide to just come out and say it? And how does that maybe also reflect the values that you guys have about open source and openness, and being transparent about what's real, and maybe hopes for standardizing some of these benchmarks to make it more clear?
Ce [00:36:56]: So it's a great service Anyscale is doing for the community, right? I mean, it's very hard to do a benchmark. The moment you do a benchmark comparing N players, N minus one will be unhappy. If you have two tables, then maybe N of them will be unhappy, right? So it's a very great thing that they're doing. And in some of the work that we are doing, we actually use LLMPerf, right? So it's a great thing that they're actually doing. So I think one thing about benchmarks, and probably the professor part of me is talking here, is that a good benchmark should think about how it's going to incentivize the field to actually move forward, right? If the benchmark really becomes a kind of standard, how are people going to over-optimize for the benchmark? And when people are doing that, what are we actually trying to incentivize, right? Will that move the world to a better place? Or will that essentially have every single player focus on marketing, or spending time or money on something that actually does not matter on the technical side, right? It's very hard to strike a balance, right? So I think the reason we tried to give feedback on the benchmark is that we want to open up a discussion about how the industry should come together and define maybe a common way that we compare with each other, right? Like how the database people do TPC, right? Maybe we should have something similar, right? So we are trying to start some of that conversation.
So it's not really that we jumped out to say it's not good, because there's no way to have a perfect benchmark; that doesn't really exist, right? We just tried to kickstart a conversation that maybe we should come together and do something that the community agrees on and that aligns with the benefit a user is going to get, right? So, just get the conversation started.
Vipul [00:38:42]: I've spoken to the Anyscale team after that, and I think they had really great intentions. And partly, I think it felt very objective, and everyone sort of had a reaction to it because it just didn't match the benchmarks that we've all run internally against different services. I think a common industry benchmark run by an independent party would be better than one run by one of the vendors.
Swyx [00:39:04]: Is there one that you point to?
Vipul [00:39:06]: I don't think one exists today. I think there should be. We're having some conversations about someone setting one up. And there are lots of interesting aspects of this. Time to first token is a function of where the test was run from. There is different load on these services at different times of the day, and on weekdays versus weekends. So you have to measure that well. And I think if all of that were done very well by an independent source, that would be a very useful service to customers and to the services themselves.
Swyx [00:39:39]: Yeah, I'll point people to artificialanalysis.ai, which is a new one that recently emerged. I don't know if they've done it right. It looks like a side project of a couple of people. But I think it's in all the providers' interest to work with them and ensure that there's an independent third party that's measuring these things, right? At least on the baseline. For me, what's worrying is more about what Ce was saying, which is: do these benchmarks skew things in ways that customers might not be mindful of? Like, what are these things overemphasizing that we might be missing? And I don't really know. It seems like a lot of these services, bundled together, use a version of quantization as well. So that means there are performance trade-offs, right? You're not comparing apples to apples, the same model itself, even though it's, like, a Llama variant or whatever. So what do people trade off? They trade off latency, they trade off price. Obviously, those are the first two. But what else, right? What factors matter in an inference business?
Ce [00:40:33]: Yeah, so I think there's also the throughput, right? There's the time to first token, right? And then there are things that users do not often see, for example, the reliability, right? The capacity, right? Those also have an impact on the user experience at a global scale, maybe not for a single query, but in aggregate. You can also see a whole bunch of choices, like whether you are emphasizing P50 or P95, right? So there's a whole bunch of things that you can actually play with. And of course, there's also quality. There are different ways to actually make the whole thing faster: speculative decoding, quantization, or a combination of those, right? So, yeah, there are so many things to play with. So they probably need a benchmark where the protocol is transparent, to make sure it's very clear what we are doing, with a whole bunch of checks on quality to make sure we are putting the right group of setups in the same table. Then, essentially, the user can actually navigate the space. So I think that's going to be good for everyone.
Swyx [00:41:27]: Yeah, makes sense.
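As a concrete version of the metrics Ce lists (time to first token, latency percentiles, tokens per second), here is a minimal measurement sketch against a streaming endpoint. The stream_completion callable is a placeholder for whichever client you use; it is assumed to yield tokens as they arrive.

```python
# Sketch: measure time to first token (TTFT), end-to-end latency and
# tokens/second for a streaming completion endpoint. `stream_completion`
# is a placeholder that yields tokens as they arrive.
import time
import statistics
from typing import Callable, Iterator

def benchmark(stream_completion: Callable[[str], Iterator[str]],
              prompts: list[str]) -> None:
    ttfts, totals, throughputs = [], [], []
    for prompt in prompts:
        start = time.perf_counter()
        first = None
        n_tokens = 0
        for _ in stream_completion(prompt):
            n_tokens += 1
            if first is None:
                first = time.perf_counter() - start
        total = time.perf_counter() - start
        ttfts.append(first if first is not None else total)
        totals.append(total)
        throughputs.append(n_tokens / total if total else 0.0)

    def pct(xs, q):  # crude percentile over the measured samples
        xs = sorted(xs)
        return xs[min(len(xs) - 1, int(q * len(xs)))]

    print(f"TTFT  p50={pct(ttfts, 0.5):.3f}s  p95={pct(ttfts, 0.95):.3f}s")
    print(f"E2E   p50={pct(totals, 0.5):.3f}s  p95={pct(totals, 0.95):.3f}s")
    print(f"tok/s mean={statistics.mean(throughputs):.1f}")

# Fake streamer, just to show the call shape.
def fake_stream(prompt: str):
    for tok in prompt.split():
        time.sleep(0.01)
        yield tok

benchmark(fake_stream, ["hello world this is a test"] * 5)
```

A quality check on the outputs would sit alongside this; that is the part Ce emphasizes is needed to keep the comparison apples to apples.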
It's a very important field, and I hope a good third party emerges from this. So I just want to touch on one more piece, which is, I think I'm appreciating from this discussion that fine-tuning is a bigger part of your business than I thought. The other big player in fine-tuning is Mosaic. Well, Mosaic is more training, but there's a bunch of other players in the fine-tuning space. If I were a prospective fine-tuning customer, what do I come to you with? Do I come to you with my custom data and that's it? Do I also have to write the fine-tuning code? What level of engagement do you do with your customers?
Vipul [00:42:01]: I think across the spectrum. Our customers are training models, pre-training models from scratch, and many of them will bring their datasets and use our infrastructure and training stack to train their models. There are others who have trained smaller models and want to scale up: scale up across infrastructure, scale up across data. So we'll help them do that. And there are customers where the engagement starts a little bit more consultative: they have a particular task and idea in mind, and we will help them get from there to the dataset and the right model to achieve that task. So it's a spectrum, and, you know, our goal is to productize as much of this as possible, so that the whole process can be fast and scalable. I would say there is a lot more understanding around fine-tuning now, even compared to the last six months. There are, you know, open source tools, recipes, literature, podcasts, Discord channels where people are figuring this out. And it really is, in many ways, one of the successes of open source: you have small collectives of, you know, engineers who have created, who are now creating, the top models on open source leaderboards, and who have tried out all sorts of different data recipes, creating synthetic data... Merging models. Merging models. So that's really fun to see. And I think that sort of agency that exists now is exciting. And we see a lot of that being applied in products and, you know, more commercial models that people are deploying in their applications.
Alessio [00:43:50]: And then, I guess, to wrap up on Together, it's almost becoming like a platform as a service, because now you've released Together Embeddings. How did you get 92.5 accuracy on 32K retrieval? And do you think we're kind of done with embeddings, like we've done everything we could and they're about as optimized as they're going to get, and we should just focus on models and inference? Or do you think there's still room there to improve?
Ce [00:44:17]: Oh, I don't think we have even gotten started on embeddings. Yeah. So I think there are so many things. Embeddings are really fundamental for many things, for example, RAG, right? It's deep in the application; that's how people bring knowledge in. It's also a fundamental piece when you want to build a better model, right? It gives you this understanding of what actually goes into the model. You can use that to build a better dataset, get a better model, then get better embeddings, and you start this loop, right? Without good embeddings, the loop is not closed, right? So I think both on the quality side, how to embed more dedicated semantics into those vectors, how to deal with negation, for example, right? And how can you make the whole thing really, really fast?
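As a minimal sketch of the embed-and-retrieve loop behind RAG that Ce refers to: embed documents once, embed the query, rank by similarity. The embed function here is a toy stand-in so the example runs without any model or API; in practice it would call an embedding model or endpoint.

```python
# Minimal sketch of RAG-style retrieval: embed documents, embed the query,
# rank by cosine similarity. `embed` is a toy hashing-based stand-in for a
# real embedding model.
import math

def embed(text: str, dim: int = 64) -> list[float]:
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]        # unit-normalize

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = ["Ray schedules tasks across a cluster",
        "Embeddings map text to vectors for search",
        "Speculative decoding speeds up inference"]
print(retrieve("how do embeddings work", docs))
```

The loop Ce describes (better embeddings give better data, which gives a better model and better embeddings again) hangs off the quality of the embed step, which is why negation handling and speed both matter.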
So I think for the next couple of years, yeah, we will see a whole bunch of new embeddings, maybe of different sizes and much, much faster than today. Yeah, so I think it's a very active research area. I think people should invest more in it, yeah.
Swyx [00:45:14]: I was surprised to see, I think there's Jina AI, and then there's another one, Tengyu's Voyage. They are coming out as startups purely focused on embeddings.
Ce [00:45:25]: Yeah. Yeah, so I think it's a very, very important piece of the system, right? People haven't focused a lot on them before, and they should definitely start to do that.
Swyx [00:45:36]: Yeah. Why are the Chinese universities so good at embeddings? You know what I mean, right? Like BGE and... Yeah, yeah, yeah.
Ce [00:45:44]: So I don't know. We just released our first embedding model, so we are still learning how to build an embedding model. Yeah, so ask me again in six months; I'll probably have more insight about how to build a better one.
Swyx [00:45:53]: I just noticed that ada-002 used to be at the top of the MTEB leaderboard, and then it's just been sliding down and down and down, and all the new models are coming out of China for some reason. And I'm like, I don't know what's going on there. So, we cannot leave this discussion without talking about state space models. But first of all, how much of the company is dedicated to research? It's obviously not production quality yet, but...
Vipul [00:46:17]: I would say it's like 40, 45%; I was counting this morning.
Swyx [00:46:22]: That's huge. Yeah, so that's the biggest... It's a big investment. Yeah. Okay, well, I mean, it looks like it's paying off, so. And then, high level, I will confess, or admit, or mention, for the listeners who are also similarly skeptical: I did not used to care about long context, because I was like, you know, 30K is enough, 100K is enough, right? I'm not, you know, modeling DNA sequences or anything like that. Why do I need long context? And, I mean, first of all, I'll throw that open to you. But second of all, I think what Mamba did for me was change that perception: the perception that it's only about long context, that the only reason you want sub-quadratic architectures is for long context. Actually, that's not true. It's also just more efficient to train, period. Right? I'll just leave that open to you. Like, what's the motivation that people should keep in their heads?
Ce [00:47:09]: There are multiple things, right? So one thing is that, I mean, the moment a model can do long context well, it often means that it's cheaper. I mean, in principle, a transformer can do long context; it's just very expensive. So I think what those state-based models are trying to do is push the size of the state, right, to be as small as possible. That's how you get long context, right? And try to decouple this quadratic dependency, right, to make sure you can have a much better execution pattern. One direct consequence of that is you can do long context really cheaply, but on the other hand, it also introduces a whole bunch of benefits even when you are not doing long context, right? So I think that's actually probably equally important. Because the state gets smaller, you can do really large batch sizes, right? You can actually be much faster, right? So, yeah. And another thing is, one of the hypotheses that we have is, like in Stripe Hyena, it starts to have a hybrid architecture, right?
Part of it is a state-based model and part of it is still a transformer, so different components probably deal with different things better. So maybe by putting them together, by thinking about how information propagates over the whole horizon of this context, you can probably get an even better quality model than a transformer, right? So I think that's why we are investing a lot in those models: not only for the long context, which is very important, but also for the whole bunch of benefits they could bring.
Swyx [00:48:42]: Yeah. How should people treat the distinction between Mamba and Stripe Hyena? Like, what's the point of releasing these two as separate models? Is one sort of the Together proprietary one, and then the other is the more open research one?
Ce [00:48:53]: Yeah. So I think it's pretty much a different stage of exploration. They have different hypotheses behind them when we try to build them. Yeah. Like, for instance, there are different views of state-based models: one is Hyena, another is Mamba, right? They're actually different architectures. So when we built Stripe Hyena, the curiosity that we had was: what is the highest-quality non-transformer model we can ever build? The goal of Stripe Hyena was to see whether we can match Mistral, and, by fine-tuning well, whether we can outperform it in some way, right? So it has a very, very strong baseline that we are trying to beat. That's why the hybrid came into the picture, right? And for Mamba, it's kind of more... the curiosity was: how far can we push a pure architecture? So we started very systematically, from small to large, all the way to 3 billion, right? And the baseline was essentially the best 3-billion model. So I guess they are at different stages of exploration; at some point, I think they are going to converge. We actually learn different things when building different models. I think they are just intermediate stages in the exploration, at different points.
Alessio [00:50:02]: You mentioned the hybrid architecture. Is that the model grafting that you mentioned in the Stripe Hyena post, where you mention you can have transformers and non-transformers together? This is a concept that I hadn't heard of before reading about this. I think most people's mental model is transformers OR something else; it's not transformers AND something else. How do you train a model that is hybrid? Is there any difference in how you construct your datasets? Is there any difference in how you run inference on it? How should people think about starting research in this field?
Ce [00:50:36]: Yeah, so we were also very surprised, yeah, when we came up with this hybrid architecture. So the way to think about it is that you have different layers in the neural network, right? The state-based model for some layers will already give you the benefit. The other layers could be transformers, right? They could give you this more global view of the sequence, but the other layers don't have to have that, right? They can still have all the other things that kick in, right? So we don't know what the optimal mixture between different architectures is. I mean, in principle, we could have Mamba, Hyena, and transformer layers all come together, right? And then you can see what makes sense. We have no idea what is optimal there.
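A minimal sketch of the layer-mixing idea, assuming PyTorch: most blocks are attention-free (a gated causal convolution standing in here for a Hyena- or Mamba-style mixer), with attention interleaved every few layers for a global view of the sequence. This illustrates the hybrid concept only; it is not the actual Stripe Hyena architecture.

```python
# Sketch of a hybrid stack: attention-free token mixers for most layers,
# attention interleaved every few layers. Illustrative only.
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """Attention-free mixer: depthwise causal conv plus a learned gate."""
    def __init__(self, dim: int, kernel: int = 4):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel - 1, groups=dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):                               # x: (batch, seq, dim)
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + h * torch.sigmoid(self.gate(x))      # residual + gating

class AttentionBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        h, _ = self.attn(x, x, x, need_weights=False)
        return x + h

class HybridModel(nn.Module):
    def __init__(self, dim: int = 128, depth: int = 8, attn_every: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionBlock(dim) if (i + 1) % attn_every == 0 else GatedConvBlock(dim)
            for i in range(depth)
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = HybridModel()
print(model(torch.randn(2, 16, 128)).shape)  # torch.Size([2, 16, 128])
```

Which layer types go where, and in what ratio, is exactly the open question Ce describes; the sketch just shows that the blocks compose like interchangeable parts.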
So what we are excited about is that now the community has a whole bunch of building blocks that they can put together like Lego, right? So, just put them together and see what happens, right? We are very excited about that. Yeah, we are in the process of trying to learn more about this architecture, and when we know what we are talking about, we will definitely share with the community how to do that in a systematic way.
Swyx [00:51:41]: Cool. What are we still unsure about? Like, why don't we just, you know, put all the money in the world into training these things now? Like, what is left to figure out before we scale this thing?
Ce [00:51:53]: So, if you look at how the transformer has been developed, right, in the last five to ten years, people didn't start from "you have this Attention Is All You Need paper, now let's put all the money in," right? It always starts from a very systematic understanding of the scaling, of data quality, of essentially the limits, right? I think for state-based models to go from the labs to the real world, you kind of need to go through the same process. But of course, the second time doing that is easier, right? Still, I don't think there's any way we can skip this systematic step of studying scaling laws, studying what data to put in, right, and what the impact of different data slices is on the final model quality.
Swyx [00:52:33]: Do you expect that the data inputs will be different?
Ce [00:52:37]: I don't know, but I wouldn't take it for granted that they should be the same, right? So we have no opinion on that, because I think that's the result of the study, not the assumption. Yeah, we do not need to assume that.
Swyx [00:52:51]: Okay, scaling laws and data; anything else, like architecture, that we are not sure about? Because now you have this selection mechanism that you're pretty happy with.
Ce [00:52:59]: Yeah, so, I mean, first of all, how to mix them, right? And second is the architecture itself. If you look at the transformer, one very interesting piece there is that people also optimize the hardware, yeah, to make sure that things run very fast, right? There are very efficient kernels, very efficient hardware, and that adds another boost for the transformer architecture, right? So that's something that should happen for state-based models too. Which architecture is easier to run on the hardware, right? So things go faster, you can put in more data, and it adds another dimension to the scaling law. So I think we just need to plow through the whole space and be really systematic, from small models to 1 billion, 3 billion, 7 billion, and just go all the way up, right? I wouldn't jump around in the space; I would just be patient and be systematic. Yeah, I think we'll get there, yeah.
Swyx [00:53:52]: Yeah, well, I'm looking forward to more research from you guys to figure that out. So, one dimension which we didn't talk about: we talked about long context, we talked about efficiency, but speed is also very important. A good inference provider provides, let's say, 70 tokens per second, and maybe that's faster than less good inference providers that are more like 30 tokens per second; but that's the rough range for state of the art today, right? That's around human speaking speed; human reading speed is about 200 words per minute.
Why do we need 5,000 tokens per second is my question back to Vipul. And maybe, is this something that is an emphasis for research as well, or is this more just an inference-only thing?
Vipul [00:54:29]: There are applications that are consuming the tokens produced by a model, so they're not necessarily being read or heard by humans. That's a place where we see that level of requirement today that really nobody can quite satisfy. There is also the question, as intelligence grows, of how you increase the bandwidth and reduce the latency of it. If we can do 5,000 tokens a second, the throughput of that card goes up significantly, and it can support more applications. So I think it's important from that perspective. And then, it opens up new UX possibilities. Once you can get sort of an immediate answer
Note for Latent Space Community members: we have now soft-launched meetups in Singapore, as well as two new virtual paper clubs/meetups for AI in Action and the LLM Paper Club. We're also running Latent Space: Final Frontiers, the second edition of last year's annual demo day hackathon.
For the first time, we are doing an audio version of the monthly AI Engineering recap that we publish on Latent Space! This month it's "The Four Wars of the AI Stack"; you can find the full recap with all the show notes here: https://latent.space/p/dec-2023
* [00:00:00] Intro
* [00:01:42] The Four Wars of the AI stack: Data quality, GPU rich vs poor, Multimodality, and RAG/Ops war
* [00:03:17] Selection process for the four wars and notable mentions
* [00:06:58] The end of low background tokens and the impact on data engineering
* [00:08:36] The Quality Data Wars (UGC, licensing, synthetic data, and more)
* [00:14:51] Synthetic Data
* [00:17:49] The GPU Rich/Poors War
* [00:18:21] Anyscale benchmark drama
* [00:22:00] The math behind Mixtral inference costs
* [00:28:48] Transformer alternatives and why they matter
* [00:34:40] The Multimodality Wars
* [00:38:10] Multiverse vs Metaverse
* [00:45:00] The RAG/Ops Wars
* [00:50:00] Will frameworks expand up, or will cloud providers expand down?
* [00:54:32] Syntax to Semantics
* [00:56:41] Outer Loop vs Inner Loop
* [00:59:54] Highlight of the month
Get full access to Latent Space at www.latent.space/subscribe
By now, almost everyone has heard of generative AI and its potential to revolutionize many industries. But does the hype align with the reality? As businesses search for ways to harness its power and gain a significant competitive edge, there are many questions: Which current generative AI opportunities can businesses start to take advantage of today? And what limitations do we still need to overcome? In this podcast, we dive into key benefits as well as challenges of building generative AI business solutions. We also talk about how to successfully build and deploy solutions that can help businesses take their efforts to the next level. Join us as we explore these ideas with:
Teresa Tung, Cloud First Chief Technologist, Accenture
Ramtin Davanlou, AI & Analytics Principal Director, Accenture
Waleed Kadous, Chief Scientist, Anyscale
Christina Cardoza, Editorial Director, insight.tech
Teresa, Ramtin, and Waleed answer our questions about:
The rise of and interest in generative AI
Considerations for deploying generative AI solutions
Taking generative AI out of the experimentation phase
Biggest opportunities and use cases for generative AI
The importance of partnerships and collaboration
What's next for generative AI
Final thoughts and key takeaways
Related Content: For the latest innovations from Accenture and Anyscale, follow them on: Twitter: @Accenture_US and @anyscalecompute LinkedIn: Accenture and Anyscale
Waleed Kadous, Chief Scientist at Anyscale, is one of my go-to experts for best practices on building applications leveraging large language models.Subscribe to the Gradient Flow Newsletter: https://gradientflow.substack.com/Subscribe: Apple • Spotify • Overcast • Google • AntennaPod • Podcast Addict • Amazon • RSS.Detailed show notes can be found on The Data Exchange web site.
Our Bolt Action Christmas gift guide, with some of the usual wargaming and hobby advice sprinkled in. Ready to do your Christmas Bolt Action shopping? Check out the ranges at Firestorm Games and Wayland Games: not only will you get great discounts, you'll also help us continue to create great content for you every week! https://www.firestormgames.co.uk/wargames-miniatures/bolt-action/?aff=64a025ee621f1 https://affiliates.waylandgames.co.uk/1240.html Some non-affiliate links for things we like and discussed in the episode: Charlie Foxtrot (for quality mdf terrain and pin markers): https://www.charliefoxtrotmodels.com/products/dial-counter-pin-marker-gaming-aid-5-piece-set) Anyscale models (for scatter terrain): https://www.anyscalemodels.com/shop/scale-1-56-for-28mm.html Army Painter wet palette: https://www.firestormgames.co.uk/army-painter---wet-palette Double sided gaming mat: https://www.waylandgames.co.uk/gaming-mat-brown-city-bb956 3D files for printing: Deweycat: https://www.wargaming3d.com/vendor/deweycatproductions/ Stackable tray for Vallejo paints: https://www.thingiverse.com/thing:4581816 You can also support our endeavour to produce weekly listenable Bolt Action content on Patreon: https://www.patreon.com/TabletopTommies Support these two mugs by buying a fancy mug for someone this Christmas: https://tabletoptommies.com/collection/all-products/
In this episode of B2BaCEO, I speak with Robert Nishihara, co-founder and CEO of Anyscale. Anyscale's aspiration is to build the fastest, most cost-efficient infrastructure for running LLMs and AI workloads. When it is successful, Anyscale will be to the AI era what Microsoft was for the PC era: the underlying operating system on which all AI applications are developed and run. Anyscale is built on Ray, an open-source compute framework that Robert and his co-founders developed as PhD students at UC Berkeley. Under the guidance of Professor Ion Stoica, who also co-founded Conviva and Databricks, the team sought to make distributed computing broadly accessible. Anyscale was then launched as a fully managed platform for Ray, making even the toughest problems in distributed computing easy for developers to tackle. Today, Anyscale is a billion-dollar business powering mission-critical AI use cases at companies like Amazon, Cohere, Hugging Face, NVIDIA, OpenAI, and Visa. If you've been a PhD student at UC Berkeley, created a popular open-source framework, and built a billion-dollar business on top of it, you've likely learned a thing or two along the way. Robert's story offers valuable lessons for fellow founders and builders at all stages of the startup journey.
Philipp Moritz (Co-founder and CTO) and Goku Mohandas (ML and Product Lead) of Anyscale do a deep dive into retrieval augmented generation (RAG) and large language models (LLMs). Subscribe to the Gradient Flow Newsletter: https://gradientflow.substack.com/Subscribe: Apple • Spotify • Overcast • Google • AntennaPod • Podcast Addict • Amazon • RSS.Detailed show notes can be found on The Data Exchange web site.
In this episode, host Amir Bormand interviews Rich Adao, the Head of Talent at Anyscale. They discuss various aspects of recruiting, including the market, hiring managers' perception of the market, and the state of recruitment and recruiters. Rich shares his responsibilities as the head of talent and how his team recruits top talent across engineering, product, marketing, and sales. Tune in to gain insights into the world of recruiting and the challenges faced in finding the right talent. Highlights [00:01:03] Responsibilities of a head of talent. [00:05:31] Retaining top talent during COVID. [00:09:31] Educating hiring managers. [00:12:09] Hiring for long-term value. [00:16:46] Paired sourcing and training. [00:18:21] The craft of recruiting. [00:21:20] Choices in your career. [00:25:01] Improving your pitch. Guest: Rich Adao is currently the Head of Talent at Anyscale, a tech startup in San Francisco focused on helping companies scale their AI and ML applications (they're hiring!). Earlier in his career, he was a senior recruiter at organizations such as Google and Facebook. Rich has more recently been a recruiting leader at Udemy and helped build them up from a 150-person company to over 1,400 employees in 4+ years. He believes in a strong work ethic that stems from his Chicago-born roots. LinkedIn: https://www.linkedin.com/in/richadao/ --- Thank you so much for checking out this episode of The Talent Tango, and we would appreciate it if you would take a minute to rate and review us on your favorite podcast player. Want to learn more about us? Head over to https://www.elevano.com Have questions or want to cover specific topics with our future guests? Please message me at https://www.linkedin.com/in/amirbormand (Amir Bormand)
Max Pumperla designed his own career path in data science. He is a freelance software engineer at Anyscale, and also a data science professor. We talked about reinforcement learning, open source contributions, Ray for data scientists, and his view on the data scientist's role. If you enjoy the show, subscribe to the channel and leave a 5-star review. Subscribe to Daliana's newsletter on www.dalianaliu.com for more on data science and careers. Max's LinkedIn: https://www.linkedin.com/in/max-pumperla-a8099354/ Max's GitHub: https://github.com/maxpumperla Daliana's Twitter: https://twitter.com/DalianaLiu Daliana's LinkedIn: https://www.linkedin.com/in/dalianaliu (00:00:00) Introduction (00:09:19) How he got a remote job through Twitter (00:14:06) Introduction to Ray (00:18:52) Reinforcement learning (00:23:56) Key lessons on integrating customer feedback (00:35:12) Flaws in data science job titles (00:45:51) How to be irreplaceable as a data scientist (00:48:55) An unconventional career path as a data scientist (01:12:24) Productivity and work-life balance (01:28:10) Advice for building a personal brand
HF0, or Hacker Fellowship Zero, is a start-up accelerator that provides 12-week residencies for batches of fellows from 10 different start-ups. Their experience, which culminates in a demonstration day, is supposed to be the most productive three months of the fellows' lives. Dave Fontenot, one of HF0's founders, was inspired by the two years he spent living in monasteries in his 20s: While monastery life was materially ascetic, he found that it was luxurious in the freedom it gave residents to focus on the things that really mattered. And this year at the Archbishop's Mansion in San Francisco, the home of the fellows, almost everyone has been monastically focused on what has become the city's newest religion: artificial intelligence.The A.I. gospel had not yet spread in 2021, when Fontenot and his two co-founders, Emily Liu and Evan Stites-Clayton, started the accelerator. Even a year ago, when HF0 hosted a batch of fellows at a hotel in Miami, six out of the eight companies represented were cryptocurrency start-ups. But at the mansion in San Francisco, eight of the 10 companies in HF0's first batch this year were working on A.I.-based apps.That generative A.I. has largely supplanted crypto in the eyes of founders and venture capitalists alike is not exactly surprising. When OpenAI released ChatGPT late last year, it set off a new craze at a time when the collapsing crypto and tech markets had left many investors and would-be entrepreneurs adrift, unsure of where to put their capital and time. Suddenly users everywhere were realizing that A.I. could now respond to verbal queries with a startling degree of humanlike fluency. “Large language models have been around for a long time, but their uses were limited,” said Robert Nishihara, a co-founder of Anyscale, a start-up for machine-learning infrastructure. “But there's a threshold where they become dramatically more useful, and I think now it's crossed that.”This story was recorded by Audm. To hear more audio stories from publications like The New York Times, download Audm for iPhone or Android.
This episode features an interview with Jaya Gupta, Partner at Foundation Capital, where she leads early-stage investments across the enterprise software stack. Previously, Jaya was a Senior Business Analyst at McKinsey & Company focusing on software diligence and helping startups expand their go-to-market strategies.In this episode, Sam and Jaya discuss her journey to Foundation Model Ops, how software is becoming more accessible, and the democratization of AI tools.-------------------"At the end of the day, FMOps isn't just about the new tools. It's actually more about the new builders, the new workflows, and a completely new market of customers. I was on the other day, looking at LangChain's page of integrations, I don't know if you've seen it, but it's like Anyscale, Databricks, all these other huge legendary companies are integrating with LangChain, and I think it's clear that there's a huge community that is building something real and valuable." – Jaya Gupta-------------------Episode Timestamps:(01:05): What open source data means to Jaya(08:51): Jaya's journey to Foundation Model Ops(15:58): How software is becoming more accessible(23:04): The democratization of AI tools(27:01): One question Jaya wishes to be asked(29:32): Jaya's advice for the audience(31:51): Backstage takeaways with executive producer, Audra Montenegro-------------------Links:LinkedIn - Connect with JayaFollow Jaya on TwitterLearn more about FMOps
MLOps Coffee Sessions #154 with Waleed Kadous, ML Scalability Challenges, co-hosted by Abi Aryan. // Abstract Dr. Waleed Kadous, Head of Engineering at Anyscale, discusses the challenges of scalability in machine learning and his company's efforts to solve them. The discussion covers the need for large-scale computing power, the importance of attention-based models, and the tension between big and small data. // Bio Dr. Waleed Kadous leads engineering at Anyscale, the company behind the open-source project Ray, the popular scalable AI platform. Prior to Anyscale, Waleed worked at Uber, where he led overall system architecture, evangelized machine learning, and led the Location and Maps teams. He previously worked at Google, where he founded the Android Location and Sensing team, responsible for the "blue dot" as well as ML algorithms underlying products like Google Fit. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: anyscale.com https://www.youtube.com/watch?v=hzW0AKKqew4 https://www.anyscale.com/blog/WaleedKadous-why-im-joining-anyscale Ray Summit: https://raysummit.anyscale.com/ Anyscale careers: https://www.anyscale.com/careers Learning Ray, the O'Reilly book; it's free to anyone interested: https://www.anyscale.com/asset/book-learning-ray-oreilly --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Abi on LinkedIn: https://www.linkedin.com/in/goabiaryan/ Connect with Waleed on LinkedIn: https://www.linkedin.com/in/waleedkadous/ Timestamps: [00:00] Waleed's preferred coffee [00:38] Takeaways [07:37] Waleed's background [13:16] Nvidia investment with Ray [14:00] Deep Learning use cases [17:52] Infrastructure challenges [22:01] MLOps level of maturity [26:42] Scale overloading [29:21] Large Language Models [32:40] Balance between fine-tuning forces prompts engineering [35:51] Deep Learning movement [42:05] Open-source models have enough resources [44:11] Ray [47:59] Value add for Anyscale from Ray [48:55] "Big data is dead" reconciliation [52:43] Causality in Deep Learning [55:16] AI-assisted Apps [57:59] Ray Summit is coming up in September! [58:49] Anyscale is hiring! [59:25] Wrap up
In this week's episode, I'll be talking with Jules Damji, Lead Developer Advocate on the Ray team at Anyscale, about the ever-evolving landscape of enterprise-grade distributed computing. From Hadoop to Spark and now to Ray, we talk all things compute, and where the landscape may be headed in the future!
Every Friday at 8:30 am PST, we host the WP-Tonic This Week in WordPress & Tech round-table show, where we discuss the latest WordPress and general web news of the week. You can also watch the show LIVE on our WP-Tonic Facebook Group Page & YouTube channel. This Week's Special Guest Panelists: Brian Jackson, Joint Founder of Forgemedia - https://forgemedia.io/ Nyasha Green of MasterWP - https://masterwp.com/ This Week's Articles That We Discuss During The Show: #1 - DigitalOcean to Acquire Cloudways https://www.businesswire.com/news/home/20220823005306/en/DigitalOcean-to-Acquire-Cloudways #2 - A Prominent Vaccine Skeptic Returns to Twitter https://www.theatlantic.com/technology/archive/2022/08/alex-berenson-twitter-ban-lawsuit-covid-misinformation/671219/ #3 - Announcing Spectra 2.0: The visual page builder for ultra fast websites https://wpastra.com/updates/spectra-2-0/ #4 - Anyscale raises $99M to build out its distributed AI computing platform https://siliconangle.com/2022/08/23/anyscale-raises-99m-build-distributed-ai-computing-platform/ #5 - WordPress Community Engagement: The best WordPress accounts to follow on Twitter https://masterwp.com/wordpress-community-engagement-the-best-wordpress-accounts-to-follow-on-twitter/ #6 - T-Mobile and SpaceX get set to announce 'something special' for Starlink connectivity https://www.geekwire.com/2022/t-mobile-and-spacex-get-set-to-announce-something-special-for-starlink-connectivity/
This hardcore livestream on infrastructure software is packed with substance. If you are interested in infrastructure software, developer tools, or open source, don't miss it. China's open source unicorn PingCAP, the head of Coinbase's data platform, and a former Google TensorFlow lead who has just started his own company come together to talk about the lessons and pitfalls of building infrastructure software companies from a global perspective, how users actually choose their tools, and where the technology is headed. Note: this episode assumes some technical background in databases and related areas. Hello World, who is OnBoard?! Over the past three years, infrastructure software, open source, and developer tools have seen an unprecedented boom both in China and abroad. As an investor in this space, Monica has watched these companies step onto the international stage and confront the real differences between the Chinese and US markets in user needs, technology ecosystems, talent and organization, and the startup environment. Working through the challenges, reflection, and adjustments along the way calls for experience from both sides to collide, be debated, and be refined together. The three heavyweight guests in this episode represent exactly three important perspectives: founders of startups based in China and in the US, plus an infrastructure leader at a US tech company, all seasoned experts. Different viewpoints, equally sharp takes, grounded in market and ecosystem realities while also looking to the future. The conversation runs long and is hard to split up, but if you care about building a world-class infrastructure software company, it will not disappoint. Enjoy! The guests and their companies (see the linked article for detailed introductions): Dongxu/Ed Huang, PingCAP co-founder & CTO, author of TiDB and TiKV; Leo Liang, head of the data platform at Coinbase, previously head of the ML platform at Cruise; Mingsheng Hong, Bluesky co-founder & CEO, previously machine learning lead for Google TensorFlow Runtime. What we talked about: 02:03 Opening, guest introductions, and a fun fact: interesting open source projects they have come across recently (Vercel, AnyScale) 11:46 PingCAP's experience expanding overseas 14:16 Dongxu's observations on the US market: developers will be king, and developer experience matters more and more 16:20 Dongxu on the US market: cloud native is already the de facto standard 18:28 Dongxu on the US market: storytelling matters enormously, with Supabase as an example 24:36 Leo: what kind of story developers want to hear 27:23 Mingsheng: what makes a product with a great developer experience 30:12 Leo: how Silicon Valley tech companies make technology choices: open source, composable, componentized 42:11 Dongxu: the timing and importance of Chinese infrastructure software companies going overseas 47:01 Mingsheng: how startups should pick their early users 52:10 Discussion: should the sales model be PLG (bottom-up, product-led growth) or traditional enterprise sales? 55:59 Discussion: how to target different users at different stages, and how purchasing decisions differ between Chinese and US customers 63:10 Dongxu: the challenges of running a global community, and the relationship between community operations and product 68:14 Open source projects Leo and Mingsheng recommend learning about: Anyscale/Ray, Tensorflow 73:58 Dongxu: when and how should an open source project commercialize? 79:26 Key discussion: how US Digital Native Businesses (DNB) make purchasing decisions for open source products, how this differs between the US and China, and why 91:16 Key takeaway: why customers care about ROI > ease of use > performance > features, while vendors usually market in the opposite order 92:07 Mingsheng: what next-generation cloud cost optimization looks like 96:45 Discussion: looking ahead, which innovation opportunities excite you? Separation of file and transfer formats, serverless, ML in infra 101:27 Q&A: challenges and new opportunities in storage? 113:36 Q&A: how enterprise users think about the ROI of infrastructure software. Reference / companies mentioned: Neon: serverless Postgres; Vercel: serverless frontend stack for web developers, started from hosting node.js; Upstash: serverless data for Redis and Kafka; Supabase: open source Firebase alternative; Toolchain: ergonomic open source developer workflow system; FaunaDB: serverless; Anyscale: company behind Ray, an open-source Python framework for distributed computing; Tensorflow; Dbt: open source data transformation tool for ELT. Articles mentioned: Dongxu's post that reached the top of Hacker News, "Some notes on DynamoDB 2022 paper"; Leo's article on serverless. Follow Miss M's WeChat official account, M小姐研习录 (ID: MissMStudy), for more China-US conversations. Your likes, comments, and shares are the best encouragement, and please share this with friends who are interested in these topics. If there are topics you'd like us to cover or guests you'd like us to invite, let us know in the comments; we read every one.
Sven Mika is the Reinforcement Learning Team Lead at Anyscale, and lead committer of RLlib. He holds a PhD in biomathematics, bioinformatics, and computational biology from Witten/Herdecke University. Featured References: RLlib Documentation: RLlib: Industry-Grade Reinforcement Learning; Ray: Documentation; "RLlib: Abstractions for Distributed Reinforcement Learning" by Eric Liang, Richard Liaw, Philipp Moritz, Robert Nishihara, Roy Fox, Ken Goldberg, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica. Episode sponsor: Anyscale. Ray Summit 2022 is coming to San Francisco on August 23-24. Hear how teams at Dow, Verizon, Riot Games, and more are solving their RL challenges with Ray's RLlib. Register at raysummit.org and use code RAYSUMMIT22RL for a further 25% off the already reduced prices.
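For readers new to RLlib, here is a minimal sketch of what training a policy with it can look like. It is not taken from the episode: it assumes a Ray 2.x-era install with gymnasium available, and RLlib's config method names have shifted across releases, so treat it as illustrative rather than the definitive interface.

```python
# Minimal RLlib sketch (assumes Ray 2.x and gymnasium; API names vary by release).
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")        # built-in Gymnasium environment
    .rollouts(num_rollout_workers=2)   # collect experience in parallel workers
)
algo = config.build()

for i in range(3):
    result = algo.train()              # one training iteration
    print(i, result["episode_reward_mean"])  # metric key used by the Ray 2.x "old API stack"
```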
Jules Damji is lead developer advocate, and Richard Liaw is an engineering manager at Anyscale, the startup founded by the creators of Ray, the open source project that makes it simple to scale any compute-intensive Python workload. To learn more about Ray and how to scale machine learning applications, attend the Ray Summit (San Francisco / Aug 23-24): https://www.anyscale.com/ray-summit-2022?utm_source=gradientflow&utm_medium=DEpodcast Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS. Detailed show notes can be found on The Data Exchange web site.
Episode 32: Interview with Ion Stoica, Professor at the University of California, Berkeley. Creator, leader, and founder of Apache Spark, Ray, and Apache Mesos. Founder of Databricks, Anyscale, and Conviva. ACM Fellow, SIGOPS Mark Weiser Award. Immigrated to the US from Romania in 1994.
Episode 31: Two distinguished technologists share their experiences of growing up in Communist Romania in the 1960s-1990s, of the 1989 Romanian Revolution, and of immigrating reluctantly to the US. Featuring: Ion Stoica (Professor at UC Berkeley and founder of Databricks, Anyscale, and Conviva; inventor of Apache Spark, Mesos, and Ray) and Rada Mihalcea (Professor at U. Michigan, AAAI Fellow, Creator of the Romanian PECASE Program). Topics include: growing up in Communist Romania under the dictatorship of Nicolae Ceaușescu in the 1970s and 80s; schooling and the education system in Romania during the 1970s, 80s, and 90s; Romania during and after the 1989 Romanian Revolution; our guests' immigration to the US (in 1994 and 1997, respectively); and perspectives. This is the lead episode for the Immigrant Computer Scientists Podcast's segment on Romania (first of 3 episodes on Romania).
Building a machine learning application is inherently complex. Once it becomes necessary to scale the operation or training of the model, or to introduce online re-training, the process becomes even more challenging. To reduce the operational burden on AI developers, Robert Nishihara helped create the Ray framework, which handles the distributed computing aspects of machine learning operations. To support the ongoing development of Ray and simplify its adoption, he co-founded Anyscale. In this episode he re-joins the show to share how the project, its community, and the ecosystem around it have grown and evolved over the intervening two years. He also explains how the techniques and adoption of machine learning have influenced the direction of the project.
Panelists: Katrina Wong, VP Marketing and Demand Generation, Segment; Asawari Samant, Head of Marketing, Anyscale; Jeffrey Yoshimura, CMO and Customer Experience Officer, Snyk. Three marketing leaders discuss the importance of community, data, and experimentation, as well as the do's and don'ts of using agencies. This episode is an abbreviated version of the session. You can see the full session here: https://youtu.be/7wucuUQSQvQ Want to join the SaaStr community? We're the
Ion Stoica is co-creator of the distributed computing frameworks Spark and Ray, and co-founder and Executive Chairman of Databricks and Anyscale. He is also a Professor of computer science at UC Berkeley and Principal Investigator of RISELab, a five-year research lab that develops technology for low-latency, intelligent decisions. Ion and Lukas chat about the challenges of making a simple (but good!) distributed framework, the similarities and differences between developing Spark and Ray, and how Spark and Ray led to the formation of Databricks and Anyscale. Ion also reflects on the early startup days, from deciding to commercialize to picking co-founders, and shares advice on building a successful company. The complete show notes (transcript and links) can be found here: http://wandb.me/gd-ion-stoica --- Timestamps: 0:00 Intro 0:56 Ray, Anyscale, and making a distributed framework 11:39 How Spark informed the development of Ray 18:53 The story behind Spark and Databricks 33:00 Why TensorFlow and PyTorch haven't monetized 35:35 Picking co-founders and other startup advice 46:04 The early signs of sky computing 49:24 Breaking problems down and prioritizing 53:17 Outro --- Subscribe and listen to our podcast today!
Understanding the world around us increasingly involves Artificial Intelligence. Once the stuff of elite corporations, AI now has a much lower barrier to entry, and practically anyone can get in on the game! Check out this episode of DMRadio to learn more, as host Eric Kavanagh interviews Robert Nishihara of Anyscale and Jans Aasman of Franz Inc.
Timestamps: (01:40) Julia shared the differences growing up in New York and moving to San Francisco. (03:05) Julia discussed her overall undergraduate experience at Stanford — getting dual degrees in Computer Science and Management Science & Engineering. (05:40) Julia went over her time as an Investment Banker at Qatalyst Partners — notably working on Microsoft's acquisition of LinkedIn. (09:11) Julia talked about her career transition to venture capital — working as an associate investor at New Enterprise Associates. (10:46) Julia emphasized the importance of getting up-to-speed and forming an investment thesis as a new investor. (15:05) Julia discussed her Series A investment in Metabase, an open-source business intelligence software project. (18:36) Julia unpacked her investment(s) in Sentry, an application monitoring platform that helps developers monitor apps in real-time to catch bugs early. (20:14) Julia explained her investment in the Series B round for Anyscale, an end-to-end computing platform that makes building and managing a scaled application across clouds as easy as developing an app on a single computer. (23:03) Julia contextualized her investments in the seed round for Datafold, a data observability platform that equips analytics engineers with the tools to address data quality issues. (24:24) Julia shared typical hiring and go-to-market decisions that companies need to make (depending upon their growth stages and product strategies). (27:05) Julia mentioned her Metabase application to help investors pick winning open-source startups. (29:05) Julia rationalized her switch to becoming a product manager at dbt Labs. (30:34) Julia peeked into the roadmap of dbt Cloud, a hosted service that helps data analysts and engineers productionize dbt deployments. (33:34) Julia went over an under-invested area and the role of interoperability within the broader data tooling ecosystem. (37:56) Julia reflected on the difference between being a venture investor and a product manager. (41:05) Closing segment. Julia's Contact Info: LinkedIn, Twitter. dbt's Resources: Slack Community, Coalesce 2021 Replays, dbt Learn, GitHub, Events and Meetups. Mentioned Content: People: Tristan Handy (Founder and CEO of dbt Labs), Ali Ghodsi (Co-Creator of Apache Spark, Co-Founder and CEO of Databricks), Dan Levine (General Partner at Accel Partners). Book: "Working Backwards: Insights, Stories, and Secrets from Inside Amazon" (by Bill Carr and Colin Bryar). Notes: My conversation with Julia was recorded back in May 2021. Since the podcast was recorded, a lot has happened at dbt Labs! I'd recommend: reading Julia's recent blog posts on adopting CI/CD and introducing Environment Variables in dbt Cloud; watching the talk replays from Coalesce, dbt's 2nd annual analytics engineering conference; and listening to Season 1 of the Analytics Engineering Podcast, where Julia co-hosts with Tristan Handy to go deep into the hopes, dreams, motivations, and failures of leading data and analytics practitioners. About the show: Datacast features long-form, in-depth conversations with practitioners and researchers in the data community to walk through their professional journeys and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths — from scientists and analysts to founders and investors — to analyze the case for using data in the real world and extract their mental models ("the WHY and the HOW") behind their pursuits.
Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe. Datacast is produced and edited by James Le. Get in touch with feedback or guest suggestions by emailing khanhle.1013@gmail.com. Subscribe by searching for Datacast wherever you get podcasts, or use one of the links below: Listen on Spotify, Listen on Apple Podcasts, Listen on Google Podcasts. If you're new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.
Are you starting out in data science? Do you want to build your data science portfolio? Then don't miss this Lights On Data Show episode with our guest, Michael Galarnyk, Developer Relations at Anyscale. We will address: why building a portfolio is important; how to address a lack of work experience; examples of work and projects to include (or not) in a portfolio; resume tips; networking tips; and best practices and recommendations to make your portfolio and resume stronger.
Michael Galarnyk works in Developer Relations at Anyscale and has nearly 10,000 followers on Medium. He joins the adventure to walk Chuck through how he's parallelized the training of his machine learning models on multi-core machines. He also walks Chuck through the ins and outs of being in Developer Relations. Panel: Charles Max Wood. Guest: Michael Galarnyk. Sponsors: Dev Influencers Accelerator. Links: Speeding up Scikit-Learn Model Training; How to Build a Data Science Portfolio; Michael Galarnyk - Medium; Twitter: Michael Galarnyk (@GalarnykMichael); LinkedIn: Michael Galarnyk. Picks: Charles - Words of Radiance: Stormlight Archive; Charles - Buying cars on local classifieds; Michael - The Cost of Financing a Car (Car Loans); Michael - Success is an Iceberg (we all fail sometimes); Michael - Destiny's Crucible | Olan Thorensen. Contact Charles: Devchat.tv; DevChat.tv | Facebook; Twitter: DevChat.tv (@devchattv)
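The episode links to Michael's post on speeding up scikit-learn model training; the sketch below shows one common pattern for that kind of multi-core (or multi-node) parallelization, using Ray's joblib backend. It assumes ray and scikit-learn are installed, and it illustrates the general technique rather than reproducing the exact code from the post.

```python
# Sketch: parallelizing a scikit-learn hyperparameter search with Ray's joblib backend.
import joblib
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC
from ray.util.joblib import register_ray

register_ray()  # makes "ray" available as a joblib backend

digits = load_digits()
param_space = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", "auto"]}
search = RandomizedSearchCV(SVC(), param_space, n_iter=5, cv=3, n_jobs=-1)

# Inside this context, scikit-learn's internal parallelism fans out over Ray
# workers, which can use every core on one machine or an entire cluster.
with joblib.parallel_backend("ray"):
    search.fit(digits.data, digits.target)

print(search.best_params_)
```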
In this episode of the Data Exchange, I speak with Zhe Zhang, Engineering Manager at Anyscale, where he leads the team that works on Ray and its ecosystem of libraries and partners. Ray is an open source, general purpose framework for building distributed applications (more details in this post and video). Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS. Detailed show notes can be found on The Data Exchange web site. Subscribe to The Gradient Flow Newsletter.
Ray is a general purpose distributed computing framework. At a low level, Ray provides fault-tolerant primitives that support applications running across multiple processors. At a higher level, Ray supports scalable reinforcement learning, including the common problem of hyperparameter tuning. In a previous episode, we explored the primitives of Ray as well as Anyscale, the business built around Ray and reinforcement learning. In today's episode, Richard Liaw explores some of the libraries and applications that sit on top of Ray. RLlib gives APIs for reinforcement learning such as policy serving and multi-agent environments. Tune gives developers an easy way to do scalable hyperparameter tuning, which is necessary for exploring different types of deep learning configurations. In a future show, we will explore Tune in more detail.
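Since the episode centers on Tune's scalable hyperparameter tuning, here is a minimal sketch of what a Tune search can look like, assuming Ray 2.x's Tuner API; the toy objective stands in for a real training loop and is not taken from the episode.

```python
# Sketch: a toy hyperparameter search with Ray Tune (assumes Ray 2.x).
from ray import tune

def objective(config):
    # Stand-in for a real training loop; returning a dict reports it
    # to Tune as the trial's final result.
    loss = (config["lr"] - 0.01) ** 2 + 1.0 / config["batch_size"]
    return {"loss": loss}

tuner = tune.Tuner(
    objective,
    param_space={
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([16, 32, 64, 128]),
    },
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=20),
)
results = tuner.fit()
print(results.get_best_result().config)  # best hyperparameters found
```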
In this episode of the Data Exchange I speak with Dean Wampler, Head of Developer Relations at Anyscale, the startup founded by the creators of Ray. Ray is a distributed execution framework that makes it easy to scale machine learning and Python applications. It has a very simple API, and as someone who uses both Python and machine learning, Ray has been a wonderful addition to my toolbox. Dean has long been one of my favorite architects, speakers, and teachers, and we have known each other since the early days of Apache Spark. He has authored numerous books and is known for his interest in Scala and programming languages, as well as in software architecture. Our conversation spanned many topics, including: What is Ray and why should someone consider using it? The first Ray Summit (May 27-28 in San Francisco). Dean's first impressions of Ray, and his journey from Scala to Python. An update on Ray's core libraries, Ray on Windows, and distributed training with Ray. Detailed show notes can be found on The Data Exchange web site. For more on Ray and scalable machine learning & Python, come hear from Dean Wampler, Michael Jordan, Ion Stoica, Manuela Veloso, Wes McKinney and many other leading developers and researchers at the first Ray Summit in San Francisco (May 27-28).
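As a concrete illustration of the "very simple API" mentioned above, here is a minimal Ray task example; it runs locally as written and scales out to a cluster without code changes. This is a sketch for orientation, not code discussed in the episode.

```python
# Sketch: turning an ordinary Python function into parallel Ray tasks.
import ray

ray.init()  # starts a local Ray runtime (or connects to an existing cluster)

@ray.remote
def square(x):
    return x * x

# Each .remote() call returns a future (ObjectRef) immediately;
# the work runs concurrently on Ray worker processes.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```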