Podcasts about Imbue


  • 86 podcasts
  • 95 episodes
  • 40m average duration
  • 1 episode every other week
  • Latest: Mar 26, 2025
Imbue

Popularity trend, 2017–2024


Best podcasts about Imbue

Latest podcast episodes about Imbue

Path of Night Podcast
2 - Coffee and Cigarettes

Path of Night Podcast

Mar 26, 2025 · 62:04


The Hunters meet up at a dockside diner in Southeastern Massachusetts with the new Imbue to discuss what they know of the post from Violin99 that called them all together.

Content Warning: Thalassophobia; discussions of violence; discussions of loss of self; car crash; physical trauma/bodily harm; loss of control; smoking; language.

Cast
Storyteller: Rob Muirhead
Dallas McCoppin: Garrett Gabbey
John Spencer: Tim Davis
Sopheia "Pheia" Quinn: Erika Webb
Niyati "Nat" Chowdhurani: Rebecca Steigelfest
Todd Keating: Lex Lopez

Recording: Rebecca Steigelfest
Editing: Rob Muirhead
Music: Sadness Room by Denis Goncharov, pixabay.com
Logo: Rob Muirhead
Character Art: Jay Steel, BlueSky

Ko-Fi: ko-fi.com/pathofnight
YouTube: YouTube.com/@pathofnight
Facebook: Facebook.com/PathofNightPodcast
Twitter: @PathofNightPod
Bluesky: pathofnight.bsky.social
Email: pathofnightpodcast@gmail.com

Innovation to Save the Planet
AI, Data Centers & The Energy Crisis—What Comes Next?

Innovation to Save the Planet

Mar 21, 2025 · 47:19 · Transcription available


In this episode of KP Unpacked, host Jeff Echols sits down with Robert Cooper (CEO of Imbue) and Abhi Sastri (CEO of Fluix) to break down one of the most pressing challenges facing the built environment: How do we meet the surging energy demands of data centers, buildings, and critical infrastructure before nuclear and hydrogen power are viable?

Based on KP Reddy's Sunday Scaries post, they explore:

Coin Concede: A Hearthstone Podcast
485 - Coin Concede "Dream On"

Coin Concede: A Hearthstone Podcast

Feb 23, 2025 · 132:37


We take you on a guided tour Into the Emerald Dream, which also happens to be the next Hearthstone expansion! We cover everything you need to know, including the new Imbue and Dark Gift keywords, and WickedGood and Edelweiss try to figure out whether Druid or Rogue will abuse more of the new legendary minions.

News – 18:22: Patch 31.6, Known Issues, Shop Updates
Decksplanations – 54:33: Into the Emerald Dream, Imbue, Dark Gifts, Choose One, Dragons, Legendaries

The Show Notes for this week's episode are on our Website. Join us every week live by following us on Twitch. You can monetarily support our show on Patreon. Join our community chats in our Discord channels and write in to our Email. Follow us on Twitter, and like, share, and follow us on Facebook. Save our RSS feed or subscribe to us on iTunes or Google.

The Wright Show
Human Agency vs. "Agentic" AI (Robert Wright & Kanjun Qiu)

The Wright Show

Jan 23, 2025 · 60:00


Intro ... Kanjun's role as CEO of AI research lab Imbue ... Worrying about a “Wall-E world” ... Building human-centric AI agents ... How far away is truly “agentic” AI? ... The road to AI serfdom ... Does the AI revolution call for a psychological revolution? ... Heading to Overtime ...

Relay FM Master Feed
Thoroughly Considered 120: Imbue All of the Choices

Relay FM Master Feed

Jan 18, 2025 · 51:49


Dan Provost, Tom Gerhardt, and Myke Hurley

Myke, Tom, and Dan chat about Dan's new guitar build, how Keen progress is going, and their business themes for the year. Then, a Dan's Tech Corner about All-in-One computers.

Links and Show Notes:
http://relay.fm/tc/120
Support Thoroughly Considered with a Relay FM Membership
Thoroughly Considered #110: Mid-life Crisis Hobby - Relay
Offset P-90 Build — Dan Provost
Tech Talk - Relicing Guitars – Northwest Guitars
Leo Fender - Wikipedia
Keen – Studio Neat

The AI Podcast
Imbue CEO Kanjun Qiu on Transforming AI Agents Into Personal Collaborators - Ep. 239

The AI Podcast

Dec 16, 2024 · 33:36


In this episode of the NVIDIA AI Podcast, Kanjun Qiu, CEO of Imbue, explores the emerging era where individuals can create and utilize their own AI agents. Drawing a parallel to the personal computer revolution of the late 1970s and 80s, Qiu discusses how modern AI systems are evolving to work collaboratively with users, enhancing their capabilities rather than just automating tasks.

The Nonlinear Library
LW - How you can help pass important AI legislation with 10 minutes of effort by ThomasW

The Nonlinear Library

Sep 16, 2024 · 4:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How you can help pass important AI legislation with 10 minutes of effort, published by ThomasW on September 16, 2024 on LessWrong.

Posting something about a current issue that I think many people here would be interested in. See also the related EA Forum post.

California Governor Gavin Newsom has until September 30 to decide the fate of SB 1047 - one of the most hotly debated AI bills in the world. The Center for AI Safety Action Fund, where I work, is a co-sponsor of the bill. I'd like to share how you can help support the bill if you want to.

About SB 1047 and why it is important

SB 1047 is an AI bill in the state of California. It would require the developers of the largest AI models, costing over $100 million to train, to test the models for the potential to cause or enable severe harm, such as cyberattacks on critical infrastructure or the creation of biological weapons resulting in mass casualties or $500 million in damages. AI developers must have a safety and security protocol that details how they will take reasonable care to prevent these harms, and publish a copy of that protocol. Companies who fail to perform their duty under the act are liable for resulting harm.

SB 1047 also lays the groundwork for a public cloud computing resource to make AI research more accessible to academic researchers and startups, and establishes whistleblower protections for employees at large AI companies.

So far, AI policy has relied on government reporting requirements and voluntary promises from AI developers to behave responsibly. But if you think voluntary commitments are insufficient, you will probably think we need a bill like SB 1047. If SB 1047 is vetoed, it's plausible that no comparable legal protection will exist in the next couple of years, as Congress does not appear likely to pass anything like this any time soon.

The bill's text can be found here. A summary of the bill can be found here. Longer summaries can be found here and here, and a debate on the bill is here.

SB 1047 is supported by many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), employees at major AI companies, and organizations like Imbue and Notion. It is opposed by OpenAI, Google, Meta, and venture capital firm A16Z, as well as some other academic researchers and organizations. After a recent round of amendments, Anthropic said "we believe its benefits likely outweigh its costs."

SB 1047 recently passed the California legislature, and Governor Gavin Newsom has until September 30th to sign or veto it. Newsom has not yet said whether he will sign it, but he is being lobbied hard to veto it. The Governor needs to hear from you.

How you can help

If you want to help this bill pass, there are some pretty simple steps you can take to increase that probability, many of which are detailed on the SB 1047 website. The most useful thing you can do is write a custom letter. To do this: Make a letter addressed to Governor Newsom using the template here. Save the document as a PDF and email it to leg.unit@gov.ca.gov.

In writing this letter, we encourage you to keep it simple, short (0.5-2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context - instead, focus on how the risks are serious and how this bill would help keep the public safe.

Once you've written your own custom letter, you can also think of 5 family members or friends who might also be willing to write one. Supporters from California are especially helpful, as are parents and people who don't typically engage on tech issues. Then help them write it! You can: Call or text them and tell them about the bill and ask them if they'd be willing to support it. Draft a custom letter based on what you know about them and what they told you. Send them a com...

Media Evolution
Anna Granath and Maria Törn – Why Play is Serious Stuff

Media Evolution

Sep 3, 2024 · 27:19


The Conference's partner IKEA hosted a talk during Wednesday's Getting Grounded session focusing on IKEA's most recent findings on play, design, and why it all matters more than we think.

Why is play important? And why is IKEA so obsessed with playfulness? Anna Granath and Maria Törn are ready to answer those questions, freshly armed with insight from IKEA's recent children's play report. Simplicity, Inclusivity, and Playfulness: those three words guide IKEA's design philosophy. Play is seen as the ticket to learning better, improving psychological safety, and subconsciously acquiring maths, physics, and social skills. But the clearest answer is the one we hear from children: it's fun to play. We get to imagine. It makes us happy.

Adult concerns affect children's play. Global worries about the pandemic, war, climate, and the economy have created stress for children, and play is not fun anymore. Struggles with money, space, and even playful capacity mean that not all children have the same kind of play. Reassuringly, the best cure is play itself. IKEA's report reveals that families are prioritising play and spending more time playing together. "There's a little bit of a play revolution out there."

What can we do? Embrace the diversity of play, from imagination to exploring outside, playing sport, and being creative. Adopt play as a mindset. Imbue everyday situations with the possibility of playfulness. So maybe we should all be obsessed with playfulness, and for good reason.

Minus One
Kanjun Qiu, Imbue, & Agency in the Age of AI

Minus One

Aug 21, 2024 · 43:51


Imbue Co-founder and SPC alum Kanjun Qiu shares her journey from Dropbox to becoming a founder, why she thinks play is a key part of human and AI agency, and how community has played a key role in her journey and perspective on AI.

Marketing Speak
462. Imbue Authenticity in Marketing with Suzanne Reilley

Marketing Speak

Aug 14, 2024 · 47:17


In the latest episode of the Marketing Speak podcast, authenticity meets strategy: Suzanne Reilley, a business coach, marketing strategist, and copy advisor, joins the show to delve into the art of customer-centric marketing. Suzanne emphasizes the importance of understanding your customer's needs and aspirations through insightful research methods while avoiding overreliance on surveys. Discover tips for crafting engaging and respectful copy that connects with your audience, and learn about ethical marketing practices that build trust without feeling spammy. Suzanne also shares her thoughts on leveraging AI in writing, balancing personal well-being with business growth, and the critical role of intuition and authenticity in creating genuine marketing campaigns. Tune in for a conversation that promises to enlighten and empower your marketing approach!

Gradient Dissent - A Machine Learning Podcast by W&B
Reinventing AI Agents with Imbue CEO Kanjun Qiu

Gradient Dissent - A Machine Learning Podcast by W&B

Aug 8, 2024 · 48:37


In this episode of Gradient Dissent, Kanjun Qiu, CEO and Co-founder of Imbue, joins host Lukas Biewald to discuss how AI agents are transforming code generation and software development. Discover the potential impact and challenges of creating autonomous AI systems that can write and verify code, and learn about the practical research involved.

✅ Subscribe to Weights & Biases → https://bit.ly/45BCkYz

Connect with Kanjun Qiu:
https://www.linkedin.com/in/kanjun/
https://x.com/kanjun
Generally Intelligent Podcast: https://imbue.com/podcast/

Follow Weights & Biases:
https://twitter.com/weights_biases
https://www.linkedin.com/company/wandb

Join the Weights & Biases Discord Server:
https://discord.gg/CkZKRNnaf3

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

It's return guest season here at Latent Space! We last talked to Kanjun in October and Jonathan in May (and December, post Databricks acquisition). Imbue and Databricks are back for a rare treat: a double-header interview talking about DBRX from Databricks and Imbue 70B, a new internal LLM that "outperforms GPT-4o" zero-shot on a range of reasoning and coding-related benchmarks and datasets, while using 7x less data than Llama 3 70B.

While Imbue, being an agents company rather than a model provider, are not releasing their models today, they are releasing almost everything else:

* Cleaned-up and extended versions of 11 of the most popular NLP reasoning benchmarks
* An entirely new code-focused reasoning benchmark
* A fine-tuned 70B model, built with Meta Llama 3, to identify ambiguity
* A new dataset of 450,000 human judgments about ambiguity
* Infrastructure scripts for bringing a cluster from bare metal to robust, high-performance training
* Our cost-aware hyperparameter optimizer, CARBS, which automatically and systematically fine-tunes all hyperparameters to derive optimum performance for models of any size

As well as EXTREMELY detailed posts on the infrastructure needs, hyperparameter search, and clean versions of the sorry state of industry-standard benchmarks. This means for the FIRST TIME (perhaps since Meta's OPT-175B in 2022?) you have this level of educational detail into the hardware and ML nitty-gritty of training extremely large LLMs, and if you are in fact training LLMs of this scale you now have evals, optimizers, scripts, and human data/benchmarks you can use to move the industry forward together with Imbue.

We are busy running the sold-out AI Engineer World's Fair today, and so are unable to do our usual quality writeup; however, please enjoy our show notes and the excellent conversation!
Thanks also to Kanjun, Ashley, Tom and the rest of team Imbue for setting up this interview behind the scenes.

Video pod

Timestamps
* [00:00:00] Introduction and catch up with guests
* [00:01:55] Databricks' text to image model release
* [00:03:46] Details about the DBRX model
* [00:05:26] Imbue's infrastructure, evaluation, and hyperparameter optimizer releases
* [00:09:18] Challenges of training foundation models and getting infrastructure to work
* [00:12:03] Details of Imbue's cluster setup
* [00:18:53] Process of bringing machines online and common failures
* [00:22:52] Health checks and monitoring for the cluster
* [00:25:06] Typical timelines and team composition for setting up a cluster
* [00:27:24] Monitoring GPU utilization and performance
* [00:29:39] Open source tools and libraries used
* [00:32:33] Reproducibility and portability of cluster setup
* [00:35:57] Infrastructure changes needed for different model architectures
* [00:40:49] Imbue's focus on text-only models for coding and reasoning
* [00:42:26] CARBS hyperparameter tuner and cost-aware optimization
* [00:51:01] Emergence and CARBS
* [00:53:18] Evaluation datasets and reproducing them with high quality
* [00:58:40] Challenges of evaluating on more realistic tasks
* [01:06:01] Abstract reasoning benchmarks like ARC
* [01:10:13] Long context evaluation and needle-in-a-haystack tasks
* [01:13:50] Function calling and tool use evaluation
* [01:19:19] Imbue's future plans for coding and reasoning applications
* [01:20:14] Databricks' future plans for useful applications and upcoming blog posts

Transcript

SWYX [00:00:00]: Welcome to the Latent Space Podcast, another super special edition. Today, we have sort of like a two-header. Jonathan Frankle from Mosaic Databricks, or Databricks Mosaic, and Josh Albrecht from Imbue. Welcome.

JOSH [00:00:12]: Hey, glad to be here.

SWYX [00:00:14]: Thank you for having us. Hey, so both of you are kind of past guests.
Jonathan, you were actually one of the most popular episodes from last year, talking about MPT-7B. Remember the days when we trained large models and there was 7B?

JONATHAN [00:00:30]: Yeah, back when reproducing LLaMA-1-7B was considered a huge accomplishment for the field. Those are the good old days. I miss that.

SWYX [00:00:38]: As the things have accelerated a lot. Actually, let's do a quick catch up and Josh, you can chime on in as well. So Databricks got acquired. I talked to you at New York.

JONATHAN [00:00:45]: Mosaic got acquired, although sometimes it feels like Mosaic acquired Databricks because, you know, we're having a lot of fun being here. But, you know, yeah.

SWYX [00:00:52]: Yeah. I mean, you are chief scientist now of Databricks.

JONATHAN [00:00:55]: Chief AI scientist. Careful with the title. As much as I would love to understand how Spark works, I'm going to have to defer that to much smarter people than me.

SWYX [00:01:03]: Got it. And I don't know about like what you would highlight so far as a post-acquisition, but the most recent news is that you guys released DBRX. Is that the thing that most people should be aware of?

JONATHAN [00:01:13]: Actually, that's no longer the most recent news. Honestly, the most recent news, we announced this, but it was at our Data and AI Summit last week. So it was announced among like 100,000 other things, is that we finally released our text to image model, which has been a year in the making through a collaboration directly with Shutterstock. There was a lot of work put into finding a dataset that we were comfortable with working on and trying to build a model that honestly, I felt like I could trust and that others might be able to trust to put out in the world. So that model was released last week. It's unfortunately just available via API due to the fact that the data is quite sensitive and quite valuable.
It's Shutterstock's entire business in a lot of ways, but I'm still really excited that there's now a model that is trained on a dataset where the provenance of every single image is known, and it's a damn good model. So I'm really proud of the team on that.

SWYX [00:01:55]: Yeah, amazing. Josh, do you have any thoughts on image model questions?

JOSH [00:01:59]: That is not my area of expertise, but I was excited to see the release of it last week as well, and very happy that you guys did a nice job on the data side of everything there. So that was cool to see.

SWYX [00:02:09]: I think what's unusual is like, I think Shutterstock's doing multiple deals in multiple labs. So what is the Shutterstock model? Like, I guess, is this the house model for Shutterstock? Is this Databricks' version of the Shutterstock model? Like, what is this?

JONATHAN [00:02:22]: The way that I would think about it is that Shutterstock is doing an amazing business in AI across the board. Their dataset is kind of widely known to be the best stock photos dataset in the world, the most comprehensive, the biggest. When you think about like, what dataset am I going to train a multimodal model on? You call Shutterstock. And I, at least I've heard in the news, like OpenAI, Google, Meta, Apple have all called Shutterstock and made those deals. So a lot of models have had Shutterstock data incorporated into them. But this is the only model I know of so far where it was, you know, exclusively and specifically trained just on the vanilla Shutterstock data. There was nothing else mixed in. We didn't go and scrape the web and find other data or combined datasets or anything like that. And so this is, in some sense, the house blend. But the other piece is that it's just a dataset where the provenance of every image is known in public. Where did the data come from? It is the Shutterstock collection. That's it. You know, nothing less, nothing more.

And certainly being at Databricks, if I've learned one thing, I've learned about enterprise customers and what they want out of AI. And one of the things they ask for most is just, what can you tell me about the data the model was trained on? And here, especially for text to image models, where images are just tricky subject matter, there's been a lot of kind of legal conversation about images, especially. It's nice to just have something where I can point to it and say, you know, if you want to know where the images came from, these are what they are and this is how they got there.

SWYX [00:03:36]: I will talk a little bit about Databricks because it's relevant to the rest of today's episode. So Databricks, sorry, I keep misspeaking. It's DBRX.

JONATHAN [00:03:46]: DBRX, actually, there's been a pronunciation update. It is now D-B-Rex. So we have decided to add a dinosaur mascot because what model doesn't like a mascot? So literally, I wish I could pull it up. There is a little plush dinosaur that we had made. It's like the world's cutest dinosaur, but it is the official mascot of D-B-Rex. And there's a little dinosaur logo that, you know, you'll probably see around a little bit more because DBRX is a mouthful, but D-B-Rex, like, you know, it's just kind of...

SWYX [00:04:13]: Rolls off the tongue. I love mascots. Like every company should have a mascot. And I think Hugging Face got it right. You need an emoji mascot because that's the minimal viable image.

JONATHAN [00:04:21]: I probably shouldn't talk at all about, you know, Velociraptor, but, you know, that's a, maybe that's something we can talk about later in the summer. I'll just leave it at that.

SWYX [00:04:28]: Okay. That's a hint to names. I feel like your names leak a lot of alpha.
So just to quickly cover the headline details: DBRX, a Mixture of Experts model, that's fairly big, 132 billion total parameters, so 36 billion active on any input, pre-trained on 12 trillion tokens of text and code, and did really well on evals to the point where you had to dye your hair blue. That's my high-level conclusion.

JONATHAN [00:04:53]: Never make a bet with your team two weeks out from model launch, even when, you know, human eval is looking quite bad. Because if you set some bar, even if it's arbitrary and you think there's no way in hell they're going to hit it, apparently money doesn't motivate people anymore. Humiliating their boss motivates people. So Josh, you should really take a hint from this. You know, you cannot pay someone enough money to make up for you dyeing your hair blue.

JOSH [00:05:15]: I'll keep that in mind for our next model.

SWYX [00:05:17]: It works. So speaking of Imbue's next model, perhaps Josh, you want to actually just say hi to the general sort of latent space audience and talk about what we're releasing today. Yeah.

JOSH [00:05:26]: I'm Josh, CTO of Imbue, and we're not releasing the model. We're not releasing the weights, but we are releasing a bunch of different things that should make it easier for other people to make their own models. So I think right now, training foundation models from scratch is like a very difficult, time-consuming, expensive, kind of risky endeavor, especially for smaller companies. And the things that we're releasing hopefully make that at least a little bit easier. So the things that we're releasing fall into kind of three different buckets. One is infrastructure and scripts for dealing with the kind of hardware and hardware failures and understanding how well is the actually lowest level of thing actually working so that you can actually do your training at all and at a reasonable speed without having to constantly restart, etc. So infrastructure and training scripts.

A second set of things is around the evaluation. So after you've trained it, like how well is this actually working and how do you know how well it's working? We're releasing a whole bunch of different data there: a new benchmark about code, reasoning, understanding, as well as our own private versions of 11 different open source benchmarks. So things like BoolQ or ANLI, where we've gone through and kind of cleaned up the data as much as possible by looking at all the ones that models get wrong or that are flagged for ambiguity, and also our own kind of private reproductions of those where we've done like a kind of clean room black box, like, okay, this is what the data set is supposed to be. Here are some examples. Let's make our own version of this to make sure that there is no data contamination, etc. To make sure that we're actually, you know, not testing on train. And then I think a final thing that we're releasing there is around 450,000 human judgments about ambiguity and question quality, which we used in the process of cleaning these evaluations and we also hope will be helpful for other people training kind of similar models. And then the third thing is CARBS, our cost-aware hyperparameter optimizer, which was especially helpful for being able to experiment at much smaller scales and then scale those experiments up to the much larger scale kind of on the first try without having to retry it. You don't want to be training, you know, 10, 20 different 70B models. You really want to get these larger models

SWYX [00:07:30]: right on the first try.

JOSH [00:07:30]: And so the ability to kind of tune things very precisely and learn scaling laws, not just for, you know, the like data and flops, but also for learning rate and all the other hyperparameters, and see like how should you scale these things up was extremely valuable to us as we were training the larger models. Yeah, that's a lot of stuff.

SWYX [00:07:49]: Yeah, exactly.
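The cost-aware search idea Josh describes — spend most of the budget on many cheap small-scale runs and only promote promising settings to larger scales — can be sketched roughly as follows. This is a toy illustration, not CARBS itself: the loss surface, the cost model, and the doubling promotion rule are all invented for the example.

```python
import random

# Toy stand-in for "train a model at this scale and measure loss".
# The loss surface and cost model here are invented purely for
# illustration; a real run would launch a training job.
def evaluate(lr, scale):
    optimal_lr = 0.003 * scale            # pretend the best LR grows with scale
    loss = (lr - optimal_lr) ** 2 / scale + 1.0 / scale
    cost = float(scale)                   # pretend cost is proportional to scale
    return loss, cost

def cost_aware_search(budget=100.0, seed=0):
    """Sample hyperparameters at the cheapest scale first; only move to a
    larger scale after finding an improvement at the current one."""
    rng = random.Random(seed)
    spent = 0.0
    best = None                           # (loss, lr, scale)
    scale = 1
    while spent < budget:
        lr = 10 ** rng.uniform(-4, -1)    # log-uniform learning-rate sample
        loss, cost = evaluate(lr, scale)
        spent += cost
        if best is None or loss < best[0]:
            best = (loss, lr, scale)
            scale = min(scale * 2, 64)    # promote the search to a bigger scale
    return best

best_loss, best_lr, best_scale = cost_aware_search()
```

The point Josh makes about 70B runs maps onto the budget here: almost all of it is consumed at small `scale`, so by the time the search reaches the largest scale it has already narrowed the hyperparameter region, which is why you can hope to get the big run "right on the first try."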
So there's a bunch of stuff

JOSH [00:07:50]: we'll have to go through all of it.

JONATHAN [00:07:52]: Yeah, I just want to throw in how excited I am about this. This is the stuff that nobody ever talks about. That is the difference between success and failure in this stuff. Like, can you get your cluster to run? Can you get software on your cluster? Can you figure out what broke? Because fault tolerance is still not really built into any of the fundamental primitives of training models. And so if something breaks, you have to go figure out what broke, your job stops, you have to restart your job. It is a nightmare just to get to the point where anything can train on the cluster. A basic MPI hello world that has the GPUs talk to each other is hard enough, let alone actually training a model, let alone getting good performance out of the GPUs, let alone actually getting a model that converges to anything interesting. There's so many levels of things you have to accomplish. This is the kind of stuff that matters. I think to a point that Josh made earlier, before we got on here, there are plenty of weights out there. Nobody's released this.

JOSH [00:08:46]: Yeah, that was part of the motivation actually, is that there are lots of other things that are complementary, but I have not seen nearly as much discussion about some of these other things that we think are pretty important. I mean, in some sense,

SWYX [00:08:56]: I'm very excited to have Jonathan on, because this is a little bit your bread and butter with Mosaic. And I think you've released some of this with Composer. And I think it's just really interesting to see like a different take, basically a full stack take that's kind of open source today.

JONATHAN [00:09:18]: Yeah, it's really kind of, it's been an ordeal to figure this out. And every time something changes, whether it's a new GPU or even a new driver update, you get new creative errors and new things go wrong.
And, you know, we've dealt with the weirdest things from, you know, our InfiniBand cables getting stolen from the data center twice, like in boxes before they arrived at the data center. Like, you know, Porch Pirate basically had stolen our InfiniBand cables back when those were hard to come by. To like, you know, weird recalls of switches to like the strangest stuff has happened. I have my favorite GPU failures I've seen, like ones where the GPU doesn't fail, it has a correctable memory issue and the memory correction causes the GPU to become a straggler and hold up the whole job. Like weird stuff happens and figuring out how to not just identify all of that, but then eventually productize it, is in some sense, the entire story of Mosaic and now Databricks in terms of our ML offering. Really, the thing we offer is we have gone through this suffering and figured out how to even productize that. It has been a pain in the butt.SWYX [00:10:20]: Yeah, it's a lot of work.JOSH [00:10:20]: I think my favorite failure was GPU is just giving wrong math. Like if they give errors, great, because you can see the errors, but if they just give you the wrong math back, not so fun.SWYX [00:10:30]: When did they give you wrong math?JOSH [00:10:32]: Like literally you could just, you know, add two things. For example, the numbers come back. They're not the numbers that they're supposed to be.JONATHAN [00:10:40]: I think it's important to say at this stage, just because like it, I think it goes without saying for Josh and I, but it's worth saying here, this isn't to say that like anything is wrong with us. It's not like NVIDIA did a bad job or, you know, Mellanox did a bad job or the like the server builder, the data center operator, the cloud provider, like the million other parties that are involved in building this. 
We are running these insane chips that are huge and complicated and built on tiny transistors at insane frequencies with insane heat, in data centers that for the most part were not built remotely for this kind of power or heat and have been retrofitted for this. Failures happen on a good day with normal CPUs. And this is not a good day and not a normal CPU for the most part. It's fun to joke about all the weird things we see. This is not to say anybody's done anything wrong. This is just kind of part and parcel of working on a massive cluster running at multiple megawatts of power at a time.

SWYX [00:11:32]: It's crazy. Yeah.

JONATHAN [00:11:33]: So optical cables, like all sorts, like everything.

SWYX [00:11:37]: I'll take the opportunity to start going to the sort of infra piece. There's just like a description of the infra just to give people a sense of what we talk about when we talk about massive clusters. So I'm just going to read off the blog post here. This post is about one cluster that has 4,092 H100 GPUs spread across 511 computers. They use unified fabric manager nodes, which manage the InfiniBand network. And you talk a little bit about your networking. Is there anything unusual about this setup that you'll call out to people?

JOSH [00:12:03]: Yeah, actually this particular cluster is a little bit non-standard. The normal, like vanilla setup for these large clusters, as vanilla as it can be, is what's normally like a 127 node cluster. So closer to like 1024 GPUs instead of 4,000. Here we have a larger cluster. As you start to get into the larger clusters, the networking becomes a little bit more custom. It's a little bit trickier. It's a little bit more difficult to get these things to all be able to talk to each other at the same speed. And so in this particular case, this is a three tier network architecture instead of two tiers, kind of the normal one. So most of the clusters are a little bit smaller.
As you get to even larger scales, then this becomes even much more complicated,

SWYX [00:12:43]: much more expensive.

JOSH [00:12:43]: So we chose this particular scale kind of knowing our own workloads and what we wanted to do. This was kind of the right size for us. But yeah, I think it's not exactly vanilla already. It's already getting into kind of the custom territory.

SWYX [00:12:54]: So my understanding is that there, and is there any part of this that comes with the Voltage Park deal that you guys had? Is that part of the hardware that you got from the deal with them?

JOSH [00:13:04]: Yeah, so we worked really closely with Voltage Park to set up all their clusters and infrastructure and everything, and kind of decide even what to order, how should the networking work. Like we were very involved in kind of the construction and bring up of this. And that's what this post is about, is about that process of bringing up all these, there's like different clusters in different places of different scales. So in this particular post, we're talking about this one 4096 GPU cluster, but there are other clusters that they have as well. And we were very closely involved with figuring out the exact architecture and kind of the trade-offs that go along with picking, you know, those exact components. You really don't want to place the wrong order, because it takes months to get it and it's very expensive. So yeah, we were happy to help out with that.

JONATHAN [00:13:43]: And then your InfiniBand cables get stolen.

SWYX [00:13:44]: Yeah, yeah, exactly.

JOSH [00:13:47]: We wanted to make sure that we ended up with compute that would work for us and that would also work for their other customers. And so we kind of helped design something so that we would get exactly what we were looking for.
We knew that these kinds of details would be super important and that getting down to the level of the hardware and like having these good scripts and everything was going to be a core part of like actually getting this to work. I'm very glad that we did that. I don't think that most companies kind of take that full stack approach, but for us, it certainly paid off.SWYX [00:14:12]: Yeah, it's basically sort of built to spec. It's interesting that relationship because you usually, for the rest of us who don't operate at your scale, we take whatever we can get from cloud providers, but you are basically co-designing from the single machine up. And you described that a little bit. Do you want to take us through the process that you described here?JOSH [00:14:27]: Yeah, so for the actual, like the blog post and kind of bringing these machines online.SWYX [00:14:32]: Yeah.JOSH [00:14:32]: So yeah, I think the process, as we have it broken down in the blog post, there's kind of a few different layers. First is like getting the individual machines to work at all and then getting the machines to actually be able to talk to each other. So getting the InfiniBand networking to work and then getting to a point where, you know, not just the machines are working and they can talk to each other, but everything is actually working correctly. There's a big gap between like it's working at all to it's working perfectly correctly. And then after you have all this stuff working perfectly correctly, nice and healthy, then now you get into kind of the software data, like training issues. And then after that, you're still not done. Like now, even once you're training at full speed, things are going to fail over time. Things are going to change. There's going to be new, you know, firmware updates. 
Like how do you kind of deal with this change and flux over time without going crazySWYX [00:15:16]: and pulling your hair out,JOSH [00:15:16]: trying to like reproduce things or understand why there were regressions. And so there's a lot of work to kind of automate the infrastructure tooling as well. And kind of the first step, like bringing these things online in the first place, you know, you have hundreds of machines at this point. So you don't necessarily want to be like walking around with like a CD-ROM or a USB drive, like plugging it in with your keyboard, like hitting next, next, next on the OS install. That's not how this works. You do that for one machine. And then you use, we use this thing called Metal as a Service to bring up all the other machines. So it's a kind of server that can kind of install the operating system on these other machines. So most like when you're talking about these machines, like each machine is, you know, on the order of hundreds of thousands of dollars. So they usually come with a kind of out-of-band management interface as well. So they don't, they have their InfiniBand networking. They have their normal 100 gigabit per second Ethernet networking. These are like dual, redundant, et cetera. And then you also have this extra out-of-band management network. So you can log in and you can see like the boot screen or you can see the blue screen of death. You can like get in there and actually see what was wrong, which is pretty fun. And it makes it like possible to automate a lot of this work. So the beginning of that, and the blog post goes into much more detail about like exactly how we set these up and kind of the other errors that we ran into. When you're bringing these online, you'll definitely have failures. Even if they all worked in the factory, they get shipped, some parts come loose, something fails, something goes wrong. So when you're bringing them online, there'll be some that don't quite work for all sorts of reasons. 
As you start to be working with machines at this scale, like if something happens one in a thousand times, you're like pretty likely to see it. And so you can get pretty rare, weird things, especially since we had fairly early builds and fairly early versions of this hardware. Like these are some of the like first machines that were ever produced, some of the first GPUs. So you've got some extra special things there. We definitely worked with Dell, for example, on making fixes in the firmware level to be like, okay, like this thing is wrong. Like we need to update this at the firmware to like actually fix this particular thing. So we worked pretty closely with Dell and Nvidia. Yeah, that's what I'm saying. Like this stuff gets complicated. And the thing is like, you know, taking a step back, the whole reason we're doing this, right, is that we knew that this was going to be complicated. There would be these kinds of failures. And if we're just using, you know, AWS or some other cloud provider, these errors are still gonna be there and you're gonna have no way to know and no way to debug this and no way to diagnose what's going wrong. And so we would much rather be able to like call up Dell and say, hey, this isn't working. And they're like, yep, okay, cool. Let's debug it together. Oh, I see. Yeah, cool. We'll ship a firmware update and actually fix this for you. That was a much better experience than like, great, just magically fails. I guess we restart and hope that that machine goes away. Like that's not a very good place to be. So yeah, that's kind of the first place is getting to a place where like GPU training is working on your single node machines. You can observe stuff. 
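As an aside, the silent "wrong math" GPU failures mentioned earlier are typically caught with deterministic reference computations: run a fixed workload on every device and compare against a known-good answer. A framework-agnostic toy sketch, where plain Python functions stand in for per-GPU kernels (on a real cluster each would be a GPU kernel launch, and the comparison would use a tolerance for floating point):

```python
# Toy sketch of a deterministic "wrong math" check: run the same fixed
# computation through each device's compute function and compare against
# a reference computed once. The "devices" here are stand-in functions.

def reference_matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def check_devices(devices, a, b):
    """Return names of devices whose result differs from the reference."""
    expected = reference_matmul(a, b)
    return [name for name, matmul in devices.items()
            if matmul(a, b) != expected]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]

healthy = reference_matmul

def flaky(a, b):
    # simulates a GPU with a silent bit error in its output
    out = reference_matmul(a, b)
    out[0][0] += 1
    return out

print(check_devices({"gpu0": healthy, "gpu1": flaky}, a, b))  # -> ['gpu1']
```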
We have tons of tooling around like, you know, Prometheus and all sorts of other tools for understanding what's going on in these machines, because you don't want to be logging into each one and looking at the temperature or something. You really need to have tooling to collect all these metrics, et cetera. Unfortunately, all of the scripts that we have for this entire cluster and for all this infrastructure are a little bit special purpose for our particular thing. So it's not that you can just take this and plug it in. Even if we did open source all the tooling that we have, you'd still have to do a lot of work to adapt it. What we are releasing is as many of the things as we can that are going to be useful for other people. You're still going to have to have some way of managing these things, making your own logging aggregators, et cetera, et cetera. So that's kind of bringing them up to the, like, you know, the single nodes that are working. From there, it goes into, I'm happy to keep going if you want. Well, I just want to leave the opportunity for John

SWYX [00:18:53]: to comment if there's anything that's different from how he runs things.

JONATHAN [00:18:57]: Oh, I mean, all I'll say is I'll endorse this and say this s**t is hard. Like this is really, really hard. And, you know, special props to the folks at Imbue, because they were building this from the ground up. You know, at Databricks and at Mosaic, we typically work with cloud providers, because some of this stuff is just, there's too much to handle. It's complicated. There's a lot to deal with. And this doesn't even get into things like physical security, you know, securing power if you're the data center operator. Like this gets infinitely complicated and you have to abstract somewhere.
Like, you know, and then you get to the folks who are literally building their own custom chips and like, good God.SWYX [00:19:36]: Like, oh my God, that's, you know,JONATHAN [00:19:38]: if you're one of those folks, you're having, you know, pour one out for the infra people at some of the AI chip startups who are having a really, really interesting time right now. But this stuff is really hard. And I don't think we talk about it much because there's so many other things that are hard. But the other hard things, I think everybody's becoming pretty familiar with at this point. This is something that I don't think there's ever really been a comprehensive discussion of, at least not that I've seen.SWYX [00:20:00]: Yeah, so my impression is that you guys, Mosaic, have your own software for sort of spinning up and down machines, just like Imbue had to build. But Imbue probably, it sounds like Imbue, you guys went fuller stack. I don't know how to describe it. Like Mosaic is not working with Dell on like their firmware.JONATHAN [00:20:21]: No, no, we're typically working with like, you know, pick your cloud provider on their Dell firmware or what have you. Like, it's kind of, I think one of the things, I don't know, Josh, you can correct me on this. It's kind of impossible if you're doing training to not go all the way through the entire stack, regardless of what happens. Like somehow I'm still chatting with cloud providers about power contracts, even though the whole point of dealing with the cloud provider is not to have to think about power contracts. Somehow I'm still asking them about which InfiniBand provider they used this time to see if this is part of the bad batch of cables I encountered on that cloud provider or what have you. Or like, we're still talking about a firmware update from pick your provider. You can't not do this. 
It's convenient that they have data center staff who are worrying about what to send back to which provider when, and they have people who can go and wait for the InfiniBand cables so they don't get stolen outside. But, you know, it's kind of, it's impossible not to really go full stack if you're thinking about the infrastructure at all. I don't know, Josh, correct me. No, I think that's right.JOSH [00:21:17]: That's what we expected from the beginning as well, is that we would inevitably have to get into the details here. And I'm glad that we kind of just planned for it. I think it made it a lot easier from our perspective to have direct control over this. Instead of having to go to the cloud provider that goes to the data center, that goes to the supplier, we could just go direct to NVIDIA or DellSWYX [00:21:37]: or the data center,JOSH [00:21:37]: whoever was responsible and be like, hey, this thing needs to change. And they're like, oh, okay. Yeah, that is our responsibility. Great, we can fix that. So it was just a lot easier for us to fix these bugs than if we had to go through an extra layer of email.SWYX [00:21:48]: Something we discussed in the pre-show was that you had a rule of thumb for your cluster of reliability. You say here in the post, by and large, you expect around 3% of your machines to break every week. So you're basically going to turn through all your machines in a year.JOSH [00:22:04]: As it says in the post. So that would be true if it was a uniform failure like that. But as it says in the post, it's usually these kind of problematic nodes. And to be clear, that is the number that we've heard from other people is like they're having about 3%. I don't think we're experiencing failure rates that are that high. 
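For scale, taking the 3%-per-week rule of thumb at face value (the heard-from-others figure, not Imbue's observed rate) gives a sense of what "turning through all your machines in a year" means:

```python
# Back-of-envelope on "3% of your machines break every week": what does a
# uniform, independent weekly failure rate imply over a year for a fleet
# the size of the cluster discussed above?

def expected_failures(n_machines: int, weekly_rate: float, weeks: int) -> float:
    """Expected number of failure events (machines can fail repeatedly)."""
    return n_machines * weekly_rate * weeks

def never_fails_fraction(weekly_rate: float, weeks: int) -> float:
    """Fraction of machines expected to go the whole period unscathed."""
    return (1.0 - weekly_rate) ** weeks

fleet = 511  # node count from the cluster described earlier
print(expected_failures(fleet, 0.03, 52))  # ~797 failure events in a year
print(never_fails_fraction(0.03, 52))      # ~0.20: only ~1 in 5 nodes unscathed
```

Which is exactly why Josh's point matters: in practice failures cluster on a few problematic nodes rather than being uniform, so root-causing those nodes beats treating the 3% as inevitable.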
I think ours is actually quite a bit lower than that, probably because we've taken the time to dig into a large, maybe larger number than we should have, of these failures and get to the root cause of it and be like, oh, okay, that's exactly what's going wrong.

SWYX [00:22:33]: How do we fix this?

JOSH [00:22:33]: How do we prevent this from happening? How do we make automated checks for this so that if it does happen, it just goes back to whoever owns that particular part of the process and they can fix it immediately.

SWYX [00:22:43]: And that's part of what you're also open sourcing, which is the health checks, right? You got the NIC health checks, GPU health check, disk space health check, Docker, dmesg. I don't know what that is.

JOSH [00:22:52]: That one is just a lot of stuff.

SWYX [00:22:54]: Yeah.

JOSH [00:22:55]: That one is one where we realized that actually, when these machines boot, sometimes they wouldn't boot cleanly all the way. Or when they rebooted, they had problems that they didn't have when they were working before, which was kind of frustrating. Like usually if you restart your computer,

SWYX [00:23:08]: it gets better.

JOSH [00:23:08]: Here you restart. It did not get better.

SWYX [00:23:10]: It got worse.

JOSH [00:23:10]: That was very frustrating. So this health check looks at every particular line we've ever seen from the boot, like in dmesg, like every single log line that your computer emits

SWYX [00:23:21]: and says like,

JOSH [00:23:21]: have we ever seen this before?

SWYX [00:23:23]: Is this expected?

JOSH [00:23:23]: Is this in the right order? Or is there something out of place? If there's anything out of place, okay, great, now it goes into this longer, more triage-oriented list of, all right, is this acceptable?

SWYX [00:23:33]: Should we flag this?

JOSH [00:23:33]: Like, should someone take a look at this?
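The boot-log check Josh describes can be sketched as an allowlist diff over dmesg lines: anything that doesn't match a pattern seen on known-healthy boots gets flagged for triage. A toy illustration (the patterns and log lines below are invented examples, not Imbue's released check):

```python
# Minimal sketch of a dmesg-style boot-log health check: compare every
# boot log line against patterns observed on known-good boots, and flag
# anything novel for human triage instead of silently ignoring it.
import re

KNOWN_GOOD = [                 # hypothetical example patterns
    r"^Linux version ",
    r"^Command line: ",
    r"NVRM: loading",
    r"mlx5_core .* firmware",
]

def unexpected_lines(dmesg_lines, known_patterns=KNOWN_GOOD):
    compiled = [re.compile(p) for p in known_patterns]
    return [line for line in dmesg_lines
            if not any(rx.search(line) for rx in compiled)]

boot_log = [
    "Linux version 5.15.0 (build@host)",
    "NVRM: loading NVIDIA UNIX x86_64 Kernel Module",
    "XID 79: GPU has fallen off the bus",   # novel line -> should be flagged
]
print(unexpected_lines(boot_log))  # -> ['XID 79: GPU has fallen off the bus']
```

A production version would also care about ordering and frequency of lines, as Josh notes, but the allowlist-plus-triage-queue shape is the core idea.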
So we're looking down at a very, very granular detail level, what's happening on these computers to make sure that nothing is out of place. And that's critical because without that, if you're running your training, as Jonathan said, and this thing is slow, like what are you supposed to do? Right?SWYX [00:23:49]: Like you really,JOSH [00:23:49]: you really want to be very certain that like all 4,000 of these GPUs are working like they're supposed to.SWYX [00:23:54]: We know that.JOSH [00:23:54]: And so if it's slow, it's because like we messed up the config or something else and not because of this earlier thing that's like really hard to detect in software later.JONATHAN [00:24:01]: Yeah. I think the, I'm just curious to ask,SWYX [00:24:03]: like, you know,JONATHAN [00:24:03]: suppose you were to set up another, let's say another H100 cluster and it were at a different data center. And instead of the vendor being Dell, it was super micro or what have you. How much of this would be repeatable? And how much of this would you have to redo? I, you know, I genuinely don't know.SWYX [00:24:18]: A decent amount.JOSH [00:24:19]: I think it would go a lot faster the second time. I think there's lots of learnings that we had. And also the blog post,SWYX [00:24:24]: you know, yes,JOSH [00:24:24]: we are releasing the health checks, releasing some scripts, but a lot of the valuable stuff is also in the blog post itself, in the details and kind of the, you know, the learnings that we've had and the sort of errors that we run into. We tried to as much as possible surface those to other peopleSWYX [00:24:36]: could learn from thoseJOSH [00:24:36]: and avoid the same mistakes or failures as well. But I think it would go a lot faster.SWYX [00:24:41]: Although, yes,JOSH [00:24:41]: there would certainly be some things that'd be a little bit different. 
I mean, there'd probably be different CPUs

SWYX [00:24:46]: or whatever,

JOSH [00:24:46]: but I think a lot of that stuff is less,

SWYX [00:24:49]: it's less,

JOSH [00:24:49]: that's less variable. I think most of it would apply the second time around. Although I'm sure next time

SWYX [00:24:56]: we're building one,

JOSH [00:24:56]: it'll probably be, you know, at a scale that's 10x as big with a different chip or something like this.

SWYX [00:25:00]: And then who knows?

JOSH [00:25:01]: Yeah, with ConnectX-8,

JONATHAN [00:25:02]: that will have its own fun behavior and all that good stuff. Yeah.

SWYX [00:25:06]: Perhaps there's something that people don't discuss about, and you don't even talk about this in the blog, but I always wonder: what is the timeline that's kind of reasonable for this amount of work, at least the initial stages? And also, what does the team composition look like for setting up a cluster? Like, what are the mix of skills that you typically would require to get all this going?

JOSH [00:25:27]: I can't really speak to typical. One thing I am very proud of is how much we accomplished with such a ridiculously small team. Our infrastructure team fluctuates from week to week, depending on how many things are on fire and how much we need to build. But it's like between three and six people. It's small. It's not like some huge team of tons and tons of engineers. But those people are very, very good at what they do. And so that has allowed us to get a lot of mileage out of these things. I think it's not that we're building everything, right? It's not that three to six people build this whole thing. I definitely want to say thanks very much to Dell and H5 and NVIDIA and the other people that have done a lot of the work to bring up this cluster. You know, with 4000 GPUs and a three tier networking architecture, you have 12,000 cables.
So that's 24,000 things that need to be plugged in. Like that's just a lot of stuff to plug in, right? And you don't want to mess it up. Like each one needs to be done correctly. Like it's a little bit loose. Like it doesn't really work.SWYX [00:26:23]: If you break it,JOSH [00:26:23]: you need to replace it. Like there's a lot of workSWYX [00:26:26]: that goes into this.JOSH [00:26:27]: Yeah.SWYX [00:26:28]: And then, you know,JOSH [00:26:28]: that's just like that's it. That's if you were to do everything right the first time.SWYX [00:26:32]: And if you didn'tJOSH [00:26:32]: have to fix anything. But inevitably, you know, you will have to replace something, which means like taking all the wires out, pulling the thing out, taking all the GPUs out, going and fixing some cable, putting it all back correctly, putting it back in, doing this every time. So there were a lot of people at Dell, NVIDIA and at H5 that all helped a ton with this stuff. I don't know the exact size of the Dell team. It also fluctuated over time.SWYX [00:26:55]: Yeah, excellent. And then, you know, you so you have all the hardware set up and now you're firing it up for a single node. There's a long description that you guys have about just like monitoring the MFU, right? And what each situation might look might be indicative of. One of the most interesting things to me that I saw from here is like, you know, if training immediately starts off at 60 to 80% MFU, something's wrong.SWYX [00:27:24]: But like, you know, like what what are like, you know, some anecdotes or, you know, notable scenarios here that you might you might call out as maybe counterintuitive or super interesting.JOSH [00:27:36]: There's just so many of them. I mean, one of them, which I think is probably pretty common, like common knowledge by this point. But like we did have a sort of likeSWYX [00:27:46]: which one was this exactly?JOSH [00:27:47]: I think for the MFU, like gradually getting worse over time. 
I think that one, when we saw that the first time we were like, what the heck is going on? Like, why does it get just like a little bit worse? This is so strange. Like, what is it getting lazy or tired or something? Like, is it heat? Like what's going on? And in this particular case, it was memory fragmentation. Because you have hundreds of machines, they're doing garbage collection slightly different times. And then they get slightly further apart and slightly more and more jittered until eventually they're all happening kind of at random times. And just like really messing up each one of your steps. So you just turn off garbage collection and call it a day, basically,SWYX [00:28:20]: to be honest.JOSH [00:28:20]: There's other things you can do if you want to be a little bit more sophisticated about it. But you can also just manuallyJONATHAN [00:28:25]: have it all garbage collect on some interval. Like that's what we've done. We just have a garbage collection callback that just runs. But I've seen the exact same thing.JOSH [00:28:33]: Yeah, yeah, exactly. So I thought that one was kind of funny. And we did trace that one down and look and we did find the actual call. Like, again, this goes to like having good tools. So we had really good tools where we could look at a bunch of like actual traces in C and be like, OK, cool. This is the thing that's taking a lot of time. Or like, you know, this is the thing that doesn't quite line up here. Like, oh, I guess it's garbage collection. OK, cool.SWYX [00:28:52]: Interesting.JOSH [00:28:52]: Yeah, let's just try taking it off.SWYX [00:28:54]: OK, great.JOSH [00:28:54]: That's what it was. Now we can fix it. So for each of them, like basically bugs are not hard if you have good tools. But if you don't have good tools, bugs can be very, very hard. So similarly for like heat, another thing that we saw was like, oh, you know, the CPU is getting throttled. 
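The garbage-collection fix described above (turn off automatic GC, then collect on a fixed step interval so every rank pauses at the same point instead of at random times) can be sketched in a few lines. The interval below is an assumed value, not one from the conversation:

```python
# Sketch of the cross-rank GC jitter fix: disable Python's automatic
# (and cross-machine unsynchronized) garbage collector, then collect
# manually on a fixed step interval so all ranks pause together.
import gc

GC_EVERY_N_STEPS = 100  # assumed interval; tune for your workload

def training_loop(num_steps, step_fn):
    gc.disable()  # stop background collections from jittering step times
    try:
        for step in range(num_steps):
            step_fn(step)
            if step % GC_EVERY_N_STEPS == 0:
                gc.collect()  # every rank collects at the same step
    finally:
        gc.enable()

steps_run = []
training_loop(5, steps_run.append)
print(steps_run)  # -> [0, 1, 2, 3, 4]
```

This mirrors Jonathan's "garbage collection callback that just runs" approach; the more sophisticated alternatives Josh alludes to (tuning GC thresholds, freezing long-lived objects) trade simplicity for fewer pauses.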
OK, well, it's easy to see if you're monitoring the CPU throttling or monitoring the heat. If you're not monitoring that, it's really hard to know why one of them is just suddenly going slower.

SWYX [00:29:17]: I noticed also in the piece that you mentioned FSDP with ZeRO-3. Actually, we met, I went to ICLR and Guanhua from the DeepSpeed team was there presenting ZeRO++. I was wondering if you want to make any call outs to, you know, particular open source or open library or open whatever implementation teams that were super helpful in your process.

JOSH [00:29:39]: I think we ended up actually pulling from a whole bunch of different ones to pull things into our own particular pipeline. So we use things from NVIDIA's, you know, Megatron stuff. We use stuff from probably DeepSpeed. I think we pulled in a bunch of different pieces from a bunch of different places. So it was really nice to see all these working open source examples. I think I really appreciate all the effort that has gone into actually tuning these things, because you can tune them, but it's a lot of work to tune this stuff and do all this stuff from scratch. It's really nice to have a working example. I think those are probably the two biggest ones, DeepSpeed and Megatron alone, but there are probably other ones as well.

SWYX [00:30:13]: Is there a particular thing in the ecosystem where you would call out as like, you know, there should be something here that is open source, but it's not really, it's like everyone kind of builds it on their own? I want to say something with the file system, because everyone talks about the file system eventually.

JOSH [00:30:28]: The file system actually was,

SWYX [00:30:30]: I mean, we did something

JOSH [00:30:31]: kind of dumb there.
Like we have our own sort of local mirror so that we can, you know, like a crappy version of S3SWYX [00:30:38]: that's local,JOSH [00:30:38]: but it's just a pretty simple script, right?SWYX [00:30:41]: Like I think we run likeJOSH [00:30:41]: a little web server that just like serves files and then, you know, it can upload themSWYX [00:30:45]: and download them.JOSH [00:30:45]: Okay, great. And part of the reason we did that is that our internet connectionSWYX [00:30:50]: in the beginningJOSH [00:30:50]: was not the like full speedSWYX [00:30:52]: one that we wouldJOSH [00:30:52]: eventually have. And so we are a little bit more kind of bottlenecked in terms of internet bandwidth. And so we had this. I think we looked at a bunch of services out there like Minio and some other ones, but a lot of these like come with a lot of extra overhead and maintenance. And since we already have so much infrastructureSWYX [00:31:09]: to deal with,JOSH [00:31:09]: we kind of didn't want to, you know, bring in a whole other like cloud provider, virtualize something, something.SWYX [00:31:14]: We just wanted something simple.JOSH [00:31:14]: So we went with that, which has been quite helpful. Like our toolsSWYX [00:31:19]: are usually quite simple.JOSH [00:31:19]: It's like Bash and Python and SSH and Docker. Like we'd like to keep things simple so that's easier to debug, like less layers of infrastructure, less layers of abstraction, make it a lot easier to work with. Like we don't use Kubernetes,SWYX [00:31:30]: for example,JOSH [00:31:30]: and we just directly launch these things. And it's just been much easier to debug this way. One tool actually that does come into mind that I will call out is Kraken from Uber. That was great. We love that tool. We were a little bit skeptical. What is it?SWYX [00:31:44]: I'm sorry. 
Yeah.JOSH [00:31:45]: So Kraken is this, yeah, it's a distributed like Docker registry, basically, that uses BitTorrent to like transfer things between the machines in a sort of nice optimal way. Like in the very beginning, the naive way is like you have this one Docker registry, which was outside of the cluster. So every time we change an image, you know, there's many gigabytes that each of the 500 machines needs to download.SWYX [00:32:07]: So that just takesJOSH [00:32:07]: a really long time. So what this thing does is like just one of them downloads it and then like they all sort of broadcast all the pieces to each other. And it was just like a really nice, fast way of getting these images down. And it was very robust.SWYX [00:32:19]: Like there's a lotJOSH [00:32:19]: going on under the hood, but I think it's a pretty cool tool that we haven't really had any bugs with it at all. Amazing.SWYX [00:32:26]: Yeah. I mean, that's all my questions, I guess, for the info piece. I don't know if, John, you had something that you were sort of burning to ask or.JONATHAN [00:32:33]: No, all I can say is just sameSWYX [00:32:36]: in a lot of places, like, you know, and they're done thatJONATHAN [00:32:38]: seeing this plus one. I think the one big difference, you know, perhaps in philosophies is we've tried to basically standardize on as much commodity stuff as possible, just because, you know, I think the reason I asked about trying to do thisSWYX [00:32:50]: on multiple differentJONATHAN [00:32:50]: pieces of infrastructure is like, I think we're running on like six or seven different clouds right now. And everybody has done something slightly different. And my gosh, the little differences add up as you know, you've seen. And so, you know,SWYX [00:33:04]: our philosophy has been like, whatever the hellJONATHAN [00:33:05]: we can standardize, please let's standardize it. 
Like vanilla off-the-shelf FSDP. And like, you know, we wrote our own data loader, but we've tried to make that as much of a standard as we can across our infrastructure and in Databricks, because things just start getting really complicated. Or like, we use Kubernetes extensively because it at least gives us a uniform set of APIs. Like that's our hardware abstraction layer to a certain extent for everything else. So it's just, you know, a difference in philosophy there. But otherwise, like, yeah, this stuff is really, really hard. And I feel like we take for granted how much of this, you know, is done for us when you go and you just query ChatGPT, for example. Like, oh my God, everything going on underneath that. You know, it's kind of a miracle that the machines boot up, let alone that you can like query a giant language model that's probably doing inference across multiple machines and was trained across thousands of machines. Like, you know, minor miracle.

SWYX [00:33:54]: Yeah, it is an awesome amount of power that we invoke with a single API call that we take for granted these days. It's absurd. Yeah, I mean, that point about Kubernetes, I will say as a former AWS employee, like it seems like it would be ideal for Imbue to at some point make it more abstracted or agnostic, because you're going to want to, you know, replicate your setup.

JOSH [00:34:19]: We do have our own sort of replacement. It's just a much simpler version of Kubernetes. Kubernetes is really designed for running services, not for running experiments. Like that's not its like main architecture. And so for us, like we have everything that's like: cool, you're going to run an experiment, so you want it to run to completion, right? OK, great. Like the primitives are sort of built around a slightly different style.
And that makes it a lot easier, like just a lot simpler, to fit the nature of: these machines are going to disappear. They will need to be rebooted for infrastructure upgrades. They will, like, something will happen to the GPUs. Failure is like baked into this as like a core part of our infrastructure. So it's not that we don't have an abstraction; it's that it's a sort of simpler, more tailored abstraction for the particular work that we're doing.

JONATHAN [00:34:58]: Yeah, I think it all depends on what your goals are. And like, I think the challenge in a lot of the deep learning stuff right now is that people often build things that are more complicated than necessary to get the job done. And the complication is the enemy of everything. You know, don't use a fancier parallelism strategy than you have to. Don't use a fancier set of libraries than you have to. Don't do anything that you don't have to do, because it's hard enough as it is. Like, don't overcomplicate your own life. Don't try to bring in more tools or more fancy architecture tweaks if you absolutely don't have to. Like, get to the minimum necessary to get the job done. And it's really tempting to want to try to use everything. So like, I totally understand that one.

SWYX [00:35:37]: I think the last piece I'll maybe call out, I'm just going to weave this in just because I see the opportunity to do it: are there any infrastructure shifts that need to arise because of changing architecture? So I think, for example, you're announcing a dense model, a 70B dense model, whereas John just worked on DBRX and the image-to-text model, which presumably has different bottlenecks.

JONATHAN [00:36:10]: That's correct for us. You know, we train both dense and mixture-of-experts models.
The one we happened to, you know, kind of get permission to open source was a mixture-of-experts model. And those models are very demanding when it comes to network bandwidth, at least if you're training them in kind of FSDP ZeRO-3 style, where there's just a lot of parameters getting shuffled back and forth. And your ratio of kind of compute to amount of data that you have to shuffle back and forth becomes a lot worse, because you're now, you know, you're only using a fraction of the parameters for every token instead of all the parameters. And so we had to really push the envelope on getting all the stuff to the right places on time. And so actually the networking part of DBRX was the single hardest thing, I think, of the entire process: just getting MoE training working at scale across a big cluster. We still managed to, I think, do it all with commodity parts, which was very exciting. You know, we were using FSDP, and we eventually used HSDP, a version of FSDP where you have multiple smaller replicas and you're doing data parallel within those replicas. And that helped a lot with network latency issues that we were running into, just because we were transmitting so much data, you know, for every single part of the process. I think it actually, like, it was instructive for how Google designs their hardware and software together, personally. They're training, as far as I understand, using kind of a ZeRO-3 style of training and have been for a while. They also train mixture-of-experts models. TPUs have a very different network bandwidth to compute ratio. They have a lot more bandwidth, just objectively. And TPUs per chip tend to be a little bit less compute intensive and have a little bit less memory. You know, it's just a different design choice. So the ratio of flops to bandwidth is very different.
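The compute-to-communication tradeoff Jonathan describes can be sketched with back-of-the-envelope arithmetic. This is a rough sketch under stated assumptions: the 132B-total / 36B-active split matches DBRX's published configuration, and the 2-FLOPs-per-parameter forward-pass figure is a standard rough approximation, not an exact model of any training run.

```python
def flops_per_param_moved(total_params: float, active_fraction: float) -> float:
    """Rough compute-to-communication ratio for ZeRO-3-style sharded training.

    Sharded training still has to all-gather (and reduce-scatter gradients
    for) essentially all parameters each step, but an MoE layer only
    *computes* with the active fraction of them per token.
    """
    flops_per_token = 2 * total_params * active_fraction  # ~2 FLOPs/param in forward
    params_moved = total_params                           # moved regardless of activity
    return flops_per_token / params_moved

dense_ratio = flops_per_param_moved(132e9, active_fraction=1.0)
moe_ratio = flops_per_param_moved(132e9, active_fraction=36 / 132)  # DBRX-like split

# The MoE gets ~3.7x less compute per unit of communication, so the network
# has to be ~3.7x faster (or better overlapped) to keep the GPUs busy.
ratio = dense_ratio / moe_ratio
```

This is why smaller replica groups (as in HSDP) help: they shrink the set of machines each parameter shuffle has to cross.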
And that means that it's much easier for Google to be able to pull off some of this stuff. They also have an interesting, you know, torus-style network architecture. Torus-style, like, literal network architecture: not like the model, but the network.

SWYX [00:38:02]: Is this the sort of block attention? I forgot what you call it.

JONATHAN [00:38:07]: So this is more, yeah, this is more, not the ring attention, but these are the ring all-reduces. Like you have three different dimensions of rings, because they kind of put you in these three-dimensional toruses, from what I understand. And so like, you know, Google's infrastructure in some sense is kind of, I wouldn't say built for this, but maybe the way that Google trains models is built for a slightly different bit of infrastructure they have. And it's kind of neat to think about that. You know, one thing that I think NVIDIA announced for, you know, for both the GH200 and the GB200 is this hybrid networking where you'll have blocks of NVLink-networked chips. I think for the GB200, I think it's like groups of 72 GPUs will all have NVLink to each other. So higher bandwidth; then you'll have normal networking of some kind, InfiniBand or RoCE or what have you, between these blocks. And that's kind of a, you know, it's a change due to the fact that, you know, it's hard to build really high-bandwidth networks over very large groups, but it is now a blocked networking. And you have to think about how you architect your model and your parallelism differently. You also have to think about fault tolerance differently, because it now matters where you lose a GPU, whereas it didn't before.
So, you know, it's just all really interesting and really fun, speaking personally, but it's going to mean new nightmares when we all move to that generation and have to think about, you know, new versions of these problems.

JOSH [00:39:20]: As you go up to larger scales, it gets quite different. Like right now, you know, let's say, for example, you experience a GPU failure every day. That's fine: just restart. If you make your thing 24 times as big, now it's once an hour. Now it stops being quite as easy to just restart, right? So now you have to kind of, like, bake in this sort of redundancy that you didn't have before. So I think as you go up in scale, you end up running into a lot of really interesting problems that also inform the actual, like, design.

SWYX [00:39:52]: Yeah, I mean, as an orchestration guy, this is why I always emphasize very cheap storage or very fast storage, so you can checkpoint more. But I don't think that's the best solution for fast, you know, training.

JONATHAN [00:40:05]: Which works fine when you're doing language, and then you move to vision or video. And then, you know, you have multi-petabyte datasets, and getting, you know, cheap, fast multi-petabyte storage starts to bite. Like I've certainly encountered issues where the literal data center where my GPUs were did not have enough, you know, object store to fit the datasets that people wanted to bring into that data center, from whichever users were trying to bring them in. And then you get to a whole different world of hurt where you have to keep your data in a different region, because the region is just out of storage. So things get fun really fast.

SWYX [00:40:39]: Speaking of vision, Josh, actually, you know, Imbue is an agents company, but you're announcing a text-only model.
Where does the vision side come in?

JOSH [00:40:49]: I think we've actually done a lot of work in the past, and people can see kind of our blog posts about sort of self-supervised learning and some other kind of vision-related stuff in the past as well. So we're very familiar with that stuff. But I think our main focus right now is on kind of, as we say, coding and reasoning. And there, there's certainly a visual component to some problems, but, you know, it's not necessarily required for all problems. And actually we found that for most of the kind of like code-writing and reasoning problems that we care about, the visual part isn't really a huge important part of it. Sometimes if you really need to, you can maybe describe the thing. There are other, like, you know, multimodal models that you can use off the shelf to sort of plug in for those particular pieces that you need, right? Like if something is driving a browser or whatever, like you can sometimes get away with not having to have that baked into the original model. So our focus, you know, in a sense, we kind of do a lot across the stack. We're working on our own infrastructure and pre-training and RL and fine-tuning and products and everything. But in another sense, we're very narrowly focused on the application side. So all of the stuff across the stack is kind of going toward a very particular purpose. And that particular purpose right now doesn't really need vision. So we think that people are going to make all sorts of really cool image models, like Jonathan, right?
And all sorts of interesting multimodal models into the future. We'll let them go do that. That's great. We'll take advantage of that, partner with those people in the future. And right now we're really focused on kind of the core reasoning and coding capabilities and aspects of the model.

SWYX [00:42:14]: I wanted to go into CARBS, since that's kind of the next layer of the stack. We talked about CARBS in the first episode with Kanjun, because you actually had a blog post about it, like, a couple of years ago. Maybe let's introduce it.

JONATHAN [00:42:26]: Has that been a couple of years now?

JOSH [00:42:28]: No, it must have been at least one year. Hopefully it's not multiple years.

SWYX [00:42:32]: Sorry, I'm counting AI time. Yeah, yeah.

JONATHAN [00:42:35]: Yeah, I was going to say, you're making me feel really old right now.

SWYX [00:42:39]: I count everything before the Generally Intelligent rename as, like, you know, prehistory, and now sort of modernity, right? So I actually thought CARBS was more about hyperparameter optimization, in the sense of, like, hyperparameter search. Whereas, you know, when you introduced it, especially in this blog post, it's more about scaling laws and predictability: are we sort of in the right ballpark before we scale things up? Maybe recount the history of CARBS.

JOSH [00:43:10]: Yeah, so it really is a little bit of both. So CARBS is, it's maybe a backronym, but it's for Cost-Aware Pareto-Region Bayesian Search. So that is technically how it works, but carbs is also like, you know, we like pastries and stuff. So great, why not? But the point is that it's a cost-aware hyperparameter tuner. With most hyperparameter tuners, you kind of say: OK, here's this objective function, I want you to make this number as big as possible or as small as possible, whichever direction you want to go. So yeah, just go make this number, you know, as small as possible.
OK, so it'll try a bunch of different hyperparameters, a bunch of different configurations, to figure out, like: how do I tweak your network and architecture, et cetera, to get the kind of best performance I possibly can? That's usually saying, like, you know, almost all of these hyperparameter configurations are, let's say they're all going to use the same number of GPUs or the same number of nodes. So it's going to run for the same amount of time. So you can do that, you can get a number out, and that's great. But what CARBS does is it says: OK, actually, what if we relax that constraint? What if we say, for each of these different points, we're going to model how expensive it will be to sample this configuration? So what if we train with just one one-hundredth of the data? Like, how well can we do? What if we train with one-tenth of the data? What if we train with all the data? That way you can understand, like, as we get more and more data, as we spend more and more compute, as we make a bigger and bigger network, how does performance change with these things that change? Including how expensive it is to even explore this data point. So by doing that, we can see the scaling laws for not just, you know, the scaling laws from, like, you know, the Chinchilla paper, but the scaling laws for all parameters. We can see: how does the number of layers change with this? How does the, you know, the learning rate change? How do the, like, you know, various types of regularization change? So you can see these nice scaling laws. And as you're going across costs, like, how should this be changing as you're scaling up your model?
So that, coupled with the kind of metric that we chose, which is a very precise way of measuring performance, allowed us to really, like, hone in on parameters that worked really well and understand, like, how do we want to scale those up, especially as we're changing things about the network. Like one of the things that we did is we used a custom tokenizer. As we change this tokenizer, it changes a bunch of other things about the model. So how should we scale up with this entirely new tokenizer? Like no one has ever made a model this large with this tokenizer before. And so how do we want to change all these things? CARBS kind of shows you, like: look, as you change these parameters, like, these other ones are kind of dependent on this. Like these are the relationships between them. So you can better understand, like, OK, if I'm going to scale this up 10x or 100x, like, where do I want to be? I can only go so far. And so, you know, we did run, like, I think maybe it was like a 14B one or something like that to check. And so we had a bunch of, like, 1B, and then at 70B. I don't think we had a, I think we just did, like, one at 14B. So we get to check that, like: oh, is this on the curve? Like, is this where we expect? It was like right there. So then great, go on to the next one.

SWYX [00:45:56]: Yeah, I mean, that makes a lot of sense. I wonder if, so one of the key questions, and correct me if I'm wrong, but, like, usually people do search or do their evals just based on loss. But you actually evaluate based on, you know, the sort of end-state evals that people might expect, like HellaSwag and LAMBADA, whatever. What is the norm here? Is there a norm?

JOSH [00:46:20]: Yeah, I don't know if there's a hundred percent...

SWYX [00:46:21]: I don't know.
I only see loss on most people's reports.

JOSH [00:46:25]: I think it's easy to, like... loss is very nice because it's very precise. It will tell you very fine-grained differences between, like, really small changes in your hyperparameters or network architecture. Whereas, especially at the smaller scales, if you're looking at, like, accuracy, it's very noisy. Like it might be zero or a hundred, or, like, you know, fluctuating by like 10 or 20 percentage points, which makes it really hard to tell, like: did that change actually mean anything? So our loss is sort of a combination of these two. Instead of saying, like, let's just look at perplexity, we say: let's look at perplexity on the tasks that we care about, for multiple-choice questions, effectively. So we're saying, like: yes, this is formulated as a multiple-choice question, and we're going to look at the, like, you know, the loss or perplexity for this particular answer token. And that ends up being something that's both targeted to what you actually care about and also very precise. The nice thing about this, though, is that it's independent of the data that you train on. One thing that's annoying about perplexity, or about loss, is that as you change your dataset, this is really obnoxious, because now it fundamentally changes your loss, right? And so you can't tell, like, how do I tweak my dataset? But because we have this held out evaluation dat
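A minimal sketch of the kind of metric Josh describes above: scoring only the answer tokens of a multiple-choice item, so the number stays precise like a loss but targets the task. The per-token log-probabilities here are made-up toy values standing in for a real model's outputs, and the function names are illustrative, not Imbue's actual implementation.

```python
import math

def choice_perplexity(answer_token_logprobs):
    """Perplexity restricted to the tokens of one candidate answer."""
    avg_nll = -sum(answer_token_logprobs) / len(answer_token_logprobs)
    return math.exp(avg_nll)

# Toy per-token log-probs a model might assign to each candidate answer,
# conditioned on the question (a real system would get these from the LM).
candidates = {
    "Paris": [-0.1, -0.2],
    "Lyon": [-2.0, -1.5],
}

scores = {ans: choice_perplexity(lp) for ans, lp in candidates.items()}
best = min(scores, key=scores.get)  # lower perplexity = more likely answer
```

Because each candidate is scored in isolation, the number moves smoothly with small model changes, unlike 0/1 accuracy on the same question.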

GREY Journal Daily News Podcast
Venture Capital Boost for Female Founders in AI 2023

GREY Journal Daily News Podcast

Play Episode Listen Later Jun 6, 2024 3:09


In 2023, the proportion of U.S. venture capital funding directed to startups with at least one female founder increased to 25%, totaling $34.7 billion, up from 15% in 2022. This notable rise was driven by significant investments in AI startups, with prominent examples including OpenAI, which raised $10 billion, and Anthropic, which secured over $6.5 billion across multiple rounds. Over half of U.S. AI investment, amounting to $21 billion, went to companies with female founders, although this figure was influenced by a few large deals. Among other AI unicorns receiving significant funding were Adept AI, Replit, and Imbue, all led by female founders. Steady progress has been made, and early-stage and seed funding rounds show higher participation of female founders. Despite the positive trend, funding to female-only founded companies has remained flat at 3% year-over-year, while female/male co-founded companies saw a rise in 2023. Since 2021, annual investments in female-founded companies in the U.S. have consistently exceeded $30 billion, with a noticeable increase in late-stage deal counts. Hosted on Acast. See acast.com/privacy for more information.

Practical AI
Full-stack approach for effective AI agents

Practical AI

Play Episode Listen Later May 15, 2024 47:04


There's a lot of hype about AI agents right now, but developing robust agents isn't yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue's CTO, tells us more about their approach and some of what they have learned along the way.

Changelog Master Feed
Full-stack approach for effective AI agents (Practical AI #269)

Changelog Master Feed

Play Episode Listen Later May 15, 2024 47:04 Transcription Available


There's a lot of hype about AI agents right now, but developing robust agents isn't yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue's CTO, tells us more about their approach and some of what they have learned along the way.

3 Books With Neil Pasricha
A diary to help imbue gratitude

3 Books With Neil Pasricha

Play Episode Listen Later May 4, 2024 5:20


Pages are 333-second or less highlights from Chapters of 3 Books.   They are released at 3:33am between Chapters.     Page 112 comes from Chapter 11 with Kerri Kolen, editor of 'The Happiness Equation,' 'Lion,' and 'A Stolen Life.'   Listen to the full chapter: https://www.3books.co/chapters/11   Get the 3 Books email: http://www.3books.co/3mail   Join our community: Follow @neilpasricha on Instagram, Facebook, Twitter, & YouTube

Stone United Methodist Church
April 28, 2024 - Audio

Stone United Methodist Church

Play Episode Listen Later Apr 28, 2024 67:20


FIFTH SUNDAY OF EASTER Rev. Kendra Lovelace Balliet Music Director: Jim Ross Prelude - Wondrous Love- Dale Wood(Organ) Mallet Melody-Kevin McChesney(Bells) Gathering at the Gate Welcome *Call to Worship Leader: The disciples returned to the Sea of Galilee, a place where they had spent much time in ministry with Jesus, when he appeared to them again by the shore. All: When we struggle or face an uncertain future, we sometimes seek familiar places, geographies of the heart, places that offer us a sense of being well. Leader: Simon Peter said to the disciples gathered with him: “I’m going fishing.” The disciples eagerly joined him in the boat, but they fished all night and caught nothing. All: Even when our lives have been altered and our paths have changed, setbacks can tempt us to try to go back to the way things were before. Leader: Just after daybreak, Jesus stood on the beach, but the disciples did not recognize him. Jesus said, “Children, you have no fish, have you?” The disciples said, “No.” All: Regardless of how futile our old ways were, we try them again, finding false comfort in the familiar, yet not truly finding the healing we seek. Leader: Jesus told the disciples to cast their net on the other side of the boat. They followed his direction and caught so many fish they were unable to haul in the net. All: We can become so overwhelmed with struggles that we forget to let go of our frustrations in order to seek new ways of being, new paths to peace. Leader: Amid the flopping fish, the disciple whom Jesus loved recognized him, saying, “It is the Lord.” Peter put clothes on and jumped into the sea–eager to rush to Jesus. All: When we embrace new ways of being in the midst of life-changing circumstances, we often are filled with hope. Leader: The other disciples, seeing that they were not far from land, used the boat to drag the bulging nets ashore. 
All: When faced with permanent obstacles, rather than repeat what once worked, we can shift our perspective to other ways of well-being. Leader: This is a Word of Hope for the people who long for it. All: Thanks be to the Living Word. *Opening Hymn “God of Grace and God of Glory” #577 *Opening Prayer Merciful One, you know when we are afraid to love; you know when we are too cowardly to show mercy. Remind us again that perfect love casts out such fears. Surround us and strengthen us with your perfect love, even in the face of our imperfections. Imbue us with a love so strong, with such growth toward perfection, that we may cast aside our pride and embrace the power of love. On thy people, pour thy power. Amen. Confession of Faith: Apostles’ Creed #881 Gloria Patri Children's Chat Music Ministry and Offertory - Praise to the Lord- Anna Laura Page (Bells) Doxology and Prayer of Gratitude Proclaiming Healing Scripture John 21:1-8 Sermon Series “Resurrection Stories” Sermon: “Healing” Responding to Healing Joys/Concerns Hymn "There’s a Wideness in God’s Mercy" #121 Pastoral Prayer/Lord's Prayer Closing Hymn “I Love to Tell the Story” #156 with 5 verses Verse 5: The Savior of our stories has brought his peace to you; now go and tell the story, for others need it too. To ev'ry one who’s hurting ring out the gospel call; proclaim that Christ is risen and grants his peace to all. Unlocking Healing Action Steps & Benediction Postlude - Guide Me, O Thou Great Jehovah-Don Hustad Thank you for sharing in this worship service. Please continue to stay in touch through our website (stoneumc.org) and/or by following us on Facebook (Stone UMC). If you have joys or concerns that you would like lifted up in prayer, please fill out the Prayer Card in the pew, on the website, share them by contacting us at 814-724-6736 or churchoffice@stoneumc.org

Stone United Methodist Church
April 28, 2024 - Video

Stone United Methodist Church

Play Episode Listen Later Apr 28, 2024 67:20


FIFTH SUNDAY OF EASTER Rev. Kendra Lovelace Balliet Music Director: Jim Ross Prelude - Wondrous Love- Dale Wood(Organ) Mallet Melody-Kevin McChesney(Bells) Gathering at the Gate Welcome *Call to Worship Leader: The disciples returned to the Sea of Galilee, a place where they had spent much time in ministry with Jesus, when he appeared to them again by the shore. All: When we struggle or face an uncertain future, we sometimes seek familiar places, geographies of the heart, places that offer us a sense of being well. Leader: Simon Peter said to the disciples gathered with him: “I’m going fishing.” The disciples eagerly joined him in the boat, but they fished all night and caught nothing. All: Even when our lives have been altered and our paths have changed, setbacks can tempt us to try to go back to the way things were before. Leader: Just after daybreak, Jesus stood on the beach, but the disciples did not recognize him. Jesus said, “Children, you have no fish, have you?” The disciples said, “No.” All: Regardless of how futile our old ways were, we try them again, finding false comfort in the familiar, yet not truly finding the healing we seek. Leader: Jesus told the disciples to cast their net on the other side of the boat. They followed his direction and caught so many fish they were unable to haul in the net. All: We can become so overwhelmed with struggles that we forget to let go of our frustrations in order to seek new ways of being, new paths to peace. Leader: Amid the flopping fish, the disciple whom Jesus loved recognized him, saying, “It is the Lord.” Peter put clothes on and jumped into the sea–eager to rush to Jesus. All: When we embrace new ways of being in the midst of life-changing circumstances, we often are filled with hope. Leader: The other disciples, seeing that they were not far from land, used the boat to drag the bulging nets ashore. 
All: When faced with permanent obstacles, rather than repeat what once worked, we can shift our perspective to other ways of well-being. Leader: This is a Word of Hope for the people who long for it. All: Thanks be to the Living Word. *Opening Hymn “God of Grace and God of Glory” #577 *Opening Prayer Merciful One, you know when we are afraid to love; you know when we are too cowardly to show mercy. Remind us again that perfect love casts out such fears. Surround us and strengthen us with your perfect love, even in the face of our imperfections. Imbue us with a love so strong, with such growth toward perfection, that we may cast aside our pride and embrace the power of love. On thy people, pour thy power. Amen. Confession of Faith: Apostles’ Creed #881 Gloria Patri Children's Chat Music Ministry and Offertory - Praise to the Lord- Anna Laura Page (Bells) Doxology and Prayer of Gratitude Proclaiming Healing Scripture John 21:1-8 Sermon Series “Resurrection Stories” Sermon: “Healing” Responding to Healing Joys/Concerns Hymn "There’s a Wideness in God’s Mercy" #121 Pastoral Prayer/Lord's Prayer Closing Hymn “I Love to Tell the Story” #156 with 5 verses Verse 5: The Savior of our stories has brought his peace to you; now go and tell the story, for others need it too. To ev'ry one who’s hurting ring out the gospel call; proclaim that Christ is risen and grants his peace to all. Unlocking Healing Action Steps & Benediction Postlude - Guide Me, O Thou Great Jehovah-Don Hustad Thank you for sharing in this worship service. Please continue to stay in touch through our website (stoneumc.org) and/or by following us on Facebook (Stone UMC). If you have joys or concerns that you would like lifted up in prayer, please fill out the Prayer Card in the pew, on the website, share them by contacting us at 814-724-6736 or churchoffice@stoneumc.org

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Supervise the Process of AI Research — with Jungwon Byun and Andreas Stuhlmüller of Elicit

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 11, 2024 56:20


Maggie, Linus, Geoffrey, and the LS crew are reuniting for our second annual AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair, for early birds who join before keynote announcements!

It's become fashionable for many AI startups to project themselves as “the next Google” - while the search engine is so 2000s, both Perplexity and Exa referred to themselves as a “research engine” or “answer engine” in our NeurIPS pod. However, these searches tend to be relatively shallow, and it is challenging to zoom up and down the ladders of abstraction to garner insights. For serious researchers, this level of simple one-off search will not cut it.

We've commented in our Jan 2024 Recap that Flow Engineering (simply: multi-turn processes over many-shot single prompts) seems to offer far more performance, control and reliability for a given cost budget. Our experiments with Devin and our understanding of what the new Elicit Notebooks offer give a glimpse into the potential for very deep, open-ended, thoughtful human-AI collaboration at scale.

It starts with prompts

When ChatGPT exploded in popularity in November 2022, everyone was turned into a prompt engineer. While generative models were good at "vibe based" outcomes (tell me a joke, write a poem, etc.) with basic prompts, they struggled with more complex questions, especially in symbolic fields like math, logic, etc. Two of the most important "tricks" that people picked up on were:

* The Chain of Thought prompting strategy proposed by Wei et al in “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”.
Rather than doing traditional few-shot prompting with just questions and answers, adding the thinking process that led to the answer resulted in much better outcomes.

* Adding "Let's think step by step" to the prompt as a way to boost zero-shot reasoning, which was popularized by Kojima et al in the Large Language Models are Zero-Shot Reasoners paper from NeurIPS 2022. This bumped accuracy from 17% to 79% compared to zero-shot.

Nowadays, prompts include everything from promises of monetary rewards to… whatever the Nous folks are doing to turn a model into a world simulator. At the end of the day, the goal of prompt engineering is increasing accuracy, structure, and repeatability in the generation of a model.

From prompts to agents

As prompt engineering got more and more popular, agents (see "The Anatomy of Autonomy") took over Twitter with cool demos, and AutoGPT became the fastest growing repo in GitHub history. The thing about AutoGPT that fascinated people was the ability to simply put in an objective without worrying about explaining HOW to achieve it, or having to write very sophisticated prompts. The system would create an execution plan on its own, and then loop through each task. The problem with open-ended agents like AutoGPT is that 1) it's hard to replicate the same workflow over and over again, and 2) there isn't a way to hard-code specific steps that the agent should take without actually coding them yourself, which isn't what most people want from a product.

From agents to products

Prompt engineering and open-ended agents were great in the experimentation phase, but this year more and more of these workflows are starting to become polished products. Today's guests are Andreas Stuhlmüller and Jungwon Byun of Elicit (previously Ought), an AI research assistant that they think of as “the best place to understand what is known”. Ought was a non-profit, but last September, Elicit spun off into a PBC with a $9m seed round.
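For illustration, the two prompting tricks above can be sketched as plain prompt builders. The question text, dict keys, and formatting here are made up for the example; a real pipeline would send these strings to an LLM.

```python
def few_shot_cot_prompt(examples, question):
    """Chain-of-thought few-shot: include the worked reasoning, not just the answer."""
    parts = [
        f"Q: {ex['q']}\nA: {ex['steps']} So the answer is {ex['a']}."
        for ex in examples
    ]
    parts.append(f"Q: {question}\nA:")  # the model continues from here
    return "\n\n".join(parts)

def zero_shot_cot_prompt(question):
    """Kojima et al.'s zero-shot trick: just append the magic phrase."""
    return f"Q: {question}\nA: Let's think step by step."

demo = few_shot_cot_prompt(
    [{"q": "2 + 2 * 3?", "steps": "2 * 3 is 6, and 2 + 6 is 8.", "a": "8"}],
    "3 + 4 * 5?",
)
```

The point in both cases is the same: steering the model to emit intermediate reasoning tokens before committing to an answer.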
It is hard to quantify how much a workflow can be improved, but Elicit boasts some impressive numbers for research assistants: just four months after launch, Elicit crossed $1M ARR, which shows how much interest there is in AI products that just work.

One of the main takeaways we had from the episode is that teams should focus on supervising the process, not the output. Their philosophy at Elicit isn't to train general models, but to train models that are extremely good at specific, focused processes. This allows them to offer pre-created steps that the user can add to their workflow (like classifying certain features that are specific to their research field) without having to write a prompt for it. And for Hamel Husain's happiness, they always show you the underlying prompt.

Elicit recently announced notebooks as a new interface to interact with their products (fun fact: they tried to implement this 4 times before they landed on the right UX! We discuss this ~33:00 in the podcast). The reasons why they picked notebooks as a UX all tie back to process:

* They are systematic - once you have an instruction/prompt that works on a paper, you can run hundreds of papers through the same workflow by creating a column. Notebooks can also be edited and exported at any point during the flow.

* They are transparent - Many papers include an opaque literature review as perfunctory context before getting to their novel contribution. But PDFs are “dead” and it is difficult to follow the thought process and exact research flow of the authors. Sharing “living” Elicit Notebooks opens up this process.

* They are unbounded - Research is an endless stream of rabbit holes. So it must be easy to dive deeper and follow up with extra steps, without losing the ability to surface for air.

We had a lot of fun recording this, and hope you have as much fun listening!

AI UX in SF

Long time Latent Spacenauts might remember our first AI UX meetup with Linus Lee, Geoffrey Litt, and Maggie Appleton last year.
Well, Maggie has since joined Elicit, and they are all returning at the end of this month! Sign up here: https://lu.ma/aiux

And submit demos here! https://forms.gle/iSwiesgBkn8oo4SS8

We expect the 200 seats to “sell out” fast. Attendees with demos will be prioritized.

Show Notes

* Elicit
* Ought (their previous non-profit)
* “Pivoting” with GPT-4
* Elicit notebooks launch
* Charlie
* Andreas' Blog

Timestamps

* [00:00:00] Introductions
* [00:07:45] How Jungwon and Andreas Joined Forces to Create Elicit
* [00:10:26] Why Products > Research
* [00:15:49] The Evolution of Elicit's Product
* [00:19:44] Automating Literature Review Workflow
* [00:22:48] How GPT-3 to GPT-4 Changed Things
* [00:25:37] Managing LLM Pricing and Performance
* [00:31:07] Open vs. Closed: Elicit's Approach to Model Selection
* [00:31:56] Moving to Notebooks
* [00:39:11] Elicit's Budget for Model Queries and Evaluations
* [00:41:44] Impact of Long Context Windows
* [00:47:19] Underrated Features and Surprising Applications
* [00:51:35] Driving Systematic and Efficient Research
* [00:53:00] Elicit's Team Growth and Transition to a Public Benefit Corporation
* [00:55:22] Building AI for Good

Full Interview on YouTube

As always, a plug for our YouTube version for the 80% of communication that is nonverbal:

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we are back in the studio with Andreas and Jungwon from Elicit. Welcome.Jungwon [00:00:20]: Thanks guys.Andreas [00:00:21]: It's great to be here.Swyx [00:00:22]: Yeah. So I'll introduce you separately, but also, you know, we'd love to learn a little bit more about you personally. So Andreas, it looks like you started Elicit first, Jungwon joined later.Andreas [00:00:32]: That's right.
For all intents and purposes, the Elicit and also the Ought that existed before then were very different from what I started. So I think it's like fair to say that you co-founded it.Swyx [00:00:43]: Got it. And Jungwon, you're a co-founder and COO of Elicit now.Jungwon [00:00:46]: Yeah, that's right.Swyx [00:00:47]: So there's a little bit of a history to this. I'm not super aware of like the sort of journey. I was aware of Ought and Elicit as sort of a nonprofit type situation. And recently you turned into like a B Corp, Public Benefit Corporation. So yeah, maybe if you want, you could take us through that journey of finding the problem. You know, obviously you're working together now. So like, how do you get together to decide to leave your startup career to join him?Andreas [00:01:10]: Yeah, it's truly a very long journey. I guess truly, it kind of started in Germany when I was born. So even as a kid, I was always interested in AI, like I kind of went to the library. There were books about how to write programs in QBasic and like some of them talked about how to implement chatbots.Jungwon [00:01:27]: To be clear, he grew up in like a tiny village on the outskirts of Munich called Dinkelscherben, where it's like a very, very idyllic German village.Andreas [00:01:36]: Yeah, important to the story. So basically, the main thing is I've kind of always been thinking about AI my entire life and been thinking about, well, at some point, this is going to be a huge deal. It's going to be transformative. How can I work on it? And was thinking about it from when I was a teenager, after high school did a year where I started a startup with the intention to become rich. And then once I'm rich, I can affect the trajectory of AI. Did not become rich, decided to go back to college and study cognitive science there, which was like the closest thing I could find at the time to AI.
In the last year of college, moved to the US to do a PhD at MIT, working on broadly kind of new programming languages for AI because it kind of seemed like the existing languages were not great at expressing world models and learning world models doing Bayesian inference. Was always thinking about, well, ultimately, the goal is to actually build tools that help people reason more clearly, ask and answer better questions and make better decisions. But for a long time, it seemed like the technology to put reasoning in machines just wasn't there. Initially, at the end of my postdoc at Stanford, I was thinking about, well, what to do? I think the standard path is you become an academic and do research. But it's really hard to actually build interesting tools as an academic. You can't really hire great engineers. Everything is kind of on a paper-to-paper timeline. And so I was like, well, maybe I should start a startup, pursued that for a little bit. But it seemed like it was too early because you could have tried to do an AI startup, but probably would not have been this kind of AI startup we're seeing now. So then decided to just start a nonprofit research lab that's going to do research for a while until we better figure out how to do thinking in machines. And that was Ought. And then over time, it became clear how to actually build actual tools for reasoning. And only over time, we developed a better way to... I'll let you fill in some of the details here.Jungwon [00:03:26]: Yeah. So I guess my story maybe starts around 2015. I kind of wanted to be a founder for a long time, and I wanted to work on an idea that stood the test of time for me, like an idea that stuck with me for a long time. And starting in 2015, actually, originally, I became interested in AI-based tools from the perspective of mental health. So there are a bunch of people around me who are really struggling.
One really close friend in particular is really struggling with mental health and didn't have any support, and it didn't feel like there was anything before kind of like getting hospitalized that could just help her. And so luckily, she came and stayed with me for a while, and we were just able to talk through some things. But it seemed like lots of people might not have that resource, and something maybe AI-enabled could be much more scalable. I didn't feel ready to start a company then, that's 2015. And I also didn't feel like the technology was ready. So then I went into FinTech and kind of learned how to do the tech thing. And then in 2019, I felt like it was time for me to just jump in and build something on my own I really wanted to create. And at the time, I looked around at tech and felt like not super inspired by the options. I didn't want to have a tech career ladder, or I didn't want to climb the career ladder. There are two kind of interesting technologies at the time, there was AI and there was crypto. And I was like, well, the AI people seem like a little bit more nice, maybe like slightly more trustworthy, both super exciting, but threw my bet in on the AI side. And then I got connected to Andreas. And actually, the way he was thinking about pursuing the research agenda at Ought was really compatible with what I had envisioned for an ideal AI product, something that helps kind of take down really complex thinking, overwhelming thoughts and breaks it down into small pieces. And then this kind of mission that we need AI to help us figure out what we ought to do was really inspiring, right? Yeah, because I think it was clear that we were building the most powerful optimizer of our time. But as a society, we hadn't figured out how to direct that optimization potential. And if you kind of direct tremendous amounts of optimization potential at the wrong thing, that's really disastrous.
So the goal of Ought was make sure that if we build the most transformative technology of our lifetime, it can be used for something really impactful, like good reasoning, like not just generating ads. My background was in marketing, but like, so I was like, I want to do more than generate ads with this. But also if these AI systems get to be super intelligent enough that they are doing this really complex reasoning, that we can trust them, that they are aligned with us and we have ways of evaluating that they're doing the right thing. So that's what Ought did. We did a lot of experiments, you know, like I just said, before foundation models really like took off. A lot of the issues we were seeing were more in reinforcement learning, but we saw a future where AI would be able to do more kind of logical reasoning, not just kind of extrapolate from numerical trends. We actually kind of set up experiments with people where kind of people stood in as super intelligent systems and we effectively gave them context windows. So they would have to like read a bunch of text and one person would get less text and one person would get all the texts and the person with less text would have to evaluate the work of the person who could read much more. So like in a world we were basically simulating, like in 2018, 2019, a world where an AI system could read significantly more than you and you as the person who couldn't read that much had to evaluate the work of the AI system. Yeah. So there's a lot of the work we did. And from that, we kind of iterated on the idea of breaking complex tasks down into smaller tasks, like complex tasks, like open-ended reasoning, logical reasoning into smaller tasks so that it's easier to train AI systems on them. And also so that it's easier to evaluate the work of the AI system when it's done. And then also kind of, you know, really pioneered this idea, the importance of supervising the process of AI systems, not just the outcomes.
So a big part of how Elicit is built is we're very intentional about not just throwing a ton of data into a model and training it and then saying, cool, here's like scientific output. Like that's not at all what we do. Our approach is very much like, what are the steps that an expert human does or what is like an ideal process as granularly as possible, let's break that down and then train AI systems to perform each of those steps very robustly. When you train like that from the start, after the fact, it's much easier to evaluate, it's much easier to troubleshoot at each point. Like where did something break down? So yeah, we were working on those experiments for a while. And then at the start of 2021, decided to build a product.Swyx [00:07:45]: Do you mind if I, because I think you're about to go into more modern Ought and Elicit. And I just wanted to, because I think a lot of people are in where you were like sort of 2018, 19, where you chose a partner to work with. Yeah. Right. And you didn't know him. Yeah. Yeah. You were just kind of cold introduced. A lot of people are cold introduced. Yeah. Never work with them. I assume you had a lot, a lot of other options, right? Like how do you advise people to make those choices?Jungwon [00:08:10]: We were not totally cold introduced. So one of our closest friends introduced us. And then Andreas had written a lot on the Ought website, a lot of blog posts, a lot of publications. And I just read it and I was like, wow, this sounds like my writing. And even other people, some of my closest friends I asked for advice from, they were like, oh, this sounds like your writing. But I think I also had some kind of like things I was looking for. I wanted someone with a complementary skill set. I want someone who was very values aligned. And yeah, that was all a good fit.Andreas [00:08:38]: We also did a pretty lengthy mutual evaluation process where we had a Google doc where we had all kinds of questions for each other.
And I think it ended up being around 50 pages or so of like various like questions and back and forth.Swyx [00:08:52]: Was it the YC list? There's some lists going around for co-founder questions.Andreas [00:08:55]: No, we just made our own questions. But I guess it's probably related in that you ask yourself, what are the values you care about? How would you approach various decisions and things like that?Jungwon [00:09:04]: I shared like all of my past performance reviews. Yeah. Yeah.Swyx [00:09:08]: And he never had any. No.Andreas [00:09:10]: Yeah.Swyx [00:09:11]: Sorry, I just had to, a lot of people are going through that phase and you kind of skipped over it. I was like, no, no, no, no. There's like an interesting story.Jungwon [00:09:20]: Yeah.Alessio [00:09:21]: Yeah. Before we jump into what Elicit is today, the history is a bit counterintuitive. So you start with figuring out, oh, if we had a super powerful model, how would we align it? But then you were actually like, well, let's just build the product so that people can actually leverage it. And I think there are a lot of folks today that are now back to where you were maybe five years ago that are like, oh, what if this happens rather than focusing on actually building something useful with it? What clicked for you to like move into Elicit and then we can cover that story too.Andreas [00:09:49]: I think in many ways, the approach is still the same because the way we are building Elicit is not let's train a foundation model to do more stuff. It's like, let's build a scaffolding such that we can deploy powerful models to good ends. I think it's different now in that we actually have like some of the models to plug in. But if in 2017, we had had the models, we could have run the same experiments we did run with humans back then, just with models. And so in many ways, our philosophy is always, let's think ahead to the future of what models are going to exist in one, two years or longer.
And how can we make it so that they can actually be deployed in kind of transparent, controllable ways?Jungwon [00:10:26]: I think motivationally, we both are kind of product people at heart. The research was really important and it didn't make sense to build a product at that time. But at the end of the day, the thing that always motivated us is imagining a world where high quality reasoning is really abundant and AI is a technology that's going to get us there. And there's a way to guide that technology with research, but we can have a more direct effect through product because with research, you publish the research and someone else has to implement that into the product and the product felt like a more direct path. And we wanted to concretely have an impact on people's lives. Yeah, I think the kind of personally, the motivation was we want to build for people.Swyx [00:11:03]: Yep. And then just to recap as well, like the models you were using back then were like, I don't know, would they like BERT type stuff or T5 or I don't know what timeframe we're talking about here.Andreas [00:11:14]: I guess to be clear, at the very beginning, we had humans do the work. And then I think the first models that kind of make sense were GPT-2 and T-NLG and like Yeah, early generative models. We do also use like T5 based models even now. Started with GPT-2.Swyx [00:11:30]: Yeah, cool. I'm just kind of curious about like, how do you start so early? You know, like now it's obvious where to start, but back then it wasn't.Jungwon [00:11:37]: Yeah, I used to nag Andreas a lot. I was like, why are you talking to this? I don't know. I felt like GPT-2 is like clearly can't do anything. And I was like, Andreas, you're wasting your time, like playing with this toy. But yeah, he was right.Alessio [00:11:50]: So what's the history of what Elicit actually does as a product? You recently announced that after four months, you get to a million in revenue.
Obviously, a lot of people use it, get a lot of value, but it was initially kind of like structured data extraction from papers. Then you had kind of like concept grouping. And today, it's maybe like a more full stack research enabler, kind of like paper understander platform. What's the definitive definition of what Elicit is? And how did you get here?Jungwon [00:12:15]: Yeah, we say Elicit is an AI research assistant. I think it will continue to evolve. That's part of why we're so excited about building and research, because there's just so much space. I think the current phase we're in right now, we talk about it as really trying to make Elicit the best place to understand what is known. So it's all a lot about like literature summarization. There's a ton of information that the world already knows. It's really hard to navigate, hard to make it relevant. So a lot of it is around document discovery and processing and analysis. I really kind of want to import some of the incredible productivity improvements we've seen in software engineering and data science and into research. So it's like, how can we make researchers like data scientists of text? That's why we're launching this new set of features called Notebooks. It's very much inspired by computational notebooks, like Jupyter Notebooks, you know, Deepnote or Colab, because they're so powerful and so flexible. And ultimately, when people are trying to get to an answer or understand insight, they're kind of like manipulating evidence and information. Today, that's all packaged in PDFs, which are super brittle. So with language models, we can decompose these PDFs into their underlying claims and evidence and insights, and then let researchers mash them up together, remix them and analyze them together. So yeah, I would say quite simply, overall, Elicit is an AI research assistant.
Right now we're focused on text-based workflows, but long term, really want to kind of go further and further into reasoning and decision making.Alessio [00:13:35]: And when you say AI research assistant, this is kind of meta research. So researchers use Elicit as a research assistant. It's not a generic you-can-research-anything type of tool, or it could be, but like, what are people using it for today?Andreas [00:13:49]: Yeah. So specifically in science, a lot of people use human research assistants to do things. You tell your grad student, hey, here are a couple of papers. Can you look at all of these, see which of these have kind of sufficiently large populations and actually study the disease that I'm interested in, and then write out like, what are the experiments they did? What are the interventions they did? What are the outcomes? And kind of organize that for me. And the first phase of understanding what is known really focuses on automating that workflow because a lot of that work is pretty rote work. I think it's not the kind of thing that we need humans to do. Language models can do it. And then if language models can do it, you can obviously scale it up much more than a grad student or undergrad research assistant would be able to do.Jungwon [00:14:31]: Yeah. The use cases are pretty broad. So we do have a very large percent of our users are just using it personally or for a mix of personal and professional things. People who care a lot about health or biohacking or parents who have children with a kind of rare disease and want to understand the literature directly. So there is an individual kind of consumer use case. We're most focused on the power users. So that's where we're really excited to build. So Elicit was very much inspired by this workflow in literature called systematic reviews or meta-analysis, which is basically the human state of the art for summarizing scientific literature.
And it typically involves like five people working together for over a year. And they kind of first start by trying to find the maximally comprehensive set of papers possible. So it's like 10,000 papers. And they kind of systematically narrow that down to like hundreds or 50, then extract key details from every single paper. Usually have two people doing it, like a third person reviewing it. So it's like an incredibly laborious, time consuming process, but you see it in every single domain. So in science, in machine learning, in policy, because it's so structured and designed to be reproducible, it's really amenable to automation. So that's kind of the workflow that we want to automate first. And then you make that accessible for any question and make these really robust living summaries of science. So yeah, that's one of the workflows that we're starting with.Alessio [00:15:49]: Our previous guest, Mike Conover, he's building a new company called Brightwave, which is an AI research assistant for financial research. How do you see the future of these tools? Does everything converge to like a God researcher assistant, or is every domain going to have its own thing?Andreas [00:16:03]: I think that's a good and mostly open question. I do think there are some differences across domains. For example, some research is more quantitative data analysis, and other research is more high level cross domain thinking. And we definitely want to contribute to the broad generalist reasoning type space. Like if researchers are making discoveries often, it's like, hey, this thing in biology is actually analogous to like these equations in economics or something. And that's just fundamentally a thing that where you need to reason across domains. At least within research, I think there will be like one best platform more or less for this type of generalist research.
I think there may still be like some particular tools like for genomics, like particular types of modules of genes and proteins and whatnot. But for a lot of the kind of high level reasoning that humans do, I think that is more of a winner-take-all thing.Swyx [00:16:52]: I wanted to ask a little bit deeper about, I guess, the workflow that you mentioned. I like that phrase. I see that in your UI now, but that's as it is today. And I think you were about to tell us about how it was in 2021 and how it may have progressed. How has this workflow evolved over time?Jungwon [00:17:07]: Yeah. So the very first version of Elicit actually wasn't even a research assistant. It was a forecasting assistant. So we set out and we were thinking about, you know, what are some of the most impactful types of reasoning that if we could scale up, AI would really transform the world. We actually started with literature review, but we're like, oh, so many people are going to build literature review tools. So let's not start there. So then we focused on geopolitical forecasting. So I don't know if you're familiar with like Manifold or Manifold Markets. That kind of stuff. Before Manifold. Yeah. Yeah. I'm not predicting relationships. We're predicting like, is China going to invade Taiwan?Swyx [00:17:38]: Markets for everything.Andreas [00:17:39]: Yeah. That's a relationship.Swyx [00:17:41]: Yeah.Jungwon [00:17:42]: Yeah. It's true. And then we worked on that for a while. And then after GPT-3 came out, I think by that time we realized that originally we were trying to help people convert their beliefs into probability distributions. And so take fuzzy beliefs, but like model them more concretely. And then after a few months of iterating on that, just realize, oh, the thing that's blocking people from making interesting predictions about important events in the world is less kind of on the probabilistic side and much more on the research side.
And so that kind of combined with the very generalist capabilities of GPT-3 prompted us to make a more general research assistant. Then we spent a few months iterating on what even is a research assistant. So we would embed with different researchers. We built data labeling workflows in the beginning, kind of right off the bat. We built ways to find experts in a field and like ways to ask good research questions. So we just kind of iterated through a lot of workflows and no one else was really building at this time. And it was like very quick to just do some prompt engineering and see like what is a task that is at the intersection of what's technologically capable and like important for researchers. And we had like a very nondescript landing page. It said nothing. But somehow people were signing up and they had to sign a form that was like, why are you here? And everyone was like, I need help with literature review. And we're like, oh, literature review. That sounds so hard. I don't even know what that means. We're like, we don't want to work on it. But then eventually we were like, okay, everyone is saying literature review. It's overwhelmingly people want to-Swyx [00:19:02]: And all domains, not like medicine or physics or just all domains. Yeah.Jungwon [00:19:06]: And we also kind of personally knew literature review was hard. And if you look at the graphs for academic literature being published every single month, you guys know this in machine learning, it's like up into the right, like superhuman amounts of papers. So we're like, all right, let's just try it. I was really nervous, but Andreas was like, this is kind of like the right problem space to jump into, even if we don't know what we're doing. So my take was like, fine, this feels really scary, but let's just launch a feature every single week and double our user numbers every month. And if we can do that, we'll fail fast and we will find something.
I was worried about like getting lost in the kind of academic white space. So the very first version was actually a weekend prototype that Andreas made. Do you want to explain how that worked?Andreas [00:19:44]: I mostly remember that it was really bad. The thing I remember is you entered a question and it would give you back a list of claims. So your question could be, I don't know, how does creatine affect cognition? It would give you back some claims that are to some extent based on papers, but they were often irrelevant. The papers were often irrelevant. And so we ended up soon just printing out a bunch of examples of results and putting them up on the wall so that we would kind of feel the constant shame of having such a bad product and would be incentivized to make it better. And I think over time it has gotten a lot better, but I think the initial version was like really very bad. Yeah.Jungwon [00:20:20]: But it was basically like a natural language summary of an abstract, like kind of a one sentence summary, and which we still have. And then as we learned kind of more about this systematic review workflow, we started expanding the capability so that you could extract a lot more data from the papers and do more with that.Swyx [00:20:33]: And were you using like embeddings and cosine similarity, that kind of stuff for retrieval, or was it keyword based?Andreas [00:20:40]: I think the very first version didn't even have its own search engine. I think the very first version probably used the Semantic Scholar API or something similar. And only later when we discovered that API is not very semantic, we then built our own search engine that has helped a lot.Swyx [00:20:58]: And then we're going to go into like more recent product stuff, but like, you know, I think you seem the more sort of startup oriented business person and you seem sort of more ideologically like interested in research, obviously, because of your PhD.
What kind of market sizing were you guys thinking? Right? Like, because you're here saying like, we have to double every month. And I'm like, I don't know how you make that conclusion from this, right? Especially also as a nonprofit at the time.Jungwon [00:21:22]: I mean, market size wise, I felt like in this space where so much was changing and it was very unclear what of today was actually going to be true tomorrow. We just like really rested a lot on very, very simple fundamental principles, which is like, if you can understand the truth, that is very economically beneficial and valuable. If you like know the truth.Swyx [00:21:42]: On principle.Jungwon [00:21:43]: Yeah. That's enough for you. Yeah. Research is the key to many breakthroughs that are very commercially valuable.Swyx [00:21:47]: Because my version of it is students are poor and they don't pay for anything. Right? But that's obviously not true. As you guys have found out. But you had to have some market insight for me to have believed that, but you skipped that.Andreas [00:21:58]: Yeah. I remember talking to VCs for our seed round. A lot of VCs were like, you know, researchers, they don't have any money. Why don't you build legal assistant? I think in some short sighted way, maybe that's true. But I think in the long run, R&D is such a big space of the economy. I think if you can substantially improve how quickly people find new discoveries or avoid controlled trials that don't go anywhere, I think that's just huge amounts of money. And there are a lot of questions obviously about between here and there. But I think as long as the fundamental principle is there, we were okay with that. And I guess we found some investors who also were. Yeah.Swyx [00:22:35]: Congrats. I mean, I'm sure we can cover the sort of flip later. I think you're about to start us on like GPT-3 and how that changed things for you. It's funny. I guess every major GPT version, you have some big insight. 
Yeah.Jungwon [00:22:48]: Yeah. I mean, what do you think?Andreas [00:22:51]: I think it's a little bit less true for us than for others, because we always believed that there will basically be human level machine work. And so it is definitely true that in practice for your product, as new models come out, your product starts working better, you can add some features that you couldn't add before. But I don't think we really ever had the moment where we were like, oh, wow, that is super unanticipated. We need to do something entirely different now from what was on the roadmap.Jungwon [00:23:21]: I think GPT-3 was a big change because it kind of said, oh, now is the time that we can use AI to build these tools. And then GPT-4 was maybe a little bit more of an extension of GPT-3. GPT-3 over GPT-2 was like qualitative level shift. And then GPT-4 was like, okay, great. Now it's like more accurate. We're more accurate on these things. We can answer harder questions. But the shape of the product had already taken shape by that time.Swyx [00:23:44]: I kind of want to ask you about this sort of pivot that you've made. But I guess that was just a way to sell what you were doing, which is you're adding extra features on grouping by concepts. The GPT-4 pivot, quote unquote pivot that you-Jungwon [00:23:55]: Oh, yeah, yeah, exactly. Right, right, right. Yeah. Yeah. When we launched this workflow, now that GPT-4 was available, basically Elicit was at a place where we have very tabular interfaces. So given a table of papers, you can extract data across all the tables. But you kind of want to take the analysis a step further. Sometimes what you'd care about is not having a list of papers, but a list of arguments, a list of effects, a list of interventions, a list of techniques.
And so that's one of the things we're working on is now that you've extracted this information in a more structured way, can you pivot it or group by whatever the information that you extracted to have more insightful information still supported by the academic literature?Swyx [00:24:33]: Yeah, that was a big revelation when I saw it. Basically, I think I'm very just impressed by how first principles, your ideas around what the workflow is. And I think that's why you're not as reliant on like the LLM improving, because it's actually just about improving the workflow that you would recommend to people. Today we might call it an agent, I don't know, but you're not relying on the LLM to drive it. It's relying on this is the way that Elicit does research. And this is what we think is most effective based on talking to our users.Jungwon [00:25:01]: The problem space is still huge. Like if it's like this big, we are all still operating at this tiny part, bit of it. So I think about this a lot in the context of moats, people are like, oh, what's your moat? What happens if GPT-5 comes out? It's like, if GPT-5 comes out, there's still like all of this other space that we can go into. So I think being really obsessed with the problem, which is very, very big, has helped us like stay robust and just kind of directly incorporate model improvements and they keep going.Swyx [00:25:26]: And then I first encountered you guys with Charlie, you can tell us about that project. Basically, yeah. Like how much did cost become a concern as you're working more and more with OpenAI? How do you manage that relationship?Jungwon [00:25:37]: Let me talk about who Charlie is. And then you can talk about the tech, because Charlie is a special character. So Charlie, when we found him was, had just finished his freshman year at the University of Warwick. And I think he had heard about us on some discord. And then he applied and we were like, wow, who is this freshman?
And then we just saw that he had done so many incredible side projects. And we were actually on a team retreat in Barcelona visiting our head of engineering at that time. And everyone was talking about this wonder kid or like this kid. And then on our take home project, he had done like the best of anyone to that point. And so people were just like so excited to hire him. So we hired him as an intern and they were like, Charlie, what if you just dropped out of school? And so then we convinced him to take a year off. And he was just incredibly productive. And I think the thing you're referring to is at the start of 2023, Anthropic kind of launched their constitutional AI paper. And within a few days, I think four days, he had basically implemented that in production. And then we had it in app a week or so after that. And he has since kind of contributed to major improvements, like cutting costs down to a tenth of what they were really large scale. But yeah, you can talk about the technical stuff. Yeah.Andreas [00:26:39]: On the constitutional AI project, this was for abstract summarization, where in illicit, if you run a query, it'll return papers to you, and then it will summarize each paper with respect to your query for you on the fly. And that's a really important part of illicit because illicit does it so much. If you run a few searches, it'll have done it a few hundred times for you. And so we cared a lot about this both being fast, cheap, and also very low on hallucination. I think if illicit hallucinates something about the abstract, that's really not good. And so what Charlie did in that project was create a constitution that expressed what are the attributes of a good summary? Everything in the summary is reflected in the actual abstract, and it's like very concise, et cetera, et cetera. And then used RLHF with a model that was trained on the constitution to basically fine tune a better summarizer on an open source model. Yeah. 
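A rough, hypothetical sketch of the constitution-guided revision loop described here (not Elicit's actual code; `llm` stands in for any prompt-to-text model call, and the two principles are invented for the example):

```python
from typing import Callable

# Hypothetical principles a good abstract summary must satisfy.
CONSTITUTION = [
    "Every claim in the summary must be supported by the abstract.",
    "The summary must be concise.",
]

def critique_and_revise(llm: Callable[[str], str], abstract: str, summary: str) -> str:
    """One round of constitution-guided revision: ask the model whether the
    summary violates each principle, and rewrite it whenever one is violated."""
    for principle in CONSTITUTION:
        critique = llm(
            f"Abstract:\n{abstract}\n\nSummary:\n{summary}\n\n"
            f"Does the summary violate this principle: '{principle}'? "
            "Answer VIOLATES or OK, then explain."
        )
        if critique.startswith("VIOLATES"):
            summary = llm(
                f"Rewrite the summary so it satisfies: '{principle}'.\n"
                f"Abstract:\n{abstract}\n\nSummary:\n{summary}"
            )
    return summary
```

Pairs of original and revised summaries produced by a loop like this can then serve as preference data for fine-tuning a smaller open source summarizer, which is roughly the shape of the project described above.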
I think that might still be in use.Jungwon [00:27:34]: Yeah. Yeah, definitely. Yeah. I think at the time, the models hadn't been trained at all to be faithful to a text. So they were just generating. So then when you ask them a question, they tried too hard to answer the question and didn't try hard enough to answer the question given the text or answer what the text said about the question. So we had to basically teach the models to do that specific task.Swyx [00:27:54]: How do you monitor the ongoing performance of your models? Not to get too LLM-opsy, but you are one of the larger, more well-known operations doing NLP at scale. I guess effectively, you have to monitor these things and nobody has a good answer that I talk to.Andreas [00:28:10]: I don't think we have a good answer yet. I think the answers are actually a little bit clearer on the just kind of basic robustness side of where you can import ideas from normal software engineering and normal kind of DevOps. You're like, well, you need to monitor kind of latencies and response times and uptime and whatnot.Swyx [00:28:27]: I think when we say performance, it's more about hallucination rate, isn't it?Andreas [00:28:30]: And then things like hallucination rate where I think there, the really important thing is training time. So we care a lot about having our own internal benchmarks for model development that reflect the distribution of user queries so that we can know ahead of time how well is the model going to perform on different types of tasks. So the tasks being summarization, question answering, given a paper, ranking. And for each of those, we want to know what's the distribution of things the model is going to see so that we can have well-calibrated predictions on how well the model is going to do in production. And I think, yeah, there's some chance that there's distribution shift and actually the things users enter are going to be different. 
But I think that's much less important than getting the kind of training right and having very high quality, well-vetted data sets at training time.Jungwon [00:29:18]: I think we also end up effectively monitoring by trying to evaluate new models as they come out. And so that kind of prompts us to go through our eval suite every couple of months. And every time a new model comes out, we have to see how is this performing relative to production and what we currently have.Swyx [00:29:32]: Yeah. I mean, since we're on this topic, any new models that have really caught your eye this year?Jungwon [00:29:37]: Like Claude came out with a bunch. Yeah. I think Claude is pretty, I think the team's pretty excited about Claude. Yeah.Andreas [00:29:41]: Specifically, Claude Haiku is like a good point on the kind of Pareto frontier. It's neither the cheapest model, nor is it the most accurate, most high quality model, but it's just like a really good trade-off between cost and accuracy.Swyx [00:29:57]: You apparently have to 10-shot it to make it good. I tried using Haiku for summarization, but zero-shot was not great. Then they were like, you know, it's a skill issue, you have to try harder.Jungwon [00:30:07]: I think GPT-4 unlocked tables for us, processing data from tables, which was huge. GPT-4 Vision.Andreas [00:30:13]: Yeah.Swyx [00:30:14]: Yeah. Did you try like Fuyu? I guess you can't try Fuyu because it's non-commercial. That's the Adept model.Jungwon [00:30:19]: Yeah.Swyx [00:30:20]: We haven't tried that one. Yeah. Yeah. Yeah. But Claude is multimodal as well. Yeah. I think the interesting insight that we got from talking to David Luan, who is CEO of Adept, is that multimodality has effectively two different flavors. One is we recognize images from a camera in the outside natural world. And actually the more important multimodality for knowledge work is screenshots and PDFs and charts and graphs.
So we need a new term for that kind of multimodality.Andreas [00:30:45]: But is the claim that current models are good at one or the other? Yeah.Swyx [00:30:50]: They're over-indexed because the history of computer vision is COCO, right? So now we're like, oh, actually, you know, screens are more important, OCR, handwriting. You mentioned a lot of like closed model lab stuff, and then you also have like this open source model fine tuning stuff. Like what is your workload now between closed and open? It's a good question.Andreas [00:31:07]: I think- Is it half and half? It's a-Swyx [00:31:10]: Is that even a relevant question or not? Is this a nonsensical question?Andreas [00:31:13]: It depends a little bit on like how you index, whether you index by like compute cost or number of queries. I'd say like in terms of number of queries, it's maybe similar. In terms of like cost and compute, I think the closed models make up more of the budget since the main cases where you want to use closed models are cases where they're just smarter, where no existing open source models are quite smart enough.Jungwon [00:31:35]: Yeah. Yeah.Alessio [00:31:37]: We have a lot of interesting technical questions to go in, but just to wrap the kind of like UX evolution, now you have the notebooks. We talked a lot about how chatbots are not the final frontier, you know? How did you decide to get into notebooks, which is a very iterative kind of like interactive interface and yeah, maybe learnings from that.Jungwon [00:31:56]: Yeah. This is actually our fourth time trying to make this work. Okay. I think the first time was probably in early 2021. I think because we've always been obsessed with this idea of task decomposition and like branching, we always wanted a tool that could be kind of unbounded where you could keep going, could do a lot of branching where you could kind of apply language model operations or computations on other tasks.
So in 2021, we had this thing called composite tasks where you could use GPT-3 to brainstorm a bunch of research questions and then take each research question and decompose those further into sub questions. This kind of, again, that like task decomposition tree type thing was always very exciting to us, but that was like, it didn't work and it was kind of overwhelming. Then at the end of 22, I think we tried again and at that point we were thinking, okay, we've done a lot with this literature review thing. We also want to start helping with kind of adjacent domains and different workflows. Like we want to help more with machine learning. What does that look like? And as we were thinking about it, we're like, well, there are so many research workflows. How do we not just build three new workflows into Elicit, but make Elicit really generic to lots of workflows? What is like a generic composable system with nice abstractions that can like scale to all these workflows? So we like iterated on that a bunch and then didn't quite narrow the problem space enough or like quite get to what we wanted. And then I think it was at the beginning of 2023 where we're like, wow, computational notebooks kind of enable this, where they have a lot of flexibility, but kind of robust primitives such that you can extend the workflow and it's not limited. It's not like you ask a query, you get an answer, you're done. You can just constantly keep building on top of that. And each little step seems like a really good unit of work for the language model. And also there was just like really helpful to have a bit more preexisting work to emulate. Yeah, that's kind of how we ended up at computational notebooks for Elicit.Andreas [00:33:44]: Maybe one thing that's worth making explicit is the difference between computational notebooks and chat, because on the surface, they seem pretty similar. It's kind of this iterative interaction where you add stuff. 
In both cases, you have a back and forth between you enter stuff and then you get some output and then you enter stuff. But the important difference in our minds is with notebooks, you can define a process. So in data science, you can be like, here's like my data analysis process that takes in a CSV and then does some extraction and then generates a figure at the end. And you can prototype it using a small CSV and then you can run it over a much larger CSV later. And similarly, the vision for notebooks in our case is to not make it this like one-off chat interaction, but to allow you to then say, if you start and first you're like, okay, let me just analyze a few papers and see, do I get to the correct conclusions for those few papers? Can I then later go back and say, now let me run this over 10,000 papers now that I've debugged the process using a few papers. And that's an interaction that doesn't fit quite as well into the chat framework because that's more for kind of quick back and forth interaction.Alessio [00:34:49]: Do you think in notebooks, it's kind of like structure, editable chain of thought, basically step by step? Like, is that kind of where you see this going? And then are people going to reuse notebooks as like templates? And maybe in traditional notebooks, it's like cookbooks, right? You share a cookbook, you can start from there. Is this similar in Elizit?Andreas [00:35:06]: Yeah, that's exactly right. So that's our hope that people will build templates, share them with other people. I think chain of thought is maybe still like kind of one level lower on the abstraction hierarchy than we would think of notebooks. I think we'll probably want to think about more semantic pieces like a building block is more like a paper search or an extraction or a list of concepts. And then the model's detailed reasoning will probably often be one level down. 
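The notebook-as-process idea from the data science analogy above (debug a pipeline on a few items, then rerun the identical steps at scale) can be sketched as an ordered list of step functions; the steps below are toy stand-ins, not Elicit's actual building blocks:

```python
from typing import Callable, List

# A step transforms a list of items into a new list of items.
Step = Callable[[List[str]], List[str]]

def run_process(steps: List[Step], items: List[str]) -> List[str]:
    """Run the same ordered steps over any number of items: prototype on
    three papers, then point the identical process at ten thousand."""
    for step in steps:
        items = step(items)
    return items

# Hypothetical steps standing in for search/extract/summarize blocks.
keep_trials = lambda papers: [p for p in papers if "trial" in p]  # filter step
shout = lambda papers: [p.upper() for p in papers]                # transform step
```

The point of the abstraction is that `steps` is data: a saved notebook is a reusable template that other people can run over their own inputs.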
You always want to be able to see it, but you don't always want it to be front and center.Alessio [00:35:36]: Yeah, what's the difference between a notebook and an agent? Since everybody always asks me, what's an agent? Like how do you think about where the line is?Andreas [00:35:44]: Yeah, it's an interesting question. In the notebook world, I would generally think of the human as the agent in the first iteration. So you have the notebook and the human kind of adds little action steps. And then the next point on this kind of progress gradient is, okay, now you can use language models to predict which action would you take as a human. And at some point, you're probably going to be very good at this, you'll be like, okay, in some cases I can, with 99.9% accuracy, predict what you do. And then you might as well just execute it, like why wait for the human? And eventually, as you get better at this, that will just look more and more like agents taking actions as opposed to you doing the thing. I think templates are a specific case of this where you're like, okay, well, there's just particular sequences of actions that you often want to chunk and have available as primitives, just like in normal programming. And those, you can view them as action sequences of agents, or you can view them as more normal programming language abstraction thing. And I think those are two valid views. Yeah.Alessio [00:36:40]: How do you see this change as, like you said, the models get better and you need less and less human actual interfacing with the model, you just get the results? Like how does the UX and the way people perceive it change?Jungwon [00:36:52]: Yeah, I think this kind of interaction paradigms for evaluation is not really something the internet has encountered yet, because up to now, the internet has all been about getting data and work from people. 
So increasingly, I really want kind of evaluation, both from an interface perspective and from like a technical perspective and operation perspective to be a superpower for Elicit, because I think over time, models will do more and more of the work, and people will have to do more and more of the evaluation. So I think, yeah, in terms of the interface, some of the things we have today, you know, for every kind of language model generation, there's some citation back, and we kind of try to highlight the ground truth in the paper that is most relevant to whatever Elicit said, and make it super easy so that you can click on it and quickly see in context and validate whether the text actually supports the answer that Elicit gave. So I think we'd probably want to scale things up like that, like the ability to kind of spot check the model's work super quickly, scale up interfaces like that. And-Swyx [00:37:44]: Who would spot check? The user?Jungwon [00:37:46]: Yeah, to start, it would be the user. One of the other things we do is also kind of flag the model's uncertainty. So we have models report out, how confident are you that this was the sample size of this study? The model's not sure, we throw a flag. And so the user knows to prioritize checking that. So again, we can kind of scale that up. So when the model's like, well, I searched this on Google, I'm not sure if that was the right thing. I have an uncertainty flag, and the user can go and be like, oh, okay, that was actually the right thing to do or not.Swyx [00:38:10]: I've tried to do uncertainty readings from models. I don't know if you have this live. You do? Yeah. Because I just didn't find them reliable because they just hallucinated their own uncertainty. I would love to base it on log probs or something more native within the model rather than generated. But okay, it sounds like they scale properly for you. Yeah.Jungwon [00:38:30]: We found it to be pretty calibrated. 
It varies on the model.Andreas [00:38:32]: I think in some cases, we also use two different models for the uncertainty estimates than for the question answering. So one model would say, here's my chain of thought, here's my answer. And then a different type of model. Let's say the first model is Llama, and let's say the second model is GPT-3.5. And then the second model just looks over the results and is like, okay, how confident are you in this? And I think sometimes using a different model can be better than using the same model. Yeah.Swyx [00:38:58]: On the topic of models, evaluating models, obviously you can do that all day long. What's your budget? Because your queries fan out a lot. And then you have models evaluating models. One person typing in a question can lead to a thousand calls.Andreas [00:39:11]: It depends on the project. So if the project is basically a systematic review that otherwise human research assistants would do, then the project is basically a human equivalent spend. And the spend can get quite large for those projects. I don't know, let's say $100,000. In those cases, you're happier to spend compute then in the kind of shallow search case where someone just enters a question because, I don't know, maybe I heard about creatine. What's it about? Probably don't want to spend a lot of compute on that. This sort of being able to invest more or less compute into getting more or less accurate answers is I think one of the core things we care about. And that I think is currently undervalued in the AI space. I think currently you can choose which model you want and you can sometimes, I don't know, you'll tip it and it'll try harder or you can try various things to get it to work harder. But you don't have great ways of converting willingness to spend into better answers. 
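The two-model confidence check Andreas describes above can be sketched like this (a hypothetical illustration; `answerer` and `grader` stand in for two different models, and the 0-to-1 scoring prompt is invented):

```python
from typing import Callable, Tuple

def answer_with_uncertainty(
    answerer: Callable[[str], str],
    grader: Callable[[str], str],
    question: str,
    threshold: float = 0.7,
) -> Tuple[str, bool]:
    """Answer with one model, score confidence with a second, and flag the
    answer for user review when the confidence falls below the threshold."""
    answer = answerer(f"Think step by step, then answer: {question}")
    raw = grader(
        f"Question: {question}\nAnswer: {answer}\n"
        "On a scale of 0 to 1, how confident are you this answer is correct? "
        "Reply with only the number."
    )
    try:
        confidence = float(raw.strip())
    except ValueError:
        confidence = 0.0  # unparseable score, so treat as uncertain
    flagged = confidence < threshold
    return answer, flagged
```

In the interface, a flagged answer is the one the user is prompted to spot check first.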
And we really want to build a product that has this sort of unbounded flavor where if you care about it a lot, you should be able to get really high quality answers, really double checked in every way.Alessio [00:40:14]: And you have a credits-based pricing. So unlike most products, it's not a fixed monthly fee.Jungwon [00:40:19]: Right, exactly. So some of the higher costs are tiered. So for most casual users, they'll just get the abstract summary, which is kind of an open source model. Then you can add more columns, which have more extractions and these uncertainty features. And then you can also add the same columns in high accuracy mode, which also parses the table. So we kind of stack the complexity on the calls.Swyx [00:40:39]: You know, the fun thing you can do with a credit system, which is data for data, basically you can give people more credits if they give data back to you. I don't know if you've already done that. We've thought about something like this.Jungwon [00:40:49]: It's like if you don't have money, but you have time, how do you exchange that?Swyx [00:40:54]: It's a fair trade.Jungwon [00:40:55]: I think it's interesting. We haven't quite operationalized it. And then, you know, there's been some kind of like adverse selection. Like, you know, for example, it would be really valuable to get feedback on our model. So maybe if you were willing to give more robust feedback on our results, we could give you credits or something like that. But then there's kind of this, will people take it seriously? And you want the good people. Exactly.Swyx [00:41:11]: Can you tell who are the good people? Not right now.Jungwon [00:41:13]: But yeah, maybe at the point where we can, we can offer it. We can offer it up to them.Swyx [00:41:16]: The perplexity of questions asked, you know, if it's higher perplexity, these are the smarterJungwon [00:41:20]: people. 
Yeah, maybe.Andreas [00:41:23]: If you put typos in your queries, you're not going to get off the stage.Swyx [00:41:28]: Negative social credit. It's very topical right now to think about the threat of long context windows. All these models that we're talking about these days, all like a million token plus. Is that relevant for you? Can you make use of that? Is that just prohibitively expensive because you're just paying for all those tokens or you're just doing rag?Andreas [00:41:44]: It's definitely relevant. And when we think about search, as many people do, we think about kind of a staged pipeline of retrieval where first you use semantic search database with embeddings, get like the, in our case, maybe 400 or so most relevant papers. And then, then you still need to rank those. And I think at that point it becomes pretty interesting to use larger models. So specifically in the past, I think a lot of ranking was kind of per item ranking where you would score each individual item, maybe using increasingly expensive scoring methods and then rank based on the scores. But I think list-wise re-ranking where you have a model that can see all the elements is a lot more powerful because often you can only really tell how good a thing is in comparison to other things and what things should come first. It really depends on like, well, what other things that are available, maybe you even care about diversity in your results. You don't want to show 10 very similar papers as the first 10 results. So I think a long context models are quite interesting there. And especially for our case where we care more about power users who are perhaps a little bit more willing to wait a little bit longer to get higher quality results relative to people who just quickly check out things because why not? And I think being able to spend more on longer contexts is quite valuable.Jungwon [00:42:55]: Yeah. 
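A minimal sketch of the list-wise re-ranking Andreas describes above, where the model sees all candidates in one long-context prompt instead of scoring each in isolation (`llm` is any prompt-to-text function; the prompt format and output parsing are invented for the example):

```python
from typing import Callable, List

def listwise_rerank(llm: Callable[[str], str], query: str, docs: List[str]) -> List[str]:
    """List-wise re-ranking: show the model ALL candidates at once (this is
    where a long context window pays off) and ask for a ranked order."""
    numbered = "\n".join(f"[{i}] {d}" for i, d in enumerate(docs))
    raw = llm(
        f"Query: {query}\nCandidates:\n{numbered}\n"
        "Return the candidate indices from most to least relevant, comma-separated."
    )
    order = [int(tok) for tok in raw.split(",") if tok.strip().isdigit()]
    # Fall back to original order for anything the model dropped.
    seen = set(order)
    order += [i for i in range(len(docs)) if i not in seen]
    return [docs[i] for i in order if i < len(docs)]
```

Because the model compares candidates against each other, it can also be instructed to prefer diverse results rather than ten near-duplicate papers, which per-item scoring cannot express.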
I think one thing the longer context models changed for us is maybe a focus from breaking down tasks to breaking down the evaluation. So before, you know, if we wanted to answer a question from the full text of a paper, we had to figure out how to chunk it and like find the relevant chunk and then answer based on that chunk. And the nice thing was then, you know, kind of which chunk the model used to answer the question. So if you want to help the user track it, yeah, you can be like, well, this was the chunk that the model got. And now if you put the whole text in the paper, you have to like kind of find the chunk like more retroactively basically. And so you need kind of like a different set of abilities and obviously like a different technology to figure out. You still want to point the user to the supporting quotes in the text, but then the interaction is a little different.Swyx [00:43:38]: You like scan through and find some ROUGE score floor.Andreas [00:43:41]: I think there's an interesting space of almost research problems here because you would ideally make causal claims like if this hadn't been in the text, the model wouldn't have said this thing. And maybe you can do expensive approximations to that where like, I don't know, you just throw out a chunk of the paper and re-answer and see what happens. But hopefully there are better ways of doing that where you just get that kind of counterfactual information for free from the model.Alessio [00:44:06]: Do you think at all about the cost of maintaining RAG versus just putting more tokens in the window? I think in software development, a lot of times people buy developer productivity things so that we don't have to worry about it. Context window is kind of the same, right? You have to maintain chunking and like RAG retrieval and like re-ranking and all of this versus I just shove everything into the context and like it costs a little more, but at least I don't have to do all of that.
Is that something you thought about?Jungwon [00:44:31]: I think we still like hit up against context limits enough that it's not really, do we still want to keep this RAG around? It's like we do still need it for the scale of the work that we're doing, yeah.Andreas [00:44:41]: And I think there are different kinds of maintainability. In one sense, I think you're right that throw everything into the context window thing is easier to maintain because you just can swap out a model. In another sense, if things go wrong, it's harder to debug where like, if you know, here's the process that we go through to go from 200 million papers to an answer. And there are like little steps and you understand, okay, this is the step that finds the relevant paragraph or whatever it may be. You'll know which step breaks if the answers are bad, whereas if it's just like a new model version came out and now it suddenly doesn't find your needle in a haystack anymore, then you're like, okay, what can you do? You're kind of at a loss.Alessio [00:45:21]: Let's talk a bit about, yeah, needle in a haystack and like maybe the opposite of it, which is like hard grounding. I don't know if that's like the best name to think about it, but I was using one of these chat-with-your-documents features and I put the AMD MI300 specs and the new Blackwell chips from NVIDIA and I was asking questions and does the AMD chip support NVLink? And the response was like, oh, it doesn't say in the specs. But if you ask GPT-4 without the docs, it would tell you no, because NVLink is an NVIDIA technology.Swyx [00:45:49]: It just says in the thing.Alessio [00:45:53]: How do you think about that? Does using the context sometimes suppress the knowledge that the model has?Andreas [00:45:57]: It really depends on the task because I think sometimes that is exactly what you want. So imagine you're a researcher, you're writing the background section of your paper and you're trying to describe what these other papers say.
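The expensive counterfactual approximation Andreas floated a moment earlier (throw out a chunk, re-answer, and see whether the answer changes) can be sketched as a leave-one-out loop; `answer_fn` here stands in for a full answering pipeline and is purely illustrative:

```python
from typing import Callable, List

def attribute_by_ablation(
    answer_fn: Callable[[List[str]], str],
    chunks: List[str],
) -> List[int]:
    """Crude causal attribution: re-answer with each chunk removed. A chunk
    'supports' the answer if dropping it changes the answer. This costs one
    extra model call per chunk, which is why it is called expensive above."""
    baseline = answer_fn(chunks)
    supporting = []
    for i in range(len(chunks)):
        ablated = chunks[:i] + chunks[i + 1:]
        if answer_fn(ablated) != baseline:
            supporting.append(i)
    return supporting
```

The hoped-for better version would get the same counterfactual signal from a single pass, for example from the model's own attention or attribution machinery, rather than from repeated ablations.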
You really don't want extra information to be introduced there. In other cases where you're just trying to figure out the truth and you're giving the documents because you think they will help the model figure out what the truth is. I think you do want, if the model has a hunch that there might be something that's not in the papers, you do want to surface that. I think ideally you still don't want the model to just tell you, probably the ideal thing looks a bit more like agent control where the model can issue a query that then is intended to surface documents that substantiate its hunch. That's maybe a reasonable middle ground between model just telling you and model being fully limited to the papers you give it.Jungwon [00:46:44]: Yeah, I would say it's, they're just kind of different tasks right now. And the task that Elicit is mostly focused on is what do these papers say? But there's another task which is like, just give me the best possible answer and that give me the best possible answer sometimes depends on what do these papers say, but it can also depend on other stuff that's not in the papers. So ideally we can do both and then kind of do this overall task for you more going forward.Alessio [00:47:08]: We see a lot of details, but just to zoom back out a little bit, what are maybe the most underrated features of Elicit and what is one thing that maybe the users surprise you the most by using it?Jungwon [00:47:19]: I think the most powerful feature of Elicit is the ability to extract, add columns to this table, which effectively extracts data from all of your papers at once. It's well used, but there are kind of many different extensions of that that I think users are still discovering. So one is we let you give a description of the column. We let you give instructions of a column. We let you create custom columns. So we have like 30 plus predefined fields that users can extract, like what were the methods? What were the main findings? 
How many people were studied? And we actually show you basically the prompts that we're using to

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Latent Space Chats: NLW (Four Wars, GPT5), Josh Albrecht/Ali Rohde (TNAI), Dylan Patel/Semianalysis (Groq), Milind Naphade (Nvidia GTC), Personal AI (ft. Harrison Chase — LangFriend/LangMem)


Apr 6, 2024 · 121:17


Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor!

Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough “weekend special” content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy!

AI Breakdown

The indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape:

and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents:

Thursday Nights in AI

We're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A:

Dylan Patel on Groq

We hosted a private event with Dylan Patel of SemiAnalysis (our last pod here):

Not all of it could be released so we just talked about our Groq estimates:

Milind Naphade - Capital One

In relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One.
We covered:

* Milind's learnings from ~25 years in machine learning
* His first paper citation was 24 years ago
* Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis
* Thoughts on relevant AI research
* GTC takeaways and what makes NVIDIA special

If you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team.

Personal AI Meetup

It all started with a meme: Within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.

Timestamps

* [00:01:13] AI Breakdown Part 1
* [00:02:20] Four Wars
* [00:13:45] Sora
* [00:15:12] Suno
* [00:16:34] The GPT-4 Class Landscape
* [00:17:03] Data War: Reddit x Google
* [00:21:53] Gemini 1.5 vs Claude 3
* [00:26:58] AI Breakdown Part 2
* [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4
* [00:31:11] Open Source Models - Mistral, Grok
* [00:34:13] Apple MM1
* [00:37:33] Meta's $800b AI rebrand
* [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents
* [00:47:28] Adept episode - Screen Multimodality
* [00:48:54] Top Model Research from January Recap
* [00:53:08] AI Wearables
* [00:57:26] Groq vs Nvidia month - GPU Chip War
* [01:00:31] Disagreements
* [01:02:08] Summer 2024 Predictions
* [01:04:18] Thursday Nights in AI - swyx
* [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show
* [01:34:58] Groq

Transcript

[00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI.
In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, ThursdAI, and ChinaTalk, all of which you can find in the Latent Space About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the Four Wars framework and the AI engineer scene. We love AI Breakdown as one of the best daily podcasts to keep up on AI news, so we were especially excited to be back on. Watch out and take[00:01:12] NLW: care[00:01:13] AI Breakdown Part 1[00:01:13] NLW: today on the AI Breakdown. Part one of my conversation with Alessio and Swyx from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check ins like monthly about all the things that are going on and then, of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024.
And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.[00:02:20] Four Wars[00:02:20] NLW: Because first, since I have you here, I'd love, I'd love to hear sort of like where that started gelling. And then and then maybe we can get into, I think a couple of them that are you know, particularly interesting, you know, in the, in light of[00:02:30] swyx: some recent news. Yeah, so maybe I'll take this one. So the four wars is a framework that I came up around trying to recap all of 2023.[00:02:38] swyx: I tried to write sort of monthly recap pieces. And I was trying to figure out like what makes one piece of news last longer than another or more significant than another. And I think it's basically always around battlegrounds. Wars are fought around limited resources. And I think probably the, you know, the most limited resource is talent, but the talent expresses itself in a number of areas.[00:03:01] swyx: And so I kind of focus on those, those areas at first. So the four wars that we cover are the data wars, the GPU rich, poor war, the multi modal war, And the RAG and Ops War. And I think you actually did a dedicated episode to that, so thanks for covering that. Yeah, yeah.[00:03:18] NLW: Not only did I do a dedicated episode, I actually used that.[00:03:22] NLW: I can't remember if I told you guys. 
I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this Inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, Inflection is one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the Anthropics and OpenAIs of the world in terms of labs, but it's a company that raised $1.3 billion last year, less than a year ago. Reid Hoffman's a co-founder, Mustafa Suleyman, who's a co-founder of DeepMind, you know, so it's like, this is not a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark[00:04:32] NLW: brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So Inflection is GPU rich by startup standards, I think about 22,000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in Pi, of their own model, of their own kind of experience.
But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So and we'll tell, talk about GPT-4 and Claude 3 and all this stuff. GPU poor, doing something that the GPU rich are not interested in, you know, we just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved 10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not AGI, it's just translation. So I think like the Inflection part is maybe a calling and a waking to a lot of startups then say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what Inflection, I don't, I don't, again, I don't know the reasons behind the Inflection choice, but if you say, I don't want to build my own company that has 1.[00:06:05] Alessio: 3 billion and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my, my take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday. It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs that they have. And you should join them. And you know, Emad is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rombach and like most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails.
I think, yes, it's a data point in the favor of: like, just because you have the GPUs doesn't mean you can, you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not been, you know, all of these things worked out very well.[00:07:19] swyx: Even Inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of, a bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU poors, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the Coheres and Jaspers of the world that like this sort of, you know, that are, that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps.
Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that the vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason, there's plenty of that, you know, people who are saying, you know, look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story. Same thing happened last summer, when every, every outlet jumped on the ChatGPT had its first down month story to try to really like kind of hammer this idea that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build-your-own-chatbot platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, they're still like I don't know, a half dozen or 10 or something, like, some ridiculous number of like, basically things like Midjourney or DALL-E 3. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerns.[00:10:33] NLW: So, I don't know. I, I, I think that the, there will be inevitable consolidation 'cause, you know, it's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as sort of relevant as, as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I, I would say the experimentation.
So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely It's not a better image model.[00:12:13] Alessio: This is like a completely different thing, you know? And I think the creative mind It's always looking for something that impacts the viewer in a different way, you know, like they really want something different versus the developer mind. It's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have this like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that. If you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. 
We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, DALL-E 3 inside of sort of OpenAI's larger models versus, you know, a Midjourney or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, for most of the last, call it six months or whatever, it feels pretty definitively both-and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.[00:13:45] Sora[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought, you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both-and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to DALL-E 3. And DALL-E 3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT-4 Vision and GPT-4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting.
I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, the balance is tilted a little bit towards the God model companies, which is challenging for the, for the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno AI, the sort of music model company, and, you know, I don't see OpenAI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with. Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if just musicians were excited about Suno and using it, but what you're seeing is tons of people who just like music all of a sudden like playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves Midjourney for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that.
So yeah, I mean, you know, just to, just to tie back to the question about, you know, large multi modality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the DALL-E blog posts because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as DALL-E. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.[00:16:34] The GPT-4 Class Landscape[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape. 'cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with, with sort of the quality,[00:16:46] NLW: quality data or sort of the RAG and Ops wars, just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about RAG for the Gemini and Claude discussion later. And so maybe briefly discuss the data piece.[00:17:03] Data War: Reddit x Google[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into an AI data company. Also, very, very interestingly, a non exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the WebText dataset that originally started for GPT 1, 2, and 3 was actually scraped from, from Reddit, at least the sort of vote scores.
And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And, you know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like data source companies, but that's my two cents.[00:18:17] Alessio: Yeah yeah, I actually saw Quentin Anthony from EleutherAI at GTC this week. He's also been working on this. I saw Teknium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data, we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a "pay here" Stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they try to sell the avatars.[00:19:15] Alessio: I don't know if that's a great business for them.
The, the data part sounds, as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know, there's more questions around data, you know, I think a lot of people are talking about the interview that Mira Murati did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research, but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally think within it, there's great kind of points on all sides, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that Llama, you know, the most high performance version of it, which was one they didn't release, was trained on synthetic data. So maybe it's good.
It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained on some kind of synthetic data and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like AlphaGeometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT-4 class landscape and how that's changed.[00:21:53] Gemini 1.5 vs Claude 3[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened since we last talked were, one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini and, and Claude 3.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking, which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Claude.
Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapters creation, like named entity recognition, summarization, and all of that. Claude 3 is better than GPT-4. Claude 2 was unusable. So I use GPT-4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Claude is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT-4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT-4 text, you can tell it's like GPT-4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know... So, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Claude is really good. I know everybody is freaking out on Twitter about it, my only experience of this is-much-better has been on the podcast use case. But I know that, you know, Karan from, from Nous Research is a very big Opus pro, pro-Opus person. So, I think that's also, it's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Anthropic has been the sibling in the corner, you know, it's like Claude releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good. It's interesting.[00:23:55] NLW: My, my perception is definitely that just, just observationally, Claude 3 is certainly the first thing that I've seen where lots of people.
They're talking about the specific use cases that they have, that they used to use chat GPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a A like full you know, sort of full knock. Let's put it this way. I think it's less bad for open AI than it is good for anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know get overly critical of, of open AI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes, Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year. People were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that like. The MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just strive for yourself and like, which one does it work, works best for you?[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction. 
We have an episode with Adept coming out this weekend. And in some of their model releases, they specifically say, we do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I, I would say like, it does take the wind out of the sails for GPT-5, which I know we're, you know, curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys. And so that puts an insane amount of pressure on what GPT-5 is going to be, because it's just, that's the only option it has now: because all the other models are multimodal, all the other models are long context, all the other models have perfect recall, GPT-5 has to match everything and do more to, to not be a flop.[00:26:58] AI Breakdown Part 2[00:26:58] NLW: Hello friends, back again with part two. If you haven't heard part one of this conversation, I suggest you go check it out, but to be honest, they are kind of actually separable. In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in.
Let's bring it to the GPT-5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so, so I think that that's a great sort of assessment of just how the stakes have been raised, you know is your, I mean, so I guess maybe, maybe I'll, I'll frame this less as a question, just sort of something that, that I, that I've been watching right now, the only thing that makes sense to me with how[00:27:50] NLW: fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Fridman interview that, that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, he, listen, he, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it, but, you know, they've had so long to work on this, like unless we are like really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase, but I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're, they're pretty good at PR and they're not going to tell you anything that they don't want to. And he can tell you one thing and change their minds the next day. So it's, it's, it's really, you know, I've always said that model version numbers are just marketing exercises, like they have something and it's always improving and at some point you just cut it and decide to call it GPT-5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready and it's up to them on what ready means. We definitely did see some leaks on GPT-4.5, as I think a lot of people reported and I'm not sure if you covered it. So it seems like there might be an intermediate release.
But I did feel, coming out of the Lex Fridman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point reading too much tea leaves into what any one person says about something that hasn't happened yet or has a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's, that's my 2 cents about it. Like, calm down, let's just build.[00:29:35] Alessio: Yeah. The, the February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had two, I think two agent projects, right? One desktop agent and one sort of more general, yeah, sort of GPTs-like agent, and then Andrej left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andrej see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know, but I think like, this is, you know, we're not going to get to the end of the year without GPT 5, you know, that's definitely happening. I think the biggest question is like, are Anthropic and Google[00:30:13] Alessio: increasing the pace, you know? Like, is Claude 4 coming out in like 12 months, like nine months? What's the, what's the deal? Same with Gemini. They went from like 1 to 1.5 in like five days or something. So when's Gemini 2 coming out, you know, is that going to be soon? I don't know.
And, you know, not as, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we, we have now slowly changed in landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world where Claude and Gemini are legitimate challengers to GPT 4, and hopefully more will emerge as well, hopefully from Meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's, it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them, like, the community largely recognizes that they want them to keep building open source stuff and they have to find some way to fund themselves, that they're going to do that.[00:31:27] NLW: And so they kind of understand that there's like, they got to figure out how to eat, but we've got, so, you know, there's Mistral, there's, I guess, Grok now, which is, you know, Grok one is from, from October is, is open[00:31:38] swyx: sourced at, yeah. Yeah, sorry, I thought you meant Groq, the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Groq, the chip company, I think is even more interesting in some ways, but and then there's the, you know, obviously Llama 3 is the one that sort of everyone's wondering about too. 
And, you know, my, my sense of that, the little bit that, you know, Zuckerberg was talking about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, Meta, you know, keeps, keeps the open source throne, you know, vis a vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how, how he, you know, releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah. From what I heard in the hallways at, at GDC, Llama 3, the, the biggest model will be, you know, 260 to 300 billion parameters, so that, that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameters model and ask them to run it. You know, it's very compute intensive. So I think it is, it[00:32:35] swyx: can be open source. It's just, it's going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not like empowering the person running[00:32:45] Alessio: Llama on, on their laptop. It's like, oh, you can actually now use this to go after OpenAI, to go after Anthropic, to go after some of these companies at like the middle complexity level, so to speak. Yeah. So obviously, you know, we had Soumith Chintala on the podcast, they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, they want to, that's kind of like maybe a little bit of a shot at NVIDIA, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. The, I love the Zuck destroying a lot of monopolies arc. You know, it's, it's been very entertaining. 
Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because this is obviously like, so I think actually when I did my episode, this was one of the, I added this as an additional war that, that's something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with Inflection, which I think potentially are being read as a shift vis a vis the relationship with OpenAI, which also the sort of Mistral Large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and and, and kind of trying to spend more effort on this.[00:33:50] NLW: Although, counterpoint, we also have them talking about it, or there being reports of a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet. We kind of just talked about the main piece, but, you know, there's, and then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, what, what of those things has sort of been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover it differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti-interestingness, we actually say, like, we try not to cover the Big Tech Game of Thrones, or it's proxied through Twitter. You know, all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually released, they announced their first large language model that they trained themselves. It's like a 30 billion multimodal model. 
People weren't that impressed, but it was like the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very sort of rumor-y to me. And it's probably, if it's on device, it's going to be a smaller model. So something like a Gemma. It's going to be smarter autocomplete. I don't know what to say. I'm still here dealing with, like, Siri, which hasn't, probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. I, you know, it, it, it makes me so angry. So I, I, one, as an Apple customer and user, I, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute and, and trust, like you, you trust them with your data. And I think that's what a lot of people are looking for in AI, that they have, they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So Apple being like one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the, the biggest question with the Google deal is like, who's paying who?[00:36:03] Alessio: Because for the browsers, Google pays Apple like 18, 20 billion every year to be the default browser. Is Google going to pay you to have Gemini or is Apple paying Google to have Gemini? I think that's, that's like what I'm most interested to figure out, because with the browsers, it's like, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if like the perception in AI is going to be like, Hey. 
You just have to have a good local model on my phone to be worth me purchasing your device. And that would, that would kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing the MM1 themselves.[00:36:40] Alessio: So are they saying we do models, but they're not as good as the Google ones? I don't know. The whole thing is, it's really confusing, but it makes for great meme material on, on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are possibly more than OpenAI and Microsoft and Amazon. They are the most full stack company there is in computing, and so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything, so if, if, if there was a company that could, you know, seriously challenge the other AI players, it would be Apple. And it's, I don't think it's as hard as self driving. So like maybe they've, they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There's a big, a big sigh of relief. Well, let's, let's move away from, from sort of the big stuff. I mean, the, I think to both of your points, it's going to.[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I, can I jump on a factoid about this, this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock, I'm trying to look up the details now. The stock has gone up 187% since Llama 1. Yeah. Which is $830 billion in market value created in the past year. Yeah. Yeah.[00:37:57] NLW: It's, it's, it's like, remember if you guys haven't, yeah. 
If you haven't seen the chart, it's actually like remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now and forget the VR thing.[00:38:10] NLW: It's it, it is an interesting, no, it's, I, I think, Alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He, he really does. He is in the midst of a, of a total, you know, I don't know if it's a redemption arc or it's just, it's something different where, you know, he, he's sort of the spoiler.[00:38:25] NLW: Like people loved him just freestyle talking about why he thought they had a better headset than Apple. But even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that, that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see Zuck fight Elon though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't, don't, don't write it off, you know, maybe just these things take a while to happen. But we need to see that fight in the Coliseum. No, I think you know, in terms of like self management, life leadership, I think he has, there's a lot of lessons to learn from him.[00:38:59] swyx: You know he might, you know, you might kind of quibble with, like, the social impact of Facebook, but just himself as a, in terms of personal growth and, and, you know, perseverance through like a lot of change and, you know, everyone throwing stuff his way. I think there's a lot to say about, like, to learn from, from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.[00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents[00:39:20] NLW: Awesome. 
Well, so, so one of the big things that I think you guys have, you know, distinct and, and unique insight into being where you are and what you work on is, you know, what developers are getting really excited about right now. And by that, I mean, on the one hand, certainly, you know, like startups who are actually kind of formalized and formed to startups, but also, you know, just in terms of like what people are spending their nights and weekends on, what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's a, it's a, it's, it's such a fascinating indicator for, for where things are headed. Like if you zoom back a year, right now was right when everyone was getting so, so excited about AI agent stuff, right? AutoGPT and BabyAGI. And these things were like, if you dropped anything on YouTube about those, like instantly tens of thousands of views.[00:40:07] NLW: I know because I had like a 50,000 view video, like the second day that I was doing the show on YouTube, you know, because I was talking about AutoGPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's, people's interest and, and, and what people are building?[00:40:24] Alessio: I can start maybe with the agents part and then I know Shawn is doing a diffusion meetup tonight. There's a lot of, a lot of different things. The, the agent wave has been the most interesting kind of like dream to reality arc. So AutoGPT, I think, went from zero to like 125,000 GitHub stars in six weeks, and then one year later, they have 150,000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can star it. You know, everybody already starred it. But the promise of, hey, I'll just give you a goal, and you do it, I think it's like, amazing to get people's imagination going. 
You know, they're like, oh, wow, this, this is awesome.[00:41:08] Alessio: Everybody, everybody can try this to do anything. But then as technologists, you're like, well, that's, that's just like not possible, you know, we would have like solved everything. And I think it takes a little bit to go from the promise and the hope that people show you to then try it yourself and going back to say, okay, this is not really working for me.[00:41:28] Alessio: And David Luan from Adept, you know, in our episode, he specifically said, we don't want to do a bottom up product. You know, we don't want something that everybody can just use and try because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something.[00:41:49] Alessio: Mike Conover, who was at Databricks before, is also a friend of Latent Space. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal.[00:42:08] Alessio: All of these things that, like, nobody just wakes up and says, oh, I cannot wait to go on AutoGPT and ask it to do a compliance review of my thing. You know, it's just not what inspires people. So I think the gap on the developer side has been, the more bottoms-up hacker mentality is trying to build these, like, very generic agents that can do a lot of open ended tasks.[00:42:30] Alessio: And then the more business side of things is like, hey, if I want to raise my next round, I cannot just, like, sit around and mess around with, like, super generic stuff. I need to find a use case that really works. 
And I think that that is true for, for a lot of folks. In parallel, you have a lot of companies doing evals.[00:42:47] Alessio: There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a constrained surface area to actually figure out whether or not it's good, right? Because you cannot eval anything on everything under the sun. So that's another category where, from the startup pitches that I've seen, there's a lot of interest in, in the enterprise.[00:43:11] Alessio: It's just, like, really fragmented, because the production use cases are just coming, like, now, you know, there are not a lot of long established ones to, to test against. And so, that's kind of on the virtual agents, and then the robotic side has probably been the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there, that were just like robots everywhere.[00:43:33] Alessio: Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this, like fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots. NVIDIA did a big push on their own Omniverse thing, which is, like, this digital twin of whatever environments you're in that you can use to train the robot agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I gave a talk about the, the rise of the full stack employees and kind of this future, the same way full stack engineers kind of work across the stack. In the future, every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening. 
It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, let's cover a lot of stuff there. I'll separate the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers and, you know, this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. I think, you know, the point I'll make here is just, the reason AutoGPT and BabyAGI, you know, it's in the name, like they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level, and all these engineering, prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I would say, I'll go so far as to say, even Devin, which is, I would, I think the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just way too slow and expensive for, you know, what it's, what it's promised compared to the video. So yeah, that's, that's what, that's what happened with agents from, from last year. But I, I do, I do see, like, vertical agents being very popular and, and sometimes you, like, I think the word agent might even be overused sometimes.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do, that I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do. 
And I think there's absolutely ways in sort of a vertical context that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So, so yeah, I mean, and I would, I would sort of basically plus one what Alessio said there. I think it's, it's very, very promising and I think more people should work on it, not less. Like there's not enough people. Like, we, like, this should be the, the, the main thrust of the AI engineer, is to look out, look for use cases and, and go to production with them instead of just always working on some AGI promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I, I can only add that, so I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've probably done, we've done about 300 tutorials over the last couple of months. And the verticalized anything, right, like this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, it's like so radically more useful to people in terms of intersecting with how, like those are the ways that people are actually[00:46:50] NLW: adopting AI in a lot of cases. It's just a, a, a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use Midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like day in, day out, I will always do a YouTube thumbnail, you know, or two with, with Midjourney, right?[00:47:09] NLW: And it's like you can, you can start to extrapolate that across a lot of things, and all of a sudden, you know, AI looks revolutionary because of a million small changes rather than one sort of big dramatic change. 
And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is I think that, because multimodal models are now commonplace, like Claude, Gemini, OpenAI, all very, very easily multimodal, Apple's easily multimodal, all this stuff, there is a switch for agents for sort of general desktop browsing that I think people underestimate.[00:48:04] swyx: It's the version of the agent where they're not specifically taking in text or anything. They're just watching your screen just like someone else would, and piloting it by vision. And you know, in the episode with David that will have dropped by the time that this airs, I think, I think that is the promise of Adept, and that is a promise of what a lot of these sort of desktop agents are, and that is the more general purpose system that could be as big as the browser, the operating system. Like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that, that you can have sort of self driving computers. You know, don't write the horizontal piece off. I just think we took a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.[00:48:54] Top Model Research from January Recap[00:48:54] swyx: Yeah, so I'll take the next two as like one category, which is basically alternative architectures, right? The two main things that everyone following AI kind of knows now is, one, the diffusion architecture, and two, the, let's just say the decoder only transformer architecture that is popularized by GPT.[00:49:12] swyx: You can read, you can look on YouTube for thousands and thousands of tutorials on each of those things. 
What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures and then diffusion.[00:49:25] swyx: So transformers, the, the two leading candidates are effectively RWKV and the state space models, the most recent one of which is Mamba, but there's others like StripedHyena and the S4/H3 stuff coming out of Hazy Research at Stanford. And all of those are non-quadratic language models that promise to scale a lot better than the, the traditional transformer.[00:49:47] swyx: This might be too theoretical for most people right now, but it's, it's gonna be, it's gonna come out in weird ways, where, imagine if like, right now the talk of the town is that Claude and Gemini have a million tokens of context, and like, whoa, you can put in like, you know, two hours of video now, okay. But like, what if you put, what if we could like throw in, you know, two hundred thousand hours of video?[00:50:09] swyx: Like how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and like synthesize new drugs. Like, well, how does that change things? Like, we don't know, because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models.[00:50:28] swyx: They're not there yet, but we're seeing very, very good progress. RWKV and Mamba are probably the, like, the two leading examples, both of which are open source, that you can try them today and, and have a lot of progress there. And the, the, the main thing I'll highlight for RWKV is that at, at the 7B level, they seem to have beat Llama 2 in all benchmarks that matter at the same size for the same amount of training as an open source model.[00:50:51] swyx: So that's exciting. You know, they're there, they're 7B now. They're not at 70B. 
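The scaling contrast swyx is describing can be sketched in a few lines of plain Python. This is an illustrative toy, not RWKV's or Mamba's actual update rule: the "attention" here is just a causal uniform average, and the "state space" update is a simple exponential decay, but the cost structure is the point — the first touches every pair of positions (quadratic in sequence length), while the second carries a fixed-size state forward (linear).

```python
def quadratic_attention(xs):
    # Attention-style mixing: every position looks back at every earlier
    # position, so total work grows with the square of the sequence length.
    out = []
    for i in range(len(xs)):
        window = xs[: i + 1]                 # all positions up to i
        out.append(sum(window) / len(window))  # uniform weights for simplicity
    return out

def linear_recurrence(xs, decay=0.9):
    # SSM/RWKV-style update: a fixed-size state summarizes the whole past,
    # so each new token costs a constant amount of work regardless of length.
    state = 0.0
    out = []
    for x in xs:
        state = decay * state + x            # blend old state with new input
        out.append(state)
    return out

seq = [1.0, 2.0, 3.0, 4.0]
print(quadratic_attention(seq))  # cumulative averages: [1.0, 1.5, 2.0, 2.5]
print(linear_recurrence(seq))
```

Doubling the sequence length roughly quadruples the work in the first function but only doubles it in the second, which is why these architectures make hundred-thousand-hour-of-video context windows thinkable at all.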
We don't know if it'll scale. And then the other thing is diffusion. Diffusion and transformers are kind of on a collision course. The original Stable Diffusion already used transformers in parts of its architecture.[00:51:06] swyx: It seems that transformers are eating more and more of those layers, particularly the sort of VAE layer. So that's, the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Peebles, is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on.[00:51:25] swyx: But there's, there's more sort of experimentation with diffusion. I'm holding a meetup actually here in San Francisco that's gonna be like the state of diffusion, which I'm pretty excited about. Stability's doing a lot of good work. And if you look at the, the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, and the consistency models, or SDXL Turbo,[00:51:45] swyx: all of these are, like, very, very interesting innovations on, like, the original idea of what Stable Diffusion was. So if you think that it is expensive to create or slow to create Stable Diffusion or AI generated art, you are not up to date with the latest models. If you think it is hard to create text in images, you are not up to date with the latest models.[00:52:02] swyx: And people still are kind of far behind. The last piece of which is the wildcard I always kind of hold out, which is text diffusion. So instead of using autoregressive transformers, can you use text to diffuse? So you can use diffusion models to diffuse and create entire chunks of text all at once instead of token by token.[00:52:22] swyx: And that is something that Midjourney confirmed today, because it was only rumored the past few months. But they confirmed today that they were looking into it. 
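The "entire chunks at once instead of token by token" distinction can be made concrete with a deliberately toy sketch. Nothing here is a real model: the autoregressive path just reads characters off a fixed target string one at a time (standing in for sampling the next token), and the "denoiser" snaps random characters toward that same target (standing in for a learned denoising step). What matters is the shape of the loops — one sequential pass of length n versus a few refinement steps that each update every position in parallel.

```python
import random

TARGET = "hello world"  # stand-in for "what the model wants to say"

def autoregressive_decode(length):
    # Transformer-style generation: position i is only produced after
    # positions 0..i-1, so this is `length` sequential model calls.
    out = ""
    for i in range(length):
        out += TARGET[i]            # stand-in for "sample the next token"
    return out

def diffusion_decode(length, steps=5, seed=0):
    # Diffusion-style generation: start from pure noise over the WHOLE
    # sequence and refine every position in parallel at each step.
    rng = random.Random(seed)
    seq = [rng.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(length)]
    for _ in range(steps):
        # toy "denoiser": each step corrects a fraction of all positions at once
        for i in range(length):
            if rng.random() < 0.5:
                seq[i] = TARGET[i]
    return "".join(seq)

print(autoregressive_decode(len(TARGET)))  # 11 sequential steps
print(diffusion_decode(len(TARGET)))       # 5 parallel refinement steps
```

In a real text diffusion model the denoiser is a neural network and the refinement happens in embedding or token-probability space, but the trade swyx describes survives the simplification: a handful of whole-sequence passes instead of one model call per token.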
So all those things are like very exciting new model architectures that are, Maybe something that we'll, you'll see in production two to three years from now.[00:52:37] swyx: So the couple of the trends[00:52:38] NLW: that I want to just get your takes on, because they're sort of something that, that seems like they're coming up are one sort of these, these wearable, you know, kind of passive AI experiences where they're absorbing a lot of what's going on around you and then, and then kind of bringing things back.[00:52:53] NLW: And then the, the other one that I, that I wanted to see if you guys had thoughts on were sort of this next generation of chip companies. Obviously there's a huge amount of emphasis. On on hardware and silicon and, and, and different ways of doing things, but, y

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Why Google failed to make GPT-3 + why Multimodal Agents are the path to AGI — with David Luan of Adept


Mar 22, 2024 41:52


Our next SF event is AI UX 2024 - let's see the new frontier for UX since last year! Last call: we are recording a preview of the AI Engineer World's Fair with swyx and Ben Dunphy, send any questions about Speaker CFPs and Sponsor Guides you have!

Alessio is now hiring engineers for a new startup he is incubating at Decibel: the ideal candidate is an “ex-technical co-founder type”. Reach out to him for more!

David Luan has been at the center of the modern AI revolution: he was the ~30th hire at OpenAI, he led Google's LLM efforts and co-led Google Brain, and then started Adept in 2022, one of the leading companies in the AI agents space. In today's episode, we asked David for some war stories from his time in early OpenAI (including working with Alec Radford ahead of the GPT-2 demo with Sam Altman that resulted in Microsoft's initial $1b investment), and how Adept is building agents that can “do anything a human does on a computer” — his definition of useful AGI.

Why Google *couldn't* make GPT-3

While we wanted to discuss Adept, we couldn't talk to a former VP of Engineering at OpenAI and former LLM tech lead at Google Brain without asking about the elephant in the room. It's often asked how Google had such a huge lead in 2017, with Vaswani et al creating the Transformer and Noam Shazeer predicting trillion-parameter models, and yet it was David's team at OpenAI that ended up making GPT 1/2/3. David has some interesting answers:

“So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized… what they (should) have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too… You know, every day we were scaling up GPT-3, I would wake up and just be stressed. And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing. He's got this decoder only transformer that's probably going to get there before we do. And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the brain leads, you know, it became really clear why. At the time, there was a thing called the Brain Credit Marketplace. Everyone's assigned a credit. So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused.”

Cloning HGI for AGI

Human intelligence got to where it is today through evolution. Some argue that to get to AGI, we will approximate all the “FLOPs” that went into that process, an approach most famously mapped out by Ajeya Cotra's Biological Anchors report:

The early days of OpenAI were very reinforcement learning-driven with the Dota project, but that's a very inefficient way for these models to re-learn everything. (Kanjun from Imbue shared similar ideas in her episode.)

David argues that there's a shortcut: we can bootstrap from existing intelligence.

“Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI.
And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there… I think we are ignoring the fact that you have a giant shortcut, which is you can behaviorally clone everything humans already know. And that's what we solved with LLMs!”

LLMs today basically model intelligence using all (good!) written knowledge (see our Datasets 101 episode), and have now expanded to non-verbal knowledge (see our HuggingFace episode on multimodality). The SOTA self-supervised pre-training process is surprisingly data-efficient in taking large amounts of unstructured data, and approximating reasoning without overfitting.

But how do you cross the gap from the LLMs of today to building the AGI we all want? This is why David & friends left to start Adept.

“We believe the clearest framing of general intelligence is a system that can do anything a human can do in front of a computer. A foundation model for actions, trained to use every software tool, API, and webapp that exists, is a practical path to this ambitious goal” — ACT-1 Blogpost

Critical Path: Abstraction with Reliability

The AGI dream is fully autonomous agents, but there are levels to autonomy that we are comfortable giving our agents, based on how reliable they are. In David's word choice, we always want higher levels of “abstraction” (aka autonomy), but our need for “reliability” is the practical limit on how high of an abstraction we can use.

“The critical path for Adept is we want to build agents that can do higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with a really high reliability standard, but are continually pushing the level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flow. That's the critical path for the company. Everything we do is in service of that.”

We saw how Adept thinks about different levels of abstraction at the 2023 Summit:

The highest abstraction is the “AI Employee”, but we'll get there with “AI enabled employees”. Alessio recently gave a talk about the future of work with “services as software” at this week's Nvidia GTC (slides).

No APIs

Unlike a lot of large research labs, Adept's framing of AGI as "being able to use your computer like a human" carries with it a useful environmental constraint:

“Having a human robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so then many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data that you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path (to economic value).”

This realization and conviction means that multimodal models are the way to go. Instead of using function calling to call APIs to build agents, which is what OpenAI and most of the open LLM industry have done to date, Adept wants to "drive by vision" (aka see the screen as a human sees it) and pinpoint where to click and type as a human does.
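To make the "drive by vision" idea concrete, here is a minimal sketch of such an observe-decide-act loop. Everything below is an illustrative stub, not Adept's actual system: `policy` stands in for a multimodal model like Fuyu, and `take_screenshot` and the scripted plan are hypothetical placeholders.

```python
# Sketch of a vision-driven agent loop: screenshot in, click/type action out.
# All names here are hypothetical; a real system would run a vision-language
# model over raw pixels and dispatch real mouse/keyboard events.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

def take_screenshot(step: int) -> bytes:
    # Stand-in for grabbing raw screen pixels; no application API needed.
    return f"frame-{step}".encode()

def policy(screenshot: bytes, goal: str) -> Action:
    # A real policy would be a multimodal model; this stub replays a fixed plan.
    step = int(screenshot.decode().split("-")[1])
    plan = [
        Action("click", x=120, y=340),   # focus the search box
        Action("type", text=goal),       # type the query
        Action("done"),                  # goal reached
    ]
    return plan[min(step, len(plan) - 1)]

def run_agent(goal: str, max_steps: int = 10) -> list[Action]:
    trace = []
    for step in range(max_steps):
        obs = take_screenshot(step)
        action = policy(obs, goal)
        trace.append(action)
        if action.kind == "done":
            break
        # execute_action(action) would dispatch real mouse/keyboard events here
    return trace

trace = run_agent("2 bedroom houses in Boston")
print([a.kind for a in trace])  # → ['click', 'type', 'done']
```

The key design point is that the action space (pixel coordinates plus keystrokes) is universal across software, which is what removes the per-application API requirement discussed below.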
No APIs needed, because most software doesn't expose APIs.

Extra context for readers: You can see the DeepMind SIMA model in the same light: one system that learned to play a diverse set of games (instead of one dedicated model per game) using only pixel inputs and keyboard-and-mouse action outputs! The OpenInterpreter team is working on a “Computer API” that also does the same.

To do this, Adept had to double down on a special kind of multimodality for knowledge work:

“A giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents… I think one big hangover of the primarily academic focus for multimodal models is most multimodal models are primarily trained on natural images, cat and dog photos, stuff that's come out of the camera… (but) where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs. And so if that's what it is, what do you need to train? I need to train on charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that.”

With this context, you can now understand the full path of Adept's public releases:

* ACT-1 (Sept 2022): a large Transformers model optimized for browser interactions. It has a custom rendering of the browser viewport that allows it to better understand it and take actions.
* Persimmon-8B (Sept 2023): a permissive open LLM (weights and code here)
* Fuyu-8B (Oct 2023): a small version of the multimodal model that powers Adept. Vanilla decoder-only transformer with no specialized image encoder, which allows it to handle input images of varying resolutions without downsampling.
* Adept Experiments (Nov 2023): a public tool to build automations in the browser. This is powered by Adept's core technology but it's just a piece of their enterprise platform. They use it as a way to try various design ideas.
* Fuyu Heavy (Jan 2024): a new multimodal model designed specifically for digital agents and the world's third-most-capable multimodal model (beating Gemini Pro on MMMU, AI2D, and ChartQA), “behind only GPT4-V and Gemini Ultra, which are 10-20 times bigger”

The Fuyu-8B post in particular exhibits a great number of examples on knowledge work multimodality:

Why Adept is NOT a Research Lab

With OpenAI now worth >$90b and Anthropic >$18b, it is tempting to conclude that the AI startup metagame is to build a large research lab, and attract the brightest minds and highest capital to build AGI. Our past guests (see the Humanloop episode) and Kanjun (from Imbue) combined to ask the most challenging questions of the pod: with David/Adept's deep research pedigree from DeepMind and OpenAI, why is Adept not building more general foundation models (like Persimmon) and playing the academic benchmarks game? Why is Adept so focused on commercial agents instead?

“I feel super good that we're doing foundation models in service of agents and all of the reward within Adept is flowing from “Can we make a better agent”… I think pure play foundation model companies are just going to be pinched by how good the next couple of (Meta Llama models) are going to be… And then seeing the really big players put ridiculous amounts of compute behind just training these base foundation models, I think is going to commoditize a lot of the regular LLMs and soon regular multimodal models.
So I feel really good that we're just focused on agents.”

And the commercial grounding is his answer to Kanjun too (whom we also asked the inverse question, to compare with Adept):

“… the second reason I work at Adept is if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think the examples for why that's true is for example, our evaluations are not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them. These are the ones they want us to, we can't do them at all. We've turned those into evals… I think that's a degree of practicality that really helps.”

And his customers seem pretty happy, because David didn't need to come on to do a sales pitch:

David: “One of the things we haven't shared before is we're completely sold out for Q1.”

Swyx: “Sold out of what?”

David: “Sold out of bandwidth to onboard more customers.”

Well, that's a great problem to have.

Show Notes

* David Luan
* Dextro at Data Driven NYC (2015)
* Adept
* ACT-1
* Persimmon-8B
* Adept Experiments
* Fuyu-8B
* $350M Series B announcement
* Amelia Wattenberger talk at AI Engineer Summit
* Figure

Chapters

* [00:00:00] Introductions
* [00:01:14] Being employee #30 at OpenAI and its early days
* [00:13:38] What is Adept and how do you define AGI?
* [00:21:00] Adept's critical path and research directions
* [00:26:23] How AI agents should interact with software and impact product development
* [00:30:37] Analogies between AI agents and self-driving car development
* [00:32:42] Balancing reliability, cost, speed and generality in AI agents
* [00:37:30] Potential of foundation models for robotics
* [00:39:22] Core research questions and reasons to work at Adept

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast.
This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:15]: Hey, and today we have David Luan, CEO, co-founder of Adept in the studio. Welcome.

David [00:00:20]: Yeah, thanks for having me.

Swyx [00:00:21]: Been a while in the works. I've met you socially at one of those VC events and you said that you were interested in coming on and glad we finally were able to make this happen.

David: Yeah, happy to be part of it.

Swyx: So we like to introduce the speaker and then also just like have you talk a little bit about like what's not on your LinkedIn, what people should just generally know about you. You started a company in college, which was the first sort of real time video detection classification API, that was Dextro, and that was your route to getting acquired into Axon where you were a director of AI. Then you were the 30th hire at OpenAI?

David [00:00:53]: Yeah, 30, 35, something around there. Something like that.

Swyx [00:00:56]: So you were VP of Eng for two to two and a half years, briefly served as tech lead of large models at Google, and then in 2022 started Adept. So that's the sort of brief CV. Is there anything else you like want to fill in the blanks or like people should know more about?

David [00:01:14]: I guess a broader story was I joined OpenAI fairly early and I did that for about two and a half to three years leading engineering there. It's really funny, I think second or third day of my time at OpenAI, Greg and Ilya pulled me in a room and we're like, you know, you should take over our directs and we'll go mostly do IC work. So that was fun, just coalescing a bunch of teams out of a couple of early initiatives that had already happened. The company, the Dota effort was going pretty hard and then more broadly trying to put bigger picture direction around what we were doing with basic research. So I spent a lot of time doing that. And then I led Google's LLM efforts, but also co-led Google Brain, as one of the Brain leads more broadly. You know, there's been a couple of different eras of AI research, right? If we count everything before 2012 as prehistory, which people hate it when I say that, kind of had this like you and your three best friends write a research paper that changes the world period from like 2012 to 2017. And I think the game changed in 2017 and like most labs didn't realize it, but we at OpenAI really did. I think in large part helped by like Ilya's constant beating of the drum that the world would be covered in data centers. And I think-

Swyx [00:02:15]: It's causally neat.

David [00:02:16]: Yeah. Well, like I think we had conviction in that, but it wasn't until we started seeing results that it became clear that that was where we had to go. But also part of it as well was for OpenAI, like when I first joined, I think one of the jobs that I had to do was how do I tell a differentiated vision for who we were technically compared to, you know, hey, we're just smaller Google Brain, or like you work at OpenAI if you live in SF and don't want to commute to Mountain View or don't want to live in London, right? That's like not enough to like hang your technical identity as a company. And so what we really did was, and I spent a lot of time pushing this, is just how do we get ourselves focused on a certain class of like giant swings and bets, right? Like how do you flip the script from you just do bottom-up research to more about how do you like leave some room for that, but really make it about like, what are the big scientific outcomes that you want to show? And then you just solve them at all costs, whether or not you care about novelty and all that stuff. And that became the dominant model for a couple of years, right?
And then what's changed now is I think the number one driver of AI products over the next couple of years is going to be the deep co-design and co-evolution of product and users for feedback and actual technology. And I think labs with every tool to go do that are going to do really well. And that's a big part of why I started Adept.

Alessio [00:03:20]: You mentioned Dota, any memories from like the switch from RL to Transformers at the time and kind of how the industry was evolving more in the LLM side and leaving behind some of the more agent simulation work?

David [00:03:33]: Like zooming way out, I think agents are just absolutely the correct long-term direction, right? You just go define what AGI is, right? You're like, Hey, like, well, first off, actually, I don't love AGI definitions that involve human replacement because I don't think that's actually how it's going to happen. Even this definition of like, Hey, AGI is something that outperforms humans at economically valuable tasks is kind of implicit view of the world about what's going to be the role of people. I think what I'm more interested in is like a definition of AGI that's oriented around like a model that can do anything a human can do on a computer. If you go think about that, which is like super tractable, then agent is just a natural consequence of that definition. And so what did all the work we did on our own stuff like that get us was it got us a really clear formulation. Like you have a goal and you want to maximize the goal, you want to maximize reward, right? And the natural LLM formulation doesn't come with that out of the box, right? I think that we as a field got a lot right by thinking about, Hey, how do we solve problems of that caliber? And then the thing we forgot is that de novo RL is like a pretty terrible way to get there quickly. Why are we rediscovering all the knowledge about the world? Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI. And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there. Right.

Swyx [00:04:44]: The biological basis theory. Right.

David [00:04:46]: So I think we are ignoring the fact that you have a giant shortcut, which is you can behavioral clone everything humans already know. And that's what we solved with LLMs. We've solved behavioral cloning, everything that humans already know. Right. So like today, maybe LLMs is like behavioral cloning every word that gets written on the internet; in the future, the multimodal models are becoming more of a thing where behavioral cloning the visual world. But really, what we're just going to have is like a universal byte model, right? Where tokens of data that have high signal come in, and then all of those patterns are like learned by the model. And then you can regurgitate any combination now. Right. So text into voice out, like image into other image out or video out or whatever, like these like mappings, right? Like all just going to be learned by this universal behavioral cloner. And so I'm glad we figured that out. And I think now we're back to the era of how do we combine this with all of the lessons we learned during the RL period. That's what's going to drive progress.

Swyx [00:05:35]: I'm still going to pressure you for a few more early OpenAI stories before we turn to the Adept stuff. On your personal site, which I love, because it's really nice, like personal, you know, story context around like your history. I need to update it. It's so old. Yeah, it's so out of date. But you mentioned GPT-2. Did you overlap with GPT-1? I think you did, right?

David [00:05:53]: I actually don't quite remember. I think I was joining right around- Right around then?

Swyx [00:05:57]: I was right around that, yeah. Yeah.
So what I remember was Alec, you know, just kind of came in and was like very obsessed with Transformers and applying them to like Reddit sentiment analysis. Yeah, sentiment, that's right. Take us through-

David [00:06:09]: Sentiment neuron, all this stuff.

Swyx [00:06:10]: The history of GPT as far as you know, you know, according to you. Ah, okay.

David [00:06:14]: History of GPT, according to me, that's a pretty good question. So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized, where like, again, you and your three best friends write papers, right? Okay. So zooming way out, right? I think about my job when I was a full-time research leader as a little bit of a portfolio allocator, right? So I've got really, really smart people. My job is to convince people to coalesce around a small number of really good ideas and then run them over the finish line. My job is not actually to promote a million ideas and never have critical mass. And then as the ideas start coming together and some of them start working well, my job is to nudge resources towards the things that are really working and then start disbanding some of the things that are not working, right? That muscle did not exist during my time at Google. And I think had they had it, what they would have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too.

Swyx [00:07:17]: He's talking about trillion parameter models in 2017.

David [00:07:20]: Yeah. So that's the core of the GPT story, right? Which is that, and I'm jumping around historically, right? But after GPT-2, we were all really excited about GPT-2. I can tell you more stories about that. It was the last paper that I even got to really touch before everything became more about building a research org. You know, every day we were scaling up GPT-3, I would wake up and just be stressed. And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing, right? He's got this decoder only transformer that's probably going to get there before we do. And I was like, but like, please just like let this model finish, right? And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the brain leads, you know, it became really clear why, right? At the time, there was a thing called the Brain Credit Marketplace. And did you guys know the Brain Credit Marketplace? No, I never heard of this. Oh, so it's actually, it's a, you can ask any Googler.

Swyx [00:08:23]: It's like just like a thing that, that, I mean, look like, yeah, limited resources, you got to have some kind of marketplace, right? You know, sometimes it's explicit, sometimes it isn't, you know, just political favors.

David [00:08:34]: You could. And so then basically everyone's assigned a credit, right? So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused.
And I think, again, that's like part of the narrative of like this phase one of AI, right? Of like this modern AI era to phase two. And I think in the same way, I think phase three companies are going to out-execute phase two companies because of the same asymmetry of success.

Swyx [00:09:12]: Yeah. I think it's underrated how much NVIDIA worked with you in the early days as well. I think maybe, I think it was Jensen. I'm not sure who circulated a recent photo of him delivering the first DGX to you guys.

David [00:09:24]: I think Jensen has been a complete legend and a mastermind throughout. I have so much respect for NVIDIA. It is unreal.

Swyx [00:09:34]: But like with OpenAI, did you like kind of give their requirements, like co-design it, or just work off whatever NVIDIA gave them?

David [00:09:40]: So we worked really closely with them. There's, I'm not sure I can share all the stories, but examples of ones that I've found particularly interesting. So Scott Gray is amazing. I really like working with him. He was on one of my teams, the supercomputing team, which Chris Berner runs and Chris Berner still does a lot of stuff in that. As a result, like we had very close ties to NVIDIA. Actually, one of my co-founders at Adept, Erich Elsen, was also one of the early GPGPU people. So he and Scott and Brian Catanzaro at NVIDIA and Jonah and Ian at NVIDIA, I think all were very close. And we're all sort of part of this group of how do we push these chips to the absolute limit? And I think that kind of collaboration helped quite a bit. I think one interesting set of stuff is knowing, in the A100 generation, that like quad sparsity was going to be a thing. Is that something that we want to go look into, right? And figure out if that's something that we could actually use for model training. Really what it boils down to is that, and I think more and more people realize this, six years ago, people, even three years ago, people refused to accept it. This era of AI is really a story of compute. It's really the story of how do you more efficiently map actual usable model flops to compute.

Swyx [00:10:38]: Is there another GPT 2, 3 story that you love to get out there that you think is underappreciated for the amount of work that people put into it?

David [00:10:48]: So two interesting GPT 2 stories. One of them was I spent a good bit of time just sprinting to help Alec get the paper out. And I remember one of the most entertaining moments was we were writing the modeling section. And I'm pretty sure the modeling section was the shortest modeling section of any ML, reasonably legitimate ML paper to that moment. It was like, Section 3: Model. This is a standard vanilla decoder only transformer with like these particular things; it was a paragraph long, if I remember correctly. And both of us were just looking at the same thing, being like, man, the OGs in the field are going to hate this. They're going to say no novelty. Why did you guys do this work? So now it's funny to look at in hindsight that it was a pivotal kind of paper, but I think it was one of the early ones where we just leaned fully into all we care about is solving problems in AI and not about, hey, is there like four different really simple ideas that are cloaked in mathematical language that doesn't actually help move the field forward?

Swyx [00:11:42]: Right. And it's like you innovate on maybe like data set and scaling and not so much the architecture.

David [00:11:48]: We all know how it works now, right? Which is that there's a collection of really hard won knowledge that you get only by being at the frontiers of scale. And that hard won knowledge, a lot of it's not published. A lot of it is stuff that's actually not even easily reducible to what looks like a typical academic paper. But yet that's the stuff that helps differentiate one scaling program from another. You had a second one?
So the second one is, there's like some details here that I probably shouldn't fully share, but hilariously enough for the last meeting we did with Microsoft before Microsoft invested in OpenAI, Sam Altman, myself and our CFO flew up to Seattle to do the final pitch meeting. And I'd been a founder before. So I always had a tremendous amount of anxiety about partner meetings, which basically this is what it was. I had Kevin Scott and Satya and Amy Hood, and it was my job to give the technical slides about what's the path to AGI, what's our research portfolio, all of this stuff, but it was also my job to give the GPT-2 demo. We had a slightly bigger version of GPT-2 that we had just cut maybe a day or two before this flight up. And as we all know now, model behaviors you find predictable at one checkpoint are not predictable in another checkpoint. And so I'd spent all this time trying to figure out how to keep this thing on rails. I had my canned demos, but I knew I had to go turn it around over to Satya and Kevin and let them type anything in. And that just, that really kept me up all night.

Swyx [00:13:06]: Nice. Yeah.

Alessio [00:13:08]: I mean, that must have helped you, talking about partner meetings. You raised $420 million for Adept. The last round was a $350 million Series B, so I'm sure you do great in partner meetings.

Swyx [00:13:18]: Pitch meetings. Nice.

David [00:13:20]: No, that's a high compliment coming from a VC.

Alessio [00:13:22]: Yeah, no, I mean, you're doing great already for us. Let's talk about Adept. And we were doing pre-prep and you mentioned that maybe a lot of people don't understand what Adept is. So usually we try and introduce the product and then have the founders fill in the blanks, but maybe let's do the reverse. Like what is Adept? Yeah.

David [00:13:38]: So I think Adept is the least understood company in the broader space of foundational models plus agents.
So I'll give some color and I'll explain what it is and I'll explain also why it's actually pretty different from what people would have guessed. So the goal for Adept is we basically want to build an AI agent that can do, that can basically help humans do anything a human does on a computer. And so what that really means is we want this thing to be super good at turning natural language like goal specifications right into the correct set of end steps and then also have all the correct sensors and actuators to go get that thing done for you across any software tool that you already use. And so the end vision of this is effectively like I think in a couple of years everyone's going to have access to like an AI teammate that they can delegate arbitrary tasks to and then also be able to, you know, use it as a sounding board and just be way, way, way more productive. Right. And just changes the shape of every job from something where you're mostly doing execution to something where you're mostly actually doing like these core liberal arts skills of what should I be doing and why. Right. And I find this like really exciting and motivating because I think it's actually a pretty different vision for how AGI will play out. I think systems like Adept are the most likely systems to be proto-AGIs. But I think the ways in which we are really counterintuitive to everybody is that we've actually been really quiet because we are not a developer company. We don't sell APIs. We don't sell open source models. We also don't sell bottom up products. We're not a thing that you go and click and download the extension and like we want more users signing up for that thing. We're actually an enterprise company. So what we do is we work with a range of different companies, some like late stage multi-thousand people startups, some fortune 500s, et cetera. 
And what we do for them is we basically give them an out of the box solution where big complex workflows that their employees do every day could be delegated to the model. And so we look a little different from other companies in that in order to go build this full agent thing, the most important thing you got to get right is reliability. So initially zooming way back when, one of the first things that DEP did was we released this demo called Act One, right? Act One was like pretty cool. It's like kind of become a hello world thing for people to show agent demos by going to Redfin and asking to buy a house somewhere because like we did that in the original Act One demo and like showed that, showed like Google Sheets, all this other stuff. Over the last like year since that has come out, there's been a lot of really cool demos and you go play with them and you realize they work 60% of the time. But since we've always been focused on how do we build an amazing enterprise product, enterprises can't use anything that isn't in the nines of reliability. And so we've actually had to go down a slightly different tech tree than what you might find in the prompt engineering sort of plays in the agent space to get that reliability. And we've decided to prioritize reliability over all else. So like one of our use cases is crazy enough that it actually ends with a physical truck being sent to a place as the result of the agent workflow. And if you're like, if that works like 60% of the time, you're just blowing money and poor truck drivers going places.Alessio [00:16:30]: Interesting. One of the, our investment teams has this idea of services as software. I'm actually giving a talk at NVIDIA GTC about this, but basically software as a service, you're wrapping user productivity in software with agents and services as software is replacing things that, you know, you would ask somebody to do and the software just does it for you. 
When you think about these use cases, do the users still go in and look at the agent doing the things and intervene, or are they totally removed from them? Like the truck thing, does the truck just show up, or are there people in the middle checking in?

David [00:17:04]: I think there are two current flaws in the framing for services as software, or what you just said. One of them is, in our experience as we've been rolling out Adept, the people who actually do the jobs are the most excited about it, because they don't go from "I do this job" to "I don't do this job." They go from "I do this job for everything, including the shitty rote stuff" to "I'm a supervisor." And it's pretty magical when you watch the thing being used, because now it parallelizes a bunch of the things that you had to do sequentially by hand as a human. And you can just click into any one of them and be like, hey, I want to watch the trajectory that the agent went through to go solve this. And the nice thing about agent execution, as opposed to LLM generations, is that a good chunk of the time, when the agent fails to execute, it doesn't give you the wrong result. It just fails to execute. The whole trajectory is just broken and dead, and the agent knows it, right? So then those are the ones that the human goes and solves. And so they become a troubleshooter. They work on the more challenging stuff. They get way, way more stuff done, and they're really excited about it. I think the second piece of it that we've found is that our strategy as a company is to always be an augmentation company. And one, out of principle, that's something we really care about.
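The distinction David draws here, that a failed agent run surfaces a broken trajectory for a human supervisor rather than a plausible wrong answer, can be sketched in a few lines. This is a toy illustration only; the `Step`/`Trajectory` names and the string statuses are invented for the sketch, not Adept's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    description: str
    action: Callable[[], None]

@dataclass
class Trajectory:
    steps: List[Step]
    completed: List[str] = field(default_factory=list)
    status: str = "pending"   # "pending" | "done" | "broken"
    error: str = ""

def run(trajectory: Trajectory) -> Trajectory:
    """Execute steps in order; on any failure, mark the whole
    trajectory broken instead of returning a partial or wrong result."""
    for step in trajectory.steps:
        try:
            step.action()
        except Exception as exc:
            trajectory.status = "broken"
            trajectory.error = f"{step.description}: {exc}"
            return trajectory  # surfaced to the human supervisor's queue
        trajectory.completed.append(step.description)
    trajectory.status = "done"
    return trajectory
```

In a supervisor UI of the kind described above, only trajectories with status `"broken"` would be routed to the human review queue; completed ones need no attention, which is what makes the parallelization useful.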
But two, actually, if you're framing yourself as an augmentation company, you're always going to live in a world where you're solving tasks that are a little too hard for what the model can do today and still need a human to provide oversight, provide clarifications, provide human feedback. And that's how you build a data flywheel. That's how you actually learn from the smartest humans how to solve things models can't do today. And so I actually think that being an augmentation company forces you to go develop your core AI capabilities faster than someone who's saying, ah, okay, my job is to deliver you a lights-off solution for X.

Alessio [00:18:42]: Yeah. It's interesting because we've seen two parts of the market. One is, we have one company that does agents for SOC analysts. People just don't have them, you know, they just cannot attract the talent to do it. And similarly, in software development, you have Copilot, which is the augmentation product, and then you have sweep.dev and these products which just do the whole thing. I'm really curious to see how that evolves. I agree that today the reliability is so important in the enterprise that they just don't use most of them. Yeah. Yeah. No, that's cool. But it's great to hear the story, because I think from the outside, people are like, oh, Adept, they do Act One, they do Persimmon, they do Fuyu, they do all this stuff. Yeah, it's just the public stuff.

Swyx [00:19:20]: It's just the public stuff.

David [00:19:21]: So one of the things we haven't shared before is we're completely sold out for Q1. And so I think...

Swyx [00:19:26]: Sold out of what?

David [00:19:27]: Sold out of bandwidth to go onboard more customers. And so we're working really hard to make that less of a bottleneck, but our expectation is that we're going to be significantly more public about the broader product shape and the new types of customers we want to attract later this year.
So I think that clarification will happen by default.

Swyx [00:19:43]: Why have you become more public? You know, if the whole push has... You're sold out, you're an enterprise company, but you're also clearly putting effort towards being more open or releasing more things.

David [00:19:53]: I think we just flipped over that way fairly recently. That's a good question. I think it actually boils down to two things. One, I think that, frankly, a big part of it is that the public narrative is really forming around agents as being the most important thing. And I'm really glad that's happening, because when we started the company in January 2022, everybody in the field knew about the agents thing from RL, but the general public had no conception of what it was. They were still hanging their narrative hat on the tree of everything's a chatbot. And so I think now one of the things that I really care about is that when people think agent, they actually think the right thing. All sorts of different things are being called agents. Chatbots are being called agents. Things that make a function call are being called agents. To me, an agent is something that you can give a goal and get an end-to-end workflow done correctly in the minimum number of steps. And so that's a big part of why. And I think the other part is because I think it's always good for people to be more aware of Adept as they think about what the next thing they want to do in their careers is. The field is quickly pivoting in a world where foundation models are looking more and more commodity. And I think a huge amount of gain is going to happen from how do you use foundation models as the well-learned behavioral cloner to go solve agents. And I think people who want to do agents research should really come to Adept.

Swyx [00:21:00]: When you say agents have become more part of the public narrative, are there specific things that you point to? I'll name a few. Bill Gates, in his blog post, mentioning that agents are the future.
I'm the guy who made OSes, and I think agents are the next thing. So Bill Gates, I'll call that out. And then maybe Sam Altman also saying that agents are the future for OpenAI.

David [00:21:17]: I think before that even, there was something like the New York Times piece, Cade Metz wrote a New York Times piece about it. Right now, in a bid to differentiate, I'm seeing AI startups that used to just brand themselves as an AI company now brand themselves as an AI agent company. It's a term I just feel like people really want.

Swyx [00:21:31]: From the VC side, it's a bit mixed. Is it? As in, I think there are a lot of VCs where, like, I would not touch any agent startups because... Why is that? Well, you tell me.

Alessio [00:21:41]: I think a lot of VCs that are maybe less technical don't understand the limitations of the...

Swyx [00:21:46]: No, that's not fair.

Alessio [00:21:47]: No, no, no, no. I think like... You think so? No, no. I think, like, what is possible today and what is worth investing in, you know? And I think, I mean, people look at you and say, well, these guys are building agents. They needed 400 million to do it. So a lot of VCs are maybe like, oh, I would rather invest in something that is tacking AI onto an existing thing, which is easier to get to market and kind of get some of the flywheel going. But I'm also surprised a lot of founders just don't want to do agents. It's not even the funding. Sometimes we look around and it's like, why is nobody doing agents for X? Wow.

David [00:22:17]: That's good to know, actually. I never knew that before. My sense from my limited perspective is there's a new agent company popping up every day.

Swyx [00:22:24]: So maybe I'm... They are. They are. But I have advised people to take agents off of their title because it's so diluted.

David [00:22:31]: It's now so diluted.

Swyx [00:22:32]: Yeah. So then it doesn't stand for anything.
Yeah.

David [00:22:35]: That's a really good point.

Swyx [00:22:36]: So like, you know, you're a portfolio allocator. You have people know about Persimmon, people know about Fuyu and Fuyu Heavy. Can you take us through how you think about that evolution and what people should think about what that means for Adept and its research directions? Kind of take us through the stuff you shipped recently and how people should think about the trajectory of what you're doing.

David [00:22:56]: The critical path for Adept is we want to build agents that can do higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with a really high reliability standard but keep pushing the level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flywheel. That's the critical path for the company. Everything we do is in service of that. So if you go zoom way, way back to Act One days, right? The core thing behind Act One is, can we teach a large model basically how to even actuate your computer? And I think we were one of the first places to have solved that and shown it, and shown the generalization that you get when you give it various different workflows and texts. But what we really realized from there on out was that in order to get reliability, companies just do things in various different ways. You actually want these models to be able to get a lot better at having some specification of guardrails for what it actually should be doing. And in conjunction with that, a giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents.
Back then we had to do a ton of research on how do we actually make that possible. Well, first off, back in, I forget exactly when, early 2023, there were no multimodal models really that you could use for things like this. And so we pushed really hard on stuff like the Fuyu architecture. I think one big hangover of the primarily academic focus for multimodal models is that most multimodal models are primarily trained on natural images, cat and dog photos, stuff that's come out of the camera. COCO. Yeah, right. And COCO is awesome. Like, I love COCO. I love TY. It's really helped the field, right? But that's one thing. I actually think it's really clear today: multimodal models are the default foundation model, right? They're just going to supplant LLMs. You just train a giant multimodal model. And so for that, though, where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs, right? And so if that's what it is, what do you need to train on? I need to train on charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that. And so the public Fuyu models and stuff aren't trained on our actual corpus, they're trained on some other stuff. But you take a lot of that data and then you make it really fast and make it really good at things like dense OCR on screens. And then now you have the right raw putty to go make a good agent. So that's kind of the modeling side. We've only announced some of that stuff. We haven't really announced much of the agents work. But you put those together with the correct product form factor, and I think the product form factor also really matters.
I think we're seeing, and you guys probably see this a little bit more than I do, a little bit of a pushback against the tyranny of chatbots as a form factor. And I think the reason why the form factor matters is that the form factor changes what data you collect in the human feedback loop. And so I think we've spent a lot of time doing full vertical integration of all these bits in order to get to where we are.

Swyx [00:25:44]: Yeah. I'll plug Amelia Wattenberger's talk at our conference, where she gave a little bit of the thinking behind what else exists other than chatbots that you could do if you could delegate to reliable agents. I was kind of excited at Adept Experiments, or Adept Workflows, I don't know what the official name for it is. I was like, okay, this is something I can use, but it seems like it's just an experiment for now. It's not your product.

David [00:26:06]: So we basically just use Experiments as a way to go push various ideas on the design side to some people and just be like, yeah, we'll play with it. Actually, the Experiments code base underpins the actual product, but the code base itself is kind of like a skeleton for us to go deploy arbitrary cards on the side.

Swyx [00:26:22]: Yeah.

Alessio [00:26:23]: Makes sense. I was going to say, I would love to talk about the interaction layer. So you train a model to see UI, but then there's the question of how do you actually act on the UI? I think there were some rumors about OpenAI building agents that manage the endpoint, so the whole computer, while you're more at the browser level. I read in one of your papers that you have a different representation: you don't just take the DOM and act on it. You do a lot more stuff.
How do you think about the best way the models will interact with the software, and how is the development of products going to change with that in mind as more and more of the work is done by agents instead of people?

David [00:26:58]: There's so much surface area here, and it's actually one of the things I'm really excited about. And it's funny, because I've spent most of my time doing research stuff, but there's a whole new ball game that I've been learning about, and I find it really cool. So I would say the best analogy I have for why Adept is pursuing a path of being able to use your computer like a human, plus of course being able to call APIs, and being able to call APIs is the easy part, being able to use your computer like a human is the hard part. It's the same reason why people are excited about humanoid robotics, right? In a world where you had t equals infinity, you're probably going to have various different form factors that robots could be in, and all the specialization. But the fact is that humans live in a human environment. So having a humanoid robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so at many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data, which you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path. And because it's the most practical path, I think a lot of success will come from going down this path.
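The "use your computer like a human" actuation David describes ultimately bottoms out in a small set of GUI primitives. A hedged sketch of what such an action space might look like; the primitive names and the serialization format are made up for illustration, not Adept's actual interface:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Click:
    x: int          # screen coordinates, as a human would click
    y: int

@dataclass
class Type:
    text: str       # keystrokes into the focused element

@dataclass
class Scroll:
    dy: int         # positive = down, negative = up

Action = Union[Click, Type, Scroll]

def serialize(actions: List[Action]) -> List[str]:
    """Render an action trace the way a screen-actuation model might
    emit it: one human-readable primitive per line."""
    out = []
    for a in actions:
        if isinstance(a, Click):
            out.append(f"CLICK {a.x} {a.y}")
        elif isinstance(a, Type):
            out.append(f"TYPE {a.text!r}")
        else:
            out.append(f"SCROLL {a.dy:+d}")
    return out
```

The appeal of an action space like this is exactly the point made above: any workflow a human can do through the screen is expressible, whether or not the underlying app exposes an API, and recorded human traces in the same vocabulary become training data.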
I kind of think about these early days of the agent interaction layer a little bit like, do you all remember Windows 3.1? Like those days? Okay, I might be too old for you guys on this. But back in the day, Windows 3.1, we had this transition period between pure command line being the default into this new world where the GUI is the default, and then you drop into the command line for programmer things, right? The old way was you booted your computer up, DOS booted, and then it would give you the C:\ prompt. And you typed Windows and you hit enter, and then you got put into Windows. And then the GUI kind of became a layer above the command line. The same thing is going to happen with agent interfaces. Today the GUI is the base layer, and the agent just controls the current GUI layer plus APIs. And in the future, as more and more trust is built towards agents and more and more things can be done by agents, if more UIs for agents are actually generative in and of themselves, then that just becomes the standard interaction layer. And if that becomes the standard interaction layer, what changes for software is that a lot of software is going to be either systems of record or certain customized workflow execution engines. And a lot of how you actually do stuff will be controlled at the agent layer.

Alessio [00:29:19]: And you think the Rabbit interface is more like that, where you're not actually seeing the app that the model interacts with. You're just saying, hey, I need to log this call on Salesforce, and you're never actually going on salesforce.com directly as the user. I can see that being a model.

David [00:29:33]: I think I don't know enough about what using Rabbit in real life will actually be like to comment on that particular thing. But I think the broader idea is that, you know, you have a goal, right? The agent knows how to break your goal down into steps.
The agent knows how to use the underlying software and systems of record to achieve that goal for you. The agent maybe presents you information in a custom way that's only relevant to your particular goal. It all just really leads to a world where you don't really need to ever interface with the apps underneath unless you're a power user for some niche thing.

Swyx [00:30:03]: General question. So first of all, there's the sort of input mode conversation. I wonder if you have any analogies that you like with self-driving, because I do think there's a little bit of how the model should perceive the world, and you know, the primary split in self-driving is LiDAR versus camera. And I feel like most agent companies that I'm tracking are all moving towards the camera approach, which is the multimodal approach, you know, multimodal vision, very heavy vision, all the Fuyu stuff that you're doing. You're focusing on that, including charts and tables. And do you find inspiration there from the self-driving world? That's a good question.

David [00:30:37]: I think sometimes the most useful inspiration I've found from self-driving is the levels analogy. I think that's awesome. But I think that our number one goal is for agents not to look like self-driving. We want to minimize the chances that agents are sort of a thing that you just have to bang your head at for a long time to get to two discontinuous milestones, which is basically what's happened in self-driving. We want to be living in a world where you have the data flywheel immediately, and that takes you all the way up to the top. But similarly, compared to self-driving, two things that people really undervalue: one is that it's really easy to do the driving-a-car-down-Highway-101-on-a-sunny-day demo. That actually doesn't prove anything anymore.
And I think the second thing is that, as a non-self-driving expert, one of the things that we believe really strongly is that everyone undervalues the importance of really good sensors and actuators. And actually, a lot of what's helped us get a lot of reliability is a really strong focus on why does the model not do this thing? And a non-trivial amount of the time, the model doesn't actually do the thing because, if you're Wizard-of-Oz-ing it yourself, or if you have unreliable actuators, you can't do the thing. And so we've had to fix a lot of those problems.

Swyx [00:31:43]: I was slightly surprised, just because I do generally consider the Waymos that we see all around San Francisco as the most, I guess, real case of agents that we have in very material ways.

David [00:31:55]: Oh, that's absolutely true. I think they've done an awesome job, but it has taken a long time for self-driving to mature from when it entered the consciousness, and the driving-down-101-on-a-sunny-day moment happened, to now. Right? So I want to see that more compressed.

Swyx [00:32:07]: And I mean, you know, Cruise, you know, RIP. And then one more thing on this reliability thing. Something I have been holding in my head that I'm curious to get your commentary on is, I think there's a trade-off between reliability and generality, or I want to broaden reliability into just general sort of production readiness and enterprise readiness at scale. Because alongside reliability, you also have cost, you have speed. Speed is a huge emphasis for Adept. The tendency or the temptation is to reduce generality to improve reliability and to improve cost and speed. Do you perceive a trade-off? Do you have any insights that solve those trade-offs for you guys?

David [00:32:42]: There's definitely a trade-off if you're at the Pareto frontier, but I think a lot of folks aren't actually at the Pareto frontier.
I think the way you get there is basically, how do you frame the fundamental agent problem in a way that just continues to benefit from data? I think one of the main ways of being able to solve that particular trade-off is you basically just want to formulate the problem such that every particular use case just looks like you collecting more data to go make that use case possible. I think that's how you really solve it. Then you get into the other problems, like, okay, are you overfitting on these end use cases? You're not doing a thing where you're being super prescriptive about the end steps that the model can only do, for example.

Swyx [00:33:17]: Then the question becomes, do you have one house model that you can then customize for each customer, and you're fine-tuning them on each customer's specific use case?

David [00:33:25]: Yeah.

Swyx [00:33:26]: We're not sharing that. You're not sharing that. It's tempting, but that doesn't look like AGI to me. You know what I mean? That is just, you have a good base model and then you fine-tune it.

David [00:33:35]: For what it's worth, I think there are two paths to a lot more capability coming out of the models that we all are training these days. I think one path is you figure out how to spend compute and turn it into data. In that path, I consider search, RL, all the things that we all love in this era as part of that path, like self-play, all that stuff. The second path is, how do you get super competent, high-intelligence demonstrations from humans? I think the right way to move forward is you kind of want to combine the two. The first one gives you maximum sample efficiency for a little bit, but I think that it's going to be hard to be running at max speed towards AGI without actually solving a bit of both.

Swyx [00:34:16]: You haven't talked much about synthetic data, as far as I can tell.
Probably this is a bit too much of a trend right now, but any insights on using synthetic data to augment the expensive human data?

David [00:34:26]: The best part about framing AGI as being able to help people do things on computers is you have an environment.

Swyx [00:34:31]: Yes. So you can simulate all of it.

David [00:34:35]: You can do a lot of stuff when you have an environment.

Alessio [00:34:37]: We were having dinner for our one-year anniversary. Congrats. Yeah. Thank you. Raza from HumanLoop was there, and we mentioned you were coming on the pod. This is our first...

Swyx [00:34:45]: So he submitted a question.

Alessio [00:34:46]: Yeah, this is our first, I guess, mailbag question. He asked: when you started, GPT-4 didn't exist, and now you have GPT-4 Vision to help you build a lot of those things. How do you think about the things that are unique to you as Adept, going back to maybe the research direction that you want to take the team and what you want people to come work on at Adept, versus what has maybe now become commoditized that you didn't expect everybody would have access to?

David [00:35:11]: Yeah, that's a really good question. I think implicit in that question, and I wish he were here too so he could push back on my assumption about his question, but I think implicit in that question is a calculus of where advantage accrues in the overall ML stack. And maybe part of the assumption is that advantage accrues solely to base model scaling. But I actually believe pretty strongly that the way you really win is that you have to go build an agent stack that is much more than the base model itself. And so I think that is always going to be a giant advantage of vertical integration. It lets us do things like have a really, really fast base model that is really good at agent things but is bad at cat and dog photos. Well, it's pretty good at cat and dog photos. It's not like SOTA at cat and dog photos, right?
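The point about having an environment, made a few turns above, is that computer-use tasks can be simulated, so synthetic trajectories are cheap to generate. A toy sketch of that idea, where a scripted policy fills a simulated form and every (observation, action) pair is recorded as a synthetic training example. The environment and policy here are invented for illustration, not anything Adept has described:

```python
import random
from typing import Dict, List, Tuple

class FormEnv:
    """Toy UI environment: a form with named fields to fill in."""
    def __init__(self, fields: List[str], seed: int = 0):
        self.fields = fields
        self.state: Dict[str, str] = {}
        self.rng = random.Random(seed)

    def observe(self) -> Tuple[str, ...]:
        # The "screen": which fields are still empty, in display order.
        return tuple(f for f in self.fields if f not in self.state)

    def act(self, field: str, value: str) -> None:
        self.state[field] = value

def rollout(env: FormEnv) -> List[Tuple[Tuple[str, ...], Tuple[str, str]]]:
    """Scripted policy fills each empty field; every (observation, action)
    pair becomes one synthetic supervised example."""
    data = []
    while env.observe():
        obs = env.observe()
        target = obs[0]
        action = (target, f"value-{env.rng.randrange(100)}")
        env.act(*action)
        data.append((obs, action))
    return data
```

Because the environment is programmatic, millions of such trajectories can be generated and verified without any human labeling, which is the leverage David is gesturing at.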
So we're allocating our capacity wisely, right? That's one thing you really get to do. I also think that the other thing that is pretty important now in the broader foundation modeling space is, despite any potential concerns about how good agents are as a startup area, like we were talking about earlier, I feel super good that we're doing foundation models in service of agents, and all of the reward within Adept is flowing from: can we make a better agent? Because right now, I think we all see that if you're training on publicly available web data, you put in the flops and you do reasonable things, then you get decent results. And if you just double the amount of compute, then you get predictably better results. And so I think pure-play foundation model companies are just going to get pinched by how good the next couple of Llamas are going to be, and the next good open source thing, and then by seeing the really big players put ridiculous amounts of compute behind just training these base foundation models. I think that is going to commoditize a lot of the regular LLMs and, soon, regular multimodal models. So I feel really good that we're just focused on agents.

Swyx [00:36:56]: So you don't consider yourself a pure-play foundation model company?

David [00:36:59]: No, because if we were a pure-play foundation model company, we would be training general foundation models that do summarization and all this other...

Swyx [00:37:06]: You're dedicated towards the agent. Yeah.

David [00:37:09]: And our business is an agent business. We're not here to sell you tokens, right? And I think selling tokens, unless there's like a...

Swyx [00:37:14]: Not here to sell you tokens. I love it.

David [00:37:16]: It's like, if you have a particular area of specialty, right, then you won't get caught in the fact that everyone's just scaling to ridiculous levels of compute.
But if you don't have a specialty, I think it's going to be a little tougher.

Swyx [00:37:27]: Interesting. Are you interested in robotics at all? Just a...

David [00:37:30]: I'm personally fascinated by robotics. I've always loved robotics.

Swyx [00:37:33]: Embodied agents as a business, you know. Figure is a big, also sort of OpenAI-affiliated, company that raised a lot of money.

David [00:37:39]: I think it's cool. I think, I mean, I don't know exactly what they're doing, but...

Swyx [00:37:44]: Robots. Yeah.

David [00:37:46]: Well, I mean, that's a...

Swyx [00:37:47]: Yeah. What question would you ask? If we had them on, what would you ask them?

David [00:37:50]: Oh, I just want to understand what their overall strategy is going to be between now and when there's reliable stuff to be deployed. But honestly, I just don't know enough about it.

Swyx [00:37:57]: And if I told you, hey, fire your entire warehouse workforce and, you know, put robots in there, isn't that a strategy? Oh yeah.

David [00:38:04]: Yeah. Sorry, I'm not questioning whether they're doing smart things. I genuinely don't know what they're doing as much, but I think there are two things. One, I'm so excited for someone to train a foundation model of robots. I think it's just going to work. I will die on this hill. I mean, this whole time on this podcast we've been continually saying these models are basically behavioral cloners, right? So let's go behavioral clone all this robot behavior, right? And then you figure out everything else you have to do in order to teach it how to solve a new problem. That's going to work. I'm super stoked for that. I think, unlike what we're doing with helping humans with knowledge work, it just sounds like a more zero-sum job replacement play, right? And I'm personally less excited about that.

Alessio [00:38:46]: We had Kanjun from Imbue on the podcast.
We asked her why people should go work there and not at Adept.

Swyx [00:38:52]: Oh, that's so funny.

Alessio [00:38:54]: Well, she said, you know, there's space for everybody in this market. We're all doing interesting work. And she said they're really excited about building an operating system for agents. And for her, the biggest research thing was getting models better at reasoning and planning for these agents. The reverse question to you: why should people be excited to come work at Adept instead of Imbue? And maybe, what are the core research questions that people should be passionate about to have fun at Adept? Yeah.

David [00:39:22]: First off, I think that, and I'm sure you guys believe this too, the AI space, to the extent there's an AI space, and the AI agent space are both, exactly as she likely said, colossal opportunities, and people are just going to end up winning in different areas, and a lot of companies are going to do well. So I really don't feel that zero-sum thing at all. I would say, to change the zero-sum framing: why should you be at Adept? I think there are two huge reasons to be at Adept. One of them is that everything we do is in the service of useful agents. We're not a research lab. We do a lot of research in service of that goal, but we don't think about ourselves as a classic research lab at all. And the second reason to work at Adept is, if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think the example of why that's true is our evaluations. They're not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them. These are the ones they want us to do that we can't do at all. We've turned those into evals. Solve it, right? I think that's really cool.
Everybody knows a lot of these evals are pretty saturated, and even for the new ones that are not saturated, you look at them and you're like, is this actually useful? Right? I think that's a degree of practicality that really helps. We're equally excited about the same problems around reasoning and planning and generalization and all of this stuff, but they're very grounded in actual needs right now, which is really cool.

Swyx [00:40:45]: Yeah. This has been a wonderful dive. You know, I wish we had more time, but I would just leave it kind of open to you. I think you have broad thoughts, you know, just about

Seeking Truth Catholic Bible Study
2 Samuel 5-8, Part 3: A Deep Dive into the King-Priest David & the Divine Covenants

Seeking Truth Catholic Bible Study

Play Episode Listen Later Feb 11, 2024 27:26 Transcription Available


In this enlightening episode of "Seeking Truth with Sharon Doran," our host takes a journey through the third part of the second book of Samuel, providing a mix of narratives and interpretations. She unveils the significance of priesthood highlighted through the character of David, the king-priest, and his reverent interactions with the Ark of the Covenant. This episode is an empowering testament to the sanctity of the Ark and a profound reminder of the biggest rule: do not touch the Ark. Focused on the life of King David, Sharon underscores his sacred attire and his holy interactions with the Ark of the Lord, setting up a captivating discourse on the theme of reverence versus dismissal of God's presence. Observe the polarizing contrasts between the blessings at Obed-Edom's house and those who rebuke the divine. A fascinating shift in the narrative ensues as King David transitions from residing in a cedar house to a desire to establish a home for the Ark of God. This part marks the advent of a new prophet, Nathan, and the intriguing onset of the Davidic covenant, a promise of divine endurance between God and David. The episode transcends a millennium into the narrative of the angel Gabriel's visitation to Mary, acclaimed as the new Ark of a new covenant. Notice the parallels drawn between Noah and Mary and explore their respective divine revelations. Imbue yourself with the profound roles women like Mary and Elizabeth played in the biblical narrative, tying the divine thread from the times of 2nd Samuel and beyond. This exploration of 2nd Samuel offers a profound understanding of divine covenants, women's significant roles, and the sacred underpinnings of our beloved biblical figures. Tune in to this episode for an enriching journey through the annals of biblical history.

UiPath Daily
Unlocking Potential: Imbue Raises $200M to Drive Advancements in Reasoning

UiPath Daily

Play Episode Listen Later Feb 6, 2024 6:01


Embark on a journey of AI excellence with Imbue's $200M funding announcement aimed at advancing reasoning AI models. Explore the possibilities of this investment in unlocking new levels of intelligence and problem-solving capabilities. Get on the AI Box Waitlist: AIBox.ai | Join our ChatGPT Community: Facebook Group | Follow me on Twitter: Jaeden's Twitter

ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI
Path to Progress: Imbue Secures $200M Investment for Cutting-Edge Reasoning AI

ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI

Play Episode Listen Later Feb 6, 2024 6:01


Explore the frontier of AI innovation as Imbue announces a significant $200M funding round to push the boundaries of reasoning AI. Delve into the implications of this investment in shaping the future of intelligent problem-solving. Get on the AI Box Waitlist: AIBox.ai | Join our ChatGPT Community: Facebook Group | Follow me on Twitter: Jaeden's Twitter

AI for Non-Profits
Driving Innovation: Imbue Raises $200M for Breakthrough Reasoning AI Models

AI for Non-Profits

Play Episode Listen Later Feb 6, 2024 6:01


Join the forefront of AI research and development with Imbue's $200M funding announcement for its advanced reasoning models. Discover how this investment is driving innovation and pushing the limits of AI capabilities. Get on the AI Box Waitlist: AIBox.ai | Join our ChatGPT Community: Facebook Group | Follow me on Twitter: Jaeden's Twitter

The Elon Musk Podcast
The Funding Frontier: Imbue's $200M Boost for Advancing Reasoning AI

The Elon Musk Podcast

Play Episode Listen Later Feb 6, 2024 6:38


In this episode, we explore the frontier of funding as Imbue secures a significant $200 million, propelling the development of reasoning AI models and unpacking the possibilities this financial boost brings to the forefront of artificial intelligence. Invest in AI Box: https://Republic.com/ai-box | Get on the AI Box Waitlist: https://AIBox.ai/ | AI Facebook Community

Open AI
Breaking Barriers: Imbue Raises $200M to Propel Advanced Reasoning AI Models

Open AI

Play Episode Listen Later Feb 6, 2024 6:01


Join the journey of AI advancement with Imbue's significant $200M funding round for its cutting-edge reasoning models. Discover how this investment is set to revolutionize the capabilities of AI in tackling complex challenges. Get on the AI Box Waitlist: AIBox.ai | Join our ChatGPT Community: Facebook Group | Follow me on Twitter: Jaeden's Twitter

The Sam Altman Podcast
Empowering Intelligence: Imbue Secures $200M for Advancing Reasoning AI Models

The Sam Altman Podcast

Play Episode Listen Later Feb 5, 2024 6:38


In this episode, we delve into the realm of empowered intelligence as Imbue raises an impressive $200 million, fueling the advancement of reasoning AI models and discussing the potential implications for innovation in artificial intelligence. Invest in AI Box: https://Republic.com/ai-box | Get on the AI Box Waitlist: https://AIBox.ai/ | AI Facebook Community

Feelin Good Podcast
Episode 317: Imbue Poppin sits down with us in Newport, KY

Feelin Good Podcast

Play Episode Listen Later Jan 30, 2024 54:22


Big thanks to Andrea Dancy of Imbue Poppin for sliding by the Newport, KY setup space, where Daydreamers Club Podcast and I had interviews all day on day 4 of the media tour. Topics include: meeting Mrs. Dancy, working with Gary Owen and the Cincinnati Bengals, entering the inspiration station for eats, a run-in with Deviled Eggs, life on the road in the fast lane, burning, and much more.

The Mark Cuban Podcast
Revolutionizing AI: Imbue's $200M Leap for Advanced Reasoning Models

The Mark Cuban Podcast

Play Episode Listen Later Jan 3, 2024 6:38


In this episode, I delve into Imbue's pivotal stride, securing $200M funding for their groundbreaking advanced reasoning AI models, discussing its potential transformational effects and the scope of innovation in AI technologies. Invest in AI Box: https://Republic.com/ai-box | Get on the AI Box Waitlist: https://AIBox.ai/ | AI Facebook Community | Learn more about AI in Video | Learn more about Open AI

Speaking Your Brand
366: Trends in Public Speaking and Thought Leadership for 2024

Speaking Your Brand

Play Episode Listen Later Jan 1, 2024 26:18


Happy New Year! This is the 6th year in a row I've done a trends episode for the start of the year. Trends are like currents or waves of energy that move through our society and affect everything from business, politics, and the economy to popular culture and media. As speakers, entrepreneurs, and thought leaders, these trends will impact our thought leadership, our content, and our marketing. In this episode, you'll learn 3 trends I've identified that are shaping our world: (1) big changes in social media platforms, as the premise of social media shifts from connection to entertainment and it becomes harder to tell what's real and what's not; (2) the continued rise and more ubiquitous usage of generative AI tools like ChatGPT, plus images, video, and voice; and (3) the internet as a whole is changing, becoming more fragmented and siloed, with network effects accruing to people and brands who already have large followings. I also share specific action steps you can take: (1) develop a clear and distinct brand voice; (2) imbue your content with soul; and (3) focus more on in-person experiences and events. As a speaker, you are well positioned to connect with your audiences in a deeper and more human way. Want to develop your speaking skills, thought leadership, and signature talk? Check out our online coaching program the Thought Leader Academy and our upcoming 3-day in-person speaking intensive. Connect with me on LinkedIn: https://www.linkedin.com/in/carolcox Show notes at https://www.speakingyourbrand.com/366/ Related Podcast Episodes: Episode 340: Can I Tell It's You? What a Brand Voice Is and Why You Need One; Episode 327: From Expert to Thought Leader: 3 Key Strategies You Need Now to Set Yourself Apart in Our New AI-Driven World; Episode 313: How to Thrive in the Age of A.I.
Mentioned: YouTube live stream with realistic AI avatar = https://www.youtube.com/live/VH1rOylsoMo?si=1PpvUCqchdRCb0o0&t=540 “Nobody Knows What's Happening Online Anymore” by Charlie Warzel (The Atlantic) “TikTok is eating microblogging as we've always known it” by Caroline Mimbs Nyce (The Atlantic) “Is Social Media Dying? What That Could Mean for Marketers” by Lestraundra Alfred (HubSpot) “Neil Gaiman's Radical Vision for the Future of the Internet” by Cal Newport

OnBoard!
EP 43.【AI年终特辑2】标志性的OpenAI DevDay,AI创业者和Deepmind研究员怎么看

OnBoard!

Play Episode Listen Later Dec 26, 2023 113:46


OnBoard!, the show that chases depth rather than hot takes, is back! In the blink of an eye, more than a month has passed since OpenAI's much-discussed developer day (OpenAI DevDay), and a great deal has happened in that month. Gossip aside, DevDay was a genuinely important landmark event for the industry. Its announcements covered not only big API cost reductions and API updates, but also major launches like the GPT Store, the Assistants API, and multimodality. Three weeks after DevDay, we invited four guests Monica has long wanted to host and, after this period of digestion and reflection, sat down to discuss their different perspectives. Hello World, who is OnBoard!? Our guests this time include the co-founder and CTO of Laiye (来也科技), a leading RPA company; an EIR at ZhenFund (真格基金) who has lived through two waves of AI startup fever; the head of smart hardware at Meituan (美团), thinking about opportunities at the intersection of hardware and software; and Eric, a researcher at Google Deepmind, interpreting the agent-related DevDay updates from a model and technology angle. It was another sparkling discussion of nearly two hours. Google Gemini had not yet been released when this episode was recorded, but in hindsight our discussion of multimodality still fully applies! Enjoy!

Guests: Peak, EIR at ZhenFund and founder of Magi; Yichuan Hu, co-founder & CTO of Laiye; Eric Li, senior researcher at Google Deepmind; Yang Sun, head of LLM for smart hardware at Meituan.

OnBoard! host: Monica, USD-fund VC investor, formerly of the AWS Silicon Valley team and an AI startup; runs the WeChat account M小姐研习录 (ID: MissMStudy) | Jike: 莫妮卡同学

What we discussed:
01:34 Guest introductions: how they got into AI, and interesting AI products they have seen recently
11:38 Impressions of OpenAI DevDay: which updates stood out, and which were overrated or underrated compared with online commentary
12:38 Peak: why the GPT Store is overrated, and why GPT Builder is actually instructive
14:27 How far is the GPT Store from a real App Store? How might OpenAI build one in the future?
19:32 Yichuan Hu: why GPT-4 Turbo is underrated
21:40 Why price and context window matter, and the technical challenges of continuing to improve them
29:53 Eric: why shipping an immature GPT Store was a good decision
33:27 Yang Sun: why the GPT Store is overrated in the short term but underrated in the long term, and why function calling and JSON return are underrated
39:01 Highlights of the agent-related DevDay updates, and the challenges and opportunities for startups
53:05 Meituan's LLM experiments, and which scenarios have shipped
58:36 Why different LLMs perform so differently as agent foundations, and whether we need foundation models built for agents
64:13 How the DevDay updates affect startups, and which companies are hit hardest
82:03 Thoughts on the Q* rumors, and how synthetic data might reshape the LLM ecosystem
86:50 Hands-on impressions of multimodal capabilities like GPT-4V, and the new opportunities they might open up
95:41 The technical paths to multimodality, and the core differences and difficulties between them
98:55 What founders from the "last wave" of AI see as the similarities and differences in this wave, and their advice for other founders
105:27 The changes in AI they most look forward to over the next 1-3 years

Key terms: OpenAI DevDay, GPT Store, Assistants API, context length, LUI (Linguistic User Interface)

Companies mentioned: AI Pin by Humane; Langchain: build context-aware, reasoning applications with LangChain's flexible abstractions and AI-first toolkit; Fixie AI: the fastest way to build conversational AI agents; Imbue: build AI systems that can reason; Character AI: bringing to life the science-fiction dream of open-ended conversations and collaborations with computers.
Reference links: devday.openai.com, openai.com. Paper mentioned by Peak: Retrieval meets Long Context Large Language Models. Fixie: www.fixie.ai. Imbue's funding round: imbue.com. Follow Ms. M's WeChat official account M小姐研习录 (ID: MissMStudy) for more in-depth content on US and China software, AI, and startup investing! Your likes, comments, and shares are the best encouragement for us. A like on Xiaoyuzhou or a five-star review on Apple Podcasts helps more friends discover the content we work hard to produce!

Imbue CTO Josh Albrecht on Creating AI Agents for Reasoning, Reliability, and Robustness

Play Episode Listen Later Dec 2, 2023 81:38


In this episode, Nathan chats with Josh Albrecht, CTO of Imbue. They discuss how to create agents for reasoning, reliability, and robustness. If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period. RECOMMENDED PODCAST: Every week investor and writer of the popular newsletter The Diff, Byrne Hobart, and co-host Erik Torenberg discuss today's major inflection points in technology, business, and markets, and help listeners build a diversified portfolio of trends and ideas for the future. Subscribe to “The Riff” with Byrne Hobart and Erik Torenberg: https://www.youtube.com/@TheRiffPodcast SPONSORS: Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US and is the global force behind Allbirds, Rothy's, Brooklinen, and millions of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform to their in-person POS system, wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts, from blog posts to product descriptions, using AI. Sign up for a $1/month trial period: https://shopify.com/cognitive MasterClass: https://masterclass.com/zen gets you two memberships for the price of one. Learn from the best to become your best. Learn how to negotiate a raise with Chris Voss or manage your relationships with Esther Perel. Boost your confidence and find practical takeaways you can apply to your life and at work. If you own a business or are a team leader, use MasterClass to empower and create future-ready employees and leaders.
Moment of Zen listeners will get two memberships for the price of one at https://masterclass.com/zen Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off. X/SOCIAL @labenz (Nathan) @eriktorenberg (Erik) @CogRev_Podcast TIMESTAMPS: (00:00:00) – Episode Preview (00:07:14) – What does it mean to be a research company? (00:10:25) – How is the reasoning landscape these days and how might it evolve? (00:11:03) – Data quality is highly important (00:21:15) – What's the difference between good features and a good world model? (00:27:31) – The impact of new modalities on reasoning (00:29:15) – How much can reasoning and knowledge be separated? (00:45:13) – Imbue demo and are they building their own LLMs or using others? (00:49:37) – Does Imbue have a deal with Nvidia? (00:57:48) – Carbs framework (01:12:57) – Imbue's involvement with policy and AI safety (01:16:23) – Takeaways from AI Safety Summit and Biden's Order

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
AI Agents That Reason and Code with Imbue Co-Founders Kanjun Qiu and Josh Albrecht

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

Play Episode Listen Later Nov 16, 2023 32:48


The future of tech is 25-person companies powered by AI agents that help us accomplish our larger goals. Imbue is working on building AI agents that reason, code and generally make our lives easier. Sarah Guo and Elad Gil sit down with co-founders Kanjun Qiu (CEO) and Josh Albrecht (CTO) to discuss how they define reasoning, the spectrum of specialized and generalized agents, and the path to improved agent performance. Plus, what's behind their $200M Series B fundraise.  Kanjun Qiu is the CEO and co-founder of Imbue. Kanjun is also a partner at angel fund Outset Capital, where she invests in promising pre-seed companies. Previously, Kanjun was the co-founder and CEO of Sourceress, a machine learning recruiting startup backed by YC and DFJ. She was previously Chief of Staff to Drew Houston at Dropbox, where she helped scale the company from 300 employees to 1200. Josh Albrecht is the CTO and co-founder of Imbue. He also invests in other founders via his fund, Outset Capital. He has published machine learning papers as an academic researcher; founded an AI recruiting company that went through YC and a 3D injection molding software company that was acquired; helped build Addepar as an early engineer; and served as a Thiel Fellow mentor. He started programming as a kid and began working professionally as a software engineer in high school.  Show Links:  Kanjun's LinkedIn | Website | Google Scholar Josh's LinkedIn | Website | Google Scholar Imbue raises $200M to build AI systems that can reason and code Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Kanjun | @JoshAlbrecht Show Notes:  (00:00) - Introduction to Imbue (04:55) - The Spectrum of Agent Tasks (08:43) - Specialization and Generalization With Agents (13:03) - Code and Language in AI Agents

The Voicebot Podcast
Generative AI News This Week - NVIDIA GPU Performance Gains, Roblox and Salesforce Copilots, Jobs at Risk from Generative AI and More - Voicebot Podcast 352

The Voicebot Podcast

Play Episode Listen Later Oct 31, 2023 60:41


The Generative AI News (GAIN) rundown for September 14, 2023 is here. We are seeing another acceleration in the news cycle. Featured stories this week include: NVIDIA and the Chip Industry Rises - New GPU performance gains for existing H100 chips, the expected improvements from the GH200, and a revenue rise for the global semiconductor industry. Jobs at Risk From Generative AI - A Forrester study calls out some professions that will be most impacted by generative AI automation and will either eliminate jobs or create the need for new skills. LLMs that Reason and Act - We discuss the Imbue funding round and why enabling LLMs that reason and agents that can act on our behalf requires total company focus to reach the objective. Plus, generative AI winners and losers of the week. Links to the stories we covered this week are included below.

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning
Imbue Raises $200M to Advance AI Reasoning Models

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning

Play Episode Listen Later Oct 28, 2023 7:55


Discover how Imbue, a trailblazing AI company, raised a staggering $200 million to advance the field of AI reasoning. Join us as we delve into the details of this groundbreaking development, exploring the implications of their advanced reasoning models and the potential impact on various industries. Tune in to stay on the cutting edge of AI innovation. Get on the AI Box Waitlist: https://AIBox.ai/ | Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ | Follow me on Twitter: https://twitter.com/jaeden_ai

English Vocab by Victorprep
133: Advanced English Vocab. Invective, Imbue, Ribald, Levee

English Vocab by Victorprep

Play Episode Listen Later Oct 27, 2023 12:06


The words for today are: Invective, Imbue, Ribald, Levee. VictorPrep's vocab podcast is for improving your English vocabulary skills while helping you prepare for your standardized tests! This podcast isn't only intended for those studying for the GRE or SAT, but also for people who enjoy learning, and especially those who want to improve their English skills. I run the podcast for fun and because I want to help people out there studying for tests or simply learning English. The podcast covers a variety of words and sometimes additionally covers word roots. Using a podcast to prep for the verbal test lets you study while on the go, or even while working out! If you have comments, questions, or suggestions, please send me an email at sam.fold@gmail.com

Data Skeptic
Do LLMs Make Ethical Choices

Data Skeptic

Play Episode Listen Later Oct 16, 2023 29:21


We are excited to be joined by Josh Albrecht, the CTO of Imbue. Imbue is a research company whose mission is to create AI agents that are more robust, safer, and easier to use. He joins us to share findings from his work: despite "super-human" performance, current LLMs are unsuited for decisions about ethics and safety.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1000 people for the first comprehensive State of AI Engineering survey and 3) submit projects for the new AI Engineer Foundation. See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.

Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As “stealth” foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group given they have no publicly released models to try out. However, ever since their $20m Series A last year their goal has been to “develop generally capable AI agents with human-like intelligence in order to solve problems in the real world”.

From RL to Reasoning LLMs

Along with their Series A, they announced Avalon, “A Benchmark for RL Generalization Using Procedurally Generated Worlds”. Avalon is built on top of the open source Godot game engine, and is ~100x faster than Minecraft to enable fast RL benchmarking, with a clear reward and adjustable game difficulty.

After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors or climbing, but couldn't move on to higher-level tasks. A pure RL world also doesn't include a language explanation of the agent's reasoning, which made it hard to understand why it made certain decisions. That pushed the team more towards the “models for reasoning” path:

“The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning.
So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing.”

Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans have made since the advent of the scientific method:

“We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:

* What was the original claim that was made?
* What evidence is there for this claim?
* Does the evidence support the claim?
* Is the claim correct?

This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ them all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them.”

The Full Stack Model Lab

One year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a “full stack” approach:

* Models. Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks, with a ~10,000 Nvidia H100 GPU cluster letting us iterate rapidly on everything from training data to architecture and reasoning mechanisms.
* Tools and Agents. Building internal productivity tools, from coding agents for fixing type checking and linting errors to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).
* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces: IDEs for users to program computers in natural language.
* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.

Kanjun believes we are still in the “bare metal phase” of agent development, and they want to take a holistic approach to building the “operating system for agents”. We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!

Timestamps

* [00:00:00] Introductions
* [00:06:07] The origin story of Imbue
* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning
* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents
* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents
* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data
* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress
* [00:21:43] Lessons learned from developing the Avalon research environment
* [00:23:31] The limitations of pure reinforcement learning for general intelligence
* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents
* [00:31:36] Interface design for collaborating with, rather than just communicating with, AI agents
* [00:37:40] The future potential of an agent-to-agent protocol
* [00:39:29] Leveraging approaches like critiquing between models and chain of thought
* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue
* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive
* [01:00:22] Lightning Round

Show Notes

* Imbue
* Avalon
* CARBS (hyperparameter optimizer)
* Series B announcement
* Kanjun/Imbue's Podcast
* MIT Media Lab
* Research mentioned: Momentum Contrast, SimCLR, Chelsea Finn's SayCan
* Agent Protocol - part of the AI Engineer Foundation
* Xerox PARC
* Michael Nielsen
* Jason Benn
* Outset Capital
* Scenius - Kevin Kelly
* South Park Commons
* The Archive
* Thursday Nights in AI

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side. So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you were eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three-time founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn?
That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on.Kanjun: It's not work. What are you talking about?Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And GPT-3, our housemates built. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. 
And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. [00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. 
On the market side, recruiting is a market that is endogenously high churn, which means because people start hiring and then we hire the role for them and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes because if 30% of that revenue you lose year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah. 
[00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. 
I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. It worked in language; MoCo, Momentum Contrast, had come out earlier in 2019, and then SimCLR came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text, and we suspected that, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning?
So that eventually becomes something that allows computers to become much more able to do bigger things for us. And that became Generally Intelligent, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah. So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe, the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, it's not that we just write it, put it down and we're done. We write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back, and, oh, actually the structure of the outline is not quite right, so I'm going to rearrange the outline and rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. Those are things that we lump under the bucket of reasoning, and models today are not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training.
And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. 
They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. 
CARBS, our hyperparameter optimizer, came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics, he's a plasma physicist, to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the, like, them-shaped blob inside Imbue, and I think, you know, yeah, that's the kind of person that we're, we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improves the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agents project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents that we actually use as programmers every day. So all sorts of different, different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects. And I was interested in how you mentioned that you were optimizing for improving reasoning and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about like what, you know, what exactly does it involve? But actually a big part, maybe 50% plus of the work, is figuring out even if you do have models that reason well, like the models are still stochastic.
The way you prompt them is still kind of random, it makes them do random things. And so how do we get to something that is actually robust and reliable as a user? How can I, as a user, trust it? We have all sorts of cool things on the, like, you know, I was mentioning earlier when I talked to other people building agents, they have to do so much work, like to try to get to something that they can actually productize and it takes a long time and agents haven't been productized yet for, partly for this reason is that like the abstractions are very leaky. We can get like 80% of the way there, but like self-driving cars, like the remaining 20% is actually really difficult. We believe that, and we have internally, I think, some things like an interface, for example, that lets me really easily like see what the agent execution is, fork it, try out different things, modify the prompt, modify like the plan that it is making. This type of interface, it makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it's just like doing something as a black box. That's an example of a type of thing that's like beyond just the model pre-training, but on the model pre-training side, like reasoning is a thing that we optimize for. And a lot of that is about what data do we put in. [00:17:27]Swyx: It's interesting just because I always think like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, Flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it like maybe even higher?
So one way I'll put this is people estimate that Llama 2 maybe took about $3 to $4 million of compute, but probably 20 to $25 million worth of labeling data. And I'm like, okay, well that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is actually close to as good as human-labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of the constitutional AI approach from Anthropic and basically models sampling training on data from other models. I feel like there's a little bit of like contamination in there, or to put it in a statistical form, you're resampling a distribution that you already have that you already know doesn't match human distributions. How do you feel about that basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to like make a part of the distribution really spiky. So in a sense, like that's actually what we want. We want to, because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces and that help optimize reasoning. Code also is a big piece of improving reasoning. So generated code is not that much worse than like regular human written code. You might even say it can be better in a lot of ways. So yeah. So we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit?
So you built Avalon, which is your own simulated world. And when you first started, the metagame was like using games to simulate things using, you know, Minecraft and then OpenAI's Gym and all these things. And I think in one of your other podcasts, you mentioned like Minecraft is like way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure out that you needed to just like build your own thing? Was it kind of like your engineering team was like, Hey, this is too slow. Was it more a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is like, how do you get an agent that is able to do many different tasks? Like RL agents and environments at that time were limited. What we heard from other RL researchers was that the biggest thing holding the field back is a lack of benchmarks that let us explore things like planning and curiosity and things like that and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out, how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? A lot of what we had seen was these very handcrafted rewards. And so Avalon has a single reward across all tasks. And it also allowed us to create a curriculum so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non-RL specialists, what is a curriculum in your terminology?
[00:21:46]Kanjun: So a curriculum in this particular case is basically that the environment, Avalon, lets us generate simpler environments and harder environments for a given task. What's interesting is that in the simpler environments, as you'd expect, the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal. And that's actually an interesting general intuition to have about training these things as like, what kind of signal are they getting? And like, how can you help it get a lot more signal? The second thing we learned is that reinforcement learning is not a good vehicle, like pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were not able to, they were able to learn all sorts of crazy things. They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever to open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing. And so that actually started to get us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford, who was I think working on SayCan at the time, where it's basically an experiment where they have robots kind of trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent?
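The curriculum idea described here, keeping task difficulty near the agent's frontier so every episode yields useful reward signal, can be sketched as a tiny difficulty controller. This is an illustrative sketch, not Avalon's actual implementation; the class name, step size, and update rule are all assumptions.

```python
class Curriculum:
    """Adjust environment difficulty so the agent keeps getting signal:
    simpler levels mean more frequent success, hence more reward."""

    def __init__(self, difficulty=0.1, step=0.05, target=0.5):
        self.difficulty = difficulty  # 0.0 = trivial, 1.0 = hardest
        self.step = step
        self.target = target          # desired long-run success rate

    def update(self, succeeded):
        # Raise difficulty after a success, lower it after a failure,
        # nudging the observed success rate toward `target`.
        if succeeded:
            self.difficulty = min(1.0, self.difficulty + self.step * (1 - self.target))
        else:
            self.difficulty = max(0.0, self.difficulty - self.step * self.target)
        return self.difficulty
```

With no curriculum (difficulty pinned at maximum), a weak agent almost never succeeds and so almost never sees reward, which matches the observation above that RL without a curriculum did not work at all.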
You know, like is it the interface to kind of the human on the loop really important or? [00:23:43]Kanjun: Yeah, I personally think of it as it's much more important for us, the human user. So I think you probably could get end to end agents that work and are fairly general at some point in the future. But I think you don't want that. Like we actually want agents that we can like perturb while they're trying to figure out what to do. Because, you know, even a very simple example, internally we have like a type error fixing agent and we have like a test generation agent. Test generation agent goes off rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug like what happened with RL end to end stuff. Like we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. All of them? I think the only successful agents are the ones that do really small things. So very specific, small things like fix the color of this button on the website or like change the color of this button. [00:24:57]Swyx: Which is now sweep.dev is doing that. Exactly. [00:25:00]Kanjun: Perfect. Okay. [00:25:02]Swyx: Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote ceiling that's going to prevent them, that they're going to run head on into that you've already run into. 
[00:25:21]Kanjun: We've definitely run into such a ceiling. But what is the ceiling? [00:25:24]Swyx: Is there a name for it? Like what? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models and that's why it's so compelling. Like we can pile debugging tools on top of these current models, have them critique each other and critique themselves and do all of these, like spend more compute at inference time, context hacks, retrieval-augmented generation, et cetera, et cetera, et cetera. Like the pile of hacks actually does get us really far. And a way to think about it is like the underlying language model is kind of like a noisy channel. Actually I don't want to use this analogy. It's actually a really bad analogy, but you're kind of trying to get more signal out of the channel. We don't like to think about it that way. That's what the default approach is: trying to get more signal out of this noisy channel. But the issue with agents is as a user, I want it to be mostly reliable. It's kind of like self-driving in that way. Like it's not as bad as self-driving, like in self-driving, you know, you're like hurtling at 70 miles an hour. It's like the hardest agent problem. But one thing we learned from Sorceress and one thing we learned by using these things internally is we actually have a pretty high bar for these agents to work. You know, it's actually really annoying if they only work 50% of the time and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far and we need to make the models better. We also need to make the kind of like interface to the user better. And also a lot of the like critiquing. I hope what we can do is help people who are building agents actually like be able to deploy them.
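The "pile of hacks" pattern mentioned here, having models critique themselves and spending more compute at inference time to buy reliability, can be sketched as a generate-critique-retry loop. This is a hedged illustration only; `generate` and `critique` are stand-in callables, not any real Imbue API.

```python
def reliable_generate(generate, critique, max_attempts=3):
    """Wrap a stochastic generator with a critic and retry until the
    output passes, trading inference-time compute for reliability."""
    last = None
    for _ in range(max_attempts):
        last = generate()           # stochastic model call (stand-in)
        ok, _feedback = critique(last)  # self- or cross-model critique
        if ok:
            return last
    return last  # best effort after exhausting the retry budget
```

Each retry multiplies inference cost, which is exactly the trade-off described: the hacks raise reliability, but they do not remove the underlying stochasticity, so the ceiling remains.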
I think, you know, that's the gap that we see a lot of today is everyone who's trying to build agents to get to the point where it's robust enough to be deployable. It just, it's like an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Imbue is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case. We've built a lot of tools for ourselves internally around like debugging, around abstractions or techniques after the model generation happens. Like after the language model generates the text and like interfaces for the user and the underlying model itself, like models talking to each other, maybe some set of those things kind of like an operating system. Some set of those things will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. Like what we want to do is get to a point where we can like start making an agent, deploy it, it's reliable, like very quickly. And there's a similar analog to software engineering, like in the early days, in the sixties and seventies, to program a computer, you had to go all the way down to the registers and write things and eventually we had assembly. That was like an improvement. But then we wrote programming languages with these higher levels of abstraction and that allowed a lot more people to do this and much faster. And the software created is much less expensive. And I think it's basically a similar route here where we're like in the like bare metal phase of agent building. And we will eventually get to something with much nicer abstractions. [00:28:36]Alessio: We had this conversation with George Hotz and we were like, there's not a lot of reasoning data out there. And can the models really understand?
And his take was like, look, with enough compute, you're not that complicated as a human. Like the model can figure out eventually why certain decisions are made. What's been your experience? Like as you think about reasoning data, like do you have to do a lot of like manual work or like is there a way to prompt models to extract the reasoning from actions that they [00:29:03]Swyx: see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. You know, a way to think about it is as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask like, huh, what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ all the time, lots of heuristics that help us be better at reasoning. And we didn't always have them. And because they're invented, like we can generate data that's much more specific to them. So I think internally, yeah, we have a lot of thoughts on what reasoning is and we generate a lot more specific data. We're not just like, oh, it'll figure out reasoning from this black box or like it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is like a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net new, like scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun and people are like, man, this model is crap. It's like, what are you talking about? 
Like the sun revolves around the earth. It's like, how do you see the future? Like if the models are actually good enough, but we don't believe them, it's like, how do we make the two live together? So you're like, you use Imbue as a scientist to do a lot of your research and Imbue tells you, hey, I think this is like a serious path you should go down. And you're like, no, that sounds impossible. Like how is that trust going to be built? And like, what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really there are two answers to this. One element of it is as a person, like I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is like, okay, how do you do that? And that's kind of some of our debugging tools, they're not necessarily just for debugging. They're also for like interfacing with and interacting with the model. So like if I go back in this reasoning trace and like change a bunch of things, what's going to happen? Like, what does it conclude instead? So that kind of helps me understand like, what are its assumptions? And, you know, we think of these things as tools. And so it's really about like, as a user, how do I use this tool effectively? I need to be willing to be convinced as well. It's like, how do I use this tool effectively? And what can it help me with? [00:31:36]Swyx: And what can it tell me? There's a lot of mention of code in your process. And I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you view code or you use code just as like a tool within Imbue just for coding assistance. But I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities.
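The explicit reasoning strategies Kanjun described a little earlier (notice you're confused, then ask what the claim is, what the evidence is, and whether the evidence supports the claim) can be encoded as a prompt scaffold for generating reasoning-specific data. This is a hypothetical sketch; the template text and function names are illustrative, not Imbue's actual data-generation code.

```python
REASONING_STRATEGY = """\
You notice you are confused. Before answering, work through:
1. What is the original claim being made?
2. What evidence is there for this claim?
3. Does the evidence actually support the claim?
4. Given that, is the claim correct?"""

def apply_strategy(question, claim, evidence):
    """Build a prompt that forces an explicit claim/evidence check,
    rather than hoping the model infers the strategy from raw data."""
    return (
        f"{REASONING_STRATEGY}\n\n"
        f"Question: {question}\n"
        f"Claim: {claim}\n"
        f"Evidence: {evidence}"
    )
```

Because the strategy is invented and explicit, targeted training data can be generated for it, which is the contrast drawn above with "throw enough data at it and it will figure out reasoning."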
I wonder if there's any research or findings that you have to share that talks about the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is like code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. You know, it says this variable means this, and then it uses this variable. And then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. And so that's one thing that's really nice about code is I see it as almost like a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand what are the limitations of the agents. The code is really helpful for the reasoning itself. But also code is a way for models to act. So by generating code, it can act on my computer. And, you know, when we talk about rekindling the dream of the personal computer, kind of where I see computers going is, you know, like computers will eventually become these much more malleable things where I, as a user today, I have to know how to write software code, like in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. And so one way we think about agents is kind of like a natural language programming language. It's a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. 
What do you think about the different approaches people have, kind of like text first, browser first, like Multi-On? What do you think the best interface will be? Or like, what is your, you know, thinking today? [00:33:59]Kanjun: In a lot of ways, like chat as an interface, I think Linus Lee, who you had on this podcast, I really like how he put it. Chat as an interface is skeuomorphic. So in the early days, when we made word processors on our computers, they had notepad lines because that's what we understood these like objects to be. Chat, like texting someone is something we understand. So texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, the way we want to interact with agents, like chat is a very primitive way of interacting with agents. What we want is to be able to inspect their state and to be able to modify them and fork them and all of these other things. And we internally think about, what are the right representations for that? Like architecturally, like what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because if the abstractions are leaky, which they are today, like, you know, this stochastic generation of text is like a leaky abstraction. I cannot depend on it. And that means it's actually really hard to build on top of. But our experience and belief is actually by building better abstractions and better tooling, we can actually make these things non-leaky. And now you can build like whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned, this is kind of like the Xerox PARC moment for AI. And we had a lot of stuff come out of PARC, like the 'what you see is what you get' editors and like MVC and all this stuff. But yeah, but then we didn't have the iPhone at PARC.
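The non-chat agent interface described here, where execution state can be inspected, modified, and forked rather than streamed as opaque text, can be sketched as a minimal trace structure. This is a hypothetical representation, not Imbue's actual internal one; the class and field names are assumptions.

```python
import copy

class AgentTrace:
    """An agent run as a list of inspectable steps, supporting the
    inspect/modify/fork interaction pattern instead of a chat stream."""

    def __init__(self, steps=None):
        self.steps = steps or []

    def append(self, thought, action):
        # Each step records what the agent was thinking and what it did,
        # so a user can later ask "why did it do this?" at any point.
        self.steps.append({"thought": thought, "action": action})

    def fork(self, at):
        # Branch a new trace from step `at`, keeping the prefix intact;
        # the user can then try a different continuation from there.
        return AgentTrace(copy.deepcopy(self.steps[:at]))
```

Forking from an intermediate step is what lets a user go back into a reasoning trace, change something, and see what the agent concludes instead, as described in the transcript.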
We didn't have all these like higher things. What do you think it's reasonable to expect in like this era of AI, you know, call it like five years or so? Like what are like the things we'll build today and what are things that maybe we'll see in kind of like the second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before. Like what we're seeing right now is basically like a continuous wave. Let me zoom a little bit earlier. So people like the Xerox PARC analogy I give, but I think there are many different analogies. Like one is the analog-to-digital computer transition, kind of another analogy to where we are today. The analog computer Vannevar Bush built in the 1930s, I think, is like a system of pulleys and it can only calculate one function. Like it can calculate like an integral. And that was so magical at the time because you actually did need to calculate this integral a bunch, but it had a bunch of issues, like, in analog, errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer, like Turing's decidability, and Shannon showing that relay circuits can be mapped to Boolean operators, and a set of other theoretical breakthroughs, which essentially were abstractions. They were creating abstractions for these very lossy, very analog circuits, and digital had this nice property of like being error correcting. And so when I talk about like less leaky abstractions, that's what I mean. That's what I'm kind of pointing a little bit to. It's not going to look exactly the same way. And then the Xerox PARC piece, a lot of that is about like, how do we get to computers that as a person, I can actually use well. And the interface actually helps it unlock so much more power.
So the sets of things we're working on, like the sets of abstractions and the interfaces, like hopefully that like help us unlock a lot more power in these systems. Like hopefully that'll come not too far in the future. I could see a next version, maybe a little bit farther out. It's like an agent protocol. So a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]

Swyx: Do you know it exists already? [00:37:41]

Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now. Part of why I think it's early is because the issue with agents, it's not quite like the internet where you could like make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But, you know, I think that's, I think it's a really interesting question. [00:38:09]

Swyx: While we're talking on this agent to agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain of thoughting, but any perspectives on kind of MetaGPT, I think it's the name of the paper. I don't know if you care about at the level of individual papers coming out, but I did read that recently and TLDR, it beat GPT-4 on HumanEval by role-playing software agent development agency, instead of having sort of single shot or single role, you have multiple roles and how having all of them criticize each other as agents communicating with other agents. [00:38:45]

Kanjun: Yeah, I think this is an example of an interesting abstraction of like, okay, can I just plop in this like multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things and see how they improve my agent?
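The "plop in an abstraction and see if it helps" pattern Kanjun describes can be sketched as plain function composition: each technique wraps a base model call. All names below are hypothetical illustrations (with a stand-in for the LLM call so the sketch runs), not Imbue's or MetaGPT's actual APIs.

```python
# Hedged sketch: composable prompting "abstractions" as wrappers around a base
# generate function. The base_agent here is a deterministic stand-in for an
# LLM call, purely so the example is self-contained and runnable.

def base_agent(prompt: str) -> str:
    # Stand-in for a model call; a real system would query an LLM here.
    return f"draft answer to: {prompt}"

def with_chain_of_thought(agent):
    # Plug-in technique #1: prepend a chain-of-thought instruction.
    def wrapped(prompt: str) -> str:
        return agent(f"Let's think step by step.\n{prompt}")
    return wrapped

def with_critique(agent, n_critics: int = 2):
    # Plug-in technique #2: multi-role critique, where each "critic" role
    # reviews and revises the current answer before it is returned.
    def wrapped(prompt: str) -> str:
        answer = agent(prompt)
        for i in range(n_critics):
            answer = agent(f"As critic {i}, improve this answer:\n{answer}")
        return answer
    return wrapped

# Techniques compose freely, which is the point of the abstraction.
agent = with_critique(with_chain_of_thought(base_agent))
print(agent("Why is the sky blue?"))
```

Swapping `with_critique` for a tree-of-thought wrapper (or stacking both) requires no change to the base agent, which is what makes this kind of abstraction attractive when it is not leaky.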
One issue with this kind of prompting is that it's still not very reliable. It's like, there's one lens, which is like, okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens. We take that lens often. And then there's another lens that's like, okay, but it's starting to get really messy what's in the prompt and like, how do we deal with that messiness? And so maybe you need like cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]

Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would say I would contrast your approach with OpenAI as OpenAI tends to just lean on, hey, we played StarCraft or hey, we ran it on the SAT or the, you know, the AP bio test and that did results. Basically, is benchmark culture ruining AI? [00:39:55]

Swyx: Or is that actually a good thing? Because everyone knows what an SAT is and that's fine. [00:40:04]

Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually. And to evaluate these things, you actually need to think about it in a slightly different way. But we also do use a lot of public benchmarks for like, is the reasoning capability in this particular way improving? So yeah, it's good to use both. [00:40:26]

Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the Diamond X or whatever and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the engineer, the people who are new to the scene.
But for people like yourselves, you build Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? Oh, yeah. I love Voyager. [00:40:57]

Kanjun: I mean, Jim, I think, is awesome. And I really like the Voyager paper and I think it has a lot of really interesting ideas, which is like the agent can create tools for itself and then use those tools. [00:41:06]

Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. Exactly. [00:41:09]

Kanjun: And that's like a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to like learn the things we wanted. And so it's like not that much work to build our own. [00:41:19]

Swyx: It took us, I don't know. [00:41:22]

Kanjun: We had like eight engineers at the time, took about eight weeks. So six weeks. [00:41:27]

Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]

Kanjun: It's just nice to have control over our environment. We're doing our own sandbox to really try to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things, like Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]

Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of like the foundational, quote unquote, RAM of our era. I think Andrej Karpathy has already made this comparison. So there's nothing new here. And that's just the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long running tasks. If they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away?
Or how do you think we're going to deal with it? [00:42:22]

Kanjun: I think when you talked about what's going to happen in the first wave and then in the second wave, I think what we'll see is we'll get like relatively simplistic agents pretty soon. And they will get more and more complex. And there's like a future wave in which they are able to do these like really difficult, really long running tasks. And the blocker to that future, one of the blockers is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be like, we need this amount of memory, which is like, I don't remember exactly, like 32 kilobytes or something, to store programs. And that will allow us to write software. He didn't say it this way because he didn't have these terms, but that only really like happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have like really good long running memory. And then in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think with the pace of the field, we'll probably come up with all sorts of interesting things like, you know, RAG is already very helpful. [00:43:26]

Swyx: Good enough, you think? [00:43:27]

Kanjun: Maybe good enough for some things. [00:43:29]

Swyx: How is it not good enough? I don't know. [00:43:31]

Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my fields and a lot of that data is maybe hard to fine tune on, or maybe hard to like put into pre-training. Like a lot of that data, I don't have a lot of like repeats of the data that I'm seeing. You know, like if I'm a scientist, I've like accumulated so many little data points.
And ideally I'd want to store those somehow, or like use those to fine tune myself as a model somehow, or like have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for like user preferences and things like that. Like what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome. [00:44:21]

Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to Imbue is Adept. You know, a research lab with like some amount of product situation on the horizon, but not just yet, right? Why should people work for Imbue over Adept? And we can cut this if it's too like... Yeah. [00:44:40]

Kanjun: The way I think about it is I believe in our approach. The type of thing that we're doing is we're trying to like build something that enables other people to build agents and build something that really can be maybe something like an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can kind of like talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of like, why us? I think like strong focus on reasoning, which we believe is the biggest blocker, on inspectability, which we believe is really important for user experience and also for the power and capability of these systems. Building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable. And then really seriously trying to use these things ourselves, like every single day, and getting to something that we can actually ship to other people that becomes something that is a platform.
Like, it feels like it could be Mac or Windows. I love the dogfooding approach. [00:45:49]

Swyx: That's extremely important. And you will not be surprised how many agent companies I talk to that don't use their own agent. Oh no, that's not good. That's a big surprise. [00:45:59]

Kanjun: Yeah, I think if we didn't use our own agents, then we would have all of these beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]

Swyx: Yeah, mine was just the only other follow-up that you had based on the answer you just gave was, do you see yourself releasing models or do you see yourself, what is the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? And so a lot of people just as a byproduct of their work, just to say like, hey, I'm still shipping, is like, here's a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Like, do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here's the byproducts of what we're doing? [00:46:51]

Kanjun: Yeah, I don't really believe in releasing things to show people like, oh, here's what we're doing that much. I think as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]

Swyx: Yeah. [00:47:02]

Kanjun: And so I think we may release models or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]

Swyx: I think more companies should get into the releasing evals and benchmarks game. Yeah. [00:47:20]

Kanjun: Something that we have been talking to agent builders about is co-building evals.
So we build a lot of our own evals and every agent builder tells me, basically evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me because I would love to, like, figure out how we can be helpful based on what we've seen. Cool. [00:47:40]

Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]

Kanjun: Awesome. [00:47:44]

Swyx: Yeah. We can zoom out to other interests now. [00:47:46]

Alessio: We got a lot of stuff. So we have Sherif from Lexicon, the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]

Swyx: I need to do this. I'm very jealous of people with personal websites right there. Like, here's the high level questions of goals of humanity that I want to set people on. And I don't have that. [00:48:04]

Alessio: It's never too late, Sean. [00:48:05]

Swyx: Yeah. [00:48:05]

Alessio: It's never too late. [00:48:06]

Kanjun: Exactly. [00:48:07]

Alessio: There were a few that stuck out as related to your work that maybe you're kind of learning [00:48:12]

Swyx: more about it. [00:48:12]

Alessio: So one is why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you want to like go explore things or kind of like focus on your career? How do you think about that from like an agent perspective? Where it's like, should you just stick to the task and try and solve it as in the guardrails as possible? Or like, should you look for alternative solutions? [00:48:34]

Swyx: Yeah. [00:48:34]

Kanjun: I think one thing that's really interesting about agents actually is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and said, okay, here, like fork this and do a bunch of different things. I try a bunch of different things.
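The forking Kanjun describes can be sketched as snapshotting an agent's state mid-task and branching it. This is a toy illustration with hypothetical names, not any real agent framework's API: the key idea is that each branch gets an independent deep copy of the state, so divergent strategies don't contaminate each other.

```python
# Hedged sketch: "forking" an agent by deep-copying its state at some point
# in a task, then letting each branch pursue a different strategy.
import copy
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # actions taken so far

def fork(state: AgentState, strategies):
    # One independent deep copy per strategy, so branches don't share history.
    branches = []
    for strategy in strategies:
        branch = copy.deepcopy(state)
        branch.history.append(f"switching to {strategy} mode")
        branches.append(branch)
    return branches

base = AgentState(goal="summarize the paper", history=["read abstract"])
goal_branch, curious_branch = fork(base, ["goal-oriented", "curiosity-driven"])
```

Because the original `base` state is untouched, the same checkpoint can be forked again later, which is exactly what makes this easy for agents and hard for people.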
Some of those agents can be goal oriented and some of them can be like more curiosity driven. You can prompt them in slightly different ways. And something I'm really curious about, like what would happen if in the future, you know, we were able to actually go down both paths. As a person, why I have this question on my website is I really find that like I really can only take one mode at a time and I don't understand why. And like, is it inherent in like the kind of context that needs to be held? That's why I think from an agent perspective, like forking it is really interesting. Like I can't fork myself to do both, but I maybe could fork an agent to like add a certain point in a task. [00:49:26]

Swyx: Yeah. Explore both. Yeah. [00:49:28]

Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think is like, oh, should I raise venture capital? Like, how should I get money? How do you feel your options to be curious versus like goal oriented has changed as you raise more money and kind of like the company has grown? [00:49:50]

Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our Series A $20 million in late 2021. And our entire philosophy at that time was, and still kind of is, is like, how do we figure out the stepping stones, like collect stepping stones that eventually let us build agents, kind of these new computers that help us do bigger things. And there was a lot of curiosity in that. And there was a lot of goal orientation in that. Like the curiosity led us to build CARBS, for example, this hyperparameter optimizer. Great name, by the way. [00:50:28]

Swyx: Thank you. [00:50:29]

Kanjun: Is there a story behind that name? [00:50:30]

Swyx: Yeah. [00:50:31]

Kanjun: Abe loves CARBS. It's also cost aware. So as soon as he came up with cost aware, he was like, I need to figure out how to make this work.
But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models. And for those experiment results to carry to larger ones. [00:50:56]

Swyx: Which you also published a scaling laws, which is great. I think the scaling laws paper from OpenAI was like the biggest. And from Google, I think, was the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]

Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity. And then there's some goal oriented parts. Like Avalon, it was like a six to eight week sprint for all of us. And we got this thing out. And then now different projects do like more curiosity or more goal orientation at different times. Cool. [00:51:36]

Swyx: Another one of your questions that we highlighted was, how can we enable artificial agents to permanently learn new abstractions and processes? I think this might be called online learning. [00:51:45]

Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave. As a scientist, I've like permanently learned a lot of new things. And I've updated and created new abstractions and learned them pretty reliably. And you were talking about like, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be like, as models get bigger, they fine tune faster. So they're more sample efficient as they get bigger.
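To make "cost aware" concrete: the idea Kanjun sketches is to trade performance off against compute cost rather than chasing accuracy alone. The toy below is NOT Imbue's actual CARBS algorithm, just a minimal illustration of cost-aware search with a made-up objective and cost model; every function name here is hypothetical.

```python
# Toy sketch of cost-aware hyperparameter search: score each candidate by
# performance per unit of training cost, so cheap small-model experiments win
# unless a big model is dramatically better. Not the real CARBS algorithm.
import random

def toy_objective(width: int, lr: float) -> float:
    # Pretend "validation accuracy": wider models help, lr sweet spot at 0.01.
    return 1.0 - 1.0 / width - abs(lr - 0.01)

def training_cost(width: int) -> float:
    # Pretend training cost grows quadratically with model width.
    return width ** 2

def cost_aware_search(n_trials: int = 50, seed: int = 0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        width = rng.choice([8, 16, 32, 64])
        lr = rng.uniform(0.001, 0.1)
        # The cost-aware twist: divide performance by cost instead of
        # maximizing raw performance.
        score = toy_objective(width, lr) / training_cost(width)
        if score > best_score:
            best, best_score = (width, lr), score
    return best

print(cost_aware_search())
```

With this scoring, the search settles on the smallest model width, which mirrors the motivation described above: run experiments cheaply at small scale, then rely on scaling-law extrapolation for the large models.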

AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs
Imbue Secures $200M for Cutting-Edge AI Reasoning Models

Play Episode Listen Later Oct 6, 2023 8:05


In this episode, we dive into the remarkable achievement of Imbue, as they secure an impressive $200 million in funding for their groundbreaking AI reasoning models. Explore how Imbue's advanced technology is pushing the boundaries of AI, promising transformative applications across various industries. Join us for an enlightening discussion on the future of AI and its potential to reshape problem-solving and decision-making.

Get on the AI Box Waitlist: https://AIBox.ai/
Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/
Follow me on Twitter: https://twitter.com/jaeden_ai

The MAD Podcast with Matt Turck
Imbue: AI Agents That Can Reason with CEO Kanjun Qiu


Play Episode Listen Later Oct 4, 2023 44:47


Today we're joined by Kanjun Qiu, CEO of Imbue, an independent research company developing AI agents with general intelligence, fresh off the announcement of their $200M Series B round of financing. We talk about Kanjun's journey, Imbue's vision and the future of AI agents.

Multiverse 5D
Shocking Information Impact On You So As Not To Imbue Negative Events


Play Episode Listen Later Sep 22, 2023 12:42


Shocking Information Impact On You So As Not To Imbue Negative Events
YouTube: The truth can change your life

This Week in Pre-IPO Stocks
E64: Arm up 25% on IPO, Databricks raises at $43b, Starlink revenue up 533%, Dimon tells founders to “grow up”, | Pre-IPO Stock Market Update – Sep 15, 2023


Play Episode Listen Later Sep 15, 2023 7:55


Investing in pre-IPO stocks = www.agdillon.com

00:10 | Arm up 25% on IPO
- Arm is 90% owned by Softbank
- IPOed at $54.5b, closed at $65b
- Last round at $64b in Aug 2023
- Apple, Google, Nvidia purchased $735m of shares

01:37 | Dimon tells founders to “grow up”
- Dimon says 2020/2021 valuations were overpriced
- “…if you can go public, you want to go public, you need to go public, don't wait too long.”

02:15 | Databricks raises at $43b
- $500m led by TRowe, Nvidia participated
- +13% from last round in Aug 2021, +39% from Oct 2022 internal valuation
- Company is eying the IPO market

03:17 | Starlink revenue up 533%
- $1.4b in 2022 revenue, up from $222m in 2021
- On track for 2.5m subscribers by end of 2023, averaging 127,000 new subscribers per month
- 37% of world's population (3 billion people) has zero access to the internet

04:23 | Big capital raises
- Ola Electric (www.olaelectric.com) | $140m Series E, $5.4b valuation
- Getir (www.getir.com) | $500m Series G, $2.5b valuation
- Ascend Elements (www.ascendelements.com) | $460m Series D, $1.4b valuation
- Zopa (www.zopa.com) | $96m Series J, $1.1b valuation
- Imbue (www.imbue.com) | $200m Series B, $1.0b valuation

05:51 | Pre-IPO +0.58% for week
- IPO Watch: ...Arm (Sep 14) = $54.5b IPO valuation

Edtech Insiders
This Week in Edtech with Ben Kornell, 9/14/23 with Special Guests Medics Academy and Braintrust Tutors


Play Episode Listen Later Sep 14, 2023 90:34


This episode features interviews with Johann Malawana of Medics Academy, and Jen Mendelsohn and Mara Koffmann from Braintrust Tutors.

Check out the Edtech Insiders AI+EDU Conference! We also cover:
- UNESCO's AI Competencies for Teachers
- Byju's effort to make repayment through selling acquisitions
- Flint raises money for edu-LLMs, and Imbue for agents
- Americans losing faith in College ROI
- K12 Literacy Scores forcing shift to Science of Reading

Let's Talk AI
#136 - Claude Pro, Ideogram, Chinese ChatGPT bots, Falcon 180B, RLAIF, export restrictions, Ghostwriter


Play Episode Listen Later Sep 10, 2023 62:44


Our 136th episode with a summary and discussion of last week's big AI news! With guest host Daniel Bashir. Check out his AI interview podcast! Read our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts) plus there's a video version on YouTube.

Timestamps + links:
(00:00) Intro
(01:15) SuperDataScience Ad
(01:51) Response to listeners

Tools & Apps
(02:47) Anthropic's Claude AI chatbot gets a paid plan for heavy users
(04:36) Watch out, Midjourney! Ideogram launches AI image generator with impressive typography
(06:37) Intuit launches generative AI–powered digital assistant for small businesses and consumers
(07:17) Zoom Is Jumping on the AI Chatbot Bandwagon

Applications & Business
(08:47) China lets Baidu, others launch ChatGPT-like bots to public, tech shares jump
(11:23) Tencent releases AI model for businesses as competition in China heats up
(12:00) Microsoft says it will take the heat if Copilot AI users get sued
(14:52) How We Chose the TIME100 Most Influential People in AI
(17:30) ChatGPT creator OpenAI is reportedly earning $80M a month
(19:16) AI chip startup d-Matrix raises $110 mln with backing from Microsoft
(21:00) ThetaRay nabs $57M for AI tools to ID and fight money laundering
(22:18) Sapeon raises $46m for AI chips
(23:20) Imbue raises $200M to build AI models that can ‘robustly reason'

Projects & Open Source
(25:48) Announcing the commercial relicensing and expansion of DINOv2, plus the introduction of FACET
(27:02) UAE launches Arabic large language model in Gulf push into generative AI
(29:23) New Open-Source ‘Falcon' AI Language Model Overtakes Meta and Google

Research & Advancements
(31:01) Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities
(33:57) RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
(36:22) Perception, performance, and detectability of conversational artificial intelligence across 32 university courses
(39:28) SyncDreamer: Generating Multiview-consistent Images from a Single-view Image

Policy & Safety
(41:00) US curbs AI chip exports from Nvidia and AMD to some Middle East countries
(42:55) China suspected of using AI on social media to sway US voters, Microsoft says
(46:37) Trusting A.I.-written mushroom hunting guides sold on Amazon could get you killed. But like deadly fungi, identifying them is tricky
(48:25) Ads for AI sex workers are flooding Instagram and TikTok
(50:30) The UK releases key ambitions for global AI summit

Synthetic Media & Art
(51:55) AI Took the Stage at the World's Largest Arts Festival. Here's What Happened
(54:12) Ghostwriter Returns With an A.I. Travis Scott Song, and Industry Allies
(56:17) Artists sign open letter saying generative AI is good, actually
(58:57) The latest canvas for Refik Anadol's AI-generated art? The new Sphere in Las Vegas
(01:01:20) Outro

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Imbue Raises $200M for Advanced Reasoning AI Models


Play Episode Listen Later Sep 8, 2023 7:36


In this episode, we discuss Imbue's recent $200M funding round aimed at developing AI models capable of advanced reasoning. We delve into what this technology could mean for the future of AI and its potential impact on various industries.

Get on the AI Box Waitlist: https://AIBox.ai/
Facebook Community: https://www.facebook.com/groups/739308654562189/
Discord Community: https://aibox.ai/discord
Follow me on X: https://twitter.com/jaeden_ai

Camille Parle Sexe
#45 Cathline Smoos : Réalité Virtuelle et sexualités


Play Episode Listen Later Apr 27, 2023 66:09


Today's futuristic sexuality.

Cathline Smoos is a psychologist, sexologist, and CEO of Imbue, the Love-Tech company. She moves between therapeutic consultations and research on cybersexualities and digital arts. Cathline is part of a new generation of pioneers unraveling the mysteries of cyberspace. Her work blends digital art, virtual reality, and love in the service of couples.

In this episode, you will hear us talk about:
- What is cybersexuality?
- Fantasizing and "sexploring" in VR: is it cheating?
- Does the notion of consent exist in virtual reality?
- Why VR use is more complex for women: ongoing research
- Using virtual reality in the service of sexology
- Contraindications and potential abuses

TW: rape, 49m20-51m

The key phrase of the episode: "Creativity is the ultimate aphrodisiac"

To find my guest:
- Imbue
- Imbue Lab content (TW: images of nudity)
- Her Instagram

Resources to go further:
- Where Thoughts Go?, a VR game
- Episode: SexTech for Good with Christel Bony

My Instagram account: @camilleparlesexe -/- my website: www.camillebataillon.com

FOR COUPLES: the replay of the webinar with avenues to explore for dealing with a libido gap in a couple

Merriam-Webster's Word of the Day

Merriam-Webster's Word of the Day for December 11, 2022 is: imbue • im-BYOO • verb

Imbue can be used as a synonym for endow (“to provide with something freely or naturally”) and can also mean “to permeate or influence” in a way that suggests colored dye permeating cloth fabric.

// The children were imbued with a passion for nature by their parents, both biologists.

See the entry >

Examples: “A radical political commentator who turned to children's literature late in life, [Carlo] Collodi wrote a complex, unsettling novel—miles away from the morality tale that Pinocchio's story has become. Collodi's is a multilayered work of fiction that, although primarily aimed at young readers, is imbued with social criticism and pessimistic humor, and can be read, among other things, as an irreverent attack on established authority.” — Anna Momigliano, The Atlantic, 12 Sept. 2022

Did you know? Like its synonym infuse, imbue implies the introduction of one thing into another so as to affect it throughout. Someone's voice can be imbued with pride, for example, or a photograph might be imbued with a sense of melancholy. In the past imbue has also been used synonymously with imbrue, an obscure word meaning “to drench or stain,” but the two words are likely unrelated. Imbue comes from the Latin verb imbuere, meaning “to dye, wet, or moisten.” Imbrue has been traced back through Anglo-French and Old French to the Latin verb bibere, meaning “to drink.”

Coach and Coordinator Podcast
Beat Burnout And Enhance Your Performance As A Coach - Dr. Erik Korem, Aim7


Play Episode Listen Later Nov 28, 2022 57:45


Beat Burnout and Enhance Your Performance as a Coach

Dr. Erik Korem joins us again with some eye-opening data as well as simple, practical ways we can enhance our performance as coaches while remaining physically and mentally healthy in a demanding profession.

Stress
- Stress fallacies
- Build capacity to adapt to more stress
- Stress is the gateway to growth
- Mental and physical stress is one input
- The price to be paid for stress
- 5 pillars to build more capacity

Sunlight
- One behavior that everyone should engage in
- Effects of lack of sleep
- 50 times less effective through a window; must go outside
- Sit on the porch; cold exposure in the morning has the same effect as a cold bath
- Imbue sunlight outside: 20 minutes per day, can be 5 minutes at a time
- Do it frequently throughout the day, for 2 weeks

Sleep
- Restore and regenerate
- The brain's detoxification system
- 7-9 hours
- 3 key behaviors

Caffeine
- 4 cups of coffee per day can improve longevity
- Delay it: not first thing in the morning (it spikes cortisol); wait 2 hours
- None within 6 hours of bedtime

Mental fitness
- Be consciously present and process info without bias
- Attention is the currency of performance
- The myth that the best in the world do not feel pressure
- Cyclists shift attention to feet in clips
- Rumination

Mindfulness
- Learn to control your attention; train mindfulness
- 8-week study: 8 weeks of mindfulness, 30% reduction in anxiety
- Helps you adapt to stress
- 5-10 minutes, 4-6 times per week
- 10 minutes per day can radically improve the ability to adapt to stress and executive functioning

Exercise
- Exercise training improves general stress resilience; a toughening effect to stress
- 2 types everyone needs to do: aerobic and resistance training
- Moderate: 150-300 minutes per week reduces all-cause mortality by 25% (walking won't get it done)
- Vigorous: 75-150 minutes; moderate is above 60% max heart rate
- Vigorous intervals: hit 20 seconds, rest 40-60; a 1:2 ratio of vigorous vs. moderate
- Doesn't have to be compound; upper/lower strengthening
- 45 minutes 2x per week minimum
- Aerobic with resistance reduces all-cause mortality by over 30%

Nutrition
- Anti-inflammatory diet: repair tissues and lower systemic inflammation, which causes disease
- An effective means of reducing depression; impacts testosterone significantly
- Shop the outside of the grocery store; eat a rainbow every day; nuts and seeds
- Recommends a high-quality, multi-colored fish oil to everyone for brain health and improvements in mood
- Vegan: key issue is getting adequate protein; the key amino acid is leucine; supplement with leucine or protein powder to maintain or build muscle

Community
- Kill your clone
- Assume a new identity

Related:
https://soundcloud.com/user-804678956/avoid-the-first-game-conundrum-erik-korem-former-director-of-sports-performance-houston-banners
https://soundcloud.com/user-804678956/prcatice-scripting-to-get-ready-for-week-1-erik-korem-former-director-of-performance-houston-texans
https://soundcloud.com/user-804678956/erik-korem-director-of-sports-science-houston-texans

Ayana Explains It All
Ayana Explains Why Your Thanksgiving May Look a Little Strange This Year


Play Episode Listen Later Nov 15, 2022 52:50


This Thanksgiving, some turkeys will be left behind...cause there's a shortage and they cost too damn much.

Works used in this episode:
- “The Myths of the Thanksgiving Story and the Lasting Damage they Imbue.” Claire Bugos. Smithsonian Magazine, 11/26/2019. Available at: https://www.smithsonianmag.com/history/thanksgiving-myth-and-what-we-should-be-teaching-kids-180973655/
- US Bureau of Labor Statistics, Consumer Price Index October 2022, published 11/10/2022. Available at: https://www.bls.gov/news.release/cpi.nr0.htm

Brain Dribble
Imbue the Tortilla With Your Thoughts — Brain Dribble #80


Play Episode Listen Later Oct 14, 2022 53:50


00:00 Intro / Public Toilet Seat
04:57 No Language / Hiring a Hitman
11:18 Psychology / Astrology
22:10 Banned on TikTok LIVE
27:22 Dropping the Soap / Tortillas
35:01 Operation Catch-Up
36:45 Olive Garden First Date
44:34 Bad to the Vein
48:38 YouTube "Documentaries"

---

This episode is sponsored by Anchor: The easiest way to make a podcast. https://anchor.fm/app

Support this podcast: https://anchor.fm/braindribble/support

MSU Today with Russ White
Arts and culture institutions collaborating to imbue the arts into the fabric of MSU

MSU Today with Russ White

Play Episode Listen Later Sep 15, 2022 17:12


WKAR Public Media is celebrating a century of service as AM 870 went on the air in August of 1922. Wharton Center for Performing Arts is celebrating 40 years of providing a wide array of world-class arts and entertainment for mid-Michigan and beyond. And the Eli and Edythe Broad Art Museum opened its doors 10 years ago. The three leaders of these MSU institutions join the program today. Shawn Turner is the interim director of broadcasting at MSU and general manager of WKAR Public Media. Eric Olmscheid is executive director of Wharton Center, and Steven Bridges is interim director of the Broad Art Museum. “You don't get to stick around for 100 years without doing something right,” says Turner. “WKAR went on the air on August 18 of 1922. When we originally went on the air, WKAR was about providing agricultural information to local farmers and quickly evolved to providing additional programming to the local community. If you look at what's happened over the past hundred years, WKAR has been a leader in innovation when it comes to providing news and information and entertainment to the community. We've come from providing those very direct and limited broadcasts to providing programming and education.
“Today we have one of the most popular classical radio stations in all of Michigan. And when we look to the future of WKAR, our viewers and listeners are going to see additional content that's really going to connect with this community. Our evolution has been one of responding to people in the community, responding to our listeners and our viewers, and making sure that at every turn we're doing the right things to support them and their needs.”
“Wharton Center is coming up on its 40th anniversary on the 25th of September,” says Olmscheid. “On September 25, 1982, the Chicago Symphony Orchestra opened Wharton Center with a grand affair, and it's been nonstop since then. 
It has been nonstop in the sense of that commitment to the community and to mid-Michigan and world-class performing arts and educational opportunities. The organization continues to think about what's next. We're celebrating 40 years, but we're excited about how we fit into this greater MSU 2030 Strategic Plan, the Arts Plan, and how our units collectively work more together to amplify what's happening from an arts and culture standpoint on this campus. We are continuing to evolve and thinking about how we engage and support what's happening here on campus and how we connect with the community to be a leader in education, both in university and K-12.
“It's truly just beginning, and there are so many more things ahead. As we look at developing our own strategic plan, I think of it as more of a roadmap. Where do we really want to go? And how do we want to connect with our community? People love the Wharton Center for great Broadway programming and amazing concerts, and we're home to traditional and contemporary performing arts. All of that's going to stay, but I think how we package it and how we connect to our audiences and how we get new audiences in the door is our next chapter and our next focus.”
“In the past 10 years, there's been a lot of great work, and I think we've accomplished a lot and made a lot of inroads, both in our community and as a campus leader in arts education,” says Bridges. “We've been a strong collaborator and partner to many different disciplines throughout those 10 years. We recently celebrated a major opening of a Zaha Hadid exhibition, which is the largest, most major retrospective of her design work to date. To have Zaha Hadid's design work placed within the architecture of her building is a truly unique and unparalleled experience. I'm very proud of that exhibition, and for us, it also signals an important shift for us looking forward into the future. 
“If we look back at the Broads and Hadid, they were important figures for us as an institution. Looking at the ways that they carried themselves and that they invested and provided opportunities for growth and development within their spheres of influence, there's a lot of inspiration to be taken there. Zaha Hadid famously said, ‘I think there should be no end to experimentation,' and that's something that we take wholeheartedly at the museum.”
WKAR, Wharton Center, and the Broad are all part of a comprehensive campus-wide strategy called University Arts and Collections, which supports units across campus that hold significant cultural and intellectual collections that serve the research, scholarship, and outreach missions of MSU. What is it? Why now, and what are its goals and mission?
“Let me start out by saying that I think this is a really amazing collaboration for the community,” continues Turner. “The fact that the three of us are here talking about our organizations and our collaborations and our willingness to work together, and that you have this broader collaboration that will really bring a level of intensity in the arts to this community that we've never seen before, is something that we're all very excited about. This is an opportunity for us to recognize that in the time that we've been a part of this community, we all have touched different parts of this community. We all have different audiences and different followings and different supporters, but those interests that this community has all converge at some point, and what we recognize is that that point is the arts. 
We're going to work together across the campus to make sure that these collections and these collaborations not only bring us together as organizations, but those collaborations then create new and interesting opportunities for this community to engage with the arts.”
“Michigan State is such a large organization that if we don't have the intentional connectivity, it's easy for us to all drift into our own focus,” adds Olmscheid. “We all have our own priorities and strategies that roll up into this greater university plan, which I think is critically important as far as setting direction and intention and shared goals. But if we don't have that intentionality of collaboration, it's easy for us to all be in our own lane not even focused on the greater good. I think that's great. It's really about access, and this idea that the community can come together is important as we think about our next stage and step in evolution and what we do because that's such a critical piece to our human condition. The arts are that fabric that brings us together. The weaving of the human condition is really through the arts. The arts are such a core piece of who we are, and how it's evolved in our day-to-day lives is very different today, but I think it's important to remember that.”
“These anniversary years weren't planned, but what a great moment to seize that opportunity and recognize the opportunities that lie before us,” Bridges says. “Culture isn't just something that kind of happens to us. It's something that we create, and we create it together. We all work in the service of this university, the student body, and the faculty and staff and researchers here. But we work for the greater community of mid-Michigan, Lansing and beyond.
“Moving forward, we want to create more porousness, if you will, between our organizations, but also with the communities that we serve. 
We want feedback from them directly about what they want to see from us and meet them where they are to create a greater sense of belonging and collectiveness that I think will be more important in terms of ingraining the value of arts and culture within our communities and within our lives.”
“Eric talked about access. And when we think about access over at WKAR, part of that for us is going out into the community and finding out what the community wants and what the community needs to feel supported by WKAR,” says Turner. “What is the community interested in with regards to the arts? This is a collaboration, not only between us, but between these organizations in the community. This is an interactive relationship, and so I hope that people feel as excited about this as we do because you're going to have an opportunity to shape the future of these organizations and shape the future of the arts in this community.”
“The arts have this really important place in us as human beings, and they connect us,” Olmscheid says. “It's a natural connection, a connective tissue. Here at MSU, the arts have that same kind of connective tissue across campus and across our organizations. What are our plans as we look at connecting to the research endeavor and to looking at academic connections and many other tentacles into the campus community that are beyond just the arts and cultural components? That's the piece that I think is the chapter that is yet to be written. How are we continuing to evolve in that way across the campus and really infusing the arts to be a valuable tool across every piece of MSU?”
“That resonates with the values of the museum and the University,” adds Bridges. “It has a large part to do with creating vibrant, welcoming communities and the next generation of arts leaders and stewards of culture within this country and region. 
The place of the arts as a generative force within our communities and the understanding that a creative approach to thinking and knowledge production are applicable far beyond the arts and into all disciplines. The integration of the arts across campus and into our daily lives is critical to creating exactly that kind of community.
“There's a great opportunity to always see and experience and know things differently through the arts, and I think there's a real educational value, but also an expansion of your mind and awareness, which allows you to engage with different cultures, lived experiences and perspectives. That creates more well-rounded individuals and therefore better communities and better societies.”
“We're all living at a time when there are a lot of stresses,” concludes Turner. “There's a lot going on in our environment that can make us feel anxious. And as we sit around the table here today, I think about the ability of these organizations to not only help people be well informed about their world, but to Eric's point, it's an opportunity for people to go to a place where we can let the stress go, and we can let the anxiousness go, and we can experience the arts in ways that help us all feel rejuvenated and help us all refresh and help us come back to our world with a new perspective. As I sit here with these gentlemen, and as I think about the collaborations that are to come, that excites me, especially at a time when I think that's something that we all need.”
MSU Today airs Saturdays at 5 p.m. and Sundays at 5 a.m. on WKAR News/Talk and Sundays at 8 p.m. on 760 WJR. Find “MSU Today with Russ White” on Spotify, Apple Podcasts, and wherever you get your shows.

3AW Breakfast with Ross and John
Emilia reviews: Imbue — 'underrated and understated'

3AW Breakfast with Ross and John

Play Episode Listen Later Jul 21, 2022 4:12


Emilia says this restaurant made her proud to be a Melburnian. See omnystudio.com/listener for privacy information.

Crypto Unplugged
Crypto Unplugged Special with Sam Elamin of Imbue Network - PART 2

Crypto Unplugged

Play Episode Listen Later May 19, 2022 50:40


In the second of this two-part Crypto Unplugged Special, Sam Elamin, Founder of Imbue Network, talks to Doc and Oz about the current problems community members face with some crypto projects and issues related to venture capitalists. He discusses Imbue Network, decentralised crowdfunding, and the role of DAOs. Doc asks both Sam and Oz to make some difficult choices in an interesting game towards the end of the episode. Date of podcast recording: Wednesday 16th February 2022
About Imbue Network: Imbue Network is a decentralised crowdfunding DAO built on top of the Polkadot blockchain platform. It is an idea incubator open to the entire world that allows anyone from any walk of life and for any endeavour to submit and vote on ideas worth funding from the communities that believe in them the most. Anyone submitting a proposal is called an initiator, and anyone contributing funds is called a contributor. Imbue brings real-world impact to people's lives, dismantling the popular criticism that web 3.0 lacks utility. By leveraging the power of the Polkadot ecosystem and the Substrate Framework, anyone can initiate a proposal along with how much funding they feel they need. 
Initiators then define deliverables as milestones and split up the total amount of funding between their milestones (e.g., 10% for the first milestone, 30% for the second milestone, 40% for the third milestone, etc.). https://www.imbue.network/
Twitter: Imbue Network - @ImbueNetwork | Sam Elamin - @samelamin
Crypto Unplugged Social Media
Twitter: Doc - @DrCrypto47 | Oz - @AskCryptoWealth | Crypto Unplugged - @crypto_unplugd
Crypto Unplugged YouTube Channel: https://www.youtube.com/channel/UCiNxD56lZUk8XpCgy8h-XGA
Crypto Unplugged on Instagram: https://www.instagram.com/crypto_unplugged/
Crypto Unplugged Telegram Community Channel: https://t.me/cryptounplugged
Subsocial Network: Crypto Unplugged - https://app.subsocial.network/5191 | Doc - https://app.subsocial.network/5180
Pinterest: https://www.pinterest.co.uk/cryptounpluggeduk
Linktree: https://linktr.ee/cryptounplugged
For crypto and Bitcoin articles on technical and fundamental analysis, project reviews on altcoins, and more, visit the Crypto Unplugged Website: https://cryptounplugged.co.uk
Show your support by leaving a review: https://lovethepodcast.com/cryptounplugged
Try Audible free for 30 days! You can listen to your favorite Crypto and Bitcoin audiobook free on Audible for 30 days!
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

The Long Island Sound
How to Imbue Creative Steadfast Joyful Tenacity by the Somehow Sorry Band

The Long Island Sound

Play Episode Play 30 sec Highlight Listen Later Apr 21, 2022 62:18


How to Imbue Creative Steadfast Joyful Tenacity by the Somehow Sorry Band
Somehow Sorry is an American rock band formed in Long Island, NY in 2006. Their lineup consists of singer and guitarist Lorraine McCarthy, guitarist and singer John McCarthy, bass guitarist and singer Paul Maugeri, and drummer James Bosko. Maugeri and Bosko's influential playing styles round out a solid rhythm section for Somehow Sorry, while John McCarthy's guitar technique has created and developed their unique sound. John and Rain's voices, when combined, put the signature on this project. There is no band or artist who sounds like Somehow Sorry. They are influenced by many rock and folk-rock bands, and their songs receive regular exposure. Somehow Sorry developed from an earlier group, D'Vine Ryte, who established themselves as part of the hard rock era of the late '80s and '90s and from 1994 to 1994 were signed to the Road Runner Records label. John & Lorraine McCarthy's contributions to rock music include the song "When" (1994), which sold over 250,000 copies in the USA under the band D'Vine Ryte. "Pear," a demo EP, was recorded in 1977 and released under Modern Voices Records with studio drummer Chris Pati and studio bassist Christopher Warren. "Fit To Be Tied," a full-length album recorded and released in 2007 under Guru Project, is the first record recorded with the rhythm section of Maugeri and Bosko. "Same Great Taste," a full-length album, followed in 2021, released under Rain Arts and Entertainment. Basic tracks were recorded at EKO Productions, Deer Park, NY, with engineers Steve Porcelli and Jack Walker. Produced and engineered at Rain Arts and Entertainment. Mixed at Chrome Orange Music Media and Rain Arts and Entertainment; mastered at Major Decibel and Chrome Orange Music Media. It features nine original songs and one re-make. Festival appearances at Woman's Rights to Rock, Lilith Fair, Woodstock Revival, and the CCAR Recovery Fest established their reputation as a respected rock act. 
They have also been supporting artists for many national rock acts including Warrant, Huey Lewis, Nino Betancourt, and Missing Persons.
Connect with The Long Island Sound Podcast: Website: https://GigDestiny.com/podcast
Follow Steve Yusko, GigDestiny.com, and his adventures: Website: https://www.GigDestiny.com (Twitter, Instagram, YouTube, Facebook)
Spotify: https://open.spotify.com/show/21aCeQWDmD4fkucpfVf9
Email: Steve@GigDestiny.com
Intro/Outro song in this episode: “Fading out Fast” from Mike Nugent's album, Mike Nugent and the Blue Moon Band
The growth of The Long Island Sound Podcast has been exponential. Help us grow the show!
Subscribe to the GigDestiny.com Site here for bonus content
Subscribe to our YouTube Channel
Call the Listener Line & leave your comments: (631) 800-3579
Remember to Rate & Review the show! Help us keep the conversation going with your donation - Click Right Here or go to GigDestiny.com
Buzzsprout - Let's get your podcast launched! Start for FREE

Judaism From Within
Rav Hirsch Mitzvah #28 Pesach - Haggadah Imbue The Spirit & Flood Their Minds

Judaism From Within

Play Episode Listen Later Apr 11, 2022 8:51


Rav Hirsch Mitzvah #28 Pesach - Haggadah Imbue The Spirit & Flood Their Minds

Crypto Unplugged
Crypto Unplugged Special with Sam Elamin of Imbue Network - PART 1

Crypto Unplugged

Play Episode Listen Later Apr 7, 2022 43:48


In the first of this two-part Crypto Unplugged Special, Sam Elamin, Founder of Imbue Network, talks to Doc and Oz about the current problems community members face with some crypto projects and issues related to venture capitalists. He discusses Imbue Network, decentralised crowdfunding and the role of DAOs. Part 2 of this special will be released at a later date. Date of podcast recording: Wednesday 16th February 2022
About Imbue Network: Imbue Network is a decentralised crowdfunding DAO built on top of the Polkadot blockchain platform. It is an idea incubator open to the entire world that allows anyone from any walk of life and for any endeavour to submit and vote on ideas worth funding from the communities that believe in them the most. Anyone submitting a proposal is called an initiator, and anyone contributing funds is called a contributor. Imbue brings real-world impact to people's lives, dismantling the popular criticism that web 3.0 lacks utility. By leveraging the power of the Polkadot ecosystem and the Substrate Framework, anyone can initiate a proposal along with how much funding they feel they need. 
Initiators then define deliverables as milestones and split up the total amount of funding between their milestones (e.g., 10% for the first milestone, 30% for the second milestone, 40% for the third milestone, etc.). https://www.imbue.network/
Twitter: Imbue Network - @ImbueNetwork | Sam Elamin - @samelamin
Crypto Unplugged Social Media
Twitter: Doc - @DrCrypto47 | Oz - @AskCryptoWealth | Crypto Unplugged - @crypto_unplugd
Crypto Unplugged YouTube Channel: https://www.youtube.com/channel/UCiNxD56lZUk8XpCgy8h-XGA
Crypto Unplugged on Instagram: https://www.instagram.com/crypto_unplugged/
Crypto Unplugged Telegram Community Channel: https://t.me/cryptounplugged
Subsocial Network: Crypto Unplugged - https://app.subsocial.network/5191 | Doc - https://app.subsocial.network/5180
Pinterest: https://www.pinterest.co.uk/cryptounpluggeduk
Linktree: https://linktr.ee/cryptounplugged
For crypto and Bitcoin articles on technical and fundamental analysis, project reviews on altcoins, and more, visit the Crypto Unplugged Website: https://cryptounplugged.co.uk
Start your Crypto Journey with Ledger: Ledger's the smartest way to secure, buy, exchange, and grow your crypto assets.
Buzzsprout - Let's get your podcast launched! Start for FREE
Try Audible free for 30 days! You can listen to your favorite Crypto and Bitcoin audiobook free on Audible for 30 days!
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

The Rebbe’s advice
2419 – The nature of Yeshiva students is to look for explanations not just to follow orders

The Rebbe’s advice

Play Episode Listen Later Jan 19, 2022 9:31


Imbue the students with yiras shamayim by teaching them Chasidus. https://www.torahrecordings.com/rebbe/igroskodesh/008/005/2419

The Power Chord Hour Podcast
Ep 87 - Best of and Most Anticipated with Joey Cobra - Power Chord Hour Podcast

The Power Chord Hour Podcast

Play Episode Listen Later Jan 9, 2022 141:01


Joey Cobra makes his fourth appearance on PCH as our first guest of the year. We talk:
- Joey's Top 5 of 2021
- Our most anticipated albums of 2022
- Releasing too many singles before an album
- If bands should change their names when they change their sound
- Why playing in a trio makes you up your game
- The future of live shows & more
JOEY COBRA LINKS
https://joeycobra.bandcamp.com
https://linktr.ee/josephgriceart
https://www.instagram.com/joey_cobra_music
https://www.instagram.com/joseph_grice_art
https://www.facebook.com/JoeyCobra
https://www.facebook.com/JosephGriceArt
Check out the Power Chord Hour radio show every Friday night at 10 est on 107.9 WRFA in Jamestown, NY, stream the station online at wrfalp.com/streaming/ or listen on the WRFA mobile app
Email me for FREE Power Chord Hour stickers - powerchordhour@gmail.com
Facebook - www.facebook.com/powerchordhour
Instagram - www.instagram.com/powerchordhour/
Twitter - www.twitter.com/powerchordhour/
Youtube - www.youtube.com/channel/UC6jTfzjB3-mzmWM-51c8Lgg
Spotify - https://open.spotify.com/user/kzavhk5ghelpnthfby9o41gnr?si=4WvOdgAmSsKoswf_HTh_Mg

The Pat Thurston Show Podcast
November 30, 2021: Pat Thurston: The Myths of the Thanksgiving Story and the Lasting Damage They Imbue

The Pat Thurston Show Podcast

Play Episode Listen Later Nov 30, 2021 33:35


Historian Kenneth Davis joins the show. See omnystudio.com/listener for privacy information.

The Resilient Minds Podcast
99. How To Imbue Your Soul With Joy with Sam Gibbs

The Resilient Minds Podcast

Play Episode Listen Later Aug 23, 2021 54:44


As we surround ourselves with ambition, there is a danger of being overcome by the warrior inside us all. Some people like to celebrate that kind of mindset, but where will it lead us in the end? If we frame our efforts around the idea of struggle, of domination and power, even challenges we have already overcome can become a burden. If we define ourselves as a survivor, as someone who is locked in a struggle against life, then we will be forever focused on what is coming next, always reacting, and never able to live freely in the moment. Survival mode is no way to live a life of abundance! Sam Gibbs knows the power a frame of mind can have over our life better than most. After spending years trying to 'fix' things that were wrong with him, he realized that beneath all that was an inherent admission that he was somehow 'broken' or 'less than' others. That kind of thinking cannot nourish the love that we have in our hearts, for ourselves or for other people, and in a world where our greatest legacy can only be love, it's our duty to amplify it however we can. We don't need to be perfect to be happy, we're born to be joyful. We don't need to earn happiness and joy, they're our birthrights. We are not survivors, but thrivers! So soften that warrior's soul and put an end to competition. Start collaborating, start building something together. Start helping other people to help everybody else. Our creativity will shift the world. Be always beginning, always learning. Be curious. These are some of the tips that Sam Gibbs leaves us in this episode of The Resilient Minds Podcast. Sam Gibbs is the founder of Surrendr Retreats. He's a guide and healer at The Full F*ck Yes Frequency, a thought leader, and a TEDx Speaker. His mission is to show up for men, until they can show up for themselves. "Raise your vibration." 
- Sam Gibbs Connect with Sam Gibbs: YouTube: https://www.youtube.com/channel/UCETyO38LJVrtWpTG1awYooA Instagram: https://www.instagram.com/samgibbsmorris/ Facebook: https://www.facebook.com/samgibbsmorris/ LinkedIn: https://www.linkedin.com/in/sam-gibbs-morris-speaker-thought-leader-mens-work/ Website: https://sociatap.com/samgibbsmorris/ -- Hit me up on social media and say hi! Youtube: https://bit.ly/35nJ0uV Podcast: http://new.ericbalance.com/podcast/ Instagram: https://www.instagram.com/ericbalance/ Facebook: https://www.facebook.com/ericbalancecoaching Website: https://www.ericbalance.com/

Mess It Up Podcast
Mess it Up Show 172 - Imbue

Mess It Up Podcast

Play Episode Listen Later Jul 27, 2021 51:51


The week after CR Summit, the Bow Tie Guy is joined by Kevin to talk about his experience at his first CR Summit. Listen to find out what the Mac Owen Challenge is.

The Rebbe’s advice
2114 – It is wonderful that you hosted students for the Sedarim and Purim to imbue them with a love for our traditions.

The Rebbe’s advice

Play Episode Listen Later Jul 25, 2021 7:59


Don't put all the work on your wife; share financial responsibilities with others as well. This will keep others more involved. Hebrew text: https://www.torahrecordings.com/rebbe/igroskodesh/007/008/2114

Awakening Aphrodite
39. Ten Minutes of Positive Affirmations for Women To Unleash Their Feminine Superpowers!

Awakening Aphrodite

Play Episode Listen Later Apr 20, 2021 10:31


In this week's episode, I share a passage from one of my all-time favorite books - Evolution of Goddess: A Modern Girl's Guide to Activating Your Feminine Superpowers. If any of these affirmations speak to you, I recommend writing them down and memorizing them! Incorporate them into your life by bringing them into your daily awareness or creating art or drawing inspiration from them. Episode Time Stamps: 00:00 Introduction 02:31 Mantras Here is more information on the book: Evolution of Goddess: Evolution of Goddess is a practical introduction to the goddess realm, digging up the histories of long-forgotten myths of goddesses of love, war, death, the sun, the moon, and more. With this clear-eyed and spirited book, you can finally become familiarized with goddesses from a wide range of cultures throughout history, including the mermaids of the Atlantic, the empresses of ancient Egypt, the wise women of the Middle Ages, right up to the modern-day goddesses who walk amongst us today as humble light workers, educating and inspiring. Through a goddess assessment, you'll uncover your own goddess archetype and be given rituals, meditations, and exercises to tap and embolden your own feminine superpowers. Imbue your life with healing, invigorating goddess energy, and discover ways to harness your new empowerment to improve the world. Now is the time to reconnect with the strength and holistic spirituality of our ancestors—to trace the evolution of the Goddess. Subscribe to FitAmyTV on YouTube: https://www.youtube.com/FitAmyTV Thank you for listening to Awakening Aphrodite! Can't wait to share more with you next week!


Contact Chai with Rabbi Lizzi
Shabbat Replay: How Can We Imbue Our Days with Upkeep Magic? ft. Marni Loffman

Contact Chai with Rabbi Lizzi

Play Episode Play 49 sec Highlight Listen Later Apr 12, 2021 18:46


This week, we hear the live recording of Marni Loffman's drash. As a Challah Back Girls co-founder, TikTok sensation (@singing_jewess), Wesleyan graduate, and all-around great human, Marni offers insight on patience, counting the Omer, and our desire for artistic inspiration and magical moments over the mundane logistics of everyday life. To tune into our full service from Friday, April 9th, click here. Check out upcoming Shabbat services and programs here. Learn more about Mishkan Chicago. Follow us on Instagram and like us on Facebook. Be sure to like and subscribe to our podcast for updates on new episodes! And please leave a review. We want to hear from you. Produced by Mishkan Chicago. Music composed and written by Kalman Strauss.

Writer Unleashed
Four Ways to Write Effective Flashbacks

Writer Unleashed

Play Episode Play 51 sec Highlight Listen Later Sep 2, 2020 13:15


A flashback moves readers back in time. It interrupts the flow of the story to catch up on what happened before page 1. But when writers overuse flashbacks, readers grow impatient. They lose interest in, or even forget, what’s happening in the “now” of the story. Yes, we should explore our story’s past. But at some point in revision, we need to rein the past in. We need to select and shape only those flashback episodes that are crucial to understanding the story’s present. It all boils down to two things: relevance and resonance. On today’s episode of Writer Unleashed, we’ll explore how to use flashbacks to: deepen our understanding of your character’s desires and motivations, lend later scenes more dramatic power, add nuance and texture to your story, imbue the story’s central conflict, and bear the story’s core meaning. Notable Book: Revolutionary Road. For more writing resources, visit me at NanciPanuccio.com Support the show (https://www.buymeacoffee.com/writerunleashed)

The Moms I Know Podcast
What You Need to Know About Contemporary Waldorf Education | Interview with Susan Goldstein | Episode 88

The Moms I Know Podcast

Play Episode Listen Later Aug 10, 2020 32:00


Since its inception in 1919, Waldorf Education has been embraced world-wide. In this interview with Susan Goldstein, we take a look into the past, present, and future of Waldorf Education and discuss its role in the face of current events. "Imbue thyself with the power of imagination. Have courage for the truth. Sharpen thy feeling for responsibility of soul." - Rudolf Steiner The Waldorf model of education is a timeless and time-tested schooling method that is gaining more and more attention and accolades world-wide, but do you know how it began and how it's been evolving in modern times? For this week's episode, we had the privilege of sitting down with Susan Goldstein, a seasoned Waldorf educator and mentor, to discuss the most commonly raised questions about the validity of Waldorf education. Plus, if you're wondering how Waldorf is adapting to modern times, current events, advances in technology, and racial disparities, in this episode we address all of this and more! Resources From This Episode: Susan Goldstein: susgoldstein@yahoo.com Teaching as a Lively Art by Marjorie Spock: https://rudolfsteinerbookstore.com/product/teaching-lively-art/ You Are Your Child's First Teacher by Rahima Baldwin Dancy: https://www.penguinrandomhouse.com/books/216341/you-are-your-childs-first-teacher-third-edition-by-rahima-baldwin-dancy/ Conscious Parenting: A Guide to Living with Young Children by Stephen Spitalny: https://www.amazon.com/Conscious-Parenting-Children-Stephen-Spitalny/dp/B01B9A3PCE The Five Golden Keys by Helle Heckmann: https://store.waldorfearlychildhood.org/products/five-golden-keys School as a Journey: The Eight-Year Odyssey of a Waldorf Teacher and His Class by Torin M. Finser: https://steinerbooks.org/ebooks/school-as-a-journey-the-eight-year-odyssey-of-a-waldorf-teacher-and-his-class Frankfurt Memorandum: Rudolf Steiner and the subject of racism by Ramon Brüll and Dr. 
Jens Heisterkamp: https://www.themomsiknow.com/s/Frankfurt_Memorandum_English.pdf   Subscribe Now: iTunes | Spotify | Stitcher | Google Play   --- About Susan Goldstein Susan was born on February 8, 1949 in Philadelphia, Pa., the youngest of three children.  She attended public schools from K-12th grade.  When asked as a child what she wanted to be when she grew up, she would have said without hesitation, “a teacher”.  To Susan, it was clear that this was always going to be her path. She graduated from Penn State University with a degree in Education and began teaching in the Philadelphia Public schools.  She taught for several years, but always felt that something was wrong with what she was doing...somehow, something was missing. After the birth of her own two children, she taught in a private nursery school for several years where she began to see the importance of play and developing the imagination. When her own children were about to begin school, her family moved to Colorado, and in this rural community in Western Colorado, there was a Waldorf School.  A visit to this school was Susan's first taste of Waldorf education and immediately she knew that this was the education she had been searching for, and she began teaching in this little Waldorf school before even completing her Waldorf Teacher Training.  After two years of teaching a combined class of 4th/5th grade and then 5th/6th grade she enrolled at Rudolf Steiner College in Sacramento.  She completed her student-teaching at the Santa Cruz Waldorf School and fell in love with both the school and the community.  She then taught three full class cycles, grades 1-8, in Santa Cruz. Since then, she has also worked as the Assistant Director of the Bay Area Center for Waldorf Teacher Training and the Pedagogical Director of the Santa Cruz Waldorf School.  She has taught many courses over the years at BACWTT and also at Rudolf Steiner College's Public School Institute.  
She has completed the Waldorf Remedial Education training and the Waldorf Mentoring training, and has had the privilege of traveling to many of the Waldorf schools throughout the United States, mentoring and evaluating Waldorf teachers.   Subscribe Now: iTunes | Spotify | Stitcher | Google Play

2 Queens in a Pod
72. Imbue Curls featuring Michelle Sultan

2 Queens in a Pod

Play Episode Listen Later Aug 4, 2020 45:20


This week's episode we are joined by Michelle Sultan, the brand ambassador and creative director for Imbue Curls. Imbue Curls Follow us: Instagram @2queensinapod, Twitter @2queensinapod_. Personal Instagram: @imanleila & @itsleahmai

Lorekeepers - A Worldbuilding Podcast
Ep. 2.33 - The Weapon is the Wielder

Lorekeepers - A Worldbuilding Podcast

Play Episode Listen Later Jul 7, 2020 111:28


“Never forget what you are; the rest of the world will not. Wear it like armour and it can never be used to hurt you.” - George R.R. Martin When a warrior spends a lifetime with a weapon, it's bound to take on certain aspects of their character. It reflects who they are. When an Oruhnian spends their life by a weapon, the imbuing is literal. This week, we investigate the implications of an imbued weapon. How does it change the character of the weapon? Of the imbuer? Is the bond forced? Is it like marriage, or a friend you've always known? Or something else? Curious about something you heard in this episode? Chances are you can find out more about it in the Record of the Lorekeeper! ——— Want to learn more about Halûme? Got some ideas of your own? Join the conversation at www.reddit.com/r/thelorekeepers or by visiting our homepage at thelorekeepers.com and clicking on "Canon". Note that it may not render properly on your browser. If so, try using Chrome. Questions or ideas? Visit r/thelorekeepers! Website: thelorekeepers.com Email: lorekeeperspodcast@gmail.com Twitter: @thelorekeepers

imbue
0 • imbue intro with Kevin Janu & Alex Carrabre

imbue

Play Episode Listen Later Apr 3, 2020 65:19


Join us as we chat through Kevin's & Alex's journeys through fitness, from the soccer fields at 4 years old to playing division sports. Follow Kevin on Instagram @kevin_janu & Alex @carrabre. Follow imbue on any social media @imbueworld.

imbue
imbue (Trailer)

imbue

Play Episode Listen Later Apr 2, 2020 0:43


The Twin Cities Wellness Collective™ Podcast
#080: Imbue: The New Frontier of Gym Memberships with Alex Carrabre

The Twin Cities Wellness Collective™ Podcast

Play Episode Listen Later Feb 10, 2020 32:04


Alex and I discuss Imbue, a startup seeking to revolutionize the fitness industry by offering a membership to multiple gyms for one flat monthly fee. Alex's Bio: Alex is a former division one athlete turned entrepreneur. He likes to say his first startup was his running career - starting his sophomore year, he went from an average high school runner to being one of the top runners in the country his senior season, competing nationally at USA Juniors - earning him a scholarship at the University of Kentucky. After one year, he left university for the untrodden path. He worked at a marketing agency for a while, then went to work with a sportswear brand based in Europe, helping them enter the market with brand deals with the USA Ski & Snowboarding Team as well as large retailers like REI. While working abroad in Poland, the seeds for imbue were sown. Liked the podcast? Have any ideas for imbue? Know any gym owners or potential members? Shoot Alex an email at carrabre@imbue.world. Website: imbue.world Download the App: app.imbue.world Are you a gym owner? Download the gym portal application & get your gym on imbue! gymportal.imbue.world RSVP for the Upcoming Twin Cities Wellness Collective™ Event: https://tcwc_whyselfcareisntselfish.eventbrite.com

Individual 1 podcast
Trump Finally Gets Impeached

Individual 1 podcast

Play Episode Listen Later Dec 18, 2019 53:05


John evaluates the impeachment of President Trump and sets the scene for the coming trial in the Senate. He then breaks down the political realities of impeachment and the 2020 election.

RADIO X CHRONIQUES & ENTREVUES
Jacques Brassard's Year in Review: Dorion Is the WORST… Full of Herself and PITIFUL!

RADIO X CHRONIQUES & ENTREVUES

Play Episode Listen Later Dec 10, 2019 14:50


It's time for the year-end review on Maurais Live! The commentators weigh in on the tops and flops of the past year!

Health Matters with The Medicine Center Pharmacy
CBD Oil Questions Answered with Chris Jurist from Imbue Botanicals

Health Matters with The Medicine Center Pharmacy

Play Episode Listen Later Nov 9, 2019 50:47


As you drive around town you will probably see signs advertising CBD Oil everywhere. From gas stations to video stores, how are you supposed to know if CBD oil is right for you? How are you supposed to separate the low-quality brands from the reputable? Your Medicine Center Pharmacist is the most accessible health care professional to answer your questions. Today we are excited to have Chris Jurist visiting to help us learn more about CBD Oil and its place

Acts Teens Live
Holy Spirit - Pr Andy Yeoh

Acts Teens Live

Play Episode Listen Later Nov 2, 2019 29:02


The truth and person of the Holy Spirit is introduced and the impact He can bring to our lives is studied. Are you ready to know and activate the ultimate Helper in your life?

Reiki Lifestyle® Podcast
Article: 5 Steps to Clear and Imbue Objects

Reiki Lifestyle® Podcast

Play Episode Listen Later Oct 14, 2019 37:43


Have you ever wanted to know how to clear unwanted energy from your crystals or other sacred objects? How about imbuing them with intentions, hopes, dreams, or goals? Colleen reads her and Robyn Benelli's article from the summer 2018 issue of Reiki News Magazine, "Imbue your Sacred Objects with Reiki," on how to clear objects and also imbue them with Reiki and intentions. She also makes commentary along the way. This is a great thing to learn for your Reiki practice and also for practical purposes, such as imbuing your gifts for the holidays. For the full article click here: https://reikilifestyle.com/imbue-your-sacred-objects-with-reiki/ They also have an upcoming webinar, "Imbuing sacred objects," on Nov 16th. Click here for more information: https://reikilifestyle.com/webinars/ Contact Robyn and Colleen Benelli at: reikilifestyle.com colleen@reikilifestyle.com robyn@reikilifestyle.com Facebook: @reikilifestyle Instagram: @colleenbenelli

#amtheyaysay
Imbue

#amtheyaysay

Play Episode Listen Later Feb 25, 2019 0:44


Imbue --- Support this podcast: https://anchor.fm/mike-madigan/support

Gin Gals
07 Imbue Distillery

Gin Gals

Play Episode Listen Later Dec 21, 2018 50:24


In this episode we talk to Melanie & Mick Sheard. We discuss dogs, kids, working with your partner, and battles with the council.
The interview was recorded on 29 September 2018. @imbuedistiller imbuedistillery.com/ Theme music by Kate Bart @katebartmusic 
Cover art by Jess @jess_dubblu

Living OUT Podcast
How Gay Men Imbue Culture With Beauty and Creativity – LOP042

Living OUT Podcast

Play Episode Listen Later Dec 12, 2018 25:05


As a gay man, when I tune into beauty I connect with the source that creates life. From that source comes my inspiration, that quality which allows me to create and bring value into the world. Following in the steps of the exploration I began in Is Forgiveness of Homophobia a Gay Male Gift? LOP030, today I discuss (in very sensual terms) the qualities and value that gay men bring to the betterment of society and culture. According to Raymond L. Rigoglioso in Gay Men and The New Way Forward, one of gay men’s 14 distinct gifts is a “fine attunement to beauty”, specifically that gay men are “creators and keepers of culture.” Appreciation of beauty is a highly sensual experience. At the extreme, intense love-making can be one of the most sensual experiences we will ever have. Every one of our senses is activated to the max: sight, hearing, smell, taste, and sensation (or feeling). This overwhelm of the senses takes place outside of time and place, as every moment is felt in the moment and we are lost in sensation. For many gay men, sex has been a way to define their identity – specifically having sex with another man. That sex may have been kept secret – the love that dare not speak its name – if that gay man wasn’t out, or was uncomfortable with his identity. This leads me to pose the question, “Where does this fine attunement to beauty, this need or desire to creatively express beauty in all its forms, come from for many gay men?” The answer presents in four parts, with each part woven into the next: Sex (as part of sensuality), In-Sight, Identity, and Self-Love (acceptance). As gay men we invite humanity to let go of ego and ideologies and experience the sensuality of beauty, to recognize that beauty is natural – that beauty is found in the natural world, and that humans come from nature. Thus as gay men we express creatively what is only natural; what is truly normal. A culture without gay men would be dry, dull, boring, lifeless, and devoid of complementary colours! 
Enjoy the episode and prepare yourself for my discussion of sex and sensuality – baby, it gets a little HOT! :-)

iCreateDaily Podcast
Life Alchemy ~ Audio Article

iCreateDaily Podcast

Play Episode Listen Later Oct 5, 2018


Dreams only become tangible when we work towards bringing them to life. Just like holistic healers use the alchemy of herbs to help heal, we use life alchemy to bring our dreams to fruition. Audio Article Topics: What is alchemy? Give alchemy a home. Metamorphosis. Dreams fuel purpose. Construct your life. Imperfection. Alchemy of vision. Read the full article with inspiring quotes here: https://www.icreatedaily.com/life-alchemy/ Your life is the alchemy of your dreams… and your efforts. Mind to matter… ether to earth. Dreams without effort, evaporate. Effort without dreams is just work. Imbue your effort with dreams… and your dreams with effort, and your life will be transformed. ~LeAura Alderson, iCreateDaily.com

What We're Tasting
1:5 Why Vermouth Demands and Deserves Respect

What We're Tasting

Play Episode Listen Later Jul 23, 2018 21:03


Vermouth is having a revival and getting the respect it deserves. In this episode we speak with Kara Newman, Wine Enthusiast's spirits editor. Find out why it belongs as a featured ingredient in your home bar, the diversity of styles and flavors available, and tips on mixing it up. Vermouths Discussed:  @3:00 Routin Dry Vermouth  @7:30 Lustau Vermut Blanco  @15:12 Imbue Sweet Vermouth  Transcript Jameson Fink: Welcome to Wine Enthusiast's What We're Tasting Podcast. I'm your host, Jameson Fink. Join me as we discuss three fantastic wines and why each one belongs in your glass. This episode I'm talking about vermouth with contributing editor, Kara Newman. Kara covers spirits for Wine Enthusiast. What We're Tasting is sponsored by Vivino. With the largest online inventory, Vivino finds the right wine every time, and it's also got vermouth, which is a wine too. Download Vivino to discover and buy your favorites, and stock up at Vivino.com/wineenthusiast. So, I was recently at a bar, not surprising, and I was thinking about vermouth because the person I was with ordered a martini, and the bartender made a big show of pouring a cap full of vermouth, and putting the cap full of vermouth into the glass, swirling it, and then dumping it out, and just said, "This is the most important step in making a martini." So, I wanted to talk to you, Kara, and welcome to the show, about vermouth because I feel like it's still even in this day in age, it's underappreciated, and people aren't enjoying it as much as they should. They're just dumping it out, and that was a criminal, that was a traumatizing moment. So actually what I want to ask you, Kara, is how do you like vermouth in your martini? What's your play there? Kara Newman: Well, my go-to is actually a 50/50, so that means equal parts gin and vermouth, and that's actually a lot of vermouth. That's a pretty wet martini. Although, I just like to have it in the martini at all. It's funny that happened to you. 
The same thing happened to me in Rome. I was appalled to order a martini, and they poured in the dry vermouth and made a big show of shaking it, and then pouring it all out. I was like, "Oh my god, what are you doing? Are you crazy?" Jameson Fink: It's such a waste. I do like the 50 ... another great thing about an equal parts 50/50 martini too is that you can have a lot more of them, and that's another thing that's nice about vermouth as more of a starring role. And then you've got sort of like the ultimate expression of that, which would be the reverse martini, which would be- Kara Newman: Right, that was Julia Child's play. Jameson Fink: Oh really? Kara Newman: Yeah, I think she was the first person I ever heard of doing a reverse martini, yeah where lots more vermouth and just a splash of gin. Very civilized. Jameson Fink: Yeah, and that's a good drink to have while you're in the kitchen cooking too. Kara Newman: You know it. She would know it. Jameson Fink: She would know it, she would know it. Kara Newman: If Julia says- Jameson Fink: Yeah. And also the thing with vermouth is that we're seeing kind of an explosion of small batch crafted type of vermouths from all over the country and all over the world, and I think we have so many more available to us now, and also with different flavors and types. So, the first wine I wanted to talk to you about, and vermouth is a wine, it's just fortified- Kara Newman: Correct. Fortified, aromatized, correct. Jameson Fink: Aromatized and fortified. God, that sounds so cool. It's a French vermouth. It's the Routin dry vermouth, 91 points, best buy, and what are people doing with vermouth in France? I mean, I don't even know what's the tradition of vermouth there? Are there certain ingredients that they use that's kind of like a signature? Or is it just kind of it's anything goes, whatever you want to use? 
Kara Newman: Well, traditionally you only heard about French vermouth or Italian vermouth and there were no other vermouths out there in the universe for years and years and years. And recently we've had more of an explosion where we've seen vermouth, as you said, from all over the world. But the Routin, the one that you mentioned, that one's more of an alpine vermouth and it has more botanicals, more of those beautiful herbs and flowers, and they even have bitter almonds listed in their botanical list. They really have this beautiful alpine sensibility. Jameson Fink: Now is it rare to ... I think like a lot of those things it would be like a closely guarded secret- Kara Newman: Oh, you know it. Jameson Fink: ... do you see, like obviously there's some things that they're not listing, but do you find more people are just like, "Hey, we're gonna let you know what some of the flavorings we use to make this vermouth." Kara Newman: Every now and then you see ... You're absolutely right, it's definitely held close to the vest. I mean, sometimes I think it's because it's a secret, sometimes I think it's because they change it pretty frequently, and it might be based on what's available. But I'm not sure that there's really a ... I'm trying to think if there's anyone who's really giving their full list of botanicals. Usually you just see a number if they talk about it at all. Jameson Fink: Right, like the secret herbs and spices. Kara Newman: Exactly. Very KFC. Jameson Fink: Yeah. And then this vermouth is a dry vermouth, and you mentioned in your review that it's martini material, so what is ... I mean, there's different kinds of vermouth, but so if I'm shopping, do I want to look for like, okay, I'm making martinis, I want a dry vermouth? Kara Newman: Well, for martinis, I would usually go for a white vermouth as opposed to a red vermouth. I think dry vermouth is lovely in a martini and can be very crisp. It goes really well with gin. 
I'm also a fan of Blanc vermouth, which are a little more oxidized. They have a bit more of like a honey note, and there certainly are a growing number of Blancs and Blancos out there. But yeah, dry would probably be my go-to for that perfect classic martini profile. Jameson Fink: And what about too, we've talked a little bit about oh vermouth, you mix it, you put it as ingredients in things, what about drinking vermouth solo, like just on the rocks with a twist? Is that something that's becoming more popular or do people still look at vermouth as like, oh vermouth is just, it's an ingredient, it doesn't stand on its own? Kara Newman: I'm seeing a lot of vermouth and tonics. Jameson Fink: Oh okay. Kara Newman: Yeah, that's sort of a Spanish tradition, and every now and then I'll see a vermouth tonic. That's very refreshing. Vermouth, tonic, a nice curl of citrus peel. Oh, it can be so good. A little tapas on the side- Jameson Fink: And then that's kind of too with this trend of ... which is great about vermouth, it's got so much flavor, but it doesn't pack the punch alcohol-wise that vodka or gin or something like that would too. Is that also maybe helping revitalize vermouth that people are trying to make these more kind of culinary cocktails or things that are ... you can have a few more of them rather than just one giant stiff martini that's 100% vodka? Kara Newman: Well, we are definitely seeing a trend toward lower alcohol cocktails, what people call session cocktails. You can hang out and have them over a session. And vermouth forward cocktails are definitely a huge part of that. The Bamboo, the Adonis, those are two cocktails that are literally nothing but vermouth, like two different kinds of vermouth. Vermouth, sherry, all kinds of lower alcohol cocktails are definitely on the forefront right now. Jameson Fink: Mm-hmm (affirmative). Yeah, no doubt. 
And then actually, this is great because we're not just talking about vermouth, we're drinking some vermouth, and the second wine I found really interesting because I know Lustau is a great sherry producer, and I was really excited to see that they now have a vermouth, at least it's new to me, and this is the Lustau Vermouth Blanco, 94 points, and it's a sherry-based vermouth made from fino and sweetened with Muscatel wine. It's really good. And is this more of that oxidized style that you were just talking about? Kara Newman: Yeah, this one's definitely Blanco. This is actually two of my favorite trends of vermouth right now. Jameson Fink: Uh-huh, in one bottle. Kara Newman: In one bottle. Because I mean, I love the Blancos, and those I will drink straight up. Just a little ice is really all I need. But there's also a trend toward ... trendlet, toward more sherry-based vermouths. There are I think three or four on the market right now, and this one, Lustau was actually the first one out to my knowledge ... and it's so good. Jameson Fink: Yeah, it's really delicious. I mean, it's really ... I mean, you can smell sort of the beautiful grapes, but then it's got that kind of oxidized character too. I mean, it's really good. It's just good. I mean, we're just drinking this on its own and it's pretty damn good. Kara Newman: No, it's nice. I mean, it's got that honey, it has floral characteristics. I mean, a bit of chamomile. It's just really pretty and drinkable. Jameson Fink: Mm-hmm (affirmative). Yes, it's pretty and drinkable, absolutely. So it seems like the dry vermouth is the classic martini vermouth, but what do you like to do besides just enjoying it on its own or with maybe a little soda or something like that? What do you like to do with this as far as cocktails go? Kara Newman: I think Blancos are really nice with anything that has a bit of citrus to it. 
I was playing around with kind of a gimlet martini mashup over the weekend, and I was trying to make a lemon cordial that I then combined with some gin and some Blanco vermouth, and it was really quite nice. Jameson Fink: Mm-hmm (affirmative). Kara Newman: You're looking very skeptical. Jameson Fink: Oh no, no. I'm just thinking, I'm just imagining you in your home ... like I'm thinking of this like drinks lab, and you kind of like, "Oh, today I'm gonna make a cordial." It just sounds really charming and intriguing. I mean, yeah, a lot of work goes into this stuff, right? Kara Newman: Sometimes. This was ... let's call it a quick cordial. It was not exactly high maintenance. It was more or less simple syrup with lemon, and it was nice. It was very sunshiny, it was yellow. It went really well with the blanco and a little gin. I think next time I do it I might even do it with vodka. We won't tell. Jameson Fink: Okay, no, not at all. And so cordial is, what is a cordial? Kara Newman: It's just a sugar syrup. It's just a fancy word for that. Jameson Fink: Oh, okay, it's like simple syrup, but it has a fancier name. Kara Newman: Yeah, you hear a lot about lime cordial for gimlets. Jameson Fink: Uh-huh, cordial, well it sounds so cordial. Kara Newman: No, but it was fun. Personally, I think you can do just about anything with a blanco. It's so versatile. I think it works well with whiskeys as well. Usually that's just the province of sweet vermouth, but I think that blanco really just spans categories, defies categories. Jameson Fink: Mm-hmm (affirmative). So when you walk into bars, I mean, we're in New York, it's an amazing city for cocktails. Are you seeing a lot more selection and variety of vermouths on the shelf, or is it still just like we have the sweet vermouth and we have the dry vermouth, and that's it? And you don't know how old the bottles are. Kara Newman: Well, it depends where you are. I think we're seeing a little more variety than we used to. 
Every now and then I'll see an amber vermouth, and those are quite good too. They're even more oxidized. Once in a blue moon I'll see a rosé vermouth, and I get very excited about those. Jameson Fink: I would think, yeah, I would think there would be a ton of just ... 'cause there's rosé everything now. The popularity of rosé wine, there's rosé cider, there's rosé gin? Kara Newman: There is. There is, yeah. Jameson Fink: Okay, yeah, I think I've seen that too. Yeah, and cider ... if anything can be made like with a pink, pale Provencal color, it's being done. But that's pretty cool with vermouth. What do you do with a rosé vermouth? Kara Newman: I think it probably would work very well in any kind of ... I mean, I keep going back to gin just 'cause I want everything with gin. That's just my go-to this time of year, but I think it probably would be really lovely on its own. It really wouldn't need much embellishment at all. I think it would be really nice with anything with kind of a grapefruit, I think kind of a tequila would be really nice, a rosé vermouth tequila grapefruit concoction, like a Palomaesque kind of thing. Jameson Fink: Oh, I love a Paloma. I had a Paloma yesterday. Kara Newman: Nice. Jameson Fink: Yeah, it's one of my favorite drinks. Kara Newman: Oh, okay, cool. Jameson Fink: All these flavor notes of vermouth, especially blanco vermouth, I mean, does it kind of remind you of gin in a way, botanically? Or do you think there's similarities? Kara Newman: There can be. I'm nodding, no one can see me. I think that a lot of the language is the same. You talk about botanicals in both of them, and I think there are definitely some common botanicals in both of them, like we were talking about the Routin, I know they use juniper, which is also typically a gin botanical. But they also are ... in vermouth there are bittering agents that you don't find in most gins. It really would be just too intensely bitter I think to drink. 
Jameson Fink: Mm-hmm (affirmative). Kara Newman: And that gives vermouth a nice gently bitter undertone, that would be really unpleasant I think in a standard spirit. Jameson Fink: Mm-hmm (affirmative). Oh, the other thing I want to talk about is your books. You've written a lot of books. Kara Newman: Yeah, it's a compulsion. Jameson Fink: Yeah, so your most current one is Road Soda, which I think is really fun. Actually, can you just explain what Road Soda is about? Kara Newman: Yeah, yeah, sure. It is all about drinking well on the road, so good things to make and drink in hotel rooms, on planes, on trains, on camping trips, the great outdoors. Jameson Fink: Like what's a good example of something, like one of your favorites that's really innovative or fun ... or I guess it's all about being resourceful. Like what are some resourceful ways to make cocktails on the road when you're not at a bar? Kara Newman: Well, one of my favorite chapters in the book is all about drinks to batch and put into flasks. Jameson Fink: Okay. Kara Newman: I think the flask is definitely an underrated cocktail tool- Jameson Fink: Underrated. Kara Newman: ... and I definitely love being able to pre-batch drinks, like Negronis or a gin and tonic, or any kind of vermouth drink. Jameson Fink: Like to take to the movies or the park or all of the above? Kara Newman: All of the above. Jameson Fink: Of course, you're obeying all the laws of drinking in public and bringing things- Kara Newman: Of course, of course. Jameson Fink: ... but yeah, no that's really fun. What's a good drink to batch? Kara Newman: I think anything in the old fashioned family works really, really well. Anything that doesn't include citrus I think works particularly well. So any kind of combination ... For me, the Black Manhattan I think is the ultimate. So, whiskey, sweet vermouth, some Amaro in there, maybe a drop or two of orange bitters or Angostura bitters, and then just cap it up and toss it in the freezer. 
Jameson Fink: That sounds really good, and you worked vermouth into it, which I think is really great. Kara Newman: Oh hey, I didn't even mean to do that. Jameson Fink: But you did, but you did. And then speaking of a sweet vermouth, the last one I want to talk about is Imbue sweet vermouth, 90 points, and that's from Oregon. I remember ... I have a vermouth story. So, when I was working at a wine shop in Seattle, one of our sales reps, he was like ... we have this room where we taste wine, it was kind of like our break room, and he's like, "Okay, and I have one more thing for you to taste." He was like, "It's a vermouth," and we were all like, "Ew, I don't want to taste a vermouth." We were all just like wine, you know, I mean, vermouth's a wine, but you know what I mean, we were like, "I only want to taste red wine and white wine and champagne and sparkling wine." And he was really indignant. I mean, he wasn't a jerk about it, he's just like, "All right, I'm not leaving here until all of you taste this vermouth, and I guarantee you you're gonna love it." And it was Imbue, and it was really good. We were just blown away by it. And for me also, it was an introduction to ... that people in Oregon are making vermouth too, which I thought was super cool as well. But this is a sweet vermouth, which I think is really interesting because, I mean I guess the classic application for a sweet vermouth would be a Manhattan, right? Kara Newman: Right. Jameson Fink: So, what else can you do with sweet vermouth, and is it really that sweet? It's not like super sweet. It's still got some bitterness to it. Kara Newman: I don't think it's that sweet at all- Jameson Fink: Yeah. Kara Newman: ... I mean, sometimes I think everything should be re-categorized so it's red vermouth or white vermouth, and sometimes I deliberately try to refer to them that way, which is not standard and not done, but yeah, you're exactly right. Sweet vermouth is not terribly sweet at all. 
I do like the Imbue. I think they're just so sincere also. There's a certain earnestness to this particular brand that I enjoy. Jameson Fink: It's very Pacific Northwest. Kara Newman: I guess so, yeah. Jameson Fink: Yeah, it seems very Portland-ish. Kara Newman: And they're using a lot of local ingredients. They're using ... I believe they're using Willamette Valley wines and I know they're fortifying their wine with eau de vie from Clear Creek, and they're a local producer of brandies and other spirits. They just seem like very well-meaning and they make a good product. Jameson Fink: Yeah, I also like the Petal and Thorn. Have you had that? Kara Newman: Yeah. Jameson Fink: That is rosé color. Kara Newman: Yeah, I mean that sort of feels more like a Campariesque kind of ... an aperitivo. But also wine-based, and very, very drinkable. Jameson Fink: Yeah, I haven't had the sweet wine or the red one. Yeah, I like that. Maybe we should just ban, just stop calling it sweet vermouth. I think like anything, people hear the word sweet and they automatically go to a dark place. I enjoy sweet things, chocolate- Kara Newman: Same, same. Jameson Fink: ... all kinds of sweets. Sweet sweets, so yeah, I agree that sweet is a ... well, it's just a loaded word, and especially in the world of wine and spirits too, that people automatically think like, "Oh, it's sweet-" Kara Newman: Because something's been added and- Jameson Fink: Right, or it's just for dessert or something like that. Do you look at ... I guess when you're thinking about sweet vermouth and a Manhattan, but do you look at vermouth in general as a category, as like, oh it's just an apéritif wine, or does it depend on if you're having it alone or in a cocktail? Kara Newman: I think of vermouth as being an ingredient. I don't know that I think of it as being an apéritif category.
Maybe it should be, but I don't think it's typically consumed alone or as a precursor to a meal unless it's mixed into something else. Maybe that's something that should change. Jameson Fink: Mm-hmm (affirmative). Yeah, I mean, I love the idea ... One of my favorite summer cocktails, along those lines, is a white port and soda or tonic with just a twist, a citrus twist. It's the same kind of philosophy. It's like, lower alcohol, it's got a ton of flavor, and it's really refreshing ... and it's really easy to make too. I think that's the nice thing about vermouth too is that it can be one small component of something, or it can just be like, hey, all you need to do is just [glug 00:19:15] some into a glass of ice, top it off with some soda or tonic, and add some citrus, and boom, you're done. Kara Newman: Absolutely. Jameson Fink: And you don't even need to like, oh, like X number of ounces of this and that. Just kind of, you know- Kara Newman: No, just eyeball it. Jameson Fink: ... eyeball it, yeah. Yeah, I think maybe it's hard. Well, you write a lot of cocktail recipes too. I mean, sometimes it's kind of a relief to just tell people like, you know, you can just kinda eyeball it, and it's not like a cocktail that requires 20 ingredients or 30 steps and eyedroppers of this, and you know, bar spoons of that. Kara Newman: I think a vermouth highball, a white port highball, I think all of these just sound wonderful. Yeah, just put some into your glass, glug it up with a little bit of sparkling, and if you feel like some bitters, put in some bitters. If you feel like some citrus, put in some citrus. Jameson Fink: Mm-hmm (affirmative). You can be having ... it's like a hot summer day and you just have a little on the rocks with some soda, a little citrus, it can be part of a classic drink like a martini or a manhattan, and it can be a little bit of everything in between. 
It's an underrated ingredient, and it's really cool to explore it from all over the country and all over the world in many guises and flavors. So, thank you for joining us, Kara. Kara Newman: My pleasure. Jameson Fink: And thank you for listening to the What We're Tasting podcast, sponsored by Vivino, wine made easy. The three wines we talked about today are: The Routin Dry Vermouth, Lustau Vermouth Blanco, and the Imbue Sweet Vermouth. Find What We're Tasting on iTunes, Google Play, or wherever you find podcasts. And if you liked today's episode, please give us a five star rating on iTunes, leave a comment, tell your friends. What We're Tasting is a Wine Enthusiast podcast. Check out Wine Enthusiast online at winemag.com.

Lost in Translation
Lost In Translation 002 - Guest Mix By IMBUE

Lost in Translation

Play Episode Listen Later Mar 12, 2018 61:59


This month's episode, I invited my good friends IMBUE to come bless you guys. These guys really brought the heat with this one. Be prepared to voyage into the deeper realm of what we call house music! Based in Miami, Florida, Imbue consists of Maximiliano Cortes, Ignacio Vallejo and Eduardo J. Alvarez. Forming in 2015, the group of musicians, producers and DJs has crafted their unique interpretation of minimal electronic music, with influences ranging from house and hip-hop to psychedelic rock. Their debut record, Imbue, was released in February 2017 on limited-run vinyl, and their follow-up, Abstractions, was released on December 1st, 2017 on all digital platforms.

Tabs Out Cassette Podcast
Episode #115 | 11.19.17

Tabs Out Cassette Podcast

Play Episode Listen Later Nov 19, 2017


Brett Naucke, Moth Cock, Phanm Slug, Imbue, Spore Spawn, Cereal Banter, Eaton Flowers, M. Walter, Burn Cycle, Dee Grinski, Hasufel, var Self, Andrew Weathers and Peter J. Woods, and Jadapod.

drones experimental cassettes imbue andrew weathers peter j woods
Madlik Podcast – Torah Thoughts on Judaism From a Post-Orthodox Jew
A Thanksgiving Meal – סעודת הודיה

Madlik Podcast – Torah Thoughts on Judaism From a Post-Orthodox Jew

Play Episode Listen Later Nov 24, 2016 45:47


A Thanksgiving Meal –  סעודת הודיה This week in the US we will be sitting down to a Thanksgiving meal, so what better opportunity to explore the sources and traditions of a Seuda Hodaah – סעודת הודיה  a thanksgiving meal in the Jewish tradition… and survey a collection of Thanksgiving sermons…. We’ll even explain why turkey is called Hodu… which means “thanks” in Hebrew… If you like the madlik podcast please subscribe at iTunes.  And for your Androids, the podcast is now available on Google Play Music and Stitcher.  For easy links go to madlik.com ------------------ In the Bible: After the battle of the five kings: Genesis 14: 18 יח  וּמַלְכִּי-צֶדֶק מֶלֶךְ שָׁלֵם, הוֹצִיא לֶחֶם וָיָיִן; וְהוּא כֹהֵן, לְאֵל עֶלְיוֹן. 18 And Melchizedek king of Salem brought forth bread and wine; and he was priest of God the Most High. יט  וַיְבָרְכֵהוּ, וַיֹּאמַר:  בָּרוּךְ אַבְרָם לְאֵל עֶלְיוֹן, קֹנֵה שָׁמַיִם וָאָרֶץ. 19 And he blessed him, and said: 'Blessed be Abram of God Most High, Maker of heaven and earth; כ  וּבָרוּךְ אֵל עֶלְיוֹן, אֲשֶׁר-מִגֵּן צָרֶיךָ בְּיָדֶךָ; וַיִּתֶּן-לוֹ מַעֲשֵׂר, מִכֹּל. 20 and blessed be God the Most High, who hath delivered thine enemies into thy hand.' And he gave him a tenth of all. כא  וַיֹּאמֶר מֶלֶךְ-סְדֹם, אֶל-אַבְרָם:  תֶּן-לִי הַנֶּפֶשׁ, וְהָרְכֻשׁ קַח-לָךְ. 21 And the king of Sodom said unto Abram: 'Give me the persons, and take the goods to thyself.' כב  וַיֹּאמֶר אַבְרָם, אֶל-מֶלֶךְ סְדֹם:  הֲרִמֹתִי יָדִי אֶל-יְהוָה אֵל עֶלְיוֹן, קֹנֵה שָׁמַיִם וָאָרֶץ. 22 And Abram said to the king of Sodom: 'I have lifted up my hand unto the LORD, God Most High, Maker of heaven and earth, כג  אִם-מִחוּט וְעַד שְׂרוֹךְ-נַעַל, וְאִם-אֶקַּח מִכָּל-אֲשֶׁר-לָךְ; וְלֹא תֹאמַר, אֲנִי הֶעֱשַׁרְתִּי אֶת-אַבְרָם.
23 that I will not take a thread nor a shoe-latchet nor aught that is thine, lest thou shouldest say: I have made Abram rich; כד  בִּלְעָדַי, רַק אֲשֶׁר אָכְלוּ הַנְּעָרִים, וְחֵלֶק הָאֲנָשִׁים, אֲשֶׁר הָלְכוּ אִתִּי:  עָנֵר אֶשְׁכֹּל וּמַמְרֵא, הֵם יִקְחוּ חֶלְקָם.  {ס} 24 save only that which the young men have eaten, and the portion of the men which went with me, Aner, Eshcol, and Mamre, let them take their portion.' {S} RASHI: And Malchizedek: The Midrash Aggadah (Targum Jonathan, Ned. 32b, Mid. Ps. 76:3) states that he was Shem, the son of Noah.   ומלכי צדק: מדרש אגדה הוא שם בן נח:   The weaning of Isaac: Genesis 21: 8 8 And the child grew and was weaned, and Abraham made a great feast on the day that Isaac was weaned.                                 חוַיִּגְדַּ֥ל הַיֶּ֖לֶד וַיִּגָּמַ֑ל וַיַּ֤עַשׂ אַבְרָהָם֙ מִשְׁתֶּ֣ה גָד֔וֹל בְּי֖וֹם הִגָּמֵ֥ל אֶת־יִצְחָֽק: RASHI: and was weaned: At the end of twenty-four months. — [from Gen. Rabbah 53:10, Keth. 60a]  ויגמל: לסוף עשרים וארבע חדש: a great feast: for all the prominent people of the generation were there: Shem, Eber, and Abimelech. — [from Tan. Buber, Vayishlach 23] Cf. Gen. Rabbah 53:10.                 משתה גדול: שהיו שם גדולי הדור, שם ועבר ואבימלך: חיי אדם כלל קנ”ה סעיף מ”א ומשנה ברורה סי’ תר”ע סק”ט בשם המהרש”ל The Thanksgiving Sacrifice: Leviticus יב  אִם עַל-תּוֹדָה, יַקְרִיבֶנּוּ--וְהִקְרִיב עַל-זֶבַח הַתּוֹדָה חַלּוֹת מַצּוֹת בְּלוּלֹת בַּשֶּׁמֶן, וּרְקִיקֵי מַצּוֹת מְשֻׁחִים בַּשָּׁמֶן; וְסֹלֶת מֻרְבֶּכֶת, חַלֹּת בְּלוּלֹת בַּשָּׁמֶן. 12 If he offer it for a thanksgiving, then he shall offer with the sacrifice of thanksgiving unleavened cakes mingled with oil, and unleavened wafers spread with oil, and cakes mingled with oil, of fine flour soaked.   Vayikra Rabbah 9:7 ר' אלעזר ור' יוסי בר חנינא ר' אלעזר אמר: שלמים הקריבו בני נח. רבי יוסי בר חנינא אמר עולות הקריבו בני נח  ...  מתיב ר' אלעזר לרבי יוסי בר חנינא (שם יח): ויקח יתרו חותן משה עולה וזבחים לאלהים. דא מה עבד לה רבי יוסי בר חנינא? 
עבד כמאן דאמר לאחר מתן תורה נתגייר יתרו. איפלגו רבי חייא בר אבא ורבי ינאי חד אמר: לאחר מתן תורה נתגייר יתרו. וחד אמר: קודם מתן תורה נתגייר יתרו. אמר רבי הונא: ולא פליגי. מאן דאמר קודם מתן תורה נתגייר יתרו, כמאן דאמר, שלמים הקריבו בני נח. Rabbi Pinchas, Rabbi Levi and Rabbi Yochanan [said] in the name of Rabbi Menachem from Gallia: In the time to come, all sacrifices will be annulled - but the sacrifice of thanksgiving will not be annulled. All prayers will be annulled, but the prayer of gratitude will not be annulled. This accords with what is written [Jeremiah 33:11]: "The voice of joy and the voice of gladness, the voice of the groom and the voice of the bride, the voice of those who say 'Give thanks to the LORD of hosts' etc." - this is the prayer of gratitude. "Those who bring [the sacrifice of] thanksgiving to the House of the LORD": this is the sacrifice of thanksgiving. Thus David said: "I owe You vows and will offer you thanksgivings" [Psalms 56:13] - not "thanksgiving," but "thanksgivings," [indicating both] the thanksgiving sacrifice and the prayer of gratitude. In the Talmud: Tractate Berakoth 46a: R. Zera once was ill. R. Abbahu went to visit him, and made a vow, saying, If the little one with scorched legs recovers, I will make a feast for the Rabbis. He did recover, and he made a feast for all the Rabbis. Modern Times: Chabad Hasidim celebrate the 19th of Kislev to commemorate the release of the first Lubavitcher Rebbe, Schneur Zalman, from jail…. It is also considered to be the Rosh Hashana of Chassidus. It is also the day the Rebbe walked out of his room for the first time since his heart attack on Shemini Atzeret (1978)... For the Chassidim this was huge and still is, as they feel that this day is the hodu of his recovery and hence his subsequent relationship to the Hasidim. Also 12 Tammuz, the previous Rebbe's release from prison in Russia.
(all events that allowed the next frame to occur, which leads to today) The 30th day of Nissan: See a reference in a luach (הלכה יומית) here to the custom to have a seudat hodaah (סעודת הודיה) on the anniversary of the UN Vote for the partition of Palestine and the resulting birth of Israel:   א‘ ל‘ ניסן. מה משמעותו של יום העצמאות יום היום בו הוכרזה המדינה בשנת תש“ח, הינו יום שמחה ותודה לבורא עולם, על הנס הגדול שעשה לנו בהקמת המדינה. אף על פי שאויבנו לא רצו בהקמת המדינה היהודית, הכריזה המועצה הזמנית על הקמת המדינה היהודית, ונחתמה מגילת העצמאות יש לקיים סעודת הודיה ביום זה, ולברך את ה‘ על כך Prayers: See Alan Brill’s The Book of Doctrines and Opinions: notes on Jewish theology and spirituality. Service for Thanksgiving Day 1905 – In Commemoration of 250 Years of Jews in the US, by Rev. H. Pereira Mendes of the Spanish-Portuguese synagogue of NY, offered in 1905 at a special convocation to commemorate the 250th anniversary of the settlement of Jews in the United States.  2005 was 350 years….   Throughout the past ages Thou hast carried Israel as on eagles' wings. From the bondage of Egypt, through the trials of the wilderness…. From nation to nation Thou didst lead us, until the hand of the oppressor was weakened and the day of human rights began to dawn. Thou hast opened unto us this blessed haven of our beloved land. We lift up our hearts in gratitude to Thee, in that two hundred and fifty years ago Thou didst guide a little band of Israel’s children who, seeking freedom to worship Thee, found it in a land which, with Thy blessing, became a refuge of freedom and justice for the oppressed of all peoples. O Lord, look down from Thy holy habitation from heaven and bless this Republic. Preserve it in the liberty which has been proclaimed in the land, and in the righteousness which is its foundation. Bless it with prosperity and peace. May it advance from strength to strength and continue to be a refuge for all who seek its shelter. Imbue all its citizens with a spirit of loyalty to its ideals.
May they be ever mindful that the blessings of liberty are safeguarded by obedience to law, and that the prosperity of the nation rests upon trust in Thy goodness and reverence for Thy commandments. Bless the President and his counselors, the judges, lawgivers, and executives of our country. Put forth upon them the spirit of wisdom and understanding, the spirit of counsel and the spirit of might, the spirit of knowledge and the fear of the Lord. May America become a light to all peoples, teaching the world that righteousness exalteth a nation. Our Father in Heaven, Who lovest all nations, all men are Thy children. Thou dost apportion tasks to peoples according to their gifts of mind and heart. But all are revealing Thy marvelous plans for mankind. May the day speedily dawn when Thy kingdom will be established on earth, when nations shall learn war no more, when peace shall be the crowning reward of a world redeemed by justice, and all men shall know Thee, from the greatest unto the least. -------------- Service for Thanksgiving Day 1940 – Rabbi Joseph Lookstein at Kehilath Jeshurun in New York We thank Thee for the beauty and utility of Thy creations, for the flowers which are the stars of the earth even as the stars are the flowers of heaven; for the fertility of the soil and the abundance of its products; for the food that is borne within its bosom and the waters that flow from its deep and inner fountains; for the air that surrounds all creatures and that holds within its invisible self the secret and power of life. Almighty God, we pray that we may remain true to the destiny for which we were created. We pray that the dignity of human personality may be preserved and the reverence of man for man may continue. We pray that the beautiful heavens that Thou didst spread over our heads may not be darkened by the clouds of hate and that the magic carpet which is earth may not be disturbed by the tramp of hostile feet.
We pray that man’s inhumanity to man may forever end and that human genius may continue to strive for greater perfection and for nobler fulfillment. Let man come to understand that he is closest to God when he is nearer to man, that he worships at Thy holy throne when he serves Thy creatures and that he is within Thy holy shrine when he is at one with his fellow-beings. We pray sincerely for America and the ideals of democracy and freedom that are here enshrined. May she be strong to withstand all the currents that assail her and all the forces of evil that would invade her sacred precincts. A tower of light to her own citizenry, may she cast a steady beam and light up all the dark areas of the world and show to a perplexed and straying humanity the path of freedom, of life and of peace.  Rabbi and Congregation.  May the words of our mouths and the meditations of our hearts be acceptable to Thee, oh Lord, our rock and our redeemer. Amen.  Cf Leonard Cohen “if it be your will”  ----------------- 1951 The Faith of America: Readings, Songs and Prayers for the Celebration of American Holidays by Mordecai Kaplan; Williams, J. Paul; Kohn, Eugene Kaplan   Intro THANKSGIVING DAY: a day devoted to a grateful awareness of the blessings of American life. A blessing not appreciated is easily lost. If we take for granted the blessings that we enjoy by virtue of our living in a land of almost boundless opportunities and take no thought to the moral foundation on which the welfare of our people rests, those blessings will sooner or later be lost. Thanksgiving should be used to make us aware of those moral foundations, of our dependence on divine justice and love for the continued enjoyment of the blessings of American life.  Prayer  The Significance of the Day  OUR GOD AND FATHER, it is good to give thanks to Thee and to acknowledge Thy blessings. Only thus can we savor them to the full. 
In the hurried pace of our lives and in our preoccupation with the petty and the trivial, we are prone to take Thy gifts for granted. Oblivious of Thy bounties, we sinfully waste the opportunities they afford us for living the good life. Therefore, do we set aside this day for thanksgiving.  We thank Thee for the land and for its fruits by which we live. We thank Thee for the vigor of body and mind that enables us to exploit the fertility of our country’s fields and forests and the buried treasures of its mineral wealth. We thank Thee for the varied beauty of its landscape, for the grandeur of its mountains, the hospitality of its plains and prairies, and the gleaming vistas of ocean from its coasts.  We thank Thee for the inspiration of our country’s history—for the courage and hardihood that sustained its explorers and pioneers, for the heroism that inspires its fighters for freedom and equality, for the enterprise that builds its teeming cities, for the arts that express the beauty and meaning of its way of life, for the just laws and free institutions that enable its people to work together in peace and harmony.  Grant, O God, in Thy grace, that we may perfect our national life to the measure of Thy bounty. Grateful for the gifts Thou hast bestowed upon us, may we use them to extend the area of freedom, justice, and good-will among men. May our use of Thy gifts bear witness to mankind that life is good when lived according to Thy benign will, O gracious Giver of all good. AMEN.  
------------- George Washington – Thanksgiving Proclamation Issued on October 3, 1789 And also that we may then unite in most humbly offering our prayers and supplications to the great Lord and Ruler of Nations, and beseech Him to pardon our national and other transgressions; to enable us all, whether in public or private stations, to perform our several and relative duties properly and punctually; to render our National Government a blessing to all the people by constantly being a Government of wise, just, and constitutional laws, discreetly and faithfully executed and obeyed; to protect and guide all sovereigns and nations (especially such as have shown kindness to us) ----------- In hard times A THOUGHTFUL MIND will perceive propriety in a service of thanksgiving on the ground, not only of any exceptional benefit, but of the continuance of those ordinary blessings which give its gladness and beauty to life. The preservation of our life itself from casualty or from disease, which might have fallen upon it, is no less a sign of God’s goodness than a narrow escape from what seemed certain death. And so, though any given year may not have been marked by what we should call conspicuous blessings, it is right and proper that we should meet to give thanks for that bounty of heaven which has not failed, for our personal life, and health, and happiness, for the undisturbed serenity and tranquility of our homes, for the maintenance of public order, content and liberty, for the peaceful progress of industry, for the regular and beneficent operations of nature. The hand of God is in all this, as well as in the events which more strikingly exhibit His goodness and His power . . . The year that is ending has not been what we commonly call a “good” year. It has been rather a bad year in the history of other nations, in business and in politics within our own borders. How then shall we meet the call which invites us to give thanks today to God for His goodness. 
We might try to banish from our minds these gloomy facts…. And yet it is more likely to be useful to look at the facts as they are and to ask whether, if we should judge them aright, we should not find, not in spite of them, but in them, traces and tokens of God’s goodness and occasions for praise. We mourn, for example, the decline of our material Prosperity, but it is a shallow view of things which regards material prosperity as an unmixed good for a man or for a nation. The psalmist who said, “It is good for me that I have been afflicted,” uttered a truth which finds abundant confirmation in national as well as in personal history. Look at your neighbor whom you knew as a poor boy and who now is worth his millions. . . . He used to be considerate of others, helpful to those who needed help, nobly generous with what little he had to give. Now he seems to think that poverty is a crime, and it is easier to get a flame out of an iceberg than a dollar out of his purse. Once he judged men by their moral character. Now he speaks of them as “worth” whatever their property would sell for in the market. . . . What has made the change in him? Nothing but his success. . . . And the same thing is equally true of a nation. The unparalleled development of the material resources of the American people in recent years has astonished the world, but it has also awakened the gravest solicitude of thoughtful minds. 
The ever rising tide of wealth, the vast increase and wide diffusion of luxury, the reckless extravagance and waste which have been common, the senseless rivalry in vulgar display, the growing tyranny of money in the hands of rich men and rich corporations, the wild fever of speculation, the prostitution of public office to an unrestrained desire of wealth, the increased inequality, and, in consequence of this, the deepening animosity of the classes of which society is composed, the swift and shameless spread of corruption in politics, the intrusion into the place of legitimate and honest business of the methods and morals of the gambling room, the growing frequency of gross violations of trust—all these things . . . have come as the direct and inevitable fruit of the era of prosperity which now—for a time at least, is ended. . . . As you try to gather up your reasons for thanksgiving, do not turn your thoughts away from the things which at first seem dark. . . . Look at them, rather, frankly . . . and see if the goodness and the mercy of God are not manifest in them. So may your sorrows be turned into joy, and your sore disappointment into confident hope. So may you gain the height of adoring trust whereon he stood who long ago declared: “I will bless the Lord at all time: His praise shall continually be in my mouth.” Edward B. Coe   Turkey The guinea fowl bears some resemblance to the then-recently found American bird. Though it is native to eastern Africa, the guinea fowl was imported to Europe through the Ottoman Empire and came to be called the turkey-cock or turkey-hen. When settlers in the New World began to send similar-looking fowl back to Europe, they were mistakenly called turkeys. Every language seems to have radically different names for this bird. The Turkish word is hindi, which literally means “Indian.” The original word in French, coq d’Inde, meant rooster of India, and has since shortened to dinde. 
These names likely derive from the common misconception that India and the New World were one and the same. In Portuguese, it’s literally a “Peru bird,” and in Malay, it’s called a “Dutch chicken.” Hodu – India הֹדוּ Hôdûw, ho'-doo; of foreign origin; Hodu (i.e. Hindustan):—India. India = "flee away" or "give ye thanks" — Strong's Lexicon H1912

Word With Friend
Imbue with Nina Concepcion

Word With Friend

Play Episode Listen Later Oct 16, 2015 77:01


This week's guest, Nina Concepcion (comedian, writer, actor, improvisor) sat down with Julian and chatted up a storm about her oddly conservative Christian phase during high school and college, fan fiction (erotic), and let out some Zach Braff steam. They also talked Doctor Who, Hunger Games, and Disney Channel Original Movies. Our word this episode is IMBUE - we find out what building Nina would "go through like a ghost", what "Nina's Law" would be, and who the cartoon character "Imbue Boo" is.   Big News! Word With Friend has a panel at this year's Stan Lee's Comikaze!! October 31st, 5:30pm - Julian and a panel of word experts will get into Onomatopoeia!! JOIN US!!! Also, Nina wrote a book, "The Good-ish Girl". Get it.

Washed Up Emo
#40 - Joseph Marro (The Early November)

Washed Up Emo

Play Episode Listen Later May 30, 2015 80:49


Joseph and I discuss how the band met in high school, remembering to wait for things in the mail and the actual feeling of emo. I give in a little to the swoop era and Joseph explains his learnings from his rocker days that he takes to managing bands today. Finally, we discuss The Early November's latest album "Imbue." Support the show (https://www.patreon.com/washedupemo)

#AlternativeFacts
The Early November

#AlternativeFacts

Play Episode Listen Later Apr 3, 2015 25:04


The band discusses its new album “Imbue” and how to stay relevant after reuniting, and looks back on how the Drive-Thru Records roster has grown up. See acast.com/privacy for privacy and opt-out information.