Podcasts about fuyu

  • 37 podcasts
  • 43 episodes
  • 47m avg duration
  • Infrequent episodes
  • Latest: Jan 6, 2025

POPULARITY

[Popularity chart: 2017–2024]


Best podcasts about fuyu

Latest podcast episodes about fuyu

早安英文-最调皮的英语电台 (Morning English – the cheekiest English radio station)
Foreign Press Close Reading | Why is Japan a paradise for people turning 50?


Jan 6, 2025 · 13:08


[Subscribe] New episodes every morning at 5:30, right on schedule.

[Original article] Title: Why Japan is the perfect place to turn 50. Subtitle: A significant birthday feels less so in a country that has become a global pioneer of ageing — for better and for worse.

Body: So. A big, round-numbered and menacing birthday coming up in a few weeks. Not to give too much away, but in the month I was born, Momoe Yamaguchi's Fuyu no Iro was electrifying the charts, Terror of Mechagodzilla was about to hit cinemas, and Okinawa was busying itself with last-minute preparations for Expo '75.

Vocabulary: round adj. /raʊnd/ — a round figure or amount is one that is given as a whole number, usually one ending in 0 or 5. • Make it a round figure—say forty dollars. • Two thousand is a nice round number—put that down.

For the full original article and the detailed study notes, follow the WeChat official account "早安英文" and reply "外刊" (foreign press). More interesting English material awaits!

[About the show] Morning English – Daily Foreign Press Close Reading walks you through the latest foreign press and the hottest international stories: grammar analysis, breakdowns of long and difficult sentences, down-to-earth translations, and explanations of key vocabulary. All articles are selected from leading international publications such as The Economist, The New York Times, The Wall Street Journal, The Washington Post, The Atlantic, Science, and National Geographic.

[Who it's for] 1. English learners who follow current events and want to learn the latest, trendiest English expressions; 2. Anyone who wants to improve their listening, speaking, reading, and writing through authentic English; 3. English enthusiasts planning to study or travel abroad who want to pick up expressions quickly; 4. Test takers preparing for English exams (CET-4/6, TOEFL, IELTS, postgraduate entrance exams, etc.).

[What you'll get] 1. Over 1,000 close-reading lessons on foreign press articles, expanding your range of expressions and cultural background; 2. Word-by-word, sentence-by-sentence explanations to systematically build vocabulary, listening, reading, and grammar; 3. Study notes with every episode, including full-text annotations, analyses of long and difficult sentences, and tricky grammar points, to clear away reading obstacles.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Supervise the Process of AI Research — with Jungwon Byun and Andreas Stuhlmüller of Elicit


Apr 11, 2024 · 56:20


Maggie, Linus, Geoffrey, and the LS crew are reuniting for our second annual AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair — for early birds who join before keynote announcements!

It's become fashionable for many AI startups to project themselves as "the next Google". While the search engine is so 2000s, both Perplexity and Exa referred to themselves as a "research engine" or "answer engine" in our NeurIPS pod. However, these searches tend to be relatively shallow, and it is challenging to zoom up and down the ladders of abstraction to garner insights. For serious researchers, this level of simple one-off search will not cut it.

We've commented in our Jan 2024 Recap that Flow Engineering (simply: multi-turn processes over many-shot single prompts) seems to offer far more performance, control and reliability for a given cost budget. Our experiments with Devin and our understanding of the new Elicit Notebooks offer a glimpse into the potential for very deep, open-ended, thoughtful human-AI collaboration at scale.

It starts with prompts

When ChatGPT exploded in popularity in November 2022, everyone was turned into a prompt engineer. While generative models were good at "vibe-based" outcomes (tell me a joke, write a poem, etc.) with basic prompts, they struggled with more complex questions, especially in symbolic fields like math and logic. Two of the most important "tricks" that people picked up on were:

* The Chain-of-Thought prompting strategy proposed by Wei et al in "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models". Rather than doing traditional few-shot prompting with just questions and answers, adding the thinking process that led to the answer resulted in much better outcomes.
* Adding "Let's think step by step" to the prompt as a way to boost zero-shot reasoning, popularized by Kojima et al in the Large Language Models are Zero-Shot Reasoners paper from NeurIPS 2022. This bumped accuracy from 17% to 79% compared to plain zero-shot prompting.

Nowadays, prompts include everything from promises of monetary rewards to… whatever the Nous folks are doing to turn a model into a world simulator. At the end of the day, the goal of prompt engineering is increasing accuracy, structure, and repeatability in the generation of a model.

From prompts to agents

As prompt engineering got more and more popular, agents (see "The Anatomy of Autonomy") took over Twitter with cool demos, and AutoGPT became the fastest-growing repo in GitHub history. The thing about AutoGPT that fascinated people was the ability to simply put in an objective without worrying about explaining HOW to achieve it, or having to write very sophisticated prompts. The system would create an execution plan on its own, and then loop through each task. The problem with open-ended agents like AutoGPT is that 1) it's hard to replicate the same workflow over and over again, and 2) there isn't a way to hard-code specific steps that the agent should take without actually coding them yourself, which isn't what most people want from a product.

From agents to products

Prompt engineering and open-ended agents were great in the experimentation phase, but this year more and more of these workflows are starting to become polished products. Today's guests are Andreas Stuhlmüller and Jungwon Byun of Elicit (previously Ought), an AI research assistant that they think of as "the best place to understand what is known". Ought was a non-profit, but last September, Elicit spun off into a PBC with a $9m seed round.
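The two prompting "tricks" above can be sketched as plain prompt templates. This is an illustrative sketch only, not code from either paper; the example question and worked answer are made up.

```python
# Sketch of few-shot, chain-of-thought, and zero-shot CoT prompting.
# The worked example below is invented for illustration.

def few_shot_prompt(question: str) -> str:
    """Traditional few-shot prompting: question/answer pairs only."""
    return (
        "Q: Roger has 5 balls and buys 2 more. How many balls does he have?\n"
        "A: 7\n"
        f"Q: {question}\nA:"
    )

def chain_of_thought_prompt(question: str) -> str:
    """Wei et al.: include the thinking process that led to the answer."""
    return (
        "Q: Roger has 5 balls and buys 2 more. How many balls does he have?\n"
        "A: Roger starts with 5 balls. 2 more makes 5 + 2 = 7. The answer is 7.\n"
        f"Q: {question}\nA:"
    )

def zero_shot_cot_prompt(question: str) -> str:
    """Kojima et al.: no examples at all, just a reasoning trigger."""
    return f"Q: {question}\nA: Let's think step by step."
```

The only difference between the first two templates is whether the exemplar carries its reasoning trace, which is exactly the change the Chain-of-Thought paper measured.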
It is hard to quantify how much a workflow can be improved, but Elicit boasts some impressive numbers for research assistants: just four months after launch, Elicit crossed $1M ARR, which shows how much interest there is in AI products that just work.

One of the main takeaways we had from the episode is how teams should focus on supervising the process, not the output. Their philosophy at Elicit isn't to train general models, but to train models that are extremely good at focused processes. This allows them to have pre-created steps that the user can add to their workflow (like classifying certain features that are specific to their research field) without having to write a prompt for it. And to Hamel Husain's delight, they always show you the underlying prompt.

Elicit recently announced notebooks as a new interface to interact with their products. (Fun fact: they tried to implement this 4 times before they landed on the right UX! We discuss this ~33:00 in the podcast.) The reasons why they picked notebooks as a UX all tie back to process:

* They are systematic: once you have an instruction/prompt that works on a paper, you can run hundreds of papers through the same workflow by creating a column. Notebooks can also be edited and exported at any point during the flow.
* They are transparent: many papers include an opaque literature review as perfunctory context before getting to their novel contribution. But PDFs are "dead", and it is difficult to follow the thought process and exact research flow of the authors. Sharing "living" Elicit Notebooks opens up this process.
* They are unbounded: research is an endless stream of rabbit holes, so it must be easy to dive deeper and follow up with extra steps without losing the ability to surface for air.

We had a lot of fun recording this, and hope you have as much fun listening!

AI UX in SF

Long-time Latent Spacenauts might remember our first AI UX meetup with Linus Lee, Geoffrey Litt, and Maggie Appleton last year.
Well, Maggie has since joined Elicit, and they are all returning at the end of this month! Sign up here: https://lu.ma/aiux And submit demos here! https://forms.gle/iSwiesgBkn8oo4SS8 We expect the 200 seats to "sell out" fast. Attendees with demos will be prioritized.

Show Notes

* Elicit
* Ought (their previous non-profit)
* "Pivoting" with GPT-4
* Elicit notebooks launch
* Charlie
* Andreas' Blog

Timestamps

* [00:00:00] Introductions
* [00:07:45] How Jungwon and Andreas Joined Forces to Create Elicit
* [00:10:26] Why Products > Research
* [00:15:49] The Evolution of Elicit's Product
* [00:19:44] Automating Literature Review Workflow
* [00:22:48] How GPT-3 to GPT-4 Changed Things
* [00:25:37] Managing LLM Pricing and Performance
* [00:31:07] Open vs. Closed: Elicit's Approach to Model Selection
* [00:31:56] Moving to Notebooks
* [00:39:11] Elicit's Budget for Model Queries and Evaluations
* [00:41:44] Impact of Long Context Windows
* [00:47:19] Underrated Features and Surprising Applications
* [00:51:35] Driving Systematic and Efficient Research
* [00:53:00] Elicit's Team Growth and Transition to a Public Benefit Corporation
* [00:55:22] Building AI for Good

Full Interview on YouTube

As always, a plug for our YouTube version for the 80% of communication that is nonverbal:

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO-in-residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we are back in the studio with Andreas and Jungwon from Elicit. Welcome.Jungwon [00:00:20]: Thanks guys.Andreas [00:00:21]: It's great to be here.Swyx [00:00:22]: Yeah. So I'll introduce you separately, but also, you know, we'd love to learn a little bit more about you personally. So Andreas, it looks like you started Elicit first, Jungwon joined later.Andreas [00:00:32]: That's right.
For all intents and purposes, the Elicit and also the Ought that existed before then were very different from what I started. So I think it's like fair to say that you co-founded it.Swyx [00:00:43]: Got it. And Jungwon, you're a co-founder and COO of Elicit now.Jungwon [00:00:46]: Yeah, that's right.Swyx [00:00:47]: So there's a little bit of a history to this. I'm not super aware of like the sort of journey. I was aware of Ought and Elicit as sort of a nonprofit type situation. And recently you turned into like a B Corp, Public Benefit Corporation. So yeah, maybe if you want, you could take us through that journey of finding the problem. You know, obviously you're working together now. So like, how do you get together to decide to leave your startup career to join him?Andreas [00:01:10]: Yeah, it's truly a very long journey. I guess truly, it kind of started in Germany when I was born. So even as a kid, I was always interested in AI, like I kind of went to the library. There were books about how to write programs in QBasic and like some of them talked about how to implement chatbots.Jungwon [00:01:27]: To be clear, he grew up in like a tiny village on the outskirts of Munich called Dinkelscherben, where it's like a very, very idyllic German village.Andreas [00:01:36]: Yeah, important to the story. So basically, the main thing is I've kind of always been thinking about AI my entire life and been thinking about, well, at some point, this is going to be a huge deal. It's going to be transformative. How can I work on it? And was thinking about it from when I was a teenager, after high school did a year where I started a startup with the intention to become rich. And then once I'm rich, I can affect the trajectory of AI. Did not become rich, decided to go back to college and study cognitive science there, which was like the closest thing I could find at the time to AI.
In the last year of college, moved to the US to do a PhD at MIT, working on broadly kind of new programming languages for AI because it kind of seemed like the existing languages were not great at expressing world models and learning world models doing Bayesian inference. Was always thinking about, well, ultimately, the goal is to actually build tools that help people reason more clearly, ask and answer better questions and make better decisions. But for a long time, it seemed like the technology to put reasoning in machines just wasn't there. Initially, at the end of my postdoc at Stanford, I was thinking about, well, what to do? I think the standard path is you become an academic and do research. But it's really hard to actually build interesting tools as an academic. You can't really hire great engineers. Everything is kind of on a paper-to-paper timeline. And so I was like, well, maybe I should start a startup, pursued that for a little bit. But it seemed like it was too early because you could have tried to do an AI startup, but probably would not have been this kind of AI startup we're seeing now. So then decided to just start a nonprofit research lab that's going to do research for a while until we better figure out how to do thinking in machines. And that was odd. And then over time, it became clear how to actually build actual tools for reasoning. And only over time, we developed a better way to... I'll let you fill in some of the details here.Jungwon [00:03:26]: Yeah. So I guess my story maybe starts around 2015. I kind of wanted to be a founder for a long time, and I wanted to work on an idea that stood the test of time for me, like an idea that stuck with me for a long time. And starting in 2015, actually, originally, I became interested in AI-based tools from the perspective of mental health. So there are a bunch of people around me who are really struggling. 
One really close friend in particular was really struggling with mental health and didn't have any support, and it didn't feel like there was anything before kind of like getting hospitalized that could just help her. And so luckily, she came and stayed with me for a while, and we were just able to talk through some things. But it seemed like lots of people might not have that resource, and something maybe AI-enabled could be much more scalable. I didn't feel ready to start a company then, that's 2015. And I also didn't feel like the technology was ready. So then I went into FinTech and kind of learned how to do the tech thing. And then in 2019, I felt like it was time for me to just jump in and build something on my own I really wanted to create. And at the time, I looked around at tech and felt like not super inspired by the options. I didn't want to have a tech career ladder, or I didn't want to climb the career ladder. There were two kind of interesting technologies at the time, there was AI and there was crypto. And I was like, well, the AI people seem like a little bit more nice, maybe like slightly more trustworthy, both super exciting, but threw my bet in on the AI side. And then I got connected to Andreas. And actually, the way he was thinking about pursuing the research agenda at Ought was really compatible with what I had envisioned for an ideal AI product, something that helps kind of take really complex thinking, overwhelming thoughts, and break it down into small pieces. And then this kind of mission that we need AI to help us figure out what we ought to do was really inspiring, right? Yeah, because I think it was clear that we were building the most powerful optimizer of our time. But as a society, we hadn't figured out how to direct that optimization potential. And if you kind of direct tremendous amounts of optimization potential at the wrong thing, that's really disastrous.
So the goal of Ought was to make sure that if we build the most transformative technology of our lifetime, it can be used for something really impactful, like good reasoning, like not just generating ads. My background was in marketing, but like, so I was like, I want to do more than generate ads with this. But also if these AI systems get to be super intelligent enough that they are doing this really complex reasoning, that we can trust them, that they are aligned with us and we have ways of evaluating that they're doing the right thing. So that's what Ought did. We did a lot of experiments, you know, like I just said, before foundation models really like took off. A lot of the issues we were seeing were more in reinforcement learning, but we saw a future where AI would be able to do more kind of logical reasoning, not just kind of extrapolate from numerical trends. We actually kind of set up experiments with people where kind of people stood in as super intelligent systems and we effectively gave them context windows. So they would have to like read a bunch of text and one person would get less text and one person would get all the texts and the person with less text would have to evaluate the work of the person who could read much more. So like in a world we were basically simulating, like in 2018, 2019, a world where an AI system could read significantly more than you and you as the person who couldn't read that much had to evaluate the work of the AI system. Yeah. So that was a lot of the work we did. And from that, we kind of iterated on the idea of breaking complex tasks down into smaller tasks, like complex tasks, like open-ended reasoning, logical reasoning into smaller tasks so that it's easier to train AI systems on them. And also so that it's easier to evaluate the work of the AI system when it's done. And then also kind of, you know, really pioneered this idea, the importance of supervising the process of AI systems, not just the outcomes.
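The "supervise the process, not just the outcomes" idea described here can be sketched as a pipeline of small, individually inspectable steps. Everything below (the step names, the toy screening rule, the sample papers) is invented for illustration; in Elicit each step would be backed by a model rather than these stub functions.

```python
# Sketch: decompose a paper-screening task into small steps whose
# intermediate outputs can each be inspected and evaluated, instead of
# one opaque end-to-end model. All names and data are illustrative.

def find_population(paper: dict) -> int:
    """Step 1: extract the study's sample size (stubbed as a lookup)."""
    return paper.get("n", 0)

def large_enough(n: int, threshold: int = 100) -> bool:
    """Step 2: apply an explicit, auditable inclusion rule."""
    return n >= threshold

def extract_outcome(paper: dict) -> str:
    """Step 3: extract the reported outcome (stubbed as a lookup)."""
    return paper.get("outcome", "not reported")

def review_paper(paper: dict) -> dict:
    """Run each step and keep every intermediate result, so a human can
    see exactly where the process went wrong if the output looks off."""
    n = find_population(paper)
    trace = {"title": paper["title"], "population": n, "included": large_enough(n)}
    if trace["included"]:
        trace["outcome"] = extract_outcome(paper)
    return trace

papers = [
    {"title": "Trial A", "n": 250, "outcome": "improved recall"},
    {"title": "Pilot B", "n": 12, "outcome": "inconclusive"},
]
traces = [review_paper(p) for p in papers]
```

Because every intermediate value survives in the trace, evaluation and troubleshooting happen per step rather than on the final answer alone.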
So a big part of how Elicit is built is we're very intentional about not just throwing a ton of data into a model and training it and then saying, cool, here's like scientific output. Like that's not at all what we do. Our approach is very much like, what are the steps that an expert human does or what is like an ideal process as granularly as possible, let's break that down and then train AI systems to perform each of those steps very robustly. When you train like that from the start, after the fact, it's much easier to evaluate, it's much easier to troubleshoot at each point. Like where did something break down? So yeah, we were working on those experiments for a while. And then at the start of 2021, decided to build a product.Swyx [00:07:45]: Do you mind if I, because I think you're about to go into more modern thought and Elicit. And I just wanted to, because I think a lot of people are in where you were like sort of 2018, 19, where you chose a partner to work with. Yeah. Right. And you didn't know him. Yeah. Yeah. You were just kind of cold introduced. A lot of people are cold introduced. Yeah. Never work with them. I assume you had a lot, a lot of other options, right? Like how do you advise people to make those choices?Jungwon [00:08:10]: We were not totally cold introduced. So one of our closest friends introduced us. And then Andreas had written a lot on the OTT website, a lot of blog posts, a lot of publications. And I just read it and I was like, wow, this sounds like my writing. And even other people, some of my closest friends I asked for advice from, they were like, oh, this sounds like your writing. But I think I also had some kind of like things I was looking for. I wanted someone with a complimentary skillset. I want someone who was very values aligned. And yeah, that was all a good fit.Andreas [00:08:38]: We also did a pretty lengthy mutual evaluation process where we had a Google doc where we had all kinds of questions for each other. 
And I think it ended up being around 50 pages or so of like various like questions and back and forth.Swyx [00:08:52]: Was it the YC list? There's some lists going around for co-founder questions.Andreas [00:08:55]: No, we just made our own questions. But I guess it's probably related in that you ask yourself, what are the values you care about? How would you approach various decisions and things like that?Jungwon [00:09:04]: I shared like all of my past performance reviews. Yeah. Yeah.Swyx [00:09:08]: And he never had any. No.Andreas [00:09:10]: Yeah.Swyx [00:09:11]: Sorry, I just had to, a lot of people are going through that phase and you kind of skipped over it. I was like, no, no, no, no. There's like an interesting story.Jungwon [00:09:20]: Yeah.Alessio [00:09:21]: Yeah. Before we jump into what Elicit is today, the history is a bit counterintuitive. So you start with figuring out, oh, if we had a super powerful model, how would we align it? But then you were actually like, well, let's just build the product so that people can actually leverage it. And I think there are a lot of folks today that are now back to where you were maybe five years ago that are like, oh, what if this happens rather than focusing on actually building something useful with it? What clicked for you to like move into Elicit and then we can cover that story too.Andreas [00:09:49]: I think in many ways, the approach is still the same because the way we are building Elicit is not let's train a foundation model to do more stuff. It's like, let's build a scaffolding such that we can deploy powerful models to good ends. I think it's different now in that we actually have like some of the models to plug in. But if in 2017, we had had the models, we could have run the same experiments we did run with humans back then, just with models. And so in many ways, our philosophy is always, let's think ahead to the future of what models are going to exist in one, two years or longer.
And how can we make it so that they can actually be deployed in kind of transparent, controllable ways?Jungwon [00:10:26]: I think motivationally, we both are kind of product people at heart. The research was really important and it didn't make sense to build a product at that time. But at the end of the day, the thing that always motivated us is imagining a world where high quality reasoning is really abundant and AI is a technology that's going to get us there. And there's a way to guide that technology with research, but we can have a more direct effect through product because with research, you publish the research and someone else has to implement that into the product and the product felt like a more direct path. And we wanted to concretely have an impact on people's lives. Yeah, I think the kind of personally, the motivation was we want to build for people.Swyx [00:11:03]: Yep. And then just to recap as well, like the models you were using back then were like, I don't know, would they like BERT type stuff or T5 or I don't know what timeframe we're talking about here.Andreas [00:11:14]: I guess to be clear, at the very beginning, we had humans do the work. And then I think the first models that kind of made sense were GPT-2 and T-NLG and like Yeah, early generative models. We do also use like T5-based models even now. We started with GPT-2.Swyx [00:11:30]: Yeah, cool. I'm just kind of curious about like, how do you start so early? You know, like now it's obvious where to start, but back then it wasn't.Jungwon [00:11:37]: Yeah, I used to nag Andreas a lot. I was like, why are you talking to this? I don't know. I felt like GPT-2 like clearly can't do anything. And I was like, Andreas, you're wasting your time, like playing with this toy. But yeah, he was right.Alessio [00:11:50]: So what's the history of what Elicit actually does as a product? You recently announced that after four months, you get to a million in revenue.
Obviously, a lot of people use it, get a lot of value, but it was initially kind of like structured data extraction from papers. Then you had kind of like concept grouping. And today, it's maybe like a more full stack research enabler, kind of like paper understander platform. What's the definitive definition of what Elicit is? And how did you get here?Jungwon [00:12:15]: Yeah, we say Elicit is an AI research assistant. I think it will continue to evolve. That's part of why we're so excited about building in research, because there's just so much space. I think the current phase we're in right now, we talk about it as really trying to make Elicit the best place to understand what is known. So it's all a lot about like literature summarization. There's a ton of information that the world already knows. It's really hard to navigate, hard to make it relevant. So a lot of it is around document discovery and processing and analysis. I really kind of want to import some of the incredible productivity improvements we've seen in software engineering and data science and into research. So it's like, how can we make researchers like data scientists of text? That's why we're launching this new set of features called Notebooks. It's very much inspired by computational notebooks, like Jupyter Notebooks, you know, Deepnote or Colab, because they're so powerful and so flexible. And ultimately, when people are trying to get to an answer or understand insight, they're kind of like manipulating evidence and information. Today, that's all packaged in PDFs, which are super brittle. So with language models, we can decompose these PDFs into their underlying claims and evidence and insights, and then let researchers mash them up together, remix them and analyze them together. So yeah, I would say quite simply, overall, Elicit is an AI research assistant.
Right now we're focused on text-based workflows, but long term, really want to kind of go further and further into reasoning and decision making.Alessio [00:13:35]: And when you say AI research assistant, this is kind of meta research. So researchers use Elicit as a research assistant. It's not a generic you-can-research-anything type of tool, or it could be, but like, what are people using it for today?Andreas [00:13:49]: Yeah. So specifically in science, a lot of people use human research assistants to do things. You tell your grad student, hey, here are a couple of papers. Can you look at all of these, see which of these have kind of sufficiently large populations and actually study the disease that I'm interested in, and then write out like, what are the experiments they did? What are the interventions they did? What are the outcomes? And kind of organize that for me. And the first phase of understanding what is known really focuses on automating that workflow because a lot of that work is pretty rote work. I think it's not the kind of thing that we need humans to do. Language models can do it. And then if language models can do it, you can obviously scale it up much more than a grad student or undergrad research assistant would be able to do.Jungwon [00:14:31]: Yeah. The use cases are pretty broad. So we do have a very large percent of our users are just using it personally or for a mix of personal and professional things. People who care a lot about health or biohacking or parents who have children with a kind of rare disease and want to understand the literature directly. So there is an individual kind of consumer use case. We're most focused on the power users. So that's where we're really excited to build. So Elicit was very much inspired by this workflow in literature called systematic reviews or meta-analysis, which is basically the human state of the art for summarizing scientific literature.
And it typically involves like five people working together for over a year. And they kind of first start by trying to find the maximally comprehensive set of papers possible. So it's like 10,000 papers. And they kind of systematically narrow that down to like hundreds or 50 extract key details from every single paper. Usually have two people doing it, like a third person reviewing it. So it's like an incredibly laborious, time consuming process, but you see it in every single domain. So in science, in machine learning, in policy, because it's so structured and designed to be reproducible, it's really amenable to automation. So that's kind of the workflow that we want to automate first. And then you make that accessible for any question and make these really robust living summaries of science. So yeah, that's one of the workflows that we're starting with.Alessio [00:15:49]: Our previous guest, Mike Conover, he's building a new company called Brightwave, which is an AI research assistant for financial research. How do you see the future of these tools? Does everything converge to like a God researcher assistant, or is every domain going to have its own thing?Andreas [00:16:03]: I think that's a good and mostly open question. I do think there are some differences across domains. For example, some research is more quantitative data analysis, and other research is more high level cross domain thinking. And we definitely want to contribute to the broad generalist reasoning type space. Like if researchers are making discoveries often, it's like, hey, this thing in biology is actually analogous to like these equations in economics or something. And that's just fundamentally a thing that where you need to reason across domains. At least within research, I think there will be like one best platform more or less for this type of generalist research. 
I think there may still be like some particular tools like for genomics, like particular types of modules of genes and proteins and whatnot. But for a lot of the kind of high level reasoning that humans do, I think that is more of a winner-take-all thing.Swyx [00:16:52]: I wanted to ask a little bit deeper about, I guess, the workflow that you mentioned. I like that phrase. I see that in your UI now, but that's as it is today. And I think you were about to tell us about how it was in 2021 and how it may be progressed. How has this workflow evolved over time?Jungwon [00:17:07]: Yeah. So the very first version of Elicit actually wasn't even a research assistant. It was a forecasting assistant. So we set out and we were thinking about, you know, what are some of the most impactful types of reasoning that if we could scale up, AI would really transform the world. We actually started with literature review, but we're like, oh, so many people are going to build literature review tools. So let's not start there. So then we focused on geopolitical forecasting. So I don't know if you're familiar with like Manifold or Manifold Markets. That kind of stuff. Before Manifold. Yeah. Yeah. We're not predicting relationships. We're predicting like, is China going to invade Taiwan?Swyx [00:17:38]: Markets for everything.Andreas [00:17:39]: Yeah. That's a relationship.Swyx [00:17:41]: Yeah.Jungwon [00:17:42]: Yeah. It's true. And then we worked on that for a while. And then after GPT-3 came out, I think by that time we realized that originally we were trying to help people convert their beliefs into probability distributions. And so take fuzzy beliefs, but like model them more concretely. And then after a few months of iterating on that, just realize, oh, the thing that's blocking people from making interesting predictions about important events in the world is less kind of on the probabilistic side and much more on the research side.
And so that kind of combined with the very generalist capabilities of GPT-3 prompted us to make a more general research assistant. Then we spent a few months iterating on what even is a research assistant. So we would embed with different researchers. We built data labeling workflows in the beginning, kind of right off the bat. We built ways to find experts in a field and like ways to ask good research questions. So we just kind of iterated through a lot of workflows and no one else was really building at this time. And it was like very quick to just do some prompt engineering and see like what is a task that is at the intersection of what's technologically capable and like important for researchers. And we had like a very nondescript landing page. It said nothing. But somehow people were signing up and we had to sign a form that was like, why are you here? And everyone was like, I need help with literature review. And we're like, oh, literature review. That sounds so hard. I don't even know what that means. We're like, we don't want to work on it. But then eventually we were like, okay, everyone is saying literature review. It's overwhelmingly people want to-Swyx [00:19:02]: And all domains, not like medicine or physics or just all domains. Yeah.Jungwon [00:19:06]: And we also kind of personally knew literature review was hard. And if you look at the graphs for academic literature being published every single month, you guys know this in machine learning, it's like up into the right, like superhuman amounts of papers. So we're like, all right, let's just try it. I was really nervous, but Andreas was like, this is kind of like the right problem space to jump into, even if we don't know what we're doing. So my take was like, fine, this feels really scary, but let's just launch a feature every single week and double our user numbers every month. And if we can do that, we'll fail fast and we will find something. 
I was worried about like getting lost in the kind of academic white space. So the very first version was actually a weekend prototype that Andreas made. Do you want to explain how that worked?Andreas [00:19:44]: I mostly remember that it was really bad. The thing I remember is you entered a question and it would give you back a list of claims. So your question could be, I don't know, how does creatine affect cognition? It would give you back some claims that are to some extent based on papers, but they were often irrelevant. The papers were often irrelevant. And so we ended up soon just printing out a bunch of examples of results and putting them up on the wall so that we would kind of feel the constant shame of having such a bad product and would be incentivized to make it better. And I think over time it has gotten a lot better, but I think the initial version was like really very bad. Yeah.Jungwon [00:20:20]: But it was basically like a natural language summary of an abstract, like kind of a one sentence summary, and which we still have. And then as we learned kind of more about this systematic review workflow, we started expanding the capability so that you could extract a lot more data from the papers and do more with that.Swyx [00:20:33]: And were you using like embeddings and cosine similarity, that kind of stuff for retrieval, or was it keyword based?Andreas [00:20:40]: I think the very first version didn't even have its own search engine. I think the very first version probably used the Semantic Scholar API or something similar. And only later when we discovered that API is not very semantic, we then built our own search engine that has helped a lot.Swyx [00:20:58]: And then we're going to go into like more recent product stuff, but like, you know, I think you seem like the more sort of startup-oriented business person and you seem sort of more ideologically like interested in research, obviously, because of your PhD.
What kind of market sizing were you guys thinking? Right? Like, because you're here saying like, we have to double every month. And I'm like, I don't know how you make that conclusion from this, right? Especially also as a nonprofit at the time.Jungwon [00:21:22]: I mean, market size wise, I felt like in this space where so much was changing and it was very unclear what of today was actually going to be true tomorrow. We just like really rested a lot on very, very simple fundamental principles, which is like, if you can understand the truth, that is very economically beneficial and valuable. If you like know the truth.Swyx [00:21:42]: On principle.Jungwon [00:21:43]: Yeah. That's enough for you. Yeah. Research is the key to many breakthroughs that are very commercially valuable.Swyx [00:21:47]: Because my version of it is students are poor and they don't pay for anything. Right? But that's obviously not true. As you guys have found out. But you had to have some market insight for me to have believed that, but you skipped that.Andreas [00:21:58]: Yeah. I remember talking to VCs for our seed round. A lot of VCs were like, you know, researchers, they don't have any money. Why don't you build legal assistant? I think in some short sighted way, maybe that's true. But I think in the long run, R&D is such a big space of the economy. I think if you can substantially improve how quickly people find new discoveries or avoid controlled trials that don't go anywhere, I think that's just huge amounts of money. And there are a lot of questions obviously about between here and there. But I think as long as the fundamental principle is there, we were okay with that. And I guess we found some investors who also were. Yeah.Swyx [00:22:35]: Congrats. I mean, I'm sure we can cover the sort of flip later. I think you're about to start us on like GPT-3 and how that changed things for you. It's funny. I guess every major GPT version, you have some big insight. 
Yeah.Jungwon [00:22:48]: Yeah. I mean, what do you think?Andreas [00:22:51]: I think it's a little bit less true for us than for others, because we always believed that there will basically be human level machine work. And so it is definitely true that in practice for your product, as new models come out, your product starts working better, you can add some features that you couldn't add before. But I don't think we really ever had the moment where we were like, oh, wow, that is super unanticipated. We need to do something entirely different now from what was on the roadmap.Jungwon [00:23:21]: I think GPT-3 was a big change because it kind of said, oh, now is the time that we can use AI to build these tools. And then GPT-4 was maybe a little bit more of an extension of GPT-3. GPT-3 over GPT-2 was like qualitative level shift. And then GPT-4 was like, okay, great. Now it's like more accurate. We're more accurate on these things. We can answer harder questions. But the shape of the product had already taken place by that time.Swyx [00:23:44]: I kind of want to ask you about this sort of pivot that you've made. But I guess that was just a way to sell what you were doing, which is you're adding extra features on grouping by concepts. The GPT-4 pivot, quote unquote pivot that you-Jungwon [00:23:55]: Oh, yeah, yeah, exactly. Right, right, right. Yeah. Yeah. When we launched this workflow, now that GPT-4 was available, basically Elicit was at a place where we have very tabular interfaces. So given a table of papers, you can extract data across all the tables. But you kind of want to take the analysis a step further. Sometimes what you'd care about is not having a list of papers, but a list of arguments, a list of effects, a list of interventions, a list of techniques.
And so that's one of the things we're working on is now that you've extracted this information in a more structured way, can you pivot it or group by whatever the information that you extracted to have more insight-first information, still supported by the academic literature?Swyx [00:24:33]: Yeah, that was a big revelation when I saw it. Basically, I think I'm very just impressed by how first principles, your ideas around what the workflow is. And I think that's why you're not as reliant on like the LLM improving, because it's actually just about improving the workflow that you would recommend to people. Today we might call it an agent, I don't know, but you're not relying on the LLM to drive it. It's relying on this is the way that Elicit does research. And this is what we think is most effective based on talking to our users.Jungwon [00:25:01]: The problem space is still huge. Like if it's like this big, we are all still operating at this tiny part, bit of it. So I think about this a lot in the context of moats, people are like, oh, what's your moat? What happens if GPT-5 comes out? It's like, if GPT-5 comes out, there's still like all of this other space that we can go into. So I think being really obsessed with the problem, which is very, very big, has helped us like stay robust and just kind of directly incorporate model improvements and they keep going.Swyx [00:25:26]: And then I first encountered you guys with Charlie, you can tell us about that project. Basically, yeah. Like how much did cost become a concern as you're working more and more with OpenAI? How do you manage that relationship?Jungwon [00:25:37]: Let me talk about who Charlie is. And then you can talk about the tech, because Charlie is a special character. So Charlie, when we found him was, had just finished his freshman year at the University of Warwick. And I think he had heard about us on some Discord. And then he applied and we were like, wow, who is this freshman?
And then we just saw that he had done so many incredible side projects. And we were actually on a team retreat in Barcelona visiting our head of engineering at that time. And everyone was talking about this wonder kid or like this kid. And then on our take home project, he had done like the best of anyone to that point. And so people were just like so excited to hire him. So we hired him as an intern and they were like, Charlie, what if you just dropped out of school? And so then we convinced him to take a year off. And he was just incredibly productive. And I think the thing you're referring to is at the start of 2023, Anthropic kind of launched their constitutional AI paper. And within a few days, I think four days, he had basically implemented that in production. And then we had it in app a week or so after that. And he has since kind of contributed to major improvements, like cutting costs down to a tenth of what they were really large scale. But yeah, you can talk about the technical stuff. Yeah.Andreas [00:26:39]: On the constitutional AI project, this was for abstract summarization, where in Elicit, if you run a query, it'll return papers to you, and then it will summarize each paper with respect to your query for you on the fly. And that's a really important part of Elicit because Elicit does it so much. If you run a few searches, it'll have done it a few hundred times for you. And so we cared a lot about this both being fast, cheap, and also very low on hallucination. I think if Elicit hallucinates something about the abstract, that's really not good. And so what Charlie did in that project was create a constitution that expressed what are the attributes of a good summary? Everything in the summary is reflected in the actual abstract, and it's like very concise, et cetera, et cetera. And then used RLHF with a model that was trained on the constitution to basically fine tune a better summarizer on an open source model. Yeah.
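The constitutional critique-and-revise loop Andreas describes might be sketched roughly as follows. The principles and prompt wording here are illustrative assumptions, not Elicit's actual constitution, and in Anthropic's recipe the draft/revision pairs produced this way become fine-tuning data (followed by RL from AI feedback) rather than being served directly:

```python
# A minimal sketch of a constitutional critique-and-revise loop for abstract
# summarization. `generate` stands in for any LLM call; the constitution is a
# hypothetical example, not the one used in production.

CONSTITUTION = [
    "Every claim in the summary is supported by the abstract.",
    "The summary is concise: at most two sentences.",
]

def constitutional_summarize(abstract: str, query: str, generate) -> str:
    # Draft a summary of the abstract with respect to the user's query.
    summary = generate(
        f"Summarize this abstract with respect to the query '{query}':\n{abstract}"
    )
    # Critique the draft against each principle; revise when a critique is raised.
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nSummary: {summary}\n"
            "Reply 'OK' if the principle holds, otherwise describe the violation."
        )
        if critique.strip() != "OK":
            summary = generate(
                f"Revise the summary to fix this critique: {critique}\n"
                f"Summary: {summary}\nAbstract: {abstract}"
            )
    return summary
```

Collecting the (draft, revision) pairs from a loop like this is what lets you fine-tune a small open-source model to be faithful by default, which is how the serving path can end up both cheaper and lower on hallucination.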
I think that might still be in use.Jungwon [00:27:34]: Yeah. Yeah, definitely. Yeah. I think at the time, the models hadn't been trained at all to be faithful to a text. So they were just generating. So then when you ask them a question, they tried too hard to answer the question and didn't try hard enough to answer the question given the text or answer what the text said about the question. So we had to basically teach the models to do that specific task.Swyx [00:27:54]: How do you monitor the ongoing performance of your models? Not to get too LLM-opsy, but you are one of the larger, more well-known operations doing NLP at scale. I guess effectively, you have to monitor these things and nobody has a good answer that I talk to.Andreas [00:28:10]: I don't think we have a good answer yet. I think the answers are actually a little bit clearer on the just kind of basic robustness side of where you can import ideas from normal software engineering and normal kind of DevOps. You're like, well, you need to monitor kind of latencies and response times and uptime and whatnot.Swyx [00:28:27]: I think when we say performance, it's more about hallucination rate, isn't it?Andreas [00:28:30]: And then things like hallucination rate where I think there, the really important thing is training time. So we care a lot about having our own internal benchmarks for model development that reflect the distribution of user queries so that we can know ahead of time how well is the model going to perform on different types of tasks. So the tasks being summarization, question answering, given a paper, ranking. And for each of those, we want to know what's the distribution of things the model is going to see so that we can have well-calibrated predictions on how well the model is going to do in production. And I think, yeah, there's some chance that there's distribution shift and actually the things users enter are going to be different. 
But I think that's much less important than getting the kind of training right and having very high quality, well-vetted data sets at training time.Jungwon [00:29:18]: I think we also end up effectively monitoring by trying to evaluate new models as they come out. And so that kind of prompts us to go through our eval suite every couple of months. And every time a new model comes out, we have to see how is this performing relative to production and what we currently have.Swyx [00:29:32]: Yeah. I mean, since we're on this topic, any new models that have really caught your eye this year?Jungwon [00:29:37]: Like Claude came out with a bunch. Yeah. I think Claude is pretty, I think the team's pretty excited about Claude. Yeah.Andreas [00:29:41]: Specifically, Claude Haiku is like a good point on the kind of Pareto frontier. It's neither the cheapest model, nor is it the most accurate, most high quality model, but it's just like a really good trade-off between cost and accuracy.Swyx [00:29:57]: You apparently have to 10-shot it to make it good. I tried using Haiku for summarization, but zero-shot was not great. Then they were like, you know, it's a skill issue, you have to try harder.Jungwon [00:30:07]: I think GPT-4 unlocked tables for us, processing data from tables, which was huge. GPT-4 Vision.Andreas [00:30:13]: Yeah.Swyx [00:30:14]: Yeah. Did you try like Fuyu? I guess you can't try Fuyu because it's non-commercial. That's the Adept model.Jungwon [00:30:19]: Yeah.Swyx [00:30:20]: We haven't tried that one. Yeah. Yeah. Yeah. But Claude is multimodal as well. Yeah. I think the interesting insight that we got from talking to David Luan, who is CEO of Adept, is that multimodality has effectively two different flavors. One is we recognize images from a camera in the outside natural world. And actually the more important multimodality for knowledge work is screenshots and PDFs and charts and graphs.
So we need a new term for that kind of multimodality.Andreas [00:30:45]: But is the claim that current models are good at one or the other? Yeah.Swyx [00:30:50]: They're over-indexed because the history of computer vision is COCO, right? So now we're like, oh, actually, you know, screens are more important, OCR, handwriting. You mentioned a lot of like closed model lab stuff, and then you also have like this open source model fine tuning stuff. Like what is your workload now between closed and open? It's a good question.Andreas [00:31:07]: I think- Is it half and half? It's a-Swyx [00:31:10]: Is that even a relevant question or not? Is this a nonsensical question?Andreas [00:31:13]: It depends a little bit on like how you index, whether you index by like compute cost or number of queries. I'd say like in terms of number of queries, it's maybe similar. In terms of like cost and compute, I think the closed models make up more of the budget since the main cases where you want to use closed models are cases where they're just smarter, where no existing open source models are quite smart enough.Jungwon [00:31:35]: Yeah. Yeah.Alessio [00:31:37]: We have a lot of interesting technical questions to go in, but just to wrap the kind of like UX evolution, now you have the notebooks. We talked a lot about how chatbots are not the final frontier, you know? How did you decide to get into notebooks, which is a very iterative kind of like interactive interface and yeah, maybe learnings from that.Jungwon [00:31:56]: Yeah. This is actually our fourth time trying to make this work. Okay. I think the first time was probably in early 2021. I think because we've always been obsessed with this idea of task decomposition and like branching, we always wanted a tool that could be kind of unbounded where you could keep going, could do a lot of branching where you could kind of apply language model operations or computations on other tasks.
So in 2021, we had this thing called composite tasks where you could use GPT-3 to brainstorm a bunch of research questions and then take each research question and decompose those further into sub questions. This kind of, again, that like task decomposition tree type thing was always very exciting to us, but that was like, it didn't work and it was kind of overwhelming. Then at the end of 22, I think we tried again and at that point we were thinking, okay, we've done a lot with this literature review thing. We also want to start helping with kind of adjacent domains and different workflows. Like we want to help more with machine learning. What does that look like? And as we were thinking about it, we're like, well, there are so many research workflows. How do we not just build three new workflows into Elicit, but make Elicit really generic to lots of workflows? What is like a generic composable system with nice abstractions that can like scale to all these workflows? So we like iterated on that a bunch and then didn't quite narrow the problem space enough or like quite get to what we wanted. And then I think it was at the beginning of 2023 where we're like, wow, computational notebooks kind of enable this, where they have a lot of flexibility, but kind of robust primitives such that you can extend the workflow and it's not limited. It's not like you ask a query, you get an answer, you're done. You can just constantly keep building on top of that. And each little step seems like a really good unit of work for the language model. And also there was just like really helpful to have a bit more preexisting work to emulate. Yeah, that's kind of how we ended up at computational notebooks for Elicit.Andreas [00:33:44]: Maybe one thing that's worth making explicit is the difference between computational notebooks and chat, because on the surface, they seem pretty similar. It's kind of this iterative interaction where you add stuff. 
In both cases, you have a back and forth between you enter stuff and then you get some output and then you enter stuff. But the important difference in our minds is with notebooks, you can define a process. So in data science, you can be like, here's like my data analysis process that takes in a CSV and then does some extraction and then generates a figure at the end. And you can prototype it using a small CSV and then you can run it over a much larger CSV later. And similarly, the vision for notebooks in our case is to not make it this like one-off chat interaction, but to allow you to then say, if you start and first you're like, okay, let me just analyze a few papers and see, do I get to the correct conclusions for those few papers? Can I then later go back and say, now let me run this over 10,000 papers now that I've debugged the process using a few papers. And that's an interaction that doesn't fit quite as well into the chat framework because that's more for kind of quick back and forth interaction.Alessio [00:34:49]: Do you think in notebooks, it's kind of like structured, editable chain of thought, basically step by step? Like, is that kind of where you see this going? And then are people going to reuse notebooks as like templates? And maybe in traditional notebooks, it's like cookbooks, right? You share a cookbook, you can start from there. Is this similar in Elicit?Andreas [00:35:06]: Yeah, that's exactly right. So that's our hope that people will build templates, share them with other people. I think chain of thought is maybe still like kind of one level lower on the abstraction hierarchy than we would think of notebooks. I think we'll probably want to think about more semantic pieces like a building block is more like a paper search or an extraction or a list of concepts. And then the model's detailed reasoning will probably often be one level down.
You always want to be able to see it, but you don't always want it to be front and center.Alessio [00:35:36]: Yeah, what's the difference between a notebook and an agent? Since everybody always asks me, what's an agent? Like how do you think about where the line is?Andreas [00:35:44]: Yeah, it's an interesting question. In the notebook world, I would generally think of the human as the agent in the first iteration. So you have the notebook and the human kind of adds little action steps. And then the next point on this kind of progress gradient is, okay, now you can use language models to predict which action would you take as a human. And at some point, you're probably going to be very good at this, you'll be like, okay, in some cases I can, with 99.9% accuracy, predict what you do. And then you might as well just execute it, like why wait for the human? And eventually, as you get better at this, that will just look more and more like agents taking actions as opposed to you doing the thing. I think templates are a specific case of this where you're like, okay, well, there's just particular sequences of actions that you often want to chunk and have available as primitives, just like in normal programming. And those, you can view them as action sequences of agents, or you can view them as more normal programming language abstraction thing. And I think those are two valid views. Yeah.Alessio [00:36:40]: How do you see this change as, like you said, the models get better and you need less and less human actual interfacing with the model, you just get the results? Like how does the UX and the way people perceive it change?Jungwon [00:36:52]: Yeah, I think this kind of interaction paradigms for evaluation is not really something the internet has encountered yet, because up to now, the internet has all been about getting data and work from people. 
So increasingly, I really want kind of evaluation, both from an interface perspective and from like a technical perspective and operation perspective to be a superpower for Elicit, because I think over time, models will do more and more of the work, and people will have to do more and more of the evaluation. So I think, yeah, in terms of the interface, some of the things we have today, you know, for every kind of language model generation, there's some citation back, and we kind of try to highlight the ground truth in the paper that is most relevant to whatever Elicit said, and make it super easy so that you can click on it and quickly see in context and validate whether the text actually supports the answer that Elicit gave. So I think we'd probably want to scale things up like that, like the ability to kind of spot check the model's work super quickly, scale up interfaces like that. And-Swyx [00:37:44]: Who would spot check? The user?Jungwon [00:37:46]: Yeah, to start, it would be the user. One of the other things we do is also kind of flag the model's uncertainty. So we have models report out, how confident are you that this was the sample size of this study? The model's not sure, we throw a flag. And so the user knows to prioritize checking that. So again, we can kind of scale that up. So when the model's like, well, I searched this on Google, I'm not sure if that was the right thing. I have an uncertainty flag, and the user can go and be like, oh, okay, that was actually the right thing to do or not.Swyx [00:38:10]: I've tried to do uncertainty readings from models. I don't know if you have this live. You do? Yeah. Because I just didn't find them reliable because they just hallucinated their own uncertainty. I would love to base it on log probs or something more native within the model rather than generated. But okay, it sounds like they scale properly for you. Yeah.Jungwon [00:38:30]: We found it to be pretty calibrated. 
It varies on the model.Andreas [00:38:32]: I think in some cases, we also use two different models for the uncertainty estimates than for the question answering. So one model would say, here's my chain of thought, here's my answer. And then a different type of model. Let's say the first model is Llama, and let's say the second model is GPT-3.5. And then the second model just looks over the results and is like, okay, how confident are you in this? And I think sometimes using a different model can be better than using the same model. Yeah.Swyx [00:38:58]: On the topic of models, evaluating models, obviously you can do that all day long. What's your budget? Because your queries fan out a lot. And then you have models evaluating models. One person typing in a question can lead to a thousand calls.Andreas [00:39:11]: It depends on the project. So if the project is basically a systematic review that otherwise human research assistants would do, then the project is basically a human equivalent spend. And the spend can get quite large for those projects. I don't know, let's say $100,000. In those cases, you're happier to spend compute then in the kind of shallow search case where someone just enters a question because, I don't know, maybe I heard about creatine. What's it about? Probably don't want to spend a lot of compute on that. This sort of being able to invest more or less compute into getting more or less accurate answers is I think one of the core things we care about. And that I think is currently undervalued in the AI space. I think currently you can choose which model you want and you can sometimes, I don't know, you'll tip it and it'll try harder or you can try various things to get it to work harder. But you don't have great ways of converting willingness to spend into better answers. 
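The two-model verification pattern Andreas mentions above, where one model answers and a second, different model scores confidence so that low scores raise a flag for the user to spot-check, might look like this in outline. The prompt wording and the 0-to-1 confidence format are assumptions; `answerer` and `verifier` stand in for calls to two different LLMs:

```python
def answer_with_flag(question: str, context: str, answerer, verifier,
                     threshold: float = 0.7) -> dict:
    # First model reasons over the context and produces an answer.
    answer = answerer(
        f"Context: {context}\nQuestion: {question}\n"
        "Think step by step, then give your answer."
    )
    # A second, different model scores its confidence in that answer.
    raw = verifier(
        f"Question: {question}\nProposed answer: {answer}\n"
        "How confident are you this is correct? Reply with a number in [0, 1]."
    )
    confidence = float(raw)
    # Low confidence is surfaced as a flag so the user knows what to check first.
    return {"answer": answer, "confidence": confidence,
            "flagged": confidence < threshold}
```

Using a different model family for the verifier than for the answerer is the design choice discussed here: a model grading its own work tends to be overconfident, while an independent grader can be surprisingly well calibrated.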
And we really want to build a product that has this sort of unbounded flavor where if you care about it a lot, you should be able to get really high quality answers, really double checked in every way.Alessio [00:40:14]: And you have a credits-based pricing. So unlike most products, it's not a fixed monthly fee.Jungwon [00:40:19]: Right, exactly. So some of the higher costs are tiered. So for most casual users, they'll just get the abstract summary, which is kind of an open source model. Then you can add more columns, which have more extractions and these uncertainty features. And then you can also add the same columns in high accuracy mode, which also parses the table. So we kind of stack the complexity on the calls.Swyx [00:40:39]: You know, the fun thing you can do with a credit system, which is data for data, basically you can give people more credits if they give data back to you. I don't know if you've already done that. We've thought about something like this.Jungwon [00:40:49]: It's like if you don't have money, but you have time, how do you exchange that?Swyx [00:40:54]: It's a fair trade.Jungwon [00:40:55]: I think it's interesting. We haven't quite operationalized it. And then, you know, there's been some kind of like adverse selection. Like, you know, for example, it would be really valuable to get feedback on our model. So maybe if you were willing to give more robust feedback on our results, we could give you credits or something like that. But then there's kind of this, will people take it seriously? And you want the good people. Exactly.Swyx [00:41:11]: Can you tell who are the good people? Not right now.Jungwon [00:41:13]: But yeah, maybe at the point where we can, we can offer it. We can offer it up to them.Swyx [00:41:16]: The perplexity of questions asked, you know, if it's higher perplexity, these are the smarter people.Jungwon [00:41:20]:
Yeah, maybe.Andreas [00:41:23]: If you put typos in your queries, you're not going to get off the stage.Swyx [00:41:28]: Negative social credit. It's very topical right now to think about the threat of long context windows. All these models that we're talking about these days, all like a million token plus. Is that relevant for you? Can you make use of that? Is that just prohibitively expensive because you're just paying for all those tokens or you're just doing RAG?Andreas [00:41:44]: It's definitely relevant. And when we think about search, as many people do, we think about kind of a staged pipeline of retrieval where first you use a semantic search database with embeddings, get like the, in our case, maybe 400 or so most relevant papers. And then you still need to rank those. And I think at that point it becomes pretty interesting to use larger models. So specifically in the past, I think a lot of ranking was kind of per item ranking where you would score each individual item, maybe using increasingly expensive scoring methods and then rank based on the scores. But I think list-wise re-ranking where you have a model that can see all the elements is a lot more powerful because often you can only really tell how good a thing is in comparison to other things and what things should come first. It really depends on like, well, what other things that are available, maybe you even care about diversity in your results. You don't want to show 10 very similar papers as the first 10 results. So I think long context models are quite interesting there. And especially for our case where we care more about power users who are perhaps a little bit more willing to wait a little bit longer to get higher quality results relative to people who just quickly check out things because why not? And I think being able to spend more on longer contexts is quite valuable.Jungwon [00:42:55]: Yeah.
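The staged pipeline Andreas outlines, cheap embedding retrieval narrowing a huge corpus to a few hundred candidates, then a long-context model that sees the whole list at once and orders it, could be sketched like this. The `listwise_rerank` callable stands in for the LLM call, and cosine similarity is an assumption about the first stage:

```python
import numpy as np

def staged_retrieve(query_vec, doc_vecs, docs, listwise_rerank, k=400, top_n=10):
    # Stage 1: embedding (cosine) similarity narrows the corpus to ~k candidates.
    sims = (doc_vecs @ query_vec) / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    candidates = [docs[i] for i in np.argsort(-sims)[:k]]
    # Stage 2: a long-context model sees the whole candidate list at once and
    # returns an ordering, so it can judge items relative to each other
    # (and, e.g., demote near-duplicates for diversity).
    order = listwise_rerank(candidates)
    return [candidates[i] for i in order[:top_n]]
```

The contrast with per-item scoring is that the reranker here receives the full list in one call, which is exactly what long context windows make affordable.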
I think one thing the longer context models changed for us is maybe a focus from breaking down tasks to breaking down the evaluation. So before, you know, if we wanted to answer a question from the full text of a paper, we had to figure out how to chunk it and like find the relevant chunk and then answer based on that chunk. And the nice thing was then you knew kind of which chunk the model used to answer the question. So if you want to help the user track it, yeah, you can be like, well, this was the chunk that the model got. And now if you put the whole text in the paper, you have to like kind of find the chunk like more retroactively basically. And so you need kind of like a different set of abilities and obviously like a different technology to figure out. You still want to point the user to the supporting quotes in the text, but then the interaction is a little different.Swyx [00:43:38]: You like scan through and find some ROUGE score floor.Andreas [00:43:41]: I think there's an interesting space of almost research problems here because you would ideally make causal claims like if this hadn't been in the text, the model wouldn't have said this thing. And maybe you can do expensive approximations to that where like, I don't know, you just throw out a chunk of the paper and re-answer and see what happens. But hopefully there are better ways of doing that where you just get that kind of counterfactual information for free from the model.Alessio [00:44:06]: Do you think at all about the cost of maintaining RAG versus just putting more tokens in the window? I think in software development, a lot of times people buy developer productivity things so that we don't have to worry about it. Context window is kind of the same, right? You have to maintain chunking and like RAG retrieval and like re-ranking and all of this versus I just shove everything into the context and like it costs a little more, but at least I don't have to do all of that.
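The "expensive approximation" Andreas mentions, throwing out a chunk and re-answering to see whether the answer changes, is essentially leave-one-out attribution. A rough sketch, with a generic `answer_fn` standing in for the model call:

```python
def attribute_by_ablation(chunks, question, answer_fn):
    # Baseline answer with the full document present.
    baseline = answer_fn(chunks, question)
    supporting = []
    for i in range(len(chunks)):
        # Re-answer with chunk i removed; if the answer changes, chunk i
        # causally supported the baseline answer.
        ablated = chunks[:i] + chunks[i + 1:]
        if answer_fn(ablated, question) != baseline:
            supporting.append(i)
    return baseline, supporting
```

This costs one extra model call per chunk, which is why it is framed as expensive; the open research question raised here is getting the same counterfactual signal more cheaply from the model itself.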
Is that something you thought about?Jungwon [00:44:31]: I think we still like hit up against context limits enough that it's not really, do we still want to keep this RAG around? It's like we do still need it for the scale of the work that we're doing, yeah.Andreas [00:44:41]: And I think there are different kinds of maintainability. In one sense, I think you're right that the throw everything into the context window thing is easier to maintain because you just can swap out a model. In another sense, if things go wrong, it's harder to debug where like, if you know, here's the process that we go through to go from 200 million papers to an answer. And there are like little steps and you understand, okay, this is the step that finds the relevant paragraph or whatever it may be. You'll know which step breaks if the answers are bad, whereas if it's just like a new model version came out and now it suddenly doesn't find your needle in a haystack anymore, then you're like, okay, what can you do? You're kind of at a loss.Alessio [00:45:21]: Let's talk a bit about, yeah, needle in a haystack and like maybe the opposite of it, which is like hard grounding. I don't know if that's like the best name to think about it, but I was using one of these chat-with-your-documents features and I put the AMD MI300 specs and the new Blackwell chips from NVIDIA and I was asking questions and does the AMD chip support NVLink? And the response was like, oh, it doesn't say in the specs. But if you ask GPT-4 without the docs, it would tell you no, because NVLink, it's an NVIDIA technology.Swyx [00:45:49]: It just says in the thing.Alessio [00:45:53]: How do you think about that? Does using the context sometimes suppress the knowledge that the model has?Andreas [00:45:57]: It really depends on the task because I think sometimes that is exactly what you want. So imagine you're a researcher, you're writing the background section of your paper and you're trying to describe what these other papers say.
You really don't want extra information to be introduced there. In other cases where you're just trying to figure out the truth and you're giving the documents because you think they will help the model figure out what the truth is. I think you do want, if the model has a hunch that there might be something that's not in the papers, you do want to surface that. I think ideally you still don't want the model to just tell you, probably the ideal thing looks a bit more like agent control where the model can issue a query that then is intended to surface documents that substantiate its hunch. That's maybe a reasonable middle ground between model just telling you and model being fully limited to the papers you give it.Jungwon [00:46:44]: Yeah, I would say it's, they're just kind of different tasks right now. And the task that Elicit is mostly focused on is what do these papers say? But there's another task which is like, just give me the best possible answer and that give me the best possible answer sometimes depends on what do these papers say, but it can also depend on other stuff that's not in the papers. So ideally we can do both and then kind of do this overall task for you more going forward.Alessio [00:47:08]: We see a lot of details, but just to zoom back out a little bit, what are maybe the most underrated features of Elicit and what is one thing that maybe the users surprise you the most by using it?Jungwon [00:47:19]: I think the most powerful feature of Elicit is the ability to extract, add columns to this table, which effectively extracts data from all of your papers at once. It's well used, but there are kind of many different extensions of that that I think users are still discovering. So one is we let you give a description of the column. We let you give instructions of a column. We let you create custom columns. So we have like 30 plus predefined fields that users can extract, like what were the methods? What were the main findings? 
How many people were studied? And we actually show you basically the prompts that we're using to
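The add-a-column extraction Jungwon describes, one instruction applied across every paper at once, boils down to a loop like this. It's an illustrative sketch only: the prompt wording and `ask_model` are assumptions, not Elicit's actual prompts:

```python
def extract_column(papers, column_name, instructions, ask_model):
    """Run one extraction prompt per paper to fill a table column.

    ask_model(prompt) -> str is a placeholder for the underlying LLM call.
    """
    rows = {}
    for paper_id, text in papers.items():
        prompt = (
            f"From the paper below, extract: {column_name}.\n"
            f"Instructions: {instructions}\n"
            f"If the paper does not say, answer 'not reported'.\n\n{text}"
        )
        rows[paper_id] = ask_model(prompt)
    return rows

# Toy model that just pattern-matches, to show the shape of the output.
def toy_model(prompt):
    return "n=120" if "n=120" in prompt else "not reported"

papers = {"smith2021": "... we enrolled n=120 participants ...",
          "lee2022":   "... a qualitative interview study ..."}
print(extract_column(papers, "sample size",
                     "Report the number of participants.", toy_model))
# {'smith2021': 'n=120', 'lee2022': 'not reported'}
```

The interesting product detail is that the column name and instructions are user-editable, so each custom column is effectively a user-authored extraction prompt fanned out over the whole corpus.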

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Why Google failed to make GPT-3 + why Multimodal Agents are the path to AGI — with David Luan of Adept

Mar 22, 2024 • 41:52

Our next SF event is AI UX 2024 - let's see the new frontier for UX since last year! Last call: we are recording a preview of the AI Engineer World's Fair with swyx and Ben Dunphy, send any questions about Speaker CFPs and Sponsor Guides you have!Alessio is now hiring engineers for a new startup he is incubating at Decibel: Ideal candidate is an “ex-technical co-founder type”. Reach out to him for more!David Luan has been at the center of the modern AI revolution: he was the ~30th hire at OpenAI, he led Google's LLM efforts and co-led Google Brain, and then started Adept in 2022, one of the leading companies in the AI agents space. In today's episode, we asked David for some war stories from his time in early OpenAI (including working with Alec Radford ahead of the GPT-2 demo with Sam Altman, that resulted in Microsoft's initial $1b investment), and how Adept is building agents that can “do anything a human does on a computer" — his definition of useful AGI.Why Google *couldn't* make GPT-3While we wanted to discuss Adept, we couldn't talk to a former VP Eng of OpenAI and former LLM tech lead at Google Brain and not ask about the elephant in the room. It's often asked how Google had such a huge lead in 2017 with Vaswani et al creating the Transformer and Noam Shazeer predicting trillion-parameter models and yet it was David's team at OpenAI who ended up making GPT 1/2/3. David has some interesting answers:“So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized…what they (should) have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too…You know, every day we were scaling up GPT-3, I would wake up and just be stressed. 
And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing. He's got this decoder only transformer that's probably going to get there before we do. And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the brain leads, you know, it became really clear why. At the time, there was a thing called the Brain Credit Marketplace. Everyone's assigned a credit. So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused.”Cloning HGI for AGIHuman intelligence got to where it is today through evolution. Some argue that to get to AGI, we will approximate all the “FLOPs” that went into that process, an approach most famously mapped out by Ajeya Cotra's Biological Anchors report:The early days of OpenAI were very reinforcement learning-driven with the Dota project, but that's a very inefficient way for these models to re-learn everything. (Kanjun from Imbue shared similar ideas in her episode).David argues that there's a shortcut. We can bootstrap from existing intelligence.“Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI.
And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there… I think we are ignoring the fact that you have a giant shortcut, which is you can behaviorally clone everything humans already know. And that's what we solved with LLMs!”LLMs today basically model intelligence using all (good!) written knowledge (see our Datasets 101 episode), and have now expanded to non-verbal knowledge (see our HuggingFace episode on multimodality). The SOTA self-supervised pre-training process is surprisingly data-efficient in taking large amounts of unstructured data, and approximating reasoning without overfitting.But how do you cross the gap from the LLMs of today to building the AGI we all want? This is why David & friends left to start Adept.“We believe the clearest framing of general intelligence is a system that can do anything a human can do in front of a computer. A foundation model for actions, trained to use every software tool, API, and webapp that exists, is a practical path to this ambitious goal” — ACT-1 BlogpostCritical Path: Abstraction with ReliabilityThe AGI dream is fully autonomous agents, but there are levels to autonomy that we are comfortable giving our agents, based on how reliable they are. In David's word choice, we always want higher levels of “abstractions” (aka autonomy), but our need for “reliability” is the practical limit on how high of an abstraction we can use.“The critical path for Adept is we want to build agents that can do a higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with really high reliability standard, but are continuing pushing a level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flow. 
That's the critical path for the company. Everything we do is in service of that.”We saw how Adept thinks about different levels of abstraction at the 2023 Summit:The highest abstraction is the “AI Employee”, but we'll get there with “AI enabled employees”. Alessio recently gave a talk about the future of work with “services as software” at this week's Nvidia GTC (slides).No APIsUnlike a lot of large research labs, Adept's framing of AGI as "being able to use your computer like a human" carries with it a useful environmental constraint:“Having a humanoid robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so then at many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data that you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path (to economic value).”This realization and conviction means that multimodal models are the way to go. Instead of using function calling to call APIs to build agents, which is what OpenAI and most of the open LLM industry have done to date, Adept wants to “drive by vision”, (aka see the screen as a human sees it) and pinpoint where to click and type as a human does.
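At its simplest, "driving by vision" is a screenshot-in, action-out loop. The sketch below is purely illustrative: Adept hasn't published its agent internals, so the `model` callable, the prompt, and the JSON action schema are all hypothetical stand-ins:

```python
import json

def agent_step(model, screenshot, goal, history):
    """One step of a screen-driven agent: a multimodal model sees the raw
    pixels plus the goal and emits a UI action as JSON. The action schema
    ("click"/"type"/"done") is an illustrative assumption, not Adept's."""
    prompt = (f"Goal: {goal}\nSteps so far: {history}\n"
              'Reply with JSON, e.g. {"action": "click", "x": ..., "y": ...}')
    return json.loads(model(image=screenshot, prompt=prompt))

# Toy stand-in model that always clicks a (made-up) submit button.
def toy_model(image, prompt):
    return '{"action": "click", "x": 412, "y": 630}'

action = agent_step(toy_model, b"<png bytes>", "File the expense report", [])
print(action)  # {'action': 'click', 'x': 412, 'y': 630}
```

The point of the loop is that nothing here assumes the target app has an API: the only interface is pixels in, clicks and keystrokes out, which is also why reliability of each emitted action matters so much.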
No APIs needed, because most software doesn't expose APIs.Extra context for readers: You can see the DeepMind SIMA model in the same light: One system that learned to play a diverse set of games (instead of one dedicated model per game) using only pixel inputs and keyboard-and-mouse action outputs!The OpenInterpreter team is working on a “Computer API” that also does the same.To do this, Adept had to double down on a special kind of multimodality for knowledge work:“A giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents……I think one big hangover of the primarily academic focus for multimodal models is most multimodal models are primarily trained on like natural images, cat and dog photos, stuff that's come out of the camera… (but) where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs. And so if that's what it is, what do you need to train? I need to train on like charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that.”With this context, you can now understand the full path of Adept's public releases:* ACT-1 (Sept 2022): a large Transformers model optimized for browser interactions. It has a custom rendering of the browser viewport that allows it to better understand it and take actions.* Persimmon-8B (Sept 2023): a permissive open LLM (weights and code here)* Fuyu-8B (Oct 2023): a small version of the multimodal model that powers Adept.
Vanilla decoder-only transformer with no specialized image encoder, which allows it to handle input images of varying resolutions without downsampling.* Adept Experiments (Nov 2023): A public tool to build automations in the browser. This is powered by Adept's core technology but it's just a piece of their enterprise platform. They use it as a way to try various design ideas.* Fuyu Heavy (Jan 2024) - a new multimodal model designed specifically for digital agents and the world's third-most-capable multimodal model (beating Gemini Pro on MMMU, AI2D, and ChartQA), “behind only GPT4-V and Gemini Ultra, which are 10-20 times bigger”The Fuyu-8B post in particular exhibits a great number of examples on knowledge work multimodality:Why Adept is NOT a Research LabWith OpenAI now worth >$90b and Anthropic >$18b, it is tempting to conclude that the AI startup metagame is to build a large research lab, and attract the brightest minds and highest capital to build AGI. Our past guests (see the Humanloop episode) and Kanjun (from Imbue) combined to ask the most challenging questions of the pod - with David/Adept's deep research pedigree from Google Brain and OpenAI, why is Adept not building more general foundation models (like Persimmon) and playing the academic benchmarks game? Why is Adept so focused on commercial agents instead?“I feel super good that we're doing foundation models in service of agents and all of the reward within Adept is flowing from “Can we make a better agent”…… I think pure play foundation model companies are just going to be pinched by how good the next couple of (Meta Llama models) are going to be… And then seeing the really big players put ridiculous amounts of compute behind just training these base foundation models, I think is going to commoditize a lot of the regular LLMs and soon regular multimodal models.
So I feel really good that we're just focused on agents.”and the commercial grounding is his answer to Kanjun too (whom we also asked the inverse question to compare with Adept):“… the second reason I work at Adept is if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think the examples for why that's true is for example, our evaluations are not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them. These are the ones they want us to, we can't do them at all. We've turned those into evals.. I think that's a degree of practicality that really helps.”And his customers seem pretty happy, because David didn't need to come on to do a sales pitch:David: “One of the things we haven't shared before is we're completely sold out for Q1.”Swyx: “Sold out of what?”David: “Sold out of bandwidth to onboard more customers.”Well, that's a great problem to have.Show Notes* David Luan* Dextro at Data Driven NYC (2015)* Adept* ACT-1* Persimmon-8B* Adept Experiments* Fuyu-8B* $350M Series B announcement* Amelia Wattenberger talk at AI Engineer Summit* FigureChapters* [00:00:00] Introductions* [00:01:14] Being employee #30 at OpenAI and its early days* [00:13:38] What is Adept and how do you define AGI?* [00:21:00] Adept's critical path and research directions* [00:26:23] How AI agents should interact with software and impact product development* [00:30:37] Analogies between AI agents and self-driving car development* [00:32:42] Balancing reliability, cost, speed and generality in AI agents* [00:37:30] Potential of foundation models for robotics* [00:39:22] Core research questions and reasons to work at AdeptTranscriptsAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. 
This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:15]: Hey, and today we have David Luan, CEO, co-founder of Adept in the studio. Welcome.David [00:00:20]: Yeah, thanks for having me.Swyx [00:00:21]: Been a while in the works. I've met you socially at one of those VC events and you said that you were interested in coming on and glad we finally were able to make this happen.David: Yeah, happy to be part of it.Swyx: So we like to introduce the speaker and then also just like have you talk a little bit about like what's not on your LinkedIn, what people should just generally know about you. You started a company in college, which was the first sort of real-time video detection and classification API, that was Dextro, and that was your route to getting acquired into Axon where you're a director of AI. Then you were the 30th hire at OpenAI?David [00:00:53]: Yeah, 30, 35, something around there. Something like that.Swyx [00:00:56]: So you were VP of Eng for two to two and a half years, briefly served as tech lead of large models at Google, and then in 2022 started Adept. So that's the sort of brief CV. Is there anything else you like want to fill in the blanks or like people should know more about?
And then I led Google's LLM efforts, but also co-led Google Brain was one of the brain leads more broadly. You know, there's been a couple of different eras of AI research, right? If we count everything before 2012 as prehistory, which people hate it when I say that, kind of had this like you and your three best friends write a research paper that changes the world period from like 2012 to 2017. And I think the game changed in 2017 and like most labs didn't realize it, but we at OpenAI really did. I think in large part helped by like Ilya's constant beating of the drum that the world would be covered in data centers. And I think-Swyx [00:02:15]: It's causally neat.David [00:02:16]: Yeah. Well, like I think we had conviction in that, but it wasn't until we started seeing results that it became clear that that was where we had to go. But also part of it as well was for OpenAI, like when I first joined, I think one of the jobs that I had to do was how do I tell a differentiated vision for who we were technically compared to, you know, hey, we're just smaller Google Brain, or like you work at OpenAI if you live in SF and don't want to commute to Mountain View or don't want to live in London, right? That's like not enough to like hang your technical identity as a company. And so what we really did was, and I spent a lot of time pushing this, is just how do we get ourselves focused on a certain class of like giant swings and bets, right? Like how do you flip the script from you just do bottom-up research to more about how do you like leave some room for that, but really make it about like, what are the big scientific outcomes that you want to show? And then you just solve them at all costs, whether or not you care about novelty and all that stuff. And that became the dominant model for a couple of years, right? 
And then what's changed now is I think the number one driver of AI products over the next couple of years is going to be the deep co-design and co-evolution of product and users for feedback and actual technology. And I think labs that have every tool to go do that are going to do really well. And that's a big part of why I started Adept.Alessio [00:03:20]: You mentioned Dota, any memories thinking from like the switch from RL to Transformers at the time and kind of how the industry was evolving more in the LLM side and leaving behind some of the more agent simulation work?David [00:03:33]: Like zooming way out, I think agents are just absolutely the correct long-term direction, right? You just go define what AGI is, right? You're like, Hey, like, well, first off, actually, I don't love AGI definitions that involve human replacement because I don't think that's actually how it's going to happen. Even this definition of like, Hey, AGI is something that outperforms humans at economically valuable tasks is kind of an implicit view of the world about what's going to be the role of people. I think what I'm more interested in is like a definition of AGI that's oriented around like a model that can do anything a human can do on a computer. If you go think about that, which is like super tractable, then agent is just a natural consequence of that definition. And so what all the work we did on our own stuff like that got us was a really clear formulation. Like you have a goal and you want to maximize the goal, you want to maximize reward, right? And the natural LLM formulation doesn't come with that out of the box, right? I think that we as a field got a lot right by thinking about, Hey, how do we solve problems of that caliber? And then the thing we forgot is de novo RL is like a pretty terrible way to get there quickly. Why are we rediscovering all the knowledge about the world?
Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI. And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there. Right.Swyx [00:04:44]: The biological basis theory. Right.David [00:04:46]: So I think we are ignoring the fact that you have a giant shortcut, which is you can behaviorally clone everything humans already know. And that's what we solved with LLMs. We've solved behaviorally cloning everything that humans already know. Right. So like today, maybe LLMs are like behaviorally cloning every word that gets written on the internet. In the future, the multimodal models are becoming more of a thing where they're behaviorally cloning the visual world. But really, what we're just going to have is like a universal byte model, right? Where tokens of data that have high signal come in, and then all of those patterns are like learned by the model. And then you can regurgitate any combination now. Right. So text into voice out, like image into other image out or video out or whatever, like these like mappings, right? Like all just going to be learned by this universal behavioral cloner. And so I'm glad we figured that out. And I think now we're back to the era of how do we combine this with all of the lessons we learned during the RL period. That's what's going to drive progress.Swyx [00:05:35]: I'm still going to pressure you for a few more early OpenAI stories before we turn to the Adept stuff. On your personal site, which I love, because it's really nice, like personal, you know, story context around like your history. I need to update it. It's so old. Yeah, it's so out of date. But you mentioned GPT-2. Did you overlap with GPT-1? I think you did, right?David [00:05:53]: I actually don't quite remember. I think I was joining right around- Right around then?Swyx [00:05:57]: I was right around that, yeah. Yeah.
So what I remember was Alec, you know, just kind of came in and was like very obsessed with Transformers and applying them to like Reddit sentiment analysis. Yeah, sentiment, that's right. Take us through-David [00:06:09]: Sentiment neuron, all this stuff.Swyx [00:06:10]: The history of GPT as far as you know, you know, according to you. Ah, okay.David [00:06:14]: History of GPT, according to me, that's a pretty good question. So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized, where like, again, you and your three best friends write papers, right? Okay. So zooming way out, right? I think about my job when I was a full-time research leader as a little bit of a portfolio allocator, right? So I've got really, really smart people. My job is to convince people to coalesce around a small number of really good ideas and then run them over the finish line. My job is not actually to promote a million ideas and never have critical mass. And then as the ideas start coming together and some of them start working well, my job is to nudge resources towards the things that are really working and then start disbanding some of the things that are not working, right? That muscle did not exist during my time at Google. And I think had they had it, what they would have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too.Swyx [00:07:17]: He's talking about trillion parameter models in 2017.David [00:07:20]: Yeah. So that's the core of the GPT story, right? Which is that, and I'm jumping around historically, right? But after GPT-2, we were all really excited about GPT-2. I can tell you more stories about that.
It was the last paper that I even got to really touch before everything became more about building a research org. You know, every day we were scaling up GPT-3, I would wake up and just be stressed. And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing, right? He's got this decoder only transformer that's probably going to get there before we do. And I was like, but like, please just like let this model finish, right? And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the brain leads, you know, it became really clear why, right? At the time, there was a thing called the brain credit marketplace. And did you guys know the brain credit marketplace? No, I never heard of this. Oh, so it's actually, it's a, you can ask any Googler.Swyx [00:08:23]: It's like just like a thing that, that, I mean, look like, yeah, limited resources, you got to have some kind of marketplace, right? You know, sometimes it's explicit, sometimes it isn't, you know, just political favors.David [00:08:34]: You could. And so then basically everyone's assigned a credit, right? So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused.
And I think, again, that's like part of the narrative of like this phase one of AI, right? Of like this modern AI era to phase two. And I think in the same way, I think phase three companies are going to out-execute phase two companies because of the same asymmetry of success.Swyx [00:09:12]: Yeah. I think it's underrated how much NVIDIA works with you in the early days as well. I think maybe, I think it was Jensen. I'm not sure who circulated a recent photo of him delivering the first DGX to you guys.David [00:09:24]: I think Jensen has been a complete legend and a mastermind throughout. I have so much respect for NVIDIA. It is unreal.Swyx [00:09:34]: But like with OpenAI, like kind of give their requirements, like co-design it or just work off whatever NVIDIA gave them.David [00:09:40]: So we work really closely with them. There's, I'm not sure I can share all the stories, but examples of ones that I've found particularly interesting. So Scott Gray is amazing. I really like working with him. He was on one of my teams, the supercomputing team, which Chris Berner runs and Chris Berner still does a lot of stuff in that. As a result, like we had very close ties to NVIDIA. Actually, one of my co-founders at Adept, Erich Elsen, was also one of the early GPGPU people. So he and Scott and Brian Catanzaro at NVIDIA and Jonah and Ian at NVIDIA, I think all were very close. And we're all sort of part of this group of how do we push these chips to the absolute limit? And I think that kind of collaboration helped quite a bit. I think one interesting set of stuff is knowing, in the A100 generation, that like quad sparsity was going to be a thing. Is that something that we want to go look into, right? And figure out if that's something that we could actually use for model training. Really what it boils down to is that, and I think more and more people realize this, six years ago, people, even three years ago, people refused to accept it. This era of AI is really a story of compute.
It's really the story of how do you more efficiently map actual usable model flops to compute.Swyx [00:10:38]: Is there another GPT 2, 3 story that you love to get out there that you think is underappreciated for the amount of work that people put into it?David [00:10:48]: So two interesting GPT 2 stories. One of them was I spent a good bit of time just sprinting to help Alec get the paper out. And I remember one of the most entertaining moments was we were writing the modeling section. And I'm pretty sure the modeling section was the shortest modeling section of any ML, reasonably legitimate ML paper to that moment. It was like, section three, model. This is a standard vanilla decoder only transformer with like these particular things, like a paragraph long if I remember correctly. And both of us were just looking at the same thing being like, man, the OGs in the field are going to hate this. They're going to say no novelty. Why did you guys do this work? So now it's funny to look at in hindsight that it was a pivotal kind of paper, but I think it was one of the early ones where we just leaned fully into all we care about is solving problems in AI and not about, hey, is there like four different really simple ideas that are cloaked in mathematical language that doesn't actually help move the field forward?Swyx [00:11:42]: Right. And it's like you innovate on maybe like data set and scaling and not so much the architecture.David [00:11:48]: We all know how it works now, right? Which is that there's a collection of really hard won knowledge that you get only by being at the frontiers of scale. And that hard won knowledge, a lot of it's not published. A lot of it is stuff that's actually not even easily reducible to what looks like a typical academic paper. But yet that's the stuff that helps differentiate one scaling program from another. You had a second one?
So the second one is, there's like some details here that I probably shouldn't fully share, but hilariously enough for the last meeting we did with Microsoft before Microsoft invested in OpenAI, Sam Altman, myself and our CFO flew up to Seattle to do the final pitch meeting. And I'd been a founder before. So I always had a tremendous amount of anxiety about partner meetings, which this basically this is what it was. I had Kevin Scott and Satya and Amy Hood, and it was my job to give the technical slides about what's the path to AGI, what's our research portfolio, all of this stuff, but it was also my job to give the GPT-2 demo. We had a slightly bigger version of GPT-2 that we had just cut maybe a day or two before this flight up. And as we all know now, model behaviors you find predictable at one checkpoint are not predictable in another checkpoint. And so I'd spent all this time trying to figure out how to keep this thing on rails. I had my canned demos, but I knew I had to go turn it around over to Satya and Kevin and let them type anything in. And that just, that really kept me up all night.Swyx [00:13:06]: Nice. Yeah.Alessio [00:13:08]: I mean, that must have helped you talking about partners meeting. You raised $420 million for Adept. The last round was a $350 million Series B, so I'm sure you do great in partner meetings.Swyx [00:13:18]: Pitchers meetings. Nice.David [00:13:20]: No, that's a high compliment coming from a VC.Alessio [00:13:22]: Yeah, no, I mean, you're doing great already for us. Let's talk about Adept. And we were doing pre-prep and you mentioned that maybe a lot of people don't understand what Adept is. So usually we try and introduce the product and then have the founders fill in the blanks, but maybe let's do the reverse. Like what is Adept? Yeah.David [00:13:38]: So I think Adept is the least understood company in the broader space of foundational models plus agents. 
So I'll give some color and I'll explain what it is and I'll explain also why it's actually pretty different from what people would have guessed. So the goal for Adept is we basically want to build an AI agent that can do, that can basically help humans do anything a human does on a computer. And so what that really means is we want this thing to be super good at turning natural language like goal specifications right into the correct set of end steps and then also have all the correct sensors and actuators to go get that thing done for you across any software tool that you already use. And so the end vision of this is effectively like I think in a couple of years everyone's going to have access to like an AI teammate that they can delegate arbitrary tasks to and then also be able to, you know, use it as a sounding board and just be way, way, way more productive. Right. And just changes the shape of every job from something where you're mostly doing execution to something where you're mostly actually doing like these core liberal arts skills of what should I be doing and why. Right. And I find this like really exciting and motivating because I think it's actually a pretty different vision for how AGI will play out. I think systems like Adept are the most likely systems to be proto-AGIs. But I think the ways in which we are really counterintuitive to everybody is that we've actually been really quiet because we are not a developer company. We don't sell APIs. We don't sell open source models. We also don't sell bottom up products. We're not a thing that you go and click and download the extension and like we want more users signing up for that thing. We're actually an enterprise company. So what we do is we work with a range of different companies, some like late stage multi-thousand people startups, some fortune 500s, et cetera. 
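The goal-to-steps-to-actuators loop David describes here can be sketched roughly as follows. This is a toy illustration only; every class, method, and action name below is hypothetical and not Adept's actual API:

```python
# Toy sketch of an agent that turns a natural-language goal into
# steps, then executes each step through an "actuator".
# All names are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Step:
    action: str   # e.g. "open", "fill_form", "submit"
    target: str   # what the step operates on
    done: bool = False


@dataclass
class Agent:
    log: list = field(default_factory=list)

    def plan(self, goal: str) -> list:
        # Stand-in planner: a real system would use a model here.
        return [Step("open", goal), Step("fill_form", goal), Step("submit", goal)]

    def act(self, step: Step) -> bool:
        # Stand-in actuator: a real system would drive a UI or an API.
        self.log.append((step.action, step.target))
        step.done = True
        return True

    def run(self, goal: str) -> bool:
        # The end-to-end loop: plan, then execute every step.
        return all(self.act(s) for s in self.plan(goal))


agent = Agent()
assert agent.run("log this call in the CRM")
```

The point of the sketch is only the shape of the loop: the planner maps a goal to concrete steps, and the actuator is the part that has to work reliably across whatever software the user already has.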
And what we do for them is we basically give them an out of the box solution where big complex workflows that their employees do every day could be delegated to the model. And so we look a little different from other companies in that in order to go build this full agent thing, the most important thing you got to get right is reliability. So initially zooming way back when, one of the first things that Adept did was we released this demo called Act One, right? Act One was like pretty cool. It's like kind of become a hello world thing for people to show agent demos by going to Redfin and asking to buy a house somewhere because like we did that in the original Act One demo and like showed that, showed like Google Sheets, all this other stuff. Over the last like year since that has come out, there's been a lot of really cool demos and you go play with them and you realize they work 60% of the time. But since we've always been focused on how do we build an amazing enterprise product, enterprises can't use anything that isn't in the nines of reliability. And so we've actually had to go down a slightly different tech tree than what you might find in the prompt engineering sort of plays in the agent space to get that reliability. And we've decided to prioritize reliability over all else. So like one of our use cases is crazy enough that it actually ends with a physical truck being sent to a place as the result of the agent workflow. And if that works like 60% of the time, you're just blowing money and poor truck drivers are going places.Alessio [00:16:30]: Interesting. One of our investment teams has this idea of services as software. I'm actually giving a talk at NVIDIA GTC about this, but basically software as a service, you're wrapping user productivity in software with agents, and services as software is replacing things that, you know, you would ask somebody to do and the software just does it for you.
When you think about these use cases, do the users still go in and look at the agent kind of like doing the things and can intervene or like are they totally removed from them? Like the truck thing is like, does the truck just show up or are there people in the middle checking in?David [00:17:04]: I think there's two current flaws in the framing for services as software, or I think what you just said. I think that one of them is like in our experience, as we've been rolling out Adept, the people who actually do the jobs are the most excited about it because they don't go from, I do this job to, I don't do this job. They go from, I do this job for everything, including the shitty rote stuff to I'm a supervisor. And I literally like, it's pretty magical when you watch the thing being used because now it parallelizes a bunch of the things that you had to do sequentially by hand as a human. And you can just click into any one of them and be like, Hey, I want to watch the trajectory that the agent went through to go solve this. And the nice thing about agent execution as opposed to like LLM generations is that a good chunk of the time when the agent fails to execute, it doesn't give you the wrong result. It just fails to execute. And the whole trajectory is just broken and dead and the agent knows it, right? So then those are the ones that the human then goes and solves. And so then they become a troubleshooter. They work on the more challenging stuff. They get way, way more stuff done and they're really excited about it. I think the second piece of it that we've found is our strategy as a company is to always be an augmentation company. And I think one out of principle, that's something we really care about. 
But two, actually, if you're framing yourself as an augmentation company, you're always going to live in a world where you're solving tasks that are a little too hard for what the model can do today and still need a human to provide oversight, provide clarifications, provide human feedback. And that's how you build a data flywheel. That's how you actually learn from the smartest humans how to solve things models can't do today. And so I actually think that being an augmentation company forces you to go develop your core AI capabilities faster than someone who's saying, ah, okay, my job is to deliver you a lights off solution for X.Alessio [00:18:42]: Yeah. It's interesting because we've seen two parts of the market. One is we have one company that does agents for SOC analysts. People just don't have them, you know, and just they cannot attract the talent to do it. And similarly, in software development, you have Copilot, which is the augmentation product, and then you have sweep.dev and you have these products, which just do the whole thing. I'm really curious to see how that evolves. I agree that today the reliability is so important in the enterprise that they just don't use most of them. Yeah. Yeah. No, that's cool. But it's great to hear the story because I think from the outside, people are like, oh, Adept, they do Act One, they do Persimmon, they do Fuyu, they do all this stuff. Yeah, it's just the public stuff.Swyx [00:19:20]: It's just public stuff.David [00:19:21]: So one of the things we haven't shared before is we're completely sold out for Q1. And so I think...Swyx [00:19:26]: Sold out of what?David [00:19:27]: Sold out of bandwidth to go onboard more customers. And so we're working really hard to go make that less of a bottleneck, but our expectation is that I think we're going to be significantly more public about the broader product shape and the new types of customers we want to attract later this year.
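The supervision pattern David describes above, where a failed agent trajectory halts rather than producing a wrong result and only the halted ones go to a human, can be sketched as a simple triage loop. All function and task names here are hypothetical, not Adept's implementation:

```python
# Toy sketch of agent trajectory triage: completed work passes
# through, failed executions halt cleanly and are queued for a
# human supervisor. All names are invented for illustration.

def run_trajectory(task, execute):
    """Run one agent trajectory; return (task, status)."""
    try:
        execute(task)
        return (task, "completed")
    except RuntimeError:
        # The agent knows it failed to execute; nothing wrong was committed.
        return (task, "needs_human")


def supervise(tasks, execute):
    """Partition tasks into completed work and a human review queue."""
    results = [run_trajectory(t, execute) for t in tasks]
    done = [t for t, s in results if s == "completed"]
    review_queue = [t for t, s in results if s == "needs_human"]
    return done, review_queue


def flaky_execute(task):
    # Stand-in for the agent's executor: fails on some tasks.
    if "broken" in task:
        raise RuntimeError("actuator could not complete step")


done, queue = supervise(["invoice-1", "broken-2", "invoice-3"], flaky_execute)
# The human only troubleshoots `queue`, in parallel with the completed work.
```

This mirrors the "I'm a supervisor" framing: the person's job shifts from executing every task to reviewing the small set of trajectories that halted.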
So I think that clarification will happen by default.Swyx [00:19:43]: Why have you become more public? You know, if the whole push has... You're sold out, you're my enterprise, but you're also clearly putting effort towards being more open or releasing more things.David [00:19:53]: I think we just flipped over that way fairly recently. That's a good question. I think it actually boils down to two things. One, I think that, frankly, a big part of it is that the public narrative is really forming around agents as being the most important thing. And I'm really glad that's happening because when we started the company in January 2022, everybody in the field knew about the agents thing from RL, but the general public had no conception of what it was. They were still hanging their narrative hat on the tree of everything's a chatbot. And so I think now one of the things that I really care about is that when people think agent, they actually think the right thing. All sorts of different things are being called agents. Chatbots are being called agents. Things that make a function call are being called agents. To me, an agent is something that you can give a goal and get an end step workflow done correctly in the minimum number of steps. And so that's a big part of why. And I think the other part is because I think it's always good for people to be more aware of Adept as they think about what the next thing they want to do in their careers. The field is quickly pivoting in a world where foundation models are looking more and more commodity. And I think a huge amount of gain is going to happen from how do you use foundation models as the well-learned behavioral cloner to go solve agents. And I think people who want to do agents research should really come to Adept.Swyx [00:21:00]: When you say agents have become more part of the public narrative, are there specific things that you point to? I'll name a few. Bill Gates in his blog post mentioning that agents are the future.
I'm the guy who made OSes, and I think agents are the next thing. So Bill Gates, I'll call that out. And then maybe Sam Altman also saying that agents are the future for OpenAI.David [00:21:17]: I think before that even, I think there was something like the New York Times, Cade Metz wrote a New York Times piece about it. Right now, in a bid to differentiate, I'm seeing AI startups that used to just brand themselves as an AI company, but now brand themselves as an AI agent company. It's just like, it's a term I just feel like people really want.Swyx [00:21:31]: From the VC side, it's a bit mixed. Is it? As in like, I think there are a lot of VCs where like, I would not touch any agent startups because like- Why is that? Well, you tell me.Alessio [00:21:41]: I think a lot of VCs that are maybe less technical don't understand the limitations of the-Swyx [00:21:46]: No, that's not fair.Alessio [00:21:47]: No, no, no, no. I think like- You think so? No, no. I think like the, what is possible today and like what is worth investing in, you know? And I think like, I mean, people look at you and say, well, these guys are building agents. They needed 400 million to do it. So a lot of VCs are maybe like, oh, I would rather invest in something that is tacking on AI to an existing thing, which is like easier to get the market and kind of get some of the flywheel going. But I'm also surprised a lot of funders just don't want to do agents. It's not even the funding. Sometimes we look around and it's like, why is nobody doing agents for X? Wow.David [00:22:17]: That's good to know actually. I never knew that before. My sense from my limited perspective is there's a new agent company popping up every day.Swyx [00:22:24]: So maybe I'm- They are. They are. But like I have advised people to take agents off of their title because it's so diluted.David [00:22:31]: It's now so diluted.Swyx [00:22:32]: Yeah. So then it doesn't stand for anything.
Yeah.David [00:22:35]: That's a really good point.Swyx [00:22:36]: So like, you know, you're a portfolio allocator. You have people know about Persimmon, people know about Fuyu and Fuyu Heavy. Can you take us through like how you think about that evolution of that and what people should think about what that means for Adept and sort of research directions? Kind of take us through the stuff you shipped recently and how people should think about the trajectory of what you're doing.David [00:22:56]: The critical path for Adept is we want to build agents that can do higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with a really high reliability standard, but are continually pushing the level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flywheel. That's the critical path for the company. Everything we do is in service of that. So if you go zoom way, way back to Act One days, right? Like the core thing behind Act One is can we teach a large model basically how to even actuate your computer? And I think we're one of the first places to have solved that and shown it and shown the generalization that you get when you give it various different workflows and texts. But from there on out, what we really realized was that in order to get reliability, companies just do things in various different ways. You actually want these models to be able to get a lot better at having some specification of some guardrails for what it actually should be doing. And I think in conjunction with that, a giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents.
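One way to picture the "specification of some guardrails" idea mentioned above is validating an agent's proposed steps against an explicit per-workflow allow-list before anything executes. This is a toy sketch with invented workflow and action names, not Adept's actual mechanism:

```python
# Toy sketch of guardrails: an agent's proposed plan is checked
# against an allow-list for the workflow before execution.
# All workflow and action names are invented for illustration.

ALLOWED = {
    "crm_update": {"open_record", "edit_field", "save_record"},
}


def validate_plan(workflow, steps):
    """Reject any plan containing an action outside the workflow's guardrails."""
    allowed = ALLOWED[workflow]
    violations = [s for s in steps if s not in allowed]
    return (len(violations) == 0, violations)


# A plan that drifts outside the guardrails is caught before anything runs.
ok, bad = validate_plan("crm_update", ["open_record", "edit_field", "delete_record"])
```

The design point is that the guardrail check is cheap and deterministic, so the expensive, probabilistic planner can be fenced in without retraining.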
Back then we had to do a ton of research basically on how do we actually make that possible? Well, first off, back in, I forget exactly when, in '23, there were no multimodal models really that you could use for things like this. And so we pushed really hard on stuff like the Fuyu architecture. I think one big hangover of the primarily academic focus for multimodal models is most multimodal models are primarily trained on like natural images, cat and dog photos, stuff that's come out of the camera. COCO. Yeah, right. And COCO is awesome. Like I love COCO. I love TY. Like it's really helped the field. Right. But that's just one part of it. I actually think it's really clear today. Multimodal models are the default foundation model, right? It's just going to supplant LLMs. Like you just train a giant multimodal model. And so for that though, like where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs. Right. And so if that's what it is, what do you need to train on? I need to train on like charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that. And so the public Fuyu and stuff aren't trained on our actual corpus; they're trained on some other stuff. But you take a lot of that data and then you make it really fast and make it really good at things like dense OCR on screens. And then now you have the right raw putty to go make a good agent. So that's kind of some of the modeling side. We've kind of only announced some of that stuff. We haven't really announced much of the agents work, but if you put those together with the correct product form factor... and I think the product form factor also really matters.
I think we're seeing, and you guys probably see this a little bit more than I do, but we're seeing like a little bit of a pushback against the tyranny of chatbots as form factor. And I think that the reason why the form factor matters is the form factor changes what data you collect in the human feedback loop. And so I think we've spent a lot of time doing full vertical integration of all these bits in order to get to where we are.Swyx [00:25:44]: Yeah. I'll plug Amelia Wattenberger's talk at our conference, where she gave a little bit of the thinking behind what else could exist other than chatbots if you could delegate to reliable agents. I was kind of excited at Adept experiments or Adept workflows, I don't know what the official name for it is. I was like, okay, like this is something I can use, but it seems like it's just an experiment for now. It's not your product.David [00:26:06]: So we basically just use experiments as like a way to go push various ideas on the design side to some people and just be like, yeah, we'll play with it. Actually the experiments code base underpins the actual product, but it's just the code base itself is kind of like a skeleton for us to go deploy arbitrary cards on the side.Swyx [00:26:22]: Yeah.Alessio [00:26:23]: Makes sense. I was going to say, I would love to talk about the interaction layer. So you train a model to see UI, but then there's the question of how do you actually act on the UI? I think there were some rumors about OpenAI building agents that are kind of like, they manage the endpoint. So the whole computer, you're more at the browser level. I read in one of your papers, you have like a different representation, kind of like you don't just take the DOM and act on it. You do a lot more stuff.
How do you think about the best way the models will interact with the software and like how the development of products is going to change with that in mind as more and more of the work is done by agents instead of people?David [00:26:58]: This is, there's so much surface area here and it's actually one of the things I'm really excited about. And it's funny because I've spent most of my time doing research stuff, but there's like a whole new ball game that I've been learning about and I find it really cool. So I would say the best analogy I have to why Adept is pursuing a path of being able to use your computer like a human, plus of course being able to call APIs, and being able to call APIs is the easy part, like being able to use your computer like a human is the hard part. It's the same reason why people are excited about humanoid robotics, right? In a world where you had T equals infinity, right? You're probably going to have various different form factors that robots could just be in and like all the specialization. But the fact is that humans live in a human environment. So having a humanoid robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so at many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data that you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path. I think because it's the most practical path, I think a lot of success will come from going down this path.
I kind of think about these early days of the agent interaction layer as a little bit like, do you all remember Windows 3.1? Like those days? Okay, this might be, I might be, I might be too old for you guys on this. But back in the day, Windows 3.1, we had this transition period between pure command line, right? Being the default into this new world where the GUI is the default and then you drop into the command line for like programmer things, right? The old way was you booted your computer up, DOS booted, and then it would give you the C colon slash thing. And you typed Windows and you hit enter, and then you got put into Windows. And then the GUI kind of became a layer above the command line. The same thing is going to happen with agent interfaces: today the GUI is like the base layer. And then the agent just controls the current GUI layer plus APIs. And in the future, as more and more trust is built towards agents and more and more things can be done by agents, if more UIs for agents are actually generative in and of themselves, then that just becomes the standard interaction layer. And if that becomes the standard interaction layer, what changes for software is that a lot of software is going to be either systems of record or like certain customized workflow execution engines. And a lot of how you actually do stuff will be controlled at the agent layer.Alessio [00:29:19]: And you think the Rabbit interface is more like, it would be like you're not actually seeing the app that the model interacts with. You're just saying, hey, I need to log this call on Salesforce. And you're never actually going on salesforce.com directly as the user. I can see that being a model.David [00:29:33]: I think I don't know enough about what using Rabbit in real life will actually be like to comment on that particular thing. But I think the broader idea that, you know, you have a goal, right? The agent knows how to break your goal down into steps.
The agent knows how to use the underlying software and systems of record to achieve that goal for you. The agent maybe presents you information in a custom way that's only relevant to your particular goal. It all just really leads to a world where you don't really need to ever interface with the apps underneath unless you're a power user for some niche thing.Swyx [00:30:03]: General question. So first of all, I think like the sort of input mode conversation. I wonder if you have any analogies that you like with self-driving, because I do think like there's a little bit of how the model should perceive the world. And you know, the primary split in self-driving is LiDAR versus camera. And I feel like most agent companies that I'm tracking are all moving towards the camera approach, which is like the multimodal approach, you know, multimodal vision, very heavy vision, all the Fuyu stuff that you're doing. You're focusing on that, including charts and tables. And do you find inspiration there from like the self-driving world? That's a good question.David [00:30:37]: I think sometimes the most useful inspiration I've found from self-driving is the levels analogy. I think that's awesome. But I think that our number one goal is for agents not to look like self-driving. We want to minimize the chances that agents are sort of a thing that you just have to bang your head at for a long time to get to like two discontinuous milestones, which is basically what's happened in self-driving. We want to be living in a world where you have the data flywheel immediately, and that takes you all the way up to the top. But similarly, I mean, compared to self-driving, like two things that people really undervalue. One is it's really easy to do the driving a car down Highway 101 on a sunny day demo. That actually doesn't prove anything anymore.
And I think the second thing is that as a non-self-driving expert, I think one of the things that we believe really strongly is that everyone undervalues the importance of really good sensors and actuators. And actually a lot of what's helped us get a lot of reliability is a really strong focus on actually why does the model not do this thing? And a non-trivial amount of the time, the reason the model doesn't actually do the thing is because if you're Wizard-of-Oz-ing it yourself, or if you have unreliable actuators, you can't do the thing. And so we've had to fix a lot of those problems.Swyx [00:31:43]: I was slightly surprised just because I do generally consider the Waymos that we see all around San Francisco as the most, I guess, real case of agents that we have in very material ways.David [00:31:55]: Oh, that's absolutely true. I think they've done an awesome job, but it has taken a long time for self-driving to mature from when it entered the consciousness and the driving down 101 on a sunny day moment happened to now. Right. So I want to see that more compressed.Swyx [00:32:07]: And I mean, you know, Cruise, you know, RIP. And then one more thing on just like, just going back on this reliability thing, something I have been holding in my head that I'm curious to get your commentary on is I think there's a trade-off between reliability and generality, or I want to broaden reliability into just general like sort of production readiness and enterprise readiness at scale. Because you have reliability, you also have cost, you have speed, speed is a huge emphasis for Adept. The tendency or the temptation is to reduce generality to improve reliability and to improve cost, improve speed. Do you perceive a trade-off? Do you have any insights that solve those trade-offs for you guys?David [00:32:42]: There's definitely a trade-off. If you're at the Pareto frontier, I think a lot of folks aren't actually at the Pareto frontier.
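The "really good sensors and actuators" point above boils down to never assuming an action landed: observe a post-condition, retry, and surface a real failure instead of silently continuing. A toy sketch, with all names invented for illustration:

```python
# Toy sketch of a reliable actuator wrapper: act, then verify with a
# "sensor" check, retrying a few times before raising a real failure.
# All names are invented for illustration.

def reliable_act(act, check, retries=3):
    """Run `act`, then verify with `check`; retry before failing loudly."""
    for _ in range(retries):
        act()
        if check():            # sensor confirms the action took effect
            return True
    raise RuntimeError("actuator failed post-condition check")


# Toy example: an action whose effect only registers on the second try.
state = {"clicked": 0}

def click():
    state["clicked"] += 1

def clicked_ok():
    return state["clicked"] >= 2

assert reliable_act(click, clicked_ok)
```

The check-after-act shape is what separates "the model chose the right step" from "the step actually happened", which is where a lot of apparent model failures turn out to live.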
I think the way you get there is basically how do you frame the fundamental agent problem in a way that just continues to benefit from data? I think one of the main ways of being able to solve that particular trade-off is you basically just want to formulate the problem such that every particular use case just looks like you collecting more data to go make that use case possible. I think that's how you really solve it. Then you get into the other problems like, okay, are you overfitting on these end use cases? You're not doing a thing where you're being super prescriptive for the end steps that the model can only do, for example.Swyx [00:33:17]: Then the question becomes, do you have one house model that you can then customize for each customer and you're fine-tuning them on each customer's specific use case?David [00:33:25]: Yeah.Swyx [00:33:26]: We're not sharing that. You're not sharing that. It's tempting, but that doesn't look like AGI to me. You know what I mean? That is just you have a good base model and then you fine-tune it.David [00:33:35]: For what it's worth, I think there's two paths to a lot more capability coming out of the models that we all are training these days. I think one path is you figure out how to spend compute and turn it into data. In that path, I consider search, RL, all the things that we all love in this era as part of that path, like self-play, all that stuff. The second path is how do you get super competent, high intelligence demonstrations from humans? I think the right way to move forward is you kind of want to combine the two. The first one gives you maximum sample efficiency for a little second, but I think that it's going to be hard to be running at max speed towards AGI without actually solving a bit of both.Swyx [00:34:16]: You haven't talked much about synthetic data, as far as I can tell.
Probably this is a bit too much of a trend right now, but any insights on using synthetic data to augment the expensive human data?David [00:34:26]: The best part about framing AGI as being able to help people do things on computers is you have an environment.Swyx [00:34:31]: Yes. So you can simulate all of it.David [00:34:35]: You can do a lot of stuff when you have an environment.Alessio [00:34:37]: We were having dinner for our one-year anniversary. Congrats. Yeah. Thank you. Raza from HumanLoop was there, and we mentioned you were coming on the pod. This is our first-Swyx [00:34:45]: So he submitted a question.Alessio [00:34:46]: Yeah, this is our first, I guess, like mailbag question. He asked, when you started, GPT-4 didn't exist, and now you have GPT-4 vision to help you build a lot of those things. How do you think about the things that are unique to you as Adept, and like going back to like the maybe research direction that you want to take the team and what you want people to come work on at Adept, versus what has maybe now become commoditized that you didn't expect everybody would have access to?David [00:35:11]: Yeah, that's a really good question. I think implicit in that question, and I wish he were here too so he could push back on my assumption about his question, but I think implicit in that question is a calculus of where advantage accrues in the overall ML stack. And maybe part of the assumption is that advantage accrues solely to base model scaling. But I actually believe pretty strongly that the way that you really win is that you have to go build an agent stack that is much more than that of the base model itself. And so I think like that is always going to be a giant advantage of vertical integration. I think like it lets us do things like have a really, really fast base model that is really good at agent things, but is bad at cat and dog photos. It's pretty good at cat and dog photos. It's not like SOTA at cat and dog photos, right?
So like we're allocating our capacity wisely, right? That's like one thing that you really get to do. I also think that the other thing that is pretty important now in the broader foundation modeling space is, I feel, despite any potential concerns about how good agents are as like a startup area, right? Like we were talking about earlier, I feel super good that we're doing foundation models in service of agents and all of the reward within Adept is flowing from can we make a better agent? Because right now I think we all see that, you know, if you're training on publicly available web data, you put in the flops and you do reasonable things, then you get decent results. And if you just double the amount of compute, then you get predictably better results. And so I think pure play foundation model companies are just going to be pinched by how good the next couple of Llamas are going to be and the next good open source thing. And then seeing the really big players put ridiculous amounts of compute behind just training these base foundation models, I think is going to commoditize a lot of the regular LLMs and soon regular multimodal models. So I feel really good that we're just focused on agents.Swyx [00:36:56]: So you don't consider yourself a pure play foundation model company?David [00:36:59]: No, because if we were a pure play foundation model company, we would be training general foundation models that do summarization and all this other...Swyx [00:37:06]: You're dedicated towards the agent. Yeah.David [00:37:09]: And our business is an agent business. We're not here to sell you tokens, right? And I think like selling tokens, unless there's like a...Swyx [00:37:14]: Not here to sell you tokens. I love it.David [00:37:16]: It's like if you have a particular area of specialty, right? Then you won't get caught in the fact that everyone's just scaling to ridiculous levels of compute.
But if you don't have a specialty, I find that, I think it's going to be a little tougher.Swyx [00:37:27]: Interesting. Are you interested in robotics at all? Just a...David [00:37:30]: I'm personally fascinated by robotics. I've always loved robotics.Swyx [00:37:33]: Embodied agents as a business, you know, Figure is like a big, also sort of OpenAI-affiliated company that raises a lot of money.David [00:37:39]: I think it's cool. I think, I mean, I don't know exactly what they're doing, but...Swyx [00:37:44]: Robots. Yeah.David [00:37:46]: Well, I mean, that's a...Swyx [00:37:47]: Yeah. What question would you ask? If we had them on, what would you ask them?David [00:37:50]: Oh, I just want to understand what their overall strategy is going to be between now and when there's reliable stuff to be deployed. But honestly, I just don't know enough about it.Swyx [00:37:57]: And if I told you, hey, fire your entire warehouse workforce and, you know, put robots in there, isn't that a strategy? Oh yeah.David [00:38:04]: Yeah. Sorry. I'm not questioning whether they're doing smart things. I genuinely don't know what they're doing as much, but I think there's two things. One, I'm so excited for someone to train a foundation model of robots. It's just, I think it's just going to work. Like I will die on this hill, but I mean, like again, this whole time we've been on this podcast, we've just been continually saying these models are basically behavioral cloners. Right. So let's go behavioral clone all this like robot behavior. Right. And then you figure out everything else you have to do in order to teach it how to solve a new problem. That's going to work. I'm super stoked for that. I think unlike what we're doing with helping humans with knowledge work, it just sounds like a more zero sum job replacement play. Right. And I'm personally less excited about that.Alessio [00:38:46]: We had Kanjun from Imbue on the podcast.
We asked her why people should go work there and not at Adept.Swyx [00:38:52]: Oh, that's so funny.Alessio [00:38:54]: Well, she said, you know, there's space for everybody in this market. We're all doing interesting work. And she said, they're really excited about building an operating system for agents. And for her, the biggest research thing was like getting models, better reasoning and planning for these agents. The reverse question to you, you know, why should people be excited to come work at Adept instead of Imbue? And maybe what are like the core research questions that people should be passionate about to have fun at Adept? Yeah.David [00:39:22]: First off, I think that I'm sure you guys believe this too. The AI space to the extent there's an AI space and the AI agent space are both exactly as she likely said, I think colossal opportunities and people are just going to end up winning in different areas and a lot of companies are going to do well. So I really don't feel that zero sum thing at all. I would say to like change the zero sum framing is why should you be at Adept? I think there's two huge reasons to be at Adept. I think one of them is everything we do is in the service of like useful agents. We're not a research lab. We do a lot of research in service of that goal, but we don't think about ourselves as like a classic research lab at all. And I think the second reason I work at Adept is if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think the examples for why that's true is for example, our evaluations, they're not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them. These are the ones they want us to, we can't do them at all. We've turned those into evals, solve it, right? I think that's really cool.
Like everybody knows a lot of these evals are like pretty saturated and the new ones that even are not saturated. You look at someone and you're like, is this actually useful? Right? I think that's a degree of practicality that really helps. Like we're equally excited about the same problems around reasoning and planning and generalization and all of this stuff. They're very grounded in actual needs right now, which is really cool.Swyx [00:40:45]: Yeah. This has been a wonderful dive. You know, I wish we had more time, but I would just leave it kind of open to you. I think you have broad thoughts, you know, just about

El Laboratorio de Juan
159 | Material utilizado en la Trailcat200

El Laboratorio de Juan

Play Episode Listen Later Mar 12, 2024 26:41


In this episode I go over all the gear I used during the 210 km I covered at the Trailcat200. This is my gear list for the race:
GPS electronics:
-Suunto Race Acero
-Suunto Vertical Solar
Nutrition: all products from Santa Madre
Apparel:
-4 Ronhill shorts, Distance, Marathon and Ultra Twin Short models
-4 Hoko thermal shirts, Fuyu and Geisha models
-1 Nike thermal tight
-3 Hoko tights, Kobe and Nagai models
-4 pairs of Lurbel socks, Desafío, Distance and Path Pro models
-2 Haglöfs Gore-Tex
-1 Raidlight Ultralight MP+
Pack:
-Salomon Adv Skin Cross Season 15L
Shoes:
-Salomon S/LAB Genesis
-Asics Gel Trabuco 12
-La Sportiva Mutant 2
-Salomon Genesis
Accessories:
-Six Pro anti-friction cream
-Weis Z Max Carbon poles
-Led Lenser Neo10R headlamp
-Gloves: Leki, Raidlight, The North Face, Salomon, Lurbel Alaska
-GOG Steno photochromic glasses
You can contact me at: juan@ellaboratoriodejuan.com

Delirio Místico
Chaser Game W

Delirio Místico

Play Episode Listen Later Mar 6, 2024 81:43


Fuyu, eat my puchero. Co-sleeping, Japanese culture, Fuyu's hands, the Itsuki-Tsuki, the little girl and the house-husband. Follow us on Twitter. Follow us on Instagram. Follow us on TikTok. Subscribe to our YouTube channel. Tecito

The top AI news from the past week, every ThursdAI

What A SHOW folks, I almost don't want to write anything in the newsletter to MAKE you listen haha, but I will. I know many of you don't like listening to me babble. But if you choose one episode to listen to instead of just skimming the show notes, make it this one. We had two deep dives: one into the exciting world of multimodality, where we chatted with Vik, the creator of Moondream1, and with Wes and Eric, the co-founders of Prophetic, about their EEG/fMRI multimodal transformer (that's right!), and then a DEEP dive into the new Hourglass Diffusion Transformers with Tanishq from MedArc/Stability. More than 1300 tuned in to the live show

The Produce Industry Podcast w/ Patrick Kelly
WK45 - GROOVY GREENS, FUYU PERSIMMONS & MORE ON FRESH FROM THE FIELD FRIDAYS - EP119

The Produce Industry Podcast w/ Patrick Kelly

Play Episode Listen Later Nov 10, 2023 21:24


This week's Fresh From the Field Fridays from The Produce Industry Podcast: Dan the Produce Man shares some info on transition time, Fuyu Persimmons, Groovy Greens and more, so tune in, turn on and get down! FANCY SPONSORS: Ag Tools, Inc.: https://www.agtechtools.com, Flavor Wave, LLC.: https://flavorwavefresh.com, Noble Citrus: https://noblecitrus.com, Buck Naked Onions/Owyhee Produce, Inc.: http://www.owyheeproduce.com, John Greene Logistics Company: https://www.jglc.com and Summer Citrus From South Africa: https://www.summercitrus.com CHOICE SPONSORS: Indianapolis Fruit Company: https://indyfruit.com, Equifruit: https://equifruit.com, Arctic® Apples: https://arcticapples.com, Sev-Rend Corporation: https://www.sev-rend.com, Jac Vandenberg Inc.: https://www.jacvandenberg.com, Dole Fresh Vegetables: https://www.dole.com/en/produce/vegetables, WholesaleWare: https://www.grubmarket.com/hello/software/index.html, Continental Fresh, LLC: https://www.continentalfresh.com, Golden Star Citrus, Inc.: http://www.goldenstarcitrus.com STANDARD SPONSORS: Freshway Produce: https://www.freshwayusa.com, Yo, Quiero/Fresh Innovations, LLC.: https://yoquierobrands.com/, RPE/Tasteful Selections: https://www.tastefulselections.com/, Ben B. Schwartz & Co.: https://benbdetroit.com/ and Citrus America: https://citrusamerica.com --- Support this podcast: https://podcasters.spotify.com/pod/show/theproduceindustrypodcast/support

programmier.bar – der Podcast für App- und Webentwicklung
News AI #8: OpenAI DevDays // State of AI // DallE3 // Zephyr // Fuyu 8B

programmier.bar – der Podcast für App- und Webentwicklung

Play Episode Listen Later Oct 25, 2023 44:51


The OpenAI DevDays take place on November 4th, and Philipp and Fabi speculate about the possible releases. OpenAI has now rolled out DALL-E 3 to all Pro users and as a result also has to contend with further possible prompt-injection attacks. ChatGPT's roast of the OpenAI founders was also good fun. The State of AI Report 2023 is out; in this episode we go over which predictions came true and which ones were made for the coming AI year. Philipp's colleagues at Hugging Face have trained a DPO-fine-tuned language model based on Mistral AI's model; they named the result Zephyr7B. Adept AI wants to build an agent with ACT-1 that supports users in all tasks on the computer. For this they need a multimodal model that can analyze the contents of images very well. They have released a first 8B-parameter version of this model as Fuyu8B. Here is the promised link to the Fuyu Multimodal Playground on Hugging Face.

The top AI news from the past week, every ThursdAI

Hey friends, welcome to ThursdAI Oct 19. Here's everything we covered, plus a little deep dive after the TL;DR for those who like extra credit. ThursdAI - If you like staying up to date, join our community. Also, here's the reason why the newsletter is a bit delayed today: I played with Riffusion to try and get a cool song for ThursdAI

Quoth the Camser
How will others find me if I'm not doing anything to facilitate that?

Quoth the Camser

Play Episode Listen Later Jul 21, 2023 6:55


This is day two with my Pebble Stationery Co. A5 notebook with Cosmo Air Light paper. I'm very happy with it indeed! I wrote with the Edelfeder and Diamine Bilberry yesterday and the Pelikan M400 with Fuyu-gaki today. There was no bleed, no feathering and, best of all, no squeaking with the Pilot!Video DiaryLinks* Auld Kirk video on Chilled Scotland channel* Quoth the Camser podcast* The Sobriety Diaries with Nate Kelly* My Notion system: Pillars, Pipelines and Vaults* Acoustic Guitar IO Podcast This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit camscampbell.substack.com/subscribe

Chef AJ LIVE!
Persimmon-Kumquat Pudding with Coconut Whipped Cream & Nutritarian Vinaigrette-Chef James Rohrbacher

Chef AJ LIVE!

Play Episode Listen Later Jan 15, 2023 88:53


Persimmon-Kumquat Pudding with Coconut Whipped Cream
Culinary Inspirations: From Gourmet to Pret-a-Manger. We CAN have nice things! + Cooking Demo
Serves 6
For the pudding:
4 very ripe Hachiya persimmons or 5 Fuyu persimmons, allowed to soften
¼ lb. kumquats, whole, about 10
1-2 TBLS Dr. Fuhrman's Blood Orange vinegar, or lemon juice, or to taste
Coconut water or orange juice (optional, to facilitate blending)
For the coconut whipped cream:
1 ½ cup nuts of choice (cashews, macadamias, walnuts, hemp, or a combination)
1 TBLS unsweetened flaked coconut
12 deglet noor dates, pitted
Coconut water (as needed to facilitate blending)
Optional: 1 teaspoon Dr. Fuhrman's Coconut Vinegar, or coconut extract
Optional garnishes:
Unsweetened shredded coconut, raw or lightly toasted
6 kumquats, cut into pinwheels and seeded
In a high-speed blender, puree the pudding ingredients until smooth, adding coconut water or orange juice as needed to facilitate blending. Place in 12 individual serving dishes (martini glasses make a nice presentation). Likewise, puree the coconut cream ingredients until smooth, using coconut water to facilitate blending. Spread a layer of the coconut cream over the persimmon pudding and chill for at least 2 hours. Garnish with unsweetened shredded coconut and some kumquat pinwheels.
Nutritarian Vinaigrette (Basic Recipe)
½ cup vinegar or citrus juice (any flavored vinegar, balsamic, sherry, lemon juice, lime juice, Yuzu, verjus, etc.)
1 cup water
2 teaspoons arrowroot powder, dissolved in an additional ¼ cup cold water
Bring the vinegar and the cup of water to a boil in a small saucepan. Once boiling, whisk in the arrowroot-cold water mixture and let boil for 2 minutes, but no longer, whisking occasionally. Remove from the heat and let cool to room temperature. Refrigerate until ready to use. Makes about 1 ½ cups, approx. 6 servings.

Rhythm on the Rocks
The Otolith & Fuyu Small Batch Japanese Whiskey

Rhythm on the Rocks

Play Episode Listen Later Dec 7, 2022 53:48


Frizz and Bob share a new Japanese whisky, Fuyu Small Batch, and get their adrenaline rush on with Levi and Matt from The Otolith. We chat all about their hot new album, overcoming loss and fear, jumping out of planes, the best Oreo flavor ever, and how to enjoy Malort, all while we try to figure out who let the dogs out.

Marti's Music Kitchen
MMK S3-46 Shoehorn Tapdancing Saxophone Cooking with Persimmons and Savory Beans

Marti's Music Kitchen

Play Episode Listen Later Nov 8, 2022 45:31


Welcome to Marti's Music Kitchen, the Fun food Podcast - with creative people - where anything can happen! On this next episode, we are talking with a man who expresses his passion for music through his saxophone - and his tap shoes - and drums and piano, flute and even the pot lids from his kitchen! Whatever he can find to express the rhythm that flows from him. In the kitchen, we make his go-to meal after a gig - Savory Bean Sauté and a Persimmon and Pumpkin Seed Salad. I'd never had persimmons and turns out - they're delicious! Shoehorn has made his way through over 30 countries, always tapping and playing his way into the local musical culture. Something to feed his hunger for multi-cultural diversity. He's also known for inventing the Tappercussion e-tap electric tap dance instrument, which he has used on some of the six albums he has released over the years. Let me just say that Shoehorn is one interesting guy! Find out more about Shoehorn, and hear him perform live on the show - on Episode 46 of Marti's Music Kitchen.
Shoehorn's Links!
instagram: miconshoehorn
facebook: M.C. Shoehorn, tap dancing saxophonist
youtube: https://youtu.be/dRDh7YS3ocg (MC Shoehorn Conley)
website: https://shoehornmusic.com/
Marti's Links!
http://MartiMendenhall.com
http://Patreon.com/MartiMendenhall
Marti's Music Kitchen Season 1 Cookbook https://martimendenhall.com/cookbook-store.html
Persimmon Pumpkin Seed Salad
Serves 4
INGREDIENTS
¼ cup red onion, minced
Juice of 2 lemons
2 cups cabbage, thinly shredded
1-2 Fuyu persimmons, peeled and thinly sliced (a potato peeler works great!)
1 cup red lettuce greens, torn
¼ cup crumbled feta cheese
¼ cup pepitas (shelled pumpkin seeds)
2 tablespoons tahini
2 tablespoons balsamic vinegar
1 tablespoon olive oil
Salt and pepper to taste
DIRECTIONS
Soak onion in lemon juice for a few minutes. Combine with remaining ingredients and serve.
Savory Bean Sauté
ADVANCE DIRECTIONS
Soak dried beans overnight: cover with water by 1-2 inches. Drain and rinse. (Quick soak: cover beans with water by 2 inches in large pot. Boil 5 minutes; remove from heat, let sit one hour.)
INGREDIENTS
2 tablespoons olive oil or butter
4 cloves garlic, chopped, or 2 tablespoons garlic flakes
¼ cup onion, chopped
1 carrot, chopped
½ cup mushrooms, sliced
1 cup radish greens, chopped and blanched
1 cup dried mixed beans (should yield at least 2 cups when cooked)
1 tablespoon curry powder (S&B is recommended)
1-2 tablespoons soy sauce
3 cups rice, cooked
If desired: salt, pepper, brewer's yeast, grated cheese, hot sauce (Sabor Mineiro is recommended)
DIRECTIONS
Cook soaked beans in water until tender. (Shoehorn loves his stove-top pressure cooker, but Instant Pot or stove-top boiling will do.) Blanch radish greens in boiling water. Strain and chop. Heat olive oil in skillet on medium-high heat. Add garlic, onion, carrot, mushrooms and radish greens, in that order. Turn heat to medium; saute 1 minute. Cover with beans; do not stir. Add curry powder and soy sauce. Stir when heated through and serve over rice. Season with any combination of salt, pepper, brewer's yeast, grated cheese, or hot sauce, to taste.
#MartisMusicKit #MMK #MartiMendenhall #Food #Music #Podcast #Recipes #MusicAndFood #Cooking #OregonMusicNews #Podcast #Season1Cookbook #QuickBites #WhereAnythingCanHappen #CookingHacks #OregonMusicNews #Jazz #Shoehorn #TapDancing #Jazz #Culture #SavoryBeanSaute #Persimmons #Saxophone

Modern Persian Food
Persimmons

Modern Persian Food

Play Episode Listen Later Nov 2, 2022 18:19


Have you tried persimmons? It's one of the Beats' most beloved fruits. Who do you think loves it more? Join us as we explore the two types of persimmons, including fun ways to gobble them up. Now for a listener quiz: Which are you going to try first, Hachiya or Fuyu? Which is good for baking? Which version can be eaten like an apple without further ripening? Will you be adding them to your fall natural table decor? This week's "Ask the Beats" comes from Bita's daughter, Leyla joon. Leyla wants to know… what is your favorite fall or winter dish to make? Beata: Fesenjoon | Persian Poultry Walnut and Pomegranate Stew; Haleem | Persian Oat and Poultry Porridge. Bita: Soups! Creamy Soup eh Jo | Persian Barley Soup; Chilis. Episodes referenced: Episode 7: Persian Fall Flavors. Recipes referenced: Walnut and Pomegranate Stew – Khoreshteh Fessenjoon – BeatsEats; Persimmon French Toast – BeatsEats. All Modern Persian Food podcast episodes can be found at: Episodes. Co-host Beata Nazem Kelley blog: BeatsEats – Persian Girl Desperately Addicted to Food! Co-host Bita Arabian blog: Oven Hug - Healthy Persian Recipes | Modern Persian Recipes. Sign up for the Modern Persian Food podcast email newsletter here! Subscribe+ to the Modern Persian Food podcast on your favorite podcast player, and tell a friend. Podcast production by Alvarez Audio

Live From The Broken Hammer
LFTBH - 60 - THE END

Live From The Broken Hammer

Play Episode Listen Later Jan 12, 2022 66:03


We have been gone for a while now and well.... it is officially THE END.... of the year. We join up for a quick podcast session to wrap up the new year. We weren't quite quick enough and we have a special guest appearance for the second half. We missed you. We love you. We can't wait to hit the studio again. Happy New Year!    Support the show (https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=ZZKZ7CAJKJM46&currency_code=USD&source=url)

Bamf Radio - Lofi and Chill
Best of 2021 - Licence to Chill

Bamf Radio - Lofi and Chill

Play Episode Listen Later Dec 27, 2021 89:23


Hey everyone! Welcome back! Another year is coming to an end, and with that this season comes to a close. It's been a pretty cool year of mixes, to be honest. I honestly never thought that this mix would go this far; every year there are more and more of you. That's why I want to thank you from the bottom of my heart for all the reviews you leave me. This also brings us to another topic: as you can see, my show is free, and I know you have a heart of gold, but please don't donate money to me. I don't know how you found a way to donate to me, but stop. This show will still be free whether you donate or not, and what's more, you would make me much happier if you spent the money on yourselves. All I ask is that if you like any of the artists featured in the mixes, you follow them and support them. Thank you so much for this wonderful year. I hope you start 2022 with lots of love, lots of happiness and lots of health (mental health is also part of it). I love you all and see you next time

Flower Power Garden Hour
Flower Power Garden Hour 128: Listener Q&A

Flower Power Garden Hour

Play Episode Listen Later Dec 18, 2021 39:43


This is a listener Q&A episode. Questions cover topics including:
Cactus – how to remove new growth
Fuyu persimmons – this year's crop has seeds in the fruit, and in years past has not. Ideas why?
Plumeria – should it stay outside or come inside during the cold weather?
Poinsettias – they are not turning red, even though they are being kept in relative dark. Is there anything that can help them turn red?
Bereavement plants – how do I keep them healthy? There are many different plants in this one.
To ask questions for future shows, submit them at: Facebook, Instagram, or email Marlene at marlenetheplantlady@gmail.com. Find Marlene over on YouTube, Instagram and Facebook

Diary of Doom
Chapter 106 - Low Flying Hawks

Diary of Doom

Play Episode Listen Later Nov 24, 2021 74:06


We are thrilled to feature EHA and AAL (Eddie and Alex respectively), the main creative forces behind Low Flying Hawks, as our guests for this chapter. Along with collaborators Toshi Kasai, Dale Crover, and Trevor Dunn, the duo released Fuyu earlier this year, which is the conclusion to their album trilogy. During our chat with them, we delve into the brilliance of Sabbath's Technical Ecstasy, a real love for slow music that makes you feel, brain dancing, Bohren's giant bass, emotional drummers, life as a Sisyphian task, reckoning with social media, standing out in a crowded scene, why big hands mean big guitar solos, recording on impulse, a possible Danzig cover band, and Scott Walker. Support Low Flying Hawks  Track featured is "Subatomic Sphere" off Fuyu, available now!

El Laboratorio de Juan
53 | TOR DES GÉANTS 2021. ¿Qué material utilicé, y por qué?

El Laboratorio de Juan

Play Episode Listen Later Oct 12, 2021 17:13


In episode number 52 I explained the Tor des Géants, exploring the more personal, intimate side. This week I'll talk about the more tangible and practical side: the gear.
I'll start by analyzing the shoes I used, in the same order I put them on:
1-RaidLight Responsiv Ultra. Used for 54 km.
2-La Sportiva Ultra Raptor. Used for 56 km and 5,030 meters of elevation gain.
3-Hoka Evo Mafate 2. Used for 46 km and 2,626 meters of elevation gain.
4-RaidLight Responsiv Dynamic. Used for 136 km and 12,500 meters of elevation gain.
THERMAL AND WATERPROOF CLOTHING:
Hoko thermal shirts, Geisha and Fuyu models (180 grams in size M).
Hoko long thermal tights.
Main waterproof jackets:
RaidLight Hyperlight MP+ (20/25k)
Haglöfs L.I.M Series Gore-Tex.
Waterproof rain pants:
Raidlight MP+
Lafuma
Haglöfs
Salomon Bonatti Pro
RaidLight Activ 6-liter pack.
Accessories:
LedLenser NEO 10R headlamp (+1 extra battery for each night)
RaidLight Hyperlight MP+ waterproof mittens
Lurbel socks, Desafío and Distance models (between 12 and 14 pairs).
You can contact me at: juan@ellaboratoriodejuan.com
And through my social media by searching for "el laboratorio de juan".

Views From Lot K - A Philly Sports Podcast
October 11, 2021 - Flying into a new season, Eagles Soar, Nittany Lions and Owls fall Silent

Views From Lot K - A Philly Sports Podcast

Play Episode Listen Later Oct 11, 2021 60:30


Max and Steve are back and they are joined by Mike! Before we hear from Mike, Steve and Max recap an Eagles win (0:24) and follow up the weekend of football with the disappointing performance of Penn State (11:40) and Temple (24:42). Mike joins in on the fun for a Flyers season preview (27:08) as they open their season on Friday. The Union (49:14) round us out with a recap of their 2-1 win over FC Cincinnati and a look into where they currently sit on the Eastern Table. Is Steve's losing streak finally over (55:51)? Find out the outcome of Steve's Spectacular Satisfactory Saturday Scholastic Parlay and much more on today's Views From Lot K! Thanks once again to special guest Mike Honick for joining the show today. Check him out on Twitch (https://www.twitch.tv/FUYU_crux) and Twitter (https://twitter.com/HandMeThe_Mike)!   Misfits (Instrumental) by RYYZN https://soundcloud.com/ryyzn Creative Commons  Attribution 3.0 Unported CC BY 3.0 Free Download / Stream: http://bit.ly/-misfits Music promoted by Audio Library https://youtu.be/iSSp4TH7Lks

Views From Lot K - A Philly Sports Podcast
September 13, 2021 - Eagles Demolish Falcons, Temple Takes Flight, Phillies Fire Still Burning

Views From Lot K - A Philly Sports Podcast

Play Episode Listen Later Sep 13, 2021 53:15


Welcome back to the Views From Lot K! So much to recap and discuss in the sports world this week, we had to get Steve to record while he is on vacation! Nick Sirianni and the Birds steamrolled the Falcons on defense, while the offense was hitting new octanes. Max and Steve discuss Jalen Hurts' impeccable decision making and knowledge of the new offensive scheme which led to their commanding win (01:30). Philadelphia football squads only knew how to win as Temple beat the breaks off Akron for their first win of the season. Justin Lynch stepped up to get the offense rolling after havoc on the defense got them back in the game (16:20). The winning ways translated over to State College as Penn State took down Ball State with ease, contrary to Max's warning on Thursday. Steve is already looking ahead to the Whiteout instead of wanting to recap this game (20:45). Max talks the Union with special guest Mike Honick, prepping you for Wednesday's CONCACAF Champions League Semi-Final against Club America. Max and Mike talk formations, desired starting XI's, and how the Union can find a way to win (26:20). The winning ways of earlier do not come to the dumpster fire that is the Phillies organization. Max and Steve discuss their series with the Rockies that kills all hope and passion for the team, but someone didn't tell Bryce Harper to stop trying. His play has kept the Phils in the playoff hunt, dragging the pain and suffering out for the rest of the fans (40:10). Finally, Bets of the Week! (49:00) All this and more on Views From Lot K!   Thanks once again to special guest Mike Honick for joining the show today. Check him out on Twitch (https://www.twitch.tv/FUYU_crux) and Twitter (https://twitter.com/HandMeThe_Mike)!   Misfits (Instrumental) by RYYZN https://soundcloud.com/ryyzn Creative Commons  Attribution 3.0 Unported CC BY 3.0 Free Download / Stream: http://bit.ly/-misfits Music promoted by Audio Library https://youtu.be/iSSp4TH7Lks

3 Boys In A Bar
S2/E2: High Ground

3 Boys In A Bar

Play Episode Listen Later Feb 16, 2021 35:38


Episode 2 of season 2 sees Tom, Marco and Will review another Aussie new release: High Ground. Set in the early 20th century in the remote Arnhem Land region of northern Australia, an indigenous community seeks revenge for an unprovoked and callous massacre by the white colonists. It's Will's turn to bring a whisky to the bar, and he departs the Australian shores in favour of a Japanese blended whisky, the Fuyu. High Ground stars Jacob Junior Nayinggul, Simon Baker, Callan Mulvey, Aaron Pedersen, Ryan Corr, Caren Pistorius, Sean Mununggurr, Witiyana Marika, Esmerelda Marimowa, Maximillian Johnson and Jack Thompson. Directed by Stephen Maxwell Johnson with stunning cinematography by Andrew Commis. In cinemas around Australia.

FM Talk 1065 Podcasts
Plain Gardening with Bill Finch 1-10-21 Hour 1

FM Talk 1065 Podcasts

Play Episode Listen Later Jan 10, 2021 43:34


Lawn and garden expert, author and columnist Bill Finch hosts this weekly Gulf Coast garden show Sundays 9 to 11 AM. Topics include: planting in the next few weeks, this time of year, from the place you understand and care about; gardening-wise, where are we really from?; differences in Southern living; peppers, a class for all, though much variation; starting as early in the process of spring as possible; what is spring; frost changes; a 72-cell self-watering tray that soaks up from the bottom; tomato and pepper seeds; Fuyu persimmons.

Minoreba Rock
Ep. 393 | J-Pop de los 70s, 80s y 90s. Parte II | Minoreba FM

Minoreba Rock

Play Episode Listen Later Jan 10, 2021 110:15


1 — Hiromi Ota - Akai HaiHiru (1976)
2 — Momoe Yamaguchi - Kosumusu 秋桜 (1977)
3 — Kaori Yoshinari - hanikami tenshi (1982)
4 — Candies 冬の窓 - Fuyu no mado (1976)
5 — Hitomi Ishikawa - Kurumi Wari Ningyo (1978)
6 — Seiko Matsuda - hitomi wa Diamond (1983)
7 — REBECCA - Cotton Love (1989)
8 — BOΦWY - CLOUDY HEART (1985)
9 — B'z - dakara sono te wo hanashite (1988)
10 — Sharam Q - Zurui Onna (1995)
11 — The Beat Garden - Sky's is the limit
12 — Okui Masami - Rinbu-Revolution
13 — Anri - remember summer days
14 — Tatsuro Yamashita - Love Space
15 — Kyu Sakamoto - Sukiyaki
16 — Show-ya - Kurenai (Cover)

This Podcast is Propaganda
Episode 17: Slide Another War Crime on the Barbie

This Podcast is Propaganda

Play Episode Listen Later Dec 13, 2020 78:33


G'day Mate! This week we're all going Down Unda' to talk about how good ol' Aussie SAS boys were killing innocents over in Afghanistan. We discuss how  the Australian State is punishing the whistleblower with a possible 50 years in jail after trying for years to have his military superiors, the federal police, and high ranking politicians do something about the war crimes he witnessed, we look at the findings of the leaked Afghan Files, the Brereton Report, and the manufactured outrage around Zhao Lijian's tweet of a "doctored image" a.k.a political art.   CALL OUT FOR GUESTS - Contact me if you know someone or yourself want to be a guest on the Pod. Social Media: Twitter - @thispodispropa Instagram - @thispodispropaganda Youtube - This Podcast is Propaganda  Email - thispodcastispropaganda@gmail.com Libsyn Page -www.thispodispropaganda.libsyn.com   Want to Support the Podcast? You can visit the Pod's Patreon Page: Patreon Link - https://www.patreon.com/thispodispropaganda Visit the Official Merch Shop: Merch Link - https://teespring.com/stores/this-pod-is-propaganda-shop   Show Notes: The Afghan Files https://www.abc.net.au/news/2020-06-23/nick-xenophon-afghan-files-david-mcbride-witness-j-whistleblower/12382154  The Brereton Report https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/rp/rp2021/Chronologies/AllegationsAfghanistan Article on "Doctored Image" https://www.abc.net.au/news/2020-12-01/doctored-image-of-australian-soldier-tweeted-by-chinese-diplomat/12938244 Fu Yu's Photo https://i.insider.com/5fc69265037cbd00186132c2?width=2000&format=jpeg&auto=webp Other Source https://www.abc.net.au/news/2017-07-11/killings-of-unarmed-afghans-by-australian-special-forces/8466642?nw=0  https://www.abc.net.au/news/2016-10-13/supreme-court-judge-examining-special-forces-conduct-afghanistan/7927420 https://www.abc.net.au/news/2017-03-21/women-children-killed-raid-afghanistan-nz-sas-book/8373958

Hi Japanese
HJP 201 – มาแปลเพลงญี่ปุ่นกัน Masawo – Fuyu no Purezento

Hi Japanese

Play Episode Listen Later Dec 9, 2020 17:53


Today we'll translate a sweet love-confession song, just right for the start of winter: Masawo – Fuyu no Purezento. You can find the vocabulary at https://sanshirojournal.com/hjp-201/

Cuentos Infantiles Japoneses
FUYU NO OKURIMONO

Cuentos Infantiles Japoneses

Play Episode Listen Later Nov 11, 2020 8:55


Japanese children's stories, for kids or for learning Japanese

BamBoozled.Boston
WHISKY - FUYU - FREESTYLE

BamBoozled.Boston

Play Episode Listen Later Sep 24, 2020 57:11


FreeStyle episodes are episodes where we may have a lull or a cancellation with guest bookings, and where we just do whatever comes to mind on the fly. This particular episode we introduce a video component in a "new" studio room. This was a ton of fun, and thanks to Mark we were able to pull off a pretty decent video production! If you don't see it immediately on our YouTube channel, be patient, it'll find its way there eventually. We REALLY were all over the place with this one; it was more of a commercial for ADD meds, and what the lack of them could possibly do to people on the whisky. The FUYU was delicious (of course), and is one of our favorites, but because we already have an episode on FUYU, we could've expanded with something else, but go with what you know!!! Thanks for listening, please give us suggestions at any of our socials or email, and don't forget to SUBSCRIBE!!! sean@bamboozled.boston

Deep South Dining
Deep South Dining | Open For Business

Deep South Dining

Play Episode Listen Later Sep 21, 2020 50:19


More than just a place to eat, local restaurants are gathering places for friends and a vital business for the local economy. Today Malcolm and Carol talk with restaurant owner Jeff Good about the effect COVID-19 had on his business and the way he's managing these uncertain times. Also, with the fall season making its arrival, Felder Rushing (The Gestalt Gardener) joins the show to talk about your fall vegetable garden. Let’s eat, y'all!

Persimmon Pudding (as mentioned by Carol during the show)

INGREDIENTS
4 tablespoons/56 grams butter, melted, plus more for the dish
5 Fuyu persimmons (about 2 1/4 pounds), trimmed and chopped
2 eggs, beaten
2 cups/400 grams sugar
1 teaspoon/8 grams baking soda
1 cup/240 milliliters buttermilk
1 ½ cups/190 grams all-purpose flour
2 ½ teaspoons/12 grams baking powder
1 cup/240 milliliters heavy cream
¼ teaspoon/1 ½ grams salt
½ teaspoon/3 milliliters vanilla extract
Dash of cinnamon

PREPARATION
Heat oven to 325 degrees and butter a 2-quart baking dish. Purée persimmons in a food processor or blender until smooth. Strain pulp through a fine mesh strainer into a bowl, using the back of a spoon or a spatula to push purée through. Measure out 2 cups of pulp (discard remaining pulp).
Combine eggs, sugar and persimmon pulp in a large bowl and beat with an electric mixer on medium speed until well mixed. Stir baking soda into buttermilk, then add to persimmon mixture and beat to combine.
In a separate bowl, sift together flour and baking powder. Beat flour mixture into persimmon mixture in 3 batches, alternating with the cream, beginning and ending with the flour.
Stir in melted butter, salt, vanilla and cinnamon. Transfer batter to prepared dish and bake until pudding is set, 1 hour to 1 hour 15 minutes.
Link: http://nyti.ms/1tb0EGa

See acast.com/privacy for privacy and opt-out information.

Otaku no Kissaten
Otaku no Kissaten #01: Given Parte 1 - Não gosta de Boys' Love (vulgo "yaoi")? Tem certeza? Pois talvez vá gostar desse!

Otaku no Kissaten

Play Episode Listen Later Sep 7, 2020 46:38


Want to know more about the series Given, in manga, drama CD, and anime form? In this first part of our debut pilot episode, we give a general overview of the universe of Given, the Boys' Love (BL) series that won fans among every kind of otaku and helped change the genre's image within the community. If you don't know the series, why not give it a chance? And if you're already a fan, get in the mood of Fuyu no Hanashi and get ready for the movie!

BamBoozled.Boston
WHISKY - FUYU Japanese

BamBoozled.Boston

Play Episode Listen Later Jun 23, 2020 22:46


This episode features FUYU whisky, and we LOVED it! The price was right, and the flavor was spot on. Join us for a taste... We were still in COVID-19 lockdown and were still using the Squadcast remote podcast audio system (episodes 1-6). You can hear some digital "ducking" on Mike's track, among several other audio artifacts. I (Sean) found myself making up a few things just to have something to say (haha)... thank God Mike did his homework here! This truly was an amazing whisky, and we recommend it to all. 

The Daily Gardener
January 9, 2020 Japan's Winter Peonies, Andre Baranowski's Garden Wild, Catherine Parr Traill, Elizabeth Gertrude Knight Britton, Beatrix Farrand, Marvin Gaye, Seed Catalog Poetry, The Lifelong Gardener by Toni Gattone, Jute Rope Plant Basket, and Silve

The Daily Gardener

Play Episode Listen Later Jan 9, 2020 19:40


Today we celebrate an incredible woman, a true pioneer of Canada and a writer and botanical illustrator. We'll learn about one of the most dedicated and famous bryologists, who helped establish the New York Botanical Garden. Today’s Unearthed Words feature wonderful thoughts on the gardener's favorite winter reading material - seed catalogs. We Grow That Garden Library™ with a book that helps us garden through the back half of our lives. I'll talk about a garden item that will brighten up a corner in your cozy winter home, and then we’ll wrap things up with a most charming, memorable, and heartbreaking story; I'm so glad I stumbled on it, and I am so excited to share it with you. But first, let's catch up on a few recent events.   Subscribe Apple | Google | Spotify | Stitcher | iHeart   Curated Articles Japan's winter peonies (kan-botan) - IKIDANE NIPPON Check out Japan's winter peonies. They aren't allowed to flower in the spring or summer and are forced to bloom in winter. Each peony is covered with a little straw tent. Kan-botan (寒牡丹) or Fuyu-botan (冬牡丹) means “winter peonies.”   Andre Baranowski's Garden Wild - Flower Magazine New Book: Andre Baranowski’s Garden Wild. One garden features Jorge Sánchez - who transplanted stumps of slash pines from Florida and added mosses. Ingenious.   Now, if you'd like to check out these curated articles for yourself, you're in luck, because I share all of it with the Listener Community in the Free Facebook Group - The Daily Gardener Community. There’s no need to take notes or search for links - the next time you're on Facebook, search for Daily Gardener Community and request to join. I'd love to meet you in the group.   Important Events 1802   Today is the birthday of the Canadian-English writer and botanical illustrator Catherine Parr Traill - she was such an amazing woman. When Catherine was 30 years old, she was newly married, and she immigrated with her husband to Canada. Her family wasn't thrilled about any of it. 
They didn't approve of her choice of husband, and they certainly didn't like the idea of her leaving England. Yet there she was, in a boat on the river to Peterborough, when she saw some Cardinal Flowers growing along the riverbank. Catherine was enthralled. The flowers in Canada were drastically different from those she'd grown up with, and her passion for wildflowers would help sustain her during the hardships of settling in the wilds of Canada. Catherine ultimately became known as the Botanist of the Backwoods. Although she had never formally studied botany, her accomplishments were quite extraordinary. Catherine published a book called Canadian Wildflowers. Her niece took care of the illustrations. The book was helpful and beautiful. It was bound together in a large folio with colored plates and is now regarded as a rare and valuable antique book. One of the reasons the book is now so rare is that back in the mid-to-late 1800s, it was used to decorate homes. Young mothers and wives would tear out the beautiful large hand-colored plates and frame them, probably displaying them in their parlors or bedrooms. Settling in the Backwoods of Canada nearly broke her husband. Clearing the land was backbreaking work; the weather, especially during the winter, was incredibly harsh, and for the first three years, there was nothing to harvest. Although they were landowners, there was little labor around to help. One of their homes was destroyed in a fire, and another was seized by the bank to pay off debt. It was Catherine's general optimism and enthusiasm for the outdoors that carried her family through the hardest years. In all, Catherine spent 65 years in Canada. She raised nine children. Experts agree that her best work was a book called Backwoods of Canada, which was intended to be a handbook for emigrating women. Catherine's tone was cheerful and direct.  
Her entire life, Catherine was incredibly observant and resourceful, and she pulled those skills together as she created the content for her writing. Despite all the terrible hardships she and her family endured, Catherine was a prolific writer, and she always stayed sweet. Catherine died in her home at the age of 98.   1857   Today is the birthday of the famous bryologist Elizabeth Gertrude Knight Britton. Elizabeth married the botanist Nathaniel Lord Britton. She was a teacher, and he was a professor of botany at Columbia University. Together, they helped create the New York Botanical Garden in the Bronx. Their primary source of inspiration was Kew Gardens in London. Elizabeth was a bryologist. Bryology is the study of mosses. The root, bryōs, is a Greek verb meaning to swell and is the etymology of the word embryo. Bryology will be easier to remember if you think of the ability of moss to expand as it takes on water. Uniquely skilled for her time, Elizabeth Britton was intelligent, resourceful, and not afraid to speak her mind. The author Elizabeth Gilbert used the real-life Elizabeth Gertrude Britton as the inspiration for the heroine of her novel The Signature of All Things. In researching Britton, Gilbert read through many of her letters and correspondence. Gilbert said that “In one of her letters, a fellow botanist had sent her a species of moss he thinks he has discovered and wants to name after himself. But Britton replied something like, ‘Do your research, my friend; I've got 20 of these in my cabinet already.’” Elizabeth Britton was also dedicated to conservation. In 1902, Elizabeth helped found the Wildflower Preservation Society of America.   2004   Today the Beatrix Farrand Society purchased the Garland Farm under the mission "to foster the art and science of horticulture and landscape design, with emphasis on the life and work of Beatrix Farrand." The goal was to preserve Garland Farm and Beatrix Farrand's final garden. 
Beatrix was a landscape gardener and landscape architect in the United States.     1969   Fifty-one years ago today, “I Heard It Through The Grapevine" by Marvin Gaye held the #1 spot on the charts. It stayed there for seven weeks.   Unearthed Words Today’s Unearthed Words are all about seed catalogs. If you are a new gardener, welcome to the joy of curling up on the couch with a cup of coffee and a notebook and a seed catalog. If you’re a veteran gardener, you've got this down. In either case, you’ll enjoy these verses and poems on a gardener’s favorite winter activity: going through seed catalogs.   There are two seasonal diversions that can ease the bite of any winter. One is the January thaw. The other is the seed catalogs. — Hal Borland   Aside from the Garden of Eden, man’s great temptation took place when he first received his seed catalog. —  Henry Wadsworth Longfellow, 1807-1882, American poet   For gardeners, this is the season of lists and callow hopefulness;  hundreds of thousands of bewitched readers are poring over their catalogs, making lists . . . , and dreaming their dreams. —  Katharine White, “A Romp in the Catalogues,” The New Yorker, 1958, collected in Onward and Upward in the Garden, 1979   I read [garden catalogs] for news,  for driblets of knowledge,  for aesthetic pleasure,  and at the same time, I am planning the future -  so I read in dream. —  Katharine White, in The New Yorker, March 1, 1959, collected in Onward and Upward in the Garden   I have seen women looking at jewelry ads with a misty eye and one hand resting on the heart, and I only know what they’re feeling because that’s how I read the seed catalogs in January. —  Barbara Kingsolver, Animal, Vegetable, Miracle, 2007   I don't believe the half I hear, Nor the quarter of what I see! But I have one faith, sublime and true, That nothing can shake or slay; Each spring I firmly believe anew All the seed catalogs say! 
—  Carolyn Wells   Grow That Garden Library The Lifelong Gardener by Toni Gattone The subtitle to this book is: Garden with Ease and Joy at Any Age. Carl Honoré, the author of In Praise of Slowness, said this about Toni’s book: “The secret to making the most of later life is to keep doing what you love. With practical advice and gentle inspiration, Gattone shows us how gardening can work for people of any age.” As a Master Gardener, Toni teaches people how to garden all the time. One of the things she started noticing is that the majority of her students are seniors. As a senior herself, Toni quickly learned that adaptive gardening is a vital practice for people who want to continue to work in their gardens as they age. As Toni says, “My generation, the Boomers, doesn't want to give up the things we love just because we're getting older. Never give up is our motto. My purpose for writing this book is to share what I've learned about how to keep gardening even when your back or knees are screaming at you.” And Toni offers ten adaptive gardening rules to live by. I won't read all ten of them to you, but I'll share a few to help you get the gist: one of the best things you can do for your body is to stretch before you start gardening; save money and time by planting perennials and shrubs instead of annuals; and finally, look for ways to make your gardening life easier, with self-watering containers and a tool sharpener.   Great Gifts for Gardeners Woven Jute Rope Plant Basket up to 10 inches Flower Pots Floor Indoor House Potted Plant Planters Pots Washable Storage Organizer Basket Natural Materials Handwoven Rustic Home Décor, 11×11 inches Color: Black and Beige Stripes Woven Rope Plant Basket – Turn Your Indoor Plants into Modern Home Décor We’ve designed our woven rope plant basket using carefully selected premium-grade cotton and jute threads. We then expertly hand-wove them tightly together to ensure the rope basket’s durability and stability. 
Pot and plant are not included. Best Fits 10’’ Flower Pot - Measuring 11 inches in diameter and 11 inches in height approximately, this house plant potter can easily fit flower pots with a diameter of 10 inches or less. It also looks lovely with a variety of flowers, indoor trees, and succulents like the fiddle-leaf fig tree, cactus, monstera plant, aloes, and snake plant. With the minimalist look and design of our big indoor potted plant planter, it’s ideal for adding a rustic yet modern touch to any room in your home, in offices and hotel lobbies, restaurants, and many other places. INCREDIBLY VERSATILE: While our woven basket makes a great plant potter, it’s also perfect to use as a storage bin to help keep your home organized. It can easily carry clothes, bed sheets, books, fruits and veggies, office supplies, and more. EASY TO CARRY & STORE: Thanks to our rope basket’s concealed carrying handles on both sides, you can easily pick it up and move it anywhere you’d like. Plus, with its cotton and jute thread materials, it’s easy to fold and store away for later.   Today’s Botanic Spark Today's profile of Catherine Parr Traill is quite something, and I ran across an adorable story when I was researching her (it's a little heartbreaking as well). As I mentioned earlier, Catherine and her husband, Thomas, faced extraordinary challenges as settlers in the Backwoods of Canada. Whatever loveliness or dear possessions they had brought with them from England ended up either ruined or sold or lost to them - one by one - in their great effort to survive. At one point, the only prized possession Catherine had left was her set of silver spoons. They had been in her family for generations. One day, Catherine realized her spoons were gone. Distressed and alarmed, Catherine discovered that her young son Willie had taken them and planted them in the garden. When she asked him why, he said he wanted to get "more poons" (he couldn't say his s's properly). 
In any case, the entire family went out into the garden and searched and searched - but never found the silver spoons. But, I'm betting that every time Catherine worked in the garden, she was hopeful that she might run across them.

Podcast and a Half
Episode #22 Saxy Fruit

Podcast and a Half

Play Episode Listen Later Dec 12, 2019 78:11


We have a professional on the podcast, thanks again Tom. Join us for some tongue twisters, a VERY short riddle, and deleting things from existence.

Anime Café Podcast

Our baristas have lost count of how many times they've cried listening to Fuyu no Hanashi! This time we bring you an episode loaded with caffeine so you don't miss the band's rehearsals, and we share our impressions of Given. Remember that you can send us suggestions and comments on all our episodes through our social networks, Instagram and Twitter @animecafepa. Listen to us on: Spotify, Apple Podcast, Google Podcast, Pocket Cast and RadioPublic. #givenanime #given #anime #podcast

{abstract:japan}
Podcast 144: Kurisumasu Special!

{abstract:japan}

Play Episode Listen Later Dec 18, 2016 65:36


01 - Paul Anka “Christmas in Japan” from It’s Christmas Everywhere 02 - Shonen Knife “Space Christmas” from Single 03 - Funky Monkey Babys “Boku Wa Santa Claus” from Single 04 - AAA “Winter lander!!” from All 05 - SHANZA “Winter’s Review” from Single 06 - GLAY “Winter, again” from Single 07 - 中川晃教 “終らないクリスマス” from Single 08 - Yogurt Kinoko “Happy Xmas Sunshine Girl” from Emma & the Sun 09 - Kotaro Oshio “Last Christmas” from Blue Sky ~Kotaro Oshio Best Album~ 10 - 宏実 “Our Christmas Song” from HOT CHOCOLATE 11 - The Nightmare Before Christmas “Poor Jack” from Japanese Soundtrack 12 - the brilliant green “angel song -イヴの鐘-” from Los Angeles 13 - Various Artists “CANON ROCK” from MV 14 - The Douyou Pops 1 “Yuki” from Christmas to Fuyu no Uta Shuu 15 - Perfume “Twinkle Snow Powdery Snow” from Game 16 - Various Artists “Happy Christmas” from MV Notes: Have a very merry abstract Kurisumasu!! -Tyler Abstract.

christmas japan yuki fuyu kurisumasu
The Pen Addict
214: The Garish Quota

The Pen Addict

Play Episode Listen Later Jul 20, 2016 69:49


Brad and Myke hit on everything this week, from pencils and sharpeners to multi-cartridge fountain pens, and how Fuyu-gaki is the worst orange. They also have the definitive answer for the Trump pen.

otakugeneration's Podcast
OtakuGeneration (Show #523) Fuyu no hi

otakugeneration's Podcast

Play Episode Listen Later Jun 17, 2015 81:49


  Shownotes :: (show 523) :: (website) :: (podcast feed) :: (direct download) :: (direct iTunes link) With Fuyu no hi, recorded live June 14th, 2015. This week we talked about a set of collaborative shorts based on a poem. Join us, for another week, another show, with more otaku-tainment! Also #DFMPIA (Don't Forget Matt Pyson is still Awesome!) Community OG Networks Facebook (the page) OG twitter Call Us! ::: Skype Voicemail ::: You can leave us voicemail using Skype, at: otakugeneration or call: (610) 628.3154 -or- (206) 965.8154 ::: Google Voice ::: You can also leave us voicemail with Google Voice, at: 484.393.1405; remember to hit # after the tone.   Mentioned Stuff and Link(s) (during the show) OG Link Stitcher Patreon Bernhard Rants! (1.0) by Bernhard :: (rants@otakugeneration.net) Bernhard entertains us this week with another delightful and humorous rant! Monthly DVD Releases (2.0) by Albert :: (releases@otakugeneration.net) 2015-06-16 Captain Earth Collection 1 (S) (DVD, Blu-ray) The Cat Returns (DVD/Blu-ray Combo) Is This A Zombie?(aka Kore wa Zombi Desu ka?) of the Dead (DVD/Blu-ray Combo) (Anime Classics) Spirited Away (DVD/Blu-ray Combo) Tokyo Ravens Part 2 (DVD/Blu-ray Combo) 2015-06-23 Bleach(Uncut) Set 25 Hayate the Combat Butler: Cuties Season 4 Collection (S) (DVD, Blu-ray) Lupin the 3rd: The Castle of Cagliostro Collector's Edition (Blu-ray) Origin Movie (DVD, Blu-ray SE) (S.A.V.E. Edition) Ping Pong (DVD/Blu-ray Combo) 2015-06-30 .hack//G.U. Trilogy Movie (S) A Lull in the Sea Complete Series Premium Edition (Blu-ray) BlazBlue Alter Memory Complete Series (DVD/Blu-ray Combo) Chi's New Address (S) Magical Warfare Complete Collection (DVD, Blu-ray) My Little Monster Premium Edition (S) (DVD/Blu-ray Combo) Soul Eater Not! 
(DVD/Blu-ray Combo + LE) Soul Eater Complete Series Premium Edition (Blu-ray) Space Brothers Collection 3 (S) (DVD, Blu-ray) Sword Art Online II Set 1 (DVD, Blu-ray, Blu-ray LE) The Irregular at Magic High School Set 1 (S) (Blu-ray) Turn A Gundam Part 1 (S) Nickname ME! by Alan :: (nickname@otakugeneration.net) None this week. Don't let that stop you! =D You know you want one. Don't be shy! Email us and tell us something about you! Then you'll be uniquely identifiable among the other OG listeners! Check out the most recent nickname logo-mashup(s)! lady paperlock     And some other nickname logo-mashups we did! Email us for your nickname and you'll get one as well!   ...and now you get your own logo-mashup for your nickname... If you send us feedback, and you want us to nickname you, email us, at: otaku.generation@gmail.com With somewhere in the subject: NICKNAME ME NOTE: If we've already nicknamed you, you can't be re-nicked... unless you plead... lots! ...and we mean LOTS!!! =D For Podcast promos or MP3 Feedback, email us, at: otaku.generation@gmail.com With the exact subject: MP3 PROMOTION :: (for podcast promos) MP3 FEEDBACK :: (for audio feedback) In the body of the message, put: Your Name Your Podcast Your website Brief copy about your podcast for us to read NOTE: No copyrighted music, or clips! We won't play promos with this kind of content! Unless you own the copyright, and have given us written authorization! Join us next week... for something that smells like a podcast, looks like a podcast, and sounds like a podcast but really isn't what you thought. A new show every Wednesday, so "podcast-in" with us! Download us, sneer at us, but give us a listen... and maybe we won't respect you in the morning... *unless you're wearing PJs, then we can talk* It's still June! So far this month we marched forth and had pie. If you want to send us goodies like pie, we often don't say no. 
(pie, cake and cookies, papercraft, OG comics; like many listeners have) ...and if you can't do that, we'll take your pity or votes and [insert OG-bribes here]. We appreciate the votes, donations, and comments even if we don't read them on the show... and iTunes reviews. Word-of-mouth advertising is also appreciated. Thanks for the support!

Sesho's Anime And Manga Reviews
Podcast Episode 214: Biomega Volume 2

Sesho's Anime And Manga Reviews

Play Episode Listen Later Jun 26, 2010 11:14


Podcast manga review of Biomega Volume 2 by Tsutomu Nihei. Translated by John Werry. Adapted by Stan! Originally published in Japan by Shueisha. Published in the US by Viz Signature, $12.99, Rated M for Mature. From the back cover:"In Tsutomu Nihei's nightmare vision of the future, the N5S virus has swept across the Earth, turning most of the population into zombie-like drones. Zoichi Kanoe, an agent of Toa Heavy Industry, is humanity's last hope, and he's not even human! With the help of Fuyu, a digitized intelligence built into the computer system of his heavy dual coil motorcycle, Zoichi's search for the key to salvation will take him on a journey across surreal landscapes and hurl him into battle against mind-bending evil. Prepare yourself for the ultimate trip-- Prepare yourself for the world of Biomega. After capturing Eon Green, DRF forces are amassing around Toa Heavy Industry headquarters and have taken Dr. Kurokawa and his daughter into custody. Zoichi must attempt a rescue--Dr. Kurokawa's laboratory may yield critical information on Eon Green. Elsewhere, Toa Heavy Industry agent Nishu Mizunoe searches for Kozlov Grebnev and the secrets he knows about  the DRF's research, origins and their apocalyptic plan for the entire human race!"My Grade: A

Sesho's Anime And Manga Reviews
Podcast Episode 212: Biomega Volume 1

Sesho's Anime And Manga Reviews

Play Episode Listen Later Jun 16, 2010 12:39


Podcast manga review of Biomega Volume 1 by Tsutomu Nihei. Translated by John Werry. Adapted by Stan! Originally published by Shueisha in Japan. Published in US by Viz Signature, $12.99, Rated Mature. From the back cover: The N5S virus has swept across the earth, turning most of the population into zombie-like drones. Zoichi Kanoe, an agent of Toa Heavy Industry, is humanity's last hope, and he's not even human! With the help of Fuyu, an artificial intelligence built into the computer system of his Heavy Duty Coil motorcycle, Zoichi's search for the key to salvation will take him on a journey across surreal landscapes and hurl him into battle against mind-bending evil. Zoichi Kanoe plunges into the depths of 9JO - an island city in the middle of the Pacific Ocean - in search of Eon Green, a girl with the power to transmute the N5S virus. He's not the only one looking for her, though... Agents of the Public Health Service's Compulsory Execution Unit are also in hot pursuit. Zoichi and his transhuman allies have no time to waste; the countdown to the zombie apocalypse has begun! My Grade: A Check out www.Vampybit.me 

Escucha japonés
48: Las cuatro estaciones

Escucha japonés

Play Episode Listen Later Apr 27, 2009


春は桜。はるは さくら。Haru wa sakura. In spring, the cherry blossoms.
夏は暑い。なつは あつい。Natsu wa atsui. In summer it is hot.
秋は紅葉。あきは こうよう。Aki wa kooyoo. In autumn, the turning leaves.
冬は寒い。ふゆは さむい。Fuyu wa samui. In winter it is cold.
Interesting vocabulary:
春 (はる, haru): spring.
夏 (なつ, natsu): summer.
秋 (あき, aki): autumn.
冬 (ふゆ, fuyu): winter.
暑い (あつい, atsui): it is hot.
寒い (さむい, samui): it is cold.
桜 (さくら, sakura): cherry tree.
紅葉 (こうよう, kooyoo): turning leaves (in autumn).
Ah, the seasons! It's such a popular topic in the comments that we couldn't resist bringing it up. It also helps that we find it very easy and useful. Did you enjoy hanami this year? And last year?

Japanese Kanji - Characters

fuyu