Podcasts about vicuna

  • 39 PODCASTS
  • 91 EPISODES
  • 50m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Dec 16, 2024 LATEST

POPULARITY

[Popularity trend chart, 2017–2024]


Best podcasts about vicuna

Latest podcast episodes about vicuna

OnBoard!
EP 54. In-depth conversations with top open-source AI projects: the open-source LLM ecosystem, Agents, and the strength of China

OnBoard!

Play Episode Listen Later Dec 16, 2024 199:06


When it comes to the development of generative AI, open source is absolutely one of the most critical topics. This episode's guests cover the most noteworthy companies in the open-source LLM space; it is truly a golden lineup! First, a quick update: last Sunday we held the first OnBoard! in-person listener meetup in Beijing, and it exceeded all expectations! Over 250 people signed up within four days of registration opening. From 9 a.m. to 3 p.m. on Sunday, across topics from robotics to AI, startup investing, and software going global, the 100-person venue stayed almost full to the very end. Thank you so much for your support!

Hello World, who is OnBoard!?

Back to this episode: we take a deep dive into the open-source LLM ecosystem. In the year-plus of generative AI's breakneck development, open source has been impossible to ignore. The rapid progress of open models, from Meta's Llama 3 to Mistral's latest releases, and their catch-up with closed models like GPT-4, is not only impressive; it has also accelerated the real-world deployment of AI products. Meanwhile, the ecosystem around LLMs, from inference acceleration to developer tools to intelligent agents, has already produced leading companies like Langchain, yet all of this seems to be just the tip of the iceberg. Especially worth noting: as models led by Chinese teams, such as Alibaba's Qwen series, Deepseek, and Yi, gain prominence on the international stage, we cannot help but ask whether, beyond imitation and catch-up, China's work on LLMs holds more achievements worth paying attention to and being proud of.

Today, Monica has the honor of hosting several highly representative guests: an open-source veteran from Huggingface; the open-source lead of Tongyi Qianwen (Qwen), who is also a core member of OpenDevin, one of the most closely watched projects in the Agent space; and the lead of vLLM, among the most internationally influential open-source projects. Together they offer first-hand perspectives from every corner of the open-source LLM ecosystem!

The guests are such treasure troves that our conversation stretched across every aspect of LLMs and ran nearly four hours! In the first half we discussed infra innovations and the technology and ecosystem behind software-development agents such as the red-hot OpenDevin. In the second half we returned to the theme of open-source LLMs and discussed: How might the open- versus closed-source ecosystem of foundation models evolve? How does the commercialization of open models compare with the open-source business models of the big-data era, such as Databricks? And how do you build an open-source project with international influence?

Guest introductions:

Tiezhen Wang, engineer at Huggingface. A bridge between China's and the world's open-source AI ecosystems, he has been around from the Google TensorFlow era to Huggingface's early days, and has deep insight into both communities.

Junyang Lin, open-source lead of Tongyi Qianwen (Qwen). As Qwen's main voice in the global open-source community, he has not only witnessed the evolution of open source but is also a core team member of OpenDevin, the closely watched open-source Agent project.

Zhuohan Li, PhD at UC Berkeley, leads the famous vLLM, the LLM inference framework that has become an industry standard. His Sky Lab is hailed as a cradle of open-source infrastructure, from the multi-billion-dollar Databricks to Anyscale (the company commercializing the open-source compute framework Ray). He has also been deeply involved in internationally known open-source projects such as Chat Arena and Vicuna, bringing front-line international experience and plenty of technically ambitious substance on LLM infra and the surrounding ecosystem!

OnBoard! host: Monica, USD VC investor, formerly on AWS's Silicon Valley team and at an AI startup; runs the WeChat account M小姐研习录 (ID: MissMStudy) | Jike: 莫妮卡同学

We also covered core LLM topics such as data and evaluation: truly comprehensive, while keeping the depth of front-line practitioners. Rather than splitting the episode in two, use the timestamps in the show notes to jump straight to the topics that interest you (though I think every topic is great!). One disclosure: neither the open-source community Huggingface nor the projects discussed (including Alibaba's Qwen, OpenDevin, Deepseek, Yi from 零一万物, and vLLM) paid for any advertising; everything is heartfelt sharing from the guests, with no ads throughout! Of course, if you or other AI companies would like to sponsor this labor-of-love podcast, you are more than welcome! The three-hour hardcore marathon starts now, enjoy!

What we talked about:
05:28 Guest self-introductions and interesting open-source AI projects
18:37 How vLLM started and became a top global project; why do we need an LLM inference framework?
30:24 Agent frameworks: what inference challenges do complex agents like OpenDevin create?
40:37 What new tools does a good coding Agent still need? What will multimodality change?
56:16 What kind of Agent framework do we need? Why is the open-source community best placed to build it? Will frameworks converge?
67:46 What is Crew AI? How should we think about multi-agent architectures?
73:11 Drawing on the history of front-end frameworks, how does a framework become an industry standard?
77:54 The state of open-source LLMs on Huggingface; key progress over the past year; different ways of open-sourcing; how should you gauge an open model's popularity?
94:27 How to understand the ecosystems around different open model architectures? How has Qwen built a better open-source ecosystem through architecture evolution?
104:59 What innovations have Chinese open-source LLM projects contributed? How are model architectures changing?
112:17 Why might new model architectures open new commercialization opportunities? What can we learn from past open-source commercialization?
119:22 Have we hit the ceiling of current LLM architectures? What would a new architecture look like?
128:03 What did Zhuohan learn from working on Vicuna, one of the earliest open-source LLMs? How should academia and industry divide the work in the LLM ecosystem?
140:48 What notable progress is happening in datasets for LLMs?
149:42 Why did Mistral take off so fast? What lessons are there for building a first-class international open-source project? vLLM's insights on both principles and tactics?
166:13 How did Chatbot Arena start? Why does model evaluation matter so much? What challenges and progress lie ahead?
180:49 Zhuohan's thoughts on commercializing vLLM; how much further can inference costs fall?
188:17 Quickfire: over the past year of generative AI, what exceeded expectations and what fell short? What is worth looking forward to?
Companies and key terms we mentioned:
Qwen, Qwen-2
OpenDevin: opendevin.github.io
vLLM: github.com
Yi (Github), 零一万物 (01.AI)
Chatbot Arena: huggingface.co
AutoGPT: github.com
crew AI: www.crewai.com
autoAWQ: github.com
LLM.c: github.com
Flash attention: github.com
Continuous batching: an LLM serving technique that adds and removes requests from a running batch at every decoding step, rather than waiting for the whole batch to finish, improving throughput and GPU utilization.
KV cache: the cache of attention key and value tensors for tokens already processed, so that each new token only computes attention against stored keys and values instead of recomputing them.
PagedAttention: vLLM's technique for storing the KV cache in fixed-size, non-contiguous memory pages, analogous to virtual memory, which reduces fragmentation and allows much larger effective batch sizes.
Quantization: lowering the numerical precision of model weights (and sometimes activations) to fewer bits, shrinking model size and speeding up computation.
Direct Preference Optimization (DPO): Your Language Model is Secretly a Reward Model
Google Gemini: deepmind.google
Adept: www.adept.ai
MetaGPT: github.com
Dolphin: an open-source, uncensored, commercially licensed dataset and series of instruct-tuned language models based on Microsoft's Orca paper
Common crawl: commoncrawl.org
Tiezhen's talk: Booming Open Source Chinese-Speaking LLMs: A Closer Look, Slides
One year of Tongyi Qianwen: choices and reflections on the open-source fast lane | ModelScope in-depth interview
Alibaba's Junyang Lin: LLMs alone are not enough for many people; building multimodal Agents is key | China AIGC Industry Summit

Follow M小姐's WeChat account for more on US-China software, AI, and startup investing: M小姐研习录 (ID: MissMStudy). If you like OnBoard!, you can also tap the tip button and buy us a coffee! If you listen on Apple Podcasts, please give us a five-star review; it matters a lot to us. Finally, come join the OnBoard! listener group to meet high-quality fellow listeners; we also organize offline themed meetups, open live audits of podcast recordings, guest interactions, and other experiments. Add either assistant on WeChat, onboard666 or Nine_tunes, and they will pull you into the group. Hope to see you there!
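To make the KV cache entry in the glossary above concrete, here is a toy single-head sketch (shapes and names are our own, not vLLM's implementation) of why caching keys and values speeds up decoding:

```python
import numpy as np

def attend(q, K, V):
    """Attention of one new query against all cached keys/values (single head)."""
    scores = K @ q / np.sqrt(q.shape[0])     # one score per cached token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over past tokens
    return weights @ V                        # weighted sum of cached values

d = 8
K_cache, V_cache = np.empty((0, d)), np.empty((0, d))
for step in range(5):                         # toy decode loop
    k, v, q = np.random.randn(3, d)           # projections for the newest token only
    K_cache = np.vstack([K_cache, k])         # append once, instead of recomputing
    V_cache = np.vstack([V_cache, v])         # every past key/value at every step
    out = attend(q, K_cache, V_cache)
```

Each decode step does work proportional to the number of cached tokens rather than re-running the full sequence, which is the memory-for-compute trade that techniques like PagedAttention then manage at scale.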

The Great Women Artists
Cecilia Vicuña

The Great Women Artists

Play Episode Listen Later Nov 12, 2024 43:22


I am so excited to say that my guest on the GWA Podcast is one of the most trailblazing artists and poets in the world right now, Cecilia Vicuña. Born in 1948 in Chile, and based between Santiago and New York, Vicuña is hailed for works that are as ephemeral as they are permanent, as colossal as they are minute, as fragile as they are strong, and that bring together sound, weaving, language, and community. Educated in Santiago in the 1960s, life took Vicuña to London to study at the Slade School of Art on a scholarship in the 1970s, but because of the Pinochet regime that began in 1973, she was forced to live in exile. Soon she went to Bogotá, Colombia, before moving to New York City, where she has remained ever since, in the same loft and tending to the same community garden. Since Vicuña was a teenager, she has focussed on a political orientation for her art because, as she has said, she "understood that the life of this planet was endangered". Through artforms she calls "arte precario" – precarious art, because it disappears and is vulnerable – Vicuña has held up a poignant mirror to our world through installations that meld twigs, bamboo, stones, and shredded textiles. While they show us the world's beauty, they also convey its vulnerability: warning us about what will happen if we don't wake up in time to protect our ecosystems… At the heart of her art is language – specifically the quipu, meaning "knot" in Quechua, an Andean system of encoding information that conveys as much as an alphabet and was used for 5,000 years before being wiped out during colonisation – as well as the importance of togetherness. Because, in a world as destructive as ours, we need more than ever to unite and to rebuild the planet for our future descendants; as she says, "not only for the survival of our species, but because it is joyful, fun, beautiful and delightful." This November, I am excited to say that a new exhibition of Vicuña's work will open at Lehmann Maupin Gallery in NYC, featuring re-rendered versions of paintings she made in the 1970s while on a trip to Bogotá and Rio. Dazzling in hues of pinks and yellows, they explore Yoruba mythology, which represents human or divine characteristics and concepts of nature, and I can't wait to find out more. https://www.lehmannmaupin.com/exhibitions –– THIS EPISODE IS GENEROUSLY SUPPORTED BY THE LEVETT COLLECTION: https://www.famm.com/en/ https://www.instagram.com/famm_mougins // https://www.merrellpublishers.com/9781858947037 Follow us: Katy Hessel: @thegreatwomenartists / @katy.hessel Sound editing by Lauren Armstrong Carter Music by Ben Wetherfield

Buffy the Gilmore Slayer
Bunheads Final Four: Part 1

Buffy the Gilmore Slayer

Play Episode Listen Later Jul 30, 2024 75:56


Bunheads Episode 15 "Take the Vicuna"/Bunheads Episode 16 "There's Nothing Worse Than a Pantsuit"Witness the drama unfold as boyfriends *gasp* try to sit with their girlfriends at lunch, and Michelle and her brother... don't go to Burning Man? You thought Michelle's brother was bad? Wait till you meet their mom! Sasha's new apartment has everyone in a tizzy. Fanny makes a deal with Milly, after Milly pulls some real Paris sh*t. We were super excited to see Sal again, but for us, he's lost a bit of that somehow not creepy, butt-identifier charm. Michelle's having a whole new crisis. And we've got some hot takes about a 40-year old movie.Want more content? Sing up for our Patreon! We've got 3 tiers of bonus content, including access to our Discord, early extended episode teasers, outtakes, watch parties, and of course, our Angel and Veronica Mars video reviews! patreon.com/bryanandstaceyDon't want to join Patreon? You can access our Veronica Mars content right here on our feed by subscribing for $5/month.Don't forget to check out our music video for "Getaway (Stars Hollow)" from super talented musical artist, The Other Realm. Check out his other music and follow him on social media @theotherrealmmusic.Introduction - 0:00Bunheads Episode 15: Take the Vicuna - 10:04Movies One of Us Has Never Seen - 38:02Bunheads Episode 16: There's Nothing Worse Than a Pantsuit - 47:55Next Time - 1:13:40Follow us on social media for more Buffy/Gilmore content, and to see what other projects we're up to!YouTube:Bryan & StaceyInstagram:@gilmoreslayer/@bryanandstaceyTikTok:@gilmoreslayer/@bryan.and.staceyTwitter:@gilmoreslayer/@bryanandstaceyFacebook:Buffy the Gilmore Slayer/Bryan & StaceyEmail us at: bryanandstaceyreviews@gmail.comWebsite: bryanandstacey.comTheme song written and performed by @louiearonowitzSend us a Text Message.Support the Show.

Canadian Trout Bum
CanadianTroutBum Podcast #9 - Vicuna Dubbing

Canadian Trout Bum

Play Episode Listen Later Mar 30, 2024 56:44


CanadianTroutBum Podcast #9 – This episode we speak with David, the owner of the England, UK-based small business Vicuña Dubbing. We discuss the past, present & future, with lots of laughs along the way. Please visit @vicunadubbing on Instagram and tell David that Thomas from the CanadianTroutBum sent you. https://www.vicunadubbing.co.uk/

The top AI news from the past week, every ThursdAI

"...Happy birthday dear ThursdAIiiiiiiii, happy birthday to youuuuuu

Nadmitov Live
Will YandexGPT Catch Up with ChatGPT? AI: The Race for the Leader (ChatGPT, GPT-4, LLaMA, Bard, Yandex GPT, etc.)

Nadmitov Live

Play Episode Listen Later Jan 15, 2024 58:51


A conversation about the rise of large language models (LLMs); ChatGPT and its predecessor InstructGPT; the current leader, GPT-4; the impact of artificial intelligence (AI) on the labor market; AI hallucinations; benchmarks for evaluating chatbot performance; training engineering talent in IT; machine learning courses; the specialized master's program at MIPT; the use of English-language terminology; whether chatbots can write code; the job market for programmers; how OpenAI overtook Google and Meta; Ernie Bot and Chinese developments, and the challenges Chinese teams face; open source versus monopolies; how Meta's LLaMA leaked into the open and who benefited; Yandex GPT; apocalyptic forecasts about AI and when AI will reach human-level intelligence; and AI regulation.

Timecodes:
00:00 – Trailer
02:03 – Biography
02:40 – Working at Yandex
03:18 – The hype around LLMs
05:34 – AI's impact on the labor market
09:43 – ChatGPT, GPT-4
12:10 – AI hallucinations
14:19 – Performance benchmarks
18:28 – T9
19:12 – Training IT talent
20:30 – AI courses
21:30 – The MIPT master's program
26:19 – Online academy
29:17 – English-language terms
31:48 – The IT job market
33:45 – LLMs
34:41 – How OpenAI overtook Google and Meta
37:34 – Ernie Bot and Chinese developments
41:56 – The cost of entering the LLM market
44:05 – How Meta's LLaMA leaked into the open and who benefited
46:56 – Yandex GPT
50:20 – Apocalyptic forecasts about AI; when will AI reach human-level intelligence
53:34 – Can chatbots write code?
57:00 – AI regulation
59:00 – Personal interests

Host: Alexander Nadmitov, managing partner of the law firm Nadmitov, Ivanov & Partners (www.nplaw.ru) and associate professor at the HSE University Faculty of Law. #мфти, #исскуственныйинтелект, #ChatGPT, #openai

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

In 2023 we did a few Fundamentals episodes covering Benchmarks 101, Datasets 101, FlashAttention, and Transformers Math, and it turns out those were some of your evergreen favorites! So we are experimenting with more educational/survey content in the mix alongside our regular founder and event coverage. Pls request more! We have a new calendar for events; join to be notified of upcoming things in 2024!

Today we visit the shoggoth mask factory: how do transformer models go from trawling a deeply learned latent space for next-token prediction to a helpful, honest, harmless chat assistant? Our guest "lecturer" today is Nathan Lambert; you might know him from his prolific online writing on Interconnects and Twitter, or from his previous work leading RLHF at HuggingFace and now at the Allen Institute for AI (AI2), which recently released the open source GPT3.5-class Tulu 2 model, trained with DPO. He's widely considered one of the most knowledgeable people on RLHF and RLAIF. He recently gave an "RLHF 201" lecture at Stanford, so we invited him on the show to re-record it for everyone to enjoy! You can find the full slides here, which you can use as reference through this episode.

Full video with synced slides

For audio-only listeners, this episode comes with a slide presentation alongside our discussion. You can find it on our YouTube (like, subscribe, tell a friend, et al).

Theoretical foundations of RLHF

The foundations and assumptions that go into RLHF go back all the way to Aristotle (and you can find guidance for further research in the slide below), but there are two key concepts that will be helpful in thinking through this topic and LLMs in general:

* Von Neumann–Morgenstern utility theorem: you can dive into the math here, but the TLDR is that when humans make decisions there's usually a "maximum utility" function that measures what the best decision would be; the fact that this function exists makes it possible for RLHF to model human preferences and decision making.
* Bradley-Terry model: given two items A and B from a population, you can model the probability that A will be preferred to B (or vice versa). In our world, A and B are usually two outputs from an LLM (or, at the lowest level, the next token).

It turns out that from this minimal set of assumptions, you can build up the mathematical foundations supporting the modern RLHF paradigm!

The RLHF loop

One important point Nathan makes is that "for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior". For example, it might be difficult for you to write a poem, but it's really easy to say if you like or dislike a poem someone else wrote. Going back to the Bradley-Terry model we mentioned, the core idea behind RLHF is that when given two outputs from a model, you will be able to say which of the two you prefer, and we'll then re-encode that preference into the model.

An important point that Nathan mentions is that when you use these preferences to change model behavior, "it doesn't mean that the model believes these things. It's just trained to prioritize these things". When you have a preference for a model to not return instructions on how to write a computer virus, for example, you're not erasing the weights that have that knowledge, but you're simply making it hard for that information to surface by prioritizing answers that don't return it.
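To make the Bradley-Terry assumption concrete, here is a minimal sketch of the preference probability and the pairwise loss a reward model is commonly trained with (function names are our own, not from any particular library):

```python
import math

def bradley_terry_prob(reward_a: float, reward_b: float) -> float:
    """P(A preferred over B) = sigmoid(r_A - r_B) under the Bradley-Terry model."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

def pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Reward-model training minimizes -log P(chosen preferred over rejected)."""
    return -math.log(bradley_terry_prob(reward_chosen, reward_rejected))
```

Driving this loss down pushes the scalar reward of the chosen completion above that of the rejected one, which is exactly the re-encoding of preferences described above.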
We'll talk more about this in our future Fine Tuning 101 episode, as we break down how information is stored in models and how fine-tuning affects it.

At a high level, the loop looks something like the diagram in the original post. For many RLHF use cases today, we can assume the model we're training is already instruction-tuned for chat or whatever behavior the model is looking to achieve. In the "Reward Model & Other Infrastructure" box we have multiple pieces:

Reward + Preference Model

The reward model is trying to signal to the model how much it should change its behavior based on the human preference, subject to a KL constraint. The preference model itself scores the pairwise preferences from the same prompt (this worked better than scalar rewards). One way to think about it is that the reward model tells the model how big of a change this new preference should make in the behavior in absolute terms, while the preference model calculates how big of a difference there is between the two outputs in relative terms. A lot of this derives from John Schulman's work on PPO; we recommend watching him talk about it in the video linked in the original post, and also Nathan's pseudocode distillation of the process.

Feedback Interfaces

Unlike the "thumbs up/down" buttons in ChatGPT, data annotation from labelers is much more thorough and has many axes of judgement. At a simple level, the LLM generates two outputs, A and B, for a given human conversation. It then asks the labeler to use a Likert scale to score which one they preferred, and by how much. Through the labeling process, there are many other ways to judge a generation. We then use all of this data to train a model from the preference pairs we have. We start from the base instruction-tuned model, and then run training in which the loss of our gradient descent is the difference between the good and the bad completion.

Constitutional AI (RLAIF, model-as-judge)

As these models have gotten more sophisticated, people started asking the question of whether or not humans are actually the best judges of harmfulness, bias, etc., especially at the current price of data labeling. Anthropic's work on the "Constitutional AI" paper uses models to judge models. This is part of a broader "RLAIF" space: Reinforcement Learning from AI Feedback. By using a "constitution" that the model has to follow, you are able to generate fine-tuning data for a new model that will be RLHF'd on these constitutional principles. The RLHF model will then be able to judge outputs of models to make sure that they follow its principles.
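As a rough illustration of the model-as-judge step (the prompt wording, principles, and function names here are our own invention, and the model call is left abstract rather than tied to any specific API):

```python
from typing import Callable

CONSTITUTION = [
    "Choose the response that is more helpful, honest, and harmless.",
    "Choose the response less likely to encourage dangerous activity.",
]

def judge(conversation: str, resp_a: str, resp_b: str,
          ask_model: Callable[[str], str]) -> str:
    """Ask an LLM which response better follows the constitution; returns 'A' or 'B'."""
    principles = "\n".join(f"- {p}" for p in CONSTITUTION)
    prompt = (
        f"Principles:\n{principles}\n\n"
        f"Conversation:\n{conversation}\n\n"
        f"Response A:\n{resp_a}\n\nResponse B:\n{resp_b}\n\n"
        "Which response better follows the principles? Answer A or B."
    )
    return "A" if ask_model(prompt).strip().upper().startswith("A") else "B"
```

The winning response then becomes the "chosen" side of a synthetic preference pair, standing in for a human label.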
Emerging Research

RLHF is still a nascent field, and there are a lot of different research directions teams are taking; some of the newest and most promising / hyped ones (see the Best-of-N sketch after this list):

* Rejection sampling / Best of N sampling: the core idea here is that rather than just scoring pairwise generations, you are generating a lot more outputs (= more inference cost), scoring them all with your reward model and then picking the top N results. LLaMA2 used this approach, amongst many others.
* Process reward models: in Chain of Thought generation, scoring each step in the chain and treating it like its own state rather than just scoring the full output. This is most effective in fields like math that inherently require step-by-step reasoning.
* Direct Preference Optimization (DPO): We covered DPO in our NeurIPS Best Papers recap, and Nathan has a whole blog post on this; DPO isn't technically RLHF as it doesn't have the RL part, but it's the "GPU Poor" version of it. Mistral-Instruct was a DPO model, as are Intel's Neural Chat and StableLM Zephyr. Expect to see a lot more variants in 2024 given how "easy" this was.
* Superalignment: OpenAI launched research on weak-to-strong generalization, which we briefly discuss at the 1hr mark.
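A minimal sketch of the Best-of-N idea from the list above (the generator and reward-model calls are abstract placeholders, not a specific library's API):

```python
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 16) -> str:
    """Sample n completions, score each with the reward model, keep the best one."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward(prompt, c))
```

The trade-off is explicit: n times the inference cost in exchange for letting the reward model filter generations, either at serving time or to build better training data.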
Note: Nathan also followed up this post with RLHF resources from his and peers' work:

Show Notes
* Full RLHF Slides
* Interconnects
* Retort (podcast)
* von Neumann-Morgenstern utility theorem
* Bradley-Terry model (pairwise preferences model)
* Constitutional AI
* Tamer (2008 paper by Bradley Knox and Peter Stone)
* Paul Christiano et al. RLHF paper
* InstructGPT
* Eureka by Jim Fan
* ByteDance / OpenAI lawsuit
* AlpacaEval
* MTBench
* TruthfulQA (evaluation tool)
* Self-Instruct Paper
* Open Assistant
* Louis Castricato
* Nazneen Rajani
* Tulu (DPO model from the Allen Institute)

Timestamps
* [00:00:00] Introductions and background on the lecture origins
* [00:05:17] History of RL and its applications
* [00:10:09] Intellectual history of RLHF
* [00:13:47] RLHF for decision-making and pre-deep RL vs deep RL
* [00:20:19] Initial papers and intuitions around RLHF
* [00:27:57] The three phases of RLHF
* [00:31:09] Overfitting issues
* [00:34:47] How preferences get defined
* [00:40:35] Ballpark on LLaMA2 costs
* [00:42:50] Synthetic data for training
* [00:47:25] Technical deep dive in the RLHF process
* [00:54:34] Rejection sampling / best of N sampling
* [00:57:49] Constitutional AI
* [01:04:13] DPO
* [01:08:54] What's the Allen Institute for AI?
* [01:13:43] Benchmarks and models comparisons

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
Swyx [00:00:15]: Hey, and today we have Dr. Nathan Lambert in the house. Welcome.
Nathan [00:00:18]: Thanks guys.
Swyx [00:00:19]: You didn't have to come too far. You got your PhD in Berkeley, and it seems like you've lived there most of the time in recent years. You worked on robotics and model-based reinforcement learning on your PhD, and you also interned at FAIR and DeepMind. You bootstrapped the RLHF team at Hugging Face, and you recently joined the Allen Institute as a research scientist. So that's your quick bio. What should people know about you that maybe is not super obvious about you on LinkedIn?
Nathan [00:00:43]: I stay sane in various insane sport and ultra-endurance sport activities that I do.
Swyx [00:00:50]: What's an ultra-endurance sport activity?
Nathan [00:00:52]: Long-distance trail running or gravel biking. Try to unplug sometimes, although it's harder these days. Yeah.
Swyx [00:00:59]: Well, you know, just the Bay Area is just really good for that stuff, right?
Nathan [00:01:02]: Oh, yeah. You can't beat it. I have a trailhead like 1.2 miles from my house, which is pretty unmatchable in any other urban area.
Swyx [00:01:11]: Pretty excellent. You also have an incredible blog, Interconnects, which I'm a fan of. And I also just recently discovered that you have a new podcast, Retort.
Nathan [00:01:20]: Yeah, we do. I've been writing for a while, and I feel like I've finally started to write things that are understandable and fun. After a few years lost in the wilderness, if you ask some of my friends that I made read the earlier blogs, they're like, oh, this is yikes, but it's coming along. And the podcast is with my friend Tom, and we just kind of like riff on what's actually happening on AI and not really do news recaps, but just what it all means and have a more critical perspective on the things that really are kind of funny, but still very serious, happening in the world of machine learning.
Swyx [00:01:52]: Yeah. Awesome. So let's talk about your work. What would you highlight as your greatest hits so far on Interconnects, at least?
Nathan [00:01:59]: So the ones that are most popular are timely and/or opinion pieces. The first real breakout piece was in April, when I just wrote down the thing that everyone in AI was feeling, which is we're all feeling stressed, that we're going to get scooped, and that we're overworked, which is, behind the curtain, what it feels like to work in AI. And then a similar one, which we might touch on later in this, was about my recent job search, which wasn't the first time I wrote a job search post. People always love that stuff. It's so open. I mean, it's easy for me to do in a way that it's very on-brand, and it's very helpful. I understand that until you've done it, it's hard to share this information. And then the other popular ones are various model training techniques or fine tuning. There's an early one on RLHF, which is, this stuff is all just like when I figure it out in my brain. So I wrote an article that's like how RLHF actually works, which is just the intuitions that I had put together in the summer about RLHF, and that did pretty well. And then I opportunistically wrote about QSTAR, which I hate that you have to do it, but it is pretty funny. From a literature perspective, I'm like, OpenAI publishes on work that is very related to mathematical reasoning. So it's like, oh, you just poke a little around what they've already published, and it seems pretty reasonable. But we don't know. They probably just got like a moderate bump on one of their benchmarks, and then everyone lost their minds. It doesn't really matter.
Swyx [00:03:15]: You're like, this is why Sam Altman was fired. I don't know. Anyway, we're here to talk about RLHF 101. You did a presentation, and I think you expressed some desire to rerecord it. And that's why I reached out on Twitter saying, like, why not rerecord it with us, and then we can ask questions and talk about it. Yeah, sounds good.
Nathan [00:03:30]: I try to do it every six or 12 months is my estimated cadence, just to refine the ways that I say things. And people will see that we don't know that much more, but we have a bit of a better way of saying what we don't know.
Swyx [00:03:43]: Awesome. We can dive right in. I don't know if there's any other topics that we want to lay out as groundwork.
Alessio [00:03:48]: No, you have some awesome slides. So for people listening on podcast only, we're going to have the slides on our show notes, and then we're going to have a YouTube version where we run through everything together.
Nathan [00:03:59]: Sounds good. Yeah. I think to start, skipping a lot of the, like, what is a language model stuff, everyone knows that at this point. I think the quote from the Llama 2 paper is a great kind of tidbit on RLHF becoming like a real deal. There was some uncertainty earlier in the year about whether or not RLHF was really going to be important. I think it was not that surprising that it is. I mean, with recent models still using it, the signs were there, but the Llama 2 paper essentially reads like a bunch of NLP researchers that were skeptical and surprised.
So the quote from the paper was, meanwhile, reinforcement learning, known for its instability, seemed a somewhat shadowy field for those in the NLP research community. However, reinforcement learning proved highly effective, particularly given its cost and time effectiveness. So you don't really know exactly what the costs and time that Meta is looking at, because they have a huge team and a pretty good amount of money here to release these Llama models. This is just the kind of thing that we're seeing now. I think any major company that wasn't doing RLHF is now realizing they have to have a team around this. At the same time, we don't have a lot of that in the open and research communities at the same scale. I think seeing that converge would be great, but it's still very early days. And the other thing on the slide is some of Anthropic's work, but everyone knows Anthropic is kind of the masters of this, and they have some of their own techniques that we're going to talk about later on, but that's kind of where we start.
Alessio [00:05:17]: Can we do just a one-second RL version? So you come from a robotics background, which RL used to be, or maybe still is, state-of-the-art. And then now you're seeing a lot of LLM plus RL, so you have Jim Fan's Eureka, you have MPU, which we had on the podcast when they started with RL. Now they're doing RL plus LLMs. Yeah. Any thoughts there on how we got here? Maybe how the pendulum will keep swinging?
Nathan [00:05:46]: I really think RL is about a framing of viewing the world through trial and error learning and feedback, and really just one that's focused on thinking about decision-making and inputs in the world and how inputs have reactions. And in that, a lot of people come from a lot of different backgrounds, whether it's physics, electrical engineering, mechanical engineering. There are obviously computer scientists, but compared to other fields of CS, I do think it's a much more diverse background of people. My background was in electrical engineering and doing robotics and things like that. It really just changes the worldview. I think that reinforcement learning as it was back then, so to say, is really different. You're looking at these toy problems and the numbers are totally different, and everyone went kind of zero to one at scaling these things up, but people like Jim Fan and other people that were... You saw this transition in the decision transformer and papers and when people are trying to use transformers to do decision-making for things like offline RL, and I think that was kind of like the early days. But then once language models were so proven, it's like everyone is using this tool for their research. I think in the long run, it will still settle out, or RL will still be a field that people work on, just because of these kind of fundamental things that I talked about. It's just viewing the whole problem formulation different than predicting text, and so there needs to be that separation. And the view of RL in language models is pretty contrived already, so it's not like we're doing real RL. I think the last slide that I have here is a way to make RLHF more like what people would think of with RL, so actually running things over time, but a weird lineage of tools that happen to get us to where we are, so that's why the name takes up so much space, but it could have gone a lot of different ways. Cool.
Alessio [00:07:29]: We made it one slide before going on a tangent.
Nathan [00:07:31]: Yeah, I mean, it's kind of related. This is a...
Swyx [00:07:35]: Yeah, so we have a history of RL.
Nathan [00:07:37]: Yeah, so to give the context, this paper really started because I have this more diverse background than some computer scientists, such as trying to understand what the difference of a cost function or a reward function and a preference function would be, without going into all of the details. Costs are normally things that control theorists would work with in these kind of closed domains, and then reinforcement learning has always worked with rewards that's central to the formulation that we'll see, and then the idea was like, okay, we now are at preferences, and each step along the way there's kind of different assumptions that you're making. We'll get into these, and those assumptions are built on other fields of work. So that's what this slide is going to say, it's like RLHF, while directly building on tools from RL and language models, is really implicitly impacted and built on theories and philosophies spanning tons of human history. I think we cite Aristotle in this paper, which is fun. It's like going pre-BC, it's like 2,300 years old or something like that. So that's the reason to do this, I think. We kind of list some things in the paper about summarizing what different presumptions of RLHF could be. I think going through these is actually kind of funny. It's fun to talk about these, because they're kind of grab bags of things that you'll see return throughout this podcast that we're talking about it. The core thing of RLHF, in order to be a believer in this, is that RL actually works. It's like, if you have a reward function, you can optimize it in some way and get a different performance out of it, and you could do this at scale, and you could do this in really complex environments, which is, I don't know how to do that in all the domains. I don't know how to exactly make ChatGPT. So it's kind of, we'll overshadow everything. And then there's, go from something kind of obvious like that, and then you read the von Neumann-Morgenstern utility theorem, which is essentially an economic theory that says you can weight different probabilities of different people, which is a theoretical piece of work that is the foundation of utilitarianism, and trying to quantify preferences is crucial to doing any sort of RLHF. And if you look into this, all of these things, there's way more you could go into if you're interested in any of these. So this is kind of like grabbing a few random things, and then kind of similar to that is the Bradley-Terry model, which is the fancy name for the pairwise preferences that everyone is doing. And then all the things that are like, that Anthropic and OpenAI figured out that you can do, which is that you can aggregate preferences from a bunch of different people and different sources. And then when you actually do RLHF, you extract things from that data, and then you train a model that works somehow. And we don't know, there's a lot of complex links there, but if you want to be a believer in doing this at scale, these are the sorts of things that you have to accept as preconditions for doing RLHF. Yeah.
Swyx [00:10:09]: You have a nice chart of like the sort of intellectual history of RLHF that we'll send people to refer to either in your paper or in the YouTube video for this podcast. But I like the other slide that you have on like the presumptions that you need to have for RLHF to work. You already mentioned some of those. Which one's underappreciated?
Like, this is the first time I've come across the VNM utility theorem.
Nathan [00:10:29]: Yeah, I know. This is what you get from working with people like my co-host on the podcast, The Retort, who is a sociologist by training. So he knows all these things and like who the philosophers are that found these different things like utilitarianism. But there's a lot that goes into this. Like essentially there's even economic theories that like there's debate whether or not preferences exist at all. And there's like different types of math you can use with whether or not you actually can model preferences at all. So it's pretty obvious that RLHF is built on the math that thinks that you can actually model any human preference. But this is the sort of thing that's been debated for a long time. So all the work that's here is like, and people hear about in their AI classes. So like Jeremy Bentham, like hedonic calculus and all these things, like these are the side of work where people assume that preferences can be measured. And this is like, I don't really know, like, this is what I kind of go on a rant and I say that in RLHF calling things a preference model is a little annoying, because there's no inductive bias of what a preference is. It's like if you were to learn a robotic system and you learned a dynamics model, like hopefully that actually mirrors the world in some way of the dynamics. But with a preference model, it's like, oh my God, I don't know what this model, like I don't know what ChatGPT encodes as any sort of preference or what I would want it to be in a fair way. Anthropic has done more work on trying to write these things down. But even like if you look at Claude's constitution, like that doesn't mean the model believes these things. It's just trained to prioritize these things. And that's kind of what the later points I'm looking at, like what RLHF is doing and if it's actually like a repeatable process in the data and in the training, that's just unknown. And we have a long way to go before we understand what this is and the link between preference data and any notion of like writing down a specific value.
Alessio [00:12:05]: The disconnect between more sociology work versus computer work already exists, or is it like a recent cross contamination? Because when we had Tri Dao on the podcast, he said FlashAttention came to be because at Hazy they have so much overlap between systems engineers and like deep learning engineers. Is it the same in this field?
Nathan [00:12:26]: So I've gone to a couple of workshops for the populations of people who you'd want to include in this, like RL. I think the reason why it's not really talked about is just because the RLHF techniques that people use were built in labs like OpenAI and DeepMind, where there are some of these people. These places do a pretty good job of trying to get these people in the door when you compare them to like normal startups. But like they're not bringing in academics from economics, like social choice theory. There's just too much. Like the criticism of this paper that this is based on is like, oh, you're missing these things in RL or at least this decade of RL, and it's like it would literally be bigger than the Sutton and Barto book if you were to include everyone. So it's really hard to include everyone in a principled manner when you're designing this. It's just a good way to understand and improve the communication of what RLHF is and like what is a good reward model for society.
It really probably comes down to what an individual wants, and it'll probably motivate models to move more in that direction and just be a little bit better about the communication, which is a recurring theme in kind of my work: I just get frustrated when people say things that don't really make sense, especially when it's going to manipulate individuals' values or manipulate the general view of AI or anything like this. So that's kind of why RLHF is so interesting. It's very vague in what it's actually doing, while the problem specification is very general.
Swyx [00:13:42]: Shall we go to the, I guess, the diagram here on the reinforcement learning basics? Yeah.
Nathan [00:13:47]: So reinforcement learning, I kind of mentioned this, it's a trial and error type of system. The diagram in the slides is really this classic thing where you have an agent interacting with an environment. So it's kind of this agent has some input to the environment, which is called the action. The environment returns a state and a reward, and that repeats over time, and the agent learns based on these states and these rewards that it's seeing, and it should learn a policy that makes the rewards go up. That seems pretty simple, but then if you try to mentally map what this looks like in language, the language models don't make this easy. I think with the language model, it's very hard to define what an environment is. So if the language model is the policy and it's generating, it's like the environment should be a human, but setting up the infrastructure to take tens of thousands of prompts and generate them and then show them to a human and collect the human responses and then shove that into your training architecture is very far away from working. So we don't really have an environment. We just have a reward model that returns a reward, and the state doesn't really exist when you look at it like an RL problem. What happens is the state is a prompt, and then you do a completion and then you throw it away and you grab a new prompt. Whereas as an RL researcher, you would think of this as being like you take a state, you get some completion from it, and then you look at what that is and you keep kind of iterating on it, and all of that isn't here, which is why you'll hear RLHF referred to as a bandits problem, which is kind of like you choose one action and then you watch the dynamics play out. There's many more debates that you can have in this. If you get the right RL people in the room, then kind of like this is an RL even when you zoom into what RLHF is doing.
Alessio [00:15:22]: Does this change as you think about a chain of thought reasoning and things like that? Like does the state become part of the chain that you're going through?
Nathan [00:15:29]: There's work that I've mentioned on one slide called process reward models that essentially rewards each step in the chain of thought reasoning. It doesn't really give the part of interaction, but it does make it a little bit more fine grained, where you can think about like calling it at least you have many states from your initial state. That formulation I don't think people have fully settled on. I think there's a bunch of great work out there; even OpenAI is releasing a lot of this, and Let's Verify Step by Step is their pretty great paper on the matter. I think in the next year that'll probably get made more concrete by the community, on like if you can easily draw out like if chain of thought reasoning is more like RL, we can talk about that more later. That's a kind of a more advanced topic than we probably should spend all the time on.
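To make the process-reward idea concrete, here is a toy sketch (the step scorer is left abstract, and min-aggregation is just one choice that appears in the literature):

```python
from typing import Callable, List

def score_steps(steps: List[str],
                step_reward: Callable[[str], float]) -> List[float]:
    """A process reward model scores every reasoning step, not just the final answer."""
    return [step_reward(step) for step in steps]

def chain_reward(step_scores: List[float]) -> float:
    """One common aggregation: a chain is only as strong as its weakest step."""
    return min(step_scores)
```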
Swyx [00:16:13]: RLHF for decision making. You have a slide here that compares pre-deep RL versus deep RL.
Nathan [00:16:19]: This is getting into the history of things, which is showing that the work that people are using now really came from well outside of NLP, and it came before deep learning was big. Next up is this paper, Tamer, which is from 2008. Some names that are still really relevant in kind of human-centric RL: Bradley Knox and Peter Stone. If you have an agent take an action, you would just have a human give a score from zero to one as a reward, rather than having a reward function. And then with that classifier, you can do something with a policy that learns to take actions to maximize that reward. It's a pretty simple setup. It works in simple domains. And then the reason why this is interesting is you compare it to the paper that everyone knows, which is this Paul Christiano et al. Deep Reinforcement Learning from Human Preferences paper, which is where they showed that learning from human preferences, you can solve like the basic RL tasks at the time. So various control problems and simulation, and this kind of like human preferences approach had higher rewards in some environments than if you just threw RL at the environment that returned a reward. So the preferences thing was you took two trajectories, so in this case it was like complete trajectories of the agent, and the human was labeling which one is better. You can see how this kind of comes to be like the pairwise preferences that are used today that we'll talk about. And there's also a really kind of interesting nugget, which is that the trajectory that the humans were labeling over has a lot more information than the RL algorithm would see if you just had one state, which is kind of why people think that's why the performance in this paper was so strong. But I still think that it's surprising that there isn't more RL work of this style happening now. This paper is in 2017. So it's like six years later and I haven't seen things that are exactly similar, but it's a great paper to understand where stuff that's happening now kind of came from.
Swyx [00:17:58]: Just on the Christiano paper, you mentioned the performance being strong. I don't remember: what results should I have in mind when I think about that paper?
Nathan [00:18:04]: It's mostly like, if you think about an RL learning curve, which is like on the X axis you have environment interactions, on the Y axis you have performance, you can think about different like ablation studies between algorithms. So I think they use like A2C, which I don't even remember what that stands for, as their baseline. But if you do the human preference version on a bunch of environments, like with the human preference labels, the agent was able to learn faster than if it just learned from the signal from the environment, which means like it's happening because the reward model has more information than the agent would. But like the fact that it can do better, I was like, that's pretty surprising to me, because RL algorithms are pretty sensitive. So I was like, okay.
Swyx [00:18:41]: It's just one thing I do want to establish as a baseline for our listeners. We are updating all the weights.
In some sense, the next token prediction task of training a language model is a form of reinforcement learning. Except that it's not from human feedback. It's just self-supervised learning from a general corpus. There's one distinction which I love, which is that you can actually give negative feedback. Whereas in a general sort of pre-training situation, you cannot. And maybe like the order of magnitude of feedback, like the Likert scale that you're going to talk about, that actually just gives more signal than a typical training process would do in a language model setting. Yeah.
Nathan [00:19:15]: I don't think I'm the right person to comment exactly, but like you can make analogies that reinforcement learning is self-supervised learning as well. Like there are a lot of things that will point to that. I don't know whether or not it's a richer signal. I think that could be seen in the results. It's a good thing for people to look into more. As reinforcement learning is so much less compute, like it is a richer signal in terms of its impact. Because if they could do what RLHF is doing at pre-training, they would, but they don't know how to have that effect in like a stable manner. Otherwise everyone would do it.
Swyx [00:19:45]: On a practical basis, as someone fine-tuning models, I have often wished for negative fine-tuning, which pretty much doesn't exist in OpenAI land. And it's not the default setup in open-source land.
Nathan [00:19:57]: How does this work in like diffusion models and stuff? Because you can give negative prompts to something like Stable Diffusion or whatever. It's for guidance.
Swyx [00:20:04]: That's for CLIP guidance.
Nathan [00:20:05]: Is that just from like how they prompt it then? I'm just wondering if we could do something similar. It's another tangent.
Swyx [00:20:10]: I do want to sort of spell that out for people in case they haven't made the connection between RLHF and the rest of the training process. They might have some familiarity with it.
Nathan [00:20:19]: Yeah. The upcoming slides can really dig into this, which is like, in this 2018 paper, there was a position paper from a bunch of the same authors from the Christiano paper and from the OpenAI work that everyone knows, which is like, they write a position paper on what a preference reward model could do to solve alignment for agents. That's kind of based on two assumptions. The first assumption is that we can learn user intentions to a sufficiently high accuracy. That doesn't land with me, because I don't know what that means. But the second one is pretty telling in the context of RLHF, which is, for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. And this is the whole thing. It's like we can compare two poems that the model generates, and it can be viewed as liking a positive example, or it could be viewed as really disliking a negative example. And that's what I think a lot of people are doing in like the harm space: a harmful response to a language model, whether or not you agree with the company's definition of harms, is a really bad negative example, and they downweight them by preferring something more benign in the RLHF process, among other ways of dealing with safety. So that's a good way of saying it's like this is core: this kind of like comparison and positive or negative example is core to all of the RLHF work that has continued.
Swyx [00:21:29]: People often say, I don't know what I want, but I'll know when I see it.
This is that expressed in reinforcement learning tools.
Nathan [00:21:35]: Yeah, it is. Yeah, it is. That's what everyone's doing in the preference modeling stage that we'll get to. Yeah. Yeah. And you can see there are more papers. This is really just to have all the links for people that go deeper. There's a Ziegler et al. paper in 2019, which shows that you can do this RLHF process on language models. This familiar diagram starts to emerge in 2019, and it's just to show that this goes really far back. I think we can kind of breeze through some of these. And then 2020 is the first OpenAI experiment that I think caught people's eyes, which is this Learning to Summarize experiment. It has this three-step process that we'll go into more when I kind of go into the main concepts. But this is like the first time you see this diagram that they reuse with InstructGPT, they reuse with ChatGPT. And the types of examples that they would have, I don't think I need to read these exactly, but one that I have read a whole bunch of times is like, they took these prompts from Reddit that were like, explain like I'm five, or get career advice, and people really pour their heart and soul into these. So these are like multi-paragraph pieces of writing. And then they essentially do comparisons between a vanilla language model, like I think it was either GPT-2 or GPT-3; I don't always get the exact years.
Swyx [00:22:42]: GPT-3 was early 2020. So that's about right.
Nathan [00:22:45]: Yeah. So this is probably done with GPT-2. It doesn't really matter. But the language model does normal things when you do few shot, which is like it repeats itself. It doesn't have nice text. And what they did is that this was the first time where the language model would generate like pretty nice text from an output. It was restricted to the summarization domain. But I think that, I guess, this is where I wish I was paying attention more, because I would see the paper, but I didn't know to read the language model outputs and kind of understand this qualitative sense of the models very well then. Because you look at the plots in the papers, these Learning to Summarize and InstructGPT papers have incredibly pretty plots, just like nicely separated lines with error bars, and they're like, supervised fine-tuning works, the RL step works. But if you were early to see like how different the language that was written by these models was, I think you could have been early to like things like ChatGPT and knowing RLHF would matter. And now I think the good people know to chat with language models, but not even everyone does this. Like people are still looking at numbers. And I think OpenAI probably figured it out when they were doing this, how important that could be. And then they had years to kind of chisel away at that, and that's why they're doing so well now. Yeah.
Swyx [00:23:56]: I mean, arguably, you know, it's well known that ChatGPT was kind of an accident, that they didn't think it would be that big of a deal. Yeah.
Nathan [00:24:02]: So maybe they didn't. Maybe they didn't, but they were getting the proxy that they needed.
Swyx [00:24:06]: I've heard off the record from other labs that it was in the air. If OpenAI didn't do it, someone else would have done it. So you've mentioned a couple of other papers that are very seminal to this period.
And I love how you say way back when in referring to 2019.
Nathan [00:24:19]: It feels like it in my life.
Swyx [00:24:21]: So how much should people understand the relationship between RLHF, instruction tuning, PPO, KL divergence, anything like that? Like how would you construct the level of knowledge that people should dive into? What should people know at the high level? And then if people want to dive in deeper, where do they go? Is instruct tuning important here, or is that part of the overall process towards modern RLHF?
Nathan [00:24:44]: I think for most people, instruction tuning is probably still more important in their day to day life. I think instruction tuning works very well. You can write samples by hand that make sense. You can get the model to learn from them. You could do this with very low compute. It's easy to do almost in like no-code solutions at this point. And the loss function is really straightforward. And then if you're interested in RLHF, you can kind of learn from it from a different perspective, which is like how the instruction tuning distribution makes it easier for your RLHF model to learn. There's a lot of details depending on your preference data, if it's close to your instruction model or not, if that matters. But that's really at the RLHF stage. So I think it's nice to segment and just kind of understand what your level of investment and goals are. I think instruction tuning still can do most of what you want to do. And it's like, if you want to think about RLHF, at least before DPO really had taken off at all, it would be like, do you want to have a team of at least like five people if you're really thinking about doing RLHF? I think DPO makes it a little bit easier, but that's still really limited to kind of one data set that everyone's using at this point. Like everyone's using this UltraFeedback data set, and it boosts AlpacaEval, MT-Bench, TruthfulQA and like the model qualitatively a bit. We don't really know why. It's like, it might just be a data set combined with the method, but you've got to be ready for a bumpy ride if you're wanting to try to do RLHF. I don't really recommend most startups to do it unless it's like going to provide them a clear competitive advantage in their kind of niche, because you're not going to make your model ChatGPT-like better than OpenAI or anything like that. You've got to accept that there's some exploration there, and you might get a vein of benefit in your specific domain, but I'm still like, oh, be careful going into the RLHF can of worms. You probably don't need to.
Swyx [00:26:27]: Okay. So there's a bit of a time skip in what you mentioned. DPO is like a couple months old, so we'll leave that towards the end. I think the main result that I think most people talk about at this stage, we're talking about September 2020 and then going into, I guess maybe last year, was Vicuña as one of the more interesting applications of instruction tuning that pushed LLaMA 1 from, let's say a GPT-3-ish model to a GPT-3.5 model in pure open source with not a lot of resources. I think, I mean, they said something like, you know, they used like under $100 to make this.
Nathan [00:26:58]: Yeah. Like instruction tuning can really go a long way. I think the claims of ChatGPT level are long overblown in most of the things in open source. That's not to say Vicuña wasn't a huge step; it's just kind of showing that instruction tuning with the right data will completely change what it feels like to talk with your model.
Yeah.
Swyx [00:27:19]: From text completion to actually chatting back and forth. Yeah. Yeah.
Nathan [00:27:23]: Instruction tuning can be multi-turn. Just having a little bit of data that's like a couple of turns can go a really long way. That was like the story of the whole first part of the year: people would be surprised by how far you can take instruction tuning on a small model. I think the thing that people see now is that the small models don't really handle nuance as well, and they could be more repetitive even if they have really good instruction tuning. But if you take that kind of 7 to 70 billion parameter jump, like the instruction tuning at the bigger model is like robustness, little things make more sense. So that's still just with instruction tuning and scale more than anything else.
Swyx [00:27:56]: Excellent. Shall we go to technical overview?
Nathan [00:27:58]: Yeah. This is kind of where we go through my own version of this like three-phase process. You can talk about instruction tuning, which we've talked about a lot. It's funny because, of all these things, instruction tuning has the fewest slides, even though it's the most practical thing for most people. We could save the debate for like, if the big labs still do instruction tuning, for later, but that's a coming wave for people. And then like preference data and training, and then kind of like what does reinforcement learning optimization actually mean? We talk about these sequentially because you really have to be able to do each of them to be able to do the next one. You need to be able to have a model that's chatty or helpful instruction following. Every company has their own word that they like to assign to what instructions mean. And then once you have that, you can collect preference data and do some sort of optimization.
Swyx [00:28:39]: When you say word, you mean like angle bracket inst, or do you mean something else?
Nathan [00:28:42]: Oh, I don't even know what inst means, but just saying like they use their adjective that they like. I think Anthropic also, like steerable is another one.
Swyx [00:28:51]: Just the way they describe it. Yeah.
Nathan [00:28:53]: So like instruction tuning, we've covered most of this. It's really about like you should try to adapt your models to specific needs. It makes models that were only okay extremely comprehensible. A lot of the times it's where you start to get things like chat templates. So if you want to do system prompts, if you want to ask your model, like act like a pirate, that's one of the ones I always do, which is always funny, but like whatever, you like act like a chef, like anything, this is where those types of things that people really know in language models start to get applied. So it's good as a kind of starting point, because this chat template is used in our early childhood and all of these things down the line, but it was a basic pointer. It's like, once you see this with instruction tuning, you really know it, which is like you take things like Stack Overflow where you have a question and an answer. You format that data really nicely. There's much more tricky things that people do, but I still think the vast majority of it is question answer. Please explain this topic to me, generate this thing for me. That hasn't changed that much this year. I think people have just gotten better at scaling up the data that they need.
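As an illustration of the chat-template idea Nathan describes, here is a toy formatter in the ChatML style, one convention among several; the exact template differs by model family, and this is not any particular lab's format:

```python
def apply_chat_template(system: str, user: str) -> str:
    """Format a system prompt and user question into a ChatML-style training string."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # the model's completion is trained to follow
    )

# e.g. a Stack Overflow style pair, with a playful system prompt:
print(apply_chat_template("Act like a pirate.", "How do I reverse a list in Python?"))
```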
I put a slide with the RLHF objective, which I think is good for people to know. I've started going back to this more, just to kind of understand what is trying to happen here and what type of math people could do. I think because of this algorithm we've mentioned, it's in the air, direct preference optimization, but everything kind of comes from an equation of trying to learn a policy that maximizes the reward, where the reward is some learned metric (a lot can be said about what the reward should be), subject to some constraint. The most popular constraint is the KL constraint, which is just a distributional distance. Essentially in language models, that means if you have a completion from your instruction or RLHF model, you can compare that completion to a base model. And looking at the log probs from the model, which are essentially how likely each token is, you can see a rough calculation of the distance between these two models, just as a scalar number. I think what that actually looks like in code, you can look at it. It'd be like a sum of log probs that you get right from the model. It'll look much simpler than it sounds, but it is just to make the optimization kind of stay on track. Make sure it doesn't overfit to the RLHF data. Because we have so little data in RLHF, overfitting is really something that could happen. I think it'll fit to specific features that labelers like to see, that the model likes to generate, punctuation, weird tokens like calculator tokens. It could overfit to anything if it's in the data a lot and it happens to be in a specific format. And the KL constraint prevents that. There's not that much documented work on that, but there's a lot of people that know if you take that away, it just doesn't work at all. I think it's something that people don't focus on too much. But the objective, as I said, it's just kind of, you optimize the reward. The reward is where the human part of this comes in. We'll talk about that next. And then subject to a constraint, don't change the model too much. The real questions are, how do you implement the reward? And then how do you make the reward go up in a meaningful way? So like a preference model, the task is kind of to design a human reward. I think the equation that most of the stuff is based on right now is something called a Bradley-Terry model, which is like a pairwise preference model where you compare two completions and you say which one you like better. I'll show an interface that Anthropic uses here. And the Bradley-Terry model is really a fancy probability between two selections. And what's happening in the math is that you're looking at the probability that the chosen completion, the one you like better, is actually the better completion over the rejected completion. And what these preference models do is they assume this probability is correlated to reward. So if you just sample from this probability, it'll give you a scalar. And then you use that reward later on to signify what piece of text is better. I'm kind of inclined to breeze through the math stuff because otherwise, it's going to be not as good to listen to.
Alessio [00:32:49]: I think people want to hear it. I think there's a lot of higher level explanations out there. Yeah.
Nathan [00:32:55]: So the real thing is you need to assign a scalar reward of how good a response is. And that's not necessarily that easy to understand.
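As a rough sketch of the objective and the pairwise preference model being described here (the notation, with β as the KL weight, π_ref as the frozen starting model, and r_φ as the learned reward, is assumed rather than taken from the slides):

```latex
% RLHF objective: maximize learned reward, penalized by KL distance to the reference model
\max_{\pi_\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\big[\, r_\phi(x, y) \,\big]
\;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\big[\, \pi_\theta(y \mid x) \;\|\; \pi_{\mathrm{ref}}(y \mid x) \,\big]

% Bradley-Terry: probability that the chosen completion y_w beats the rejected y_l
p(y_w \succ y_l \mid x) \;=\; \sigma\big(\, r_\phi(x, y_w) - r_\phi(x, y_l) \,\big)
```

And the "sum of log probs" version of the KL term might look something like this (a minimal PyTorch sketch, assuming you already have per-token log probs of the sampled completion under both models):

```python
import torch

def kl_penalty(policy_logprobs: torch.Tensor, ref_logprobs: torch.Tensor) -> torch.Tensor:
    """Single-sample KL estimate between the RLHF policy and the frozen reference.

    Both inputs are [batch, seq_len] log probs of the tokens the policy actually
    generated. Summing the per-token log-ratio gives one scalar per sequence,
    the number that keeps the optimization from drifting too far.
    """
    return (policy_logprobs - ref_logprobs).sum(dim=-1)
```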
Because if we go back to one of the first works, I mentioned this TAMER thing for decision making. People tried that with language models, which is, if you have a prompt and a completion and you just have someone rate it from 0 to 10, could you then train a reward model on all of these completions and 0-to-10 ratings and see if you can get ChatGPT with that? And the answer is really kind of no. Like a lot of people tried that. It didn't really work. And then that's why they tried this pairwise preference thing. And it happened to work. And this Bradley-Terry model comes from the 50s. It's from these fields that I was mentioning earlier. And it's wild how much this happens. I mean, this screenshot I have in the slides is from the DPO paper. I think it might be the appendix. But it's still really around in the literature of what people are doing for RLHF.
Alessio [00:33:45]: Yeah.
Nathan [00:33:45]: So it's a fun one to know.
Swyx [00:33:46]: I'll point out one presumption that this heavily relies on. You mentioned this as part of your six presumptions that we covered earlier, which is that you can aggregate these preferences. This is not exactly true among all humans, right? I have a preference for one thing. You have a preference for a different thing. And actually coming from economics, you mentioned economics earlier. There's a theorem or a name for this called Arrow impossibility, which I'm sure you've come across.
Nathan [00:34:07]: It's one of the many kind of things we throw around in the paper.
Swyx [00:34:10]: Right. Do we just ignore it?
Nathan [00:34:14]: We just, yeah, just aggregate. Yeah. I think the reason this really is done on a deep level is that you're not actually trying to model any contestable preference in this. You're not trying to go into things that are controversial or anything. It's really the notion of preference is trying to stay around correctness and style rather than any meaningful notion of preference. Because otherwise these companies, they don't want to do this at all. I think that's just how it is. And it's like, if you look at what people actually do. So I have a bunch of slides on the feedback interface. And they all publish this.
Swyx [00:34:43]: It's always at the appendices of every paper.
Nathan [00:34:47]: There's something later on in this talk, which is like, but it's good to mention. And this is, when you're doing this preference collection, you write out a very long document of instructions to people that are collecting this data. And it's like, this is the hierarchy of what we want to prioritize. Something like factuality, helpfulness, honesty, harmlessness. These are all different things. Every company will rank these in different ways, provide extensive examples. It's like, if you see these two answers, you should select this one and why, and all of this stuff. And then my kind of like head scratching is like, why don't we check if the models actually do these things that we tell the data annotators to collect? But I think it's because it's hard to make that attribution. And it's hard to test if a model is honest and stuff. It would just be nice to understand the kind of causal mechanisms as a researcher, or like if our goals are met. But at a simple level, what it boils down to, I have a lot more images than I need. It's like you're having a conversation with an AI, something like ChatGPT. You get shown two responses or more in some papers, and then you have to choose which one is better. I think something you'll hear a lot in this space is something called a Likert scale. Likert is a name.
It's a name for probably some research in economics, decision theory, something. But essentially, it's a type of scale where if you have integers from like one to eight, the middle numbers will represent something close to a tie, the smallest numbers will represent one model being way better than the other, and the biggest numbers will be like the other model is better. So in the case of one to eight, if you're comparing models A to B, you return a one if you really liked option A, you return an eight if you really liked B, and then like a four or five if they were close. There's other ways to collect this data. This one's become really popular. We played with it a bit at Hugging Face. It's hard to use. Filling out this preference data is really hard. You have to read like multiple paragraphs. It's not for me. Some people really like it, I hear. I'm like, I can't imagine sitting there and reading AI-generated text and like having to do that for my job. But a lot of these early papers in RLHF have good examples of what was done. The one I have here is from Anthropic's collection demo, because it was from slides that I did with Anthropic. But you can look up these in the various papers. It looks like ChatGPT with two responses, and then you have an option to say which one is better. It's nothing crazy. The infrastructure is almost exactly the same, but they just log which one you think is better. I think places like Scale are also really big in this, where a lot of the labeler companies will help control like who's doing how many samples. You have multiple people go over the same sample once, and like what happens if there's disagreement? I don't really think this disagreement data is used for anything, but it's good to know like what the distribution of prompts is, who's doing it, how many samples you have, controlling the workforce. All of this is very hard. A last thing to add is that a lot of these companies do collect optional metadata. I think the Anthropic example shows a rating of like how good was the prompt or the conversation, from good to bad, because things matter. Like there's kind of a quadrant of preference data in my mind, which is you're comparing a good answer to a good answer, which is like really interesting signal. And then there's kind of the option of you're comparing a bad answer to a bad answer, which is like you don't want to train your model on two different issues. This is like, we did this at Hugging Face and it was like, our data was like, we don't know if we can use this, because a lot of it was just bad answer to bad answer, because you're rushing to try to do this on a real contract. And then there's also good answer to bad answer, which I think is probably pretty reasonable to include. You just prefer the good one and move on with your life. But those are very different scenarios. I think the OpenAIs of the world are all in good answer, good answer, and have learned to eliminate everything else. But when people try to do this in open source, it's probably like what Open Assistant saw, is like, there's just a lot of bad answers in your preference data. And you're like, what do I do with this? Metadata flags can help. I threw in the InstructGPT metadata. You can see how much they collect here. And like everything from the model fails to actually complete the task, hallucinations, different types of offensive or dangerous content, moral judgment, expresses opinion.
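To make that concrete, a single collected preference record might look something like this; every field name here is illustrative, not the schema of InstructGPT, Anthropic, or any labeling vendor:

```python
# A hypothetical preference data point (all field names are made up for illustration)
preference_record = {
    "prompt": "Explain why the sky is blue to a five-year-old.",
    "chosen": "Sunlight is a mix of colors, and the air scatters blue the most...",
    "rejected": "The sky is blue because it reflects the ocean.",
    "likert": 7,  # 1-8 scale: 1 = strongly prefer A, 8 = strongly prefer B, 4-5 = near tie
    "metadata": {  # optional flags, in the spirit of the InstructGPT collection
        "fails_task": False,
        "hallucination": False,
        "unsafe_content": False,
        "expresses_opinion": False,
        "pair_quality": "good-vs-bad",  # which quadrant the comparison falls in
    },
}
```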
Like, I don't know exactly if they're doing this now, but you can kind of see why doing RLHF at scale and prioritizing a lot of different endpoints would be hard, because these are all things I'd be interested in if I was scaling up a big team to do RLHF, and like what is going into the preference data. You do an experiment and you're like, okay, we're going to remove all the data where they said the model hallucinates, like just that, and then retrain everything. Like, what does that do?
Swyx [00:38:59]: Yeah, so hallucination is big, but some of these other metadata categories, and I've seen this in a lot of papers, it's like, does it contain sexual content? Does it express a moral judgment? Does it denigrate a protected class? That kind of stuff, very binary. Should people try to adjust for this at the RLHF layer, or should they put it as a pipeline where they have a classifier as a separate model that grades the model output?
Nathan [00:39:20]: Do you mean for training or like a deployment? Deployment. I do think that people are doing it at deployment. I think we've seen safety and other things in the RLHF pipeline. Like Llama 2 is famous for kind of having these like helpfulness and safety reward models. Deep in the Gemini report is something that says Gemini has like four things, which is like helpfulness, factuality, maybe safety, maybe something else. But places like Anthropic and ChatGPT and Bard almost surely have a classifier after, which is like, is this text good? Is this text bad? That's not that surprising, I think, because you could use like a hundred times smaller language model and do much better at filtering than RLHF. But I do think it's still so deeply intertwined with the motivation of RLHF to be for safety that some of these categories still persist. I think that's something that'll kind of settle out, I think.
Swyx [00:40:11]: I'm just wondering if it's worth collecting this data for the RLHF purpose, if you're not going to use it in any way, separate model to-
Nathan [00:40:18]: Yeah, I don't think OpenAI will collect all of this anymore, but I think for research perspectives, it's very insightful to know. But it's also expensive. So essentially your preference data cost scales with how many minutes it takes for you to do each task and every button press; it scales pretty linearly. So it's not cheap stuff.
Swyx [00:40:35]: Can we, since you mentioned expensiveness, I think you may have joined one of our spaces back when Llama 2 was released. We had an estimate from you that was something on the order of Llama 2 cost $3 to $6 million to train GPU-wise, and then it was something like $20 to $30 million in preference data. Is that something that's still in the ballpark? I don't need precise numbers.
Nathan [00:40:56]: I think it's still a ballpark. I know that the 20 million was off by a factor of four because I was converting from a prompt number to a total data point. So essentially when you do this, if you have a multi-turn setting, each turn will be one data point, and the Llama 2 paper reports like 1.5 million data points, which could be like 400,000 prompts. So I would say it's still, say, like 6 to 8 million is safe to say that they're spending, if not more. They're probably also buying other types of data and/or throwing out data that they don't like, but it's very comparable to compute costs. But the compute costs listed in the paper always are way lower, because all they have to say is like, what does one run cost? But they're running tens or hundreds of runs.
So it's like, okay, like... Yeah, it's just kind of a meaningless number. Yeah, the data number would be more interesting.
Alessio [00:41:42]: What's the depreciation of this data?
Nathan [00:41:46]: It depends on the method. Like some methods, people think that it's more sensitive to the, this is what I was saying. It was like, does the type of instruction tuning you do matter for RLHF? So like, depending on the method, some people are trying to figure out if you need to have what is called, this is very confusing, it's called like on-policy data, which is like your RLHF data is from your instruction model. I really think people in open source and academics are going to figure out how to use any preference data on any model, just because they're scrappy. But there's been an intuition that to do like PPO well and keep improving the model over time and do like what Meta did and what people think that OpenAI does, is that you need to collect new preference data to kind of edge the distribution of capabilities forward. So there's a depreciation where like the first batch of data you collect isn't really useful for training the model when you have the fifth batch. We don't really know, but it's a good question. And I do think that if we had all the Llama data, we wouldn't know what to do with all of it. Like probably like 20 to 40% would be pretty useful for people, but not the whole dataset. Like a lot of it's probably kind of gibberish, because they had a lot of data in there.
Alessio [00:42:51]: So do you think like the open source community should spend more time figuring out how to reuse the data that we have, or like generate more data? I think that's one of the-
Nathan [00:43:02]: I think people are kind of locked into using synthetic data. People also think that synthetic data, like GPT-4, is more accurate than humans at labeling preferences. So if you look at these diagrams, like humans are about 60 to 70% agreement. And we're like, that's what the models get to. And if humans are about 70% agreement or accuracy, like GPT-4 is like 80%. So it is a bit better, which is one way of saying it.
Swyx [00:43:24]: Humans don't even agree with humans 50% of the time.
Nathan [00:43:27]: Yeah, so like that's the thing. It's like the human disagreement or the lack of accuracy should be like a signal, but how do you incorporate that? It's really tricky to actually do that. I think that people just keep using GPT-4 because it's really cheap. It's one of my like go-tos, like I just say this over and over again, is like GPT-4 for data generation, all terms and conditions aside, because we know OpenAI has this stuff, is like very cheap for getting pretty good data compared to compute or salary of any engineer or anything. So it's like, tell people to go crazy generating GPT-4 data if you're willing to take the organizational, like, cloud of "should we be doing this?" But I think most people have accepted that you kind of do this, especially at individuals. Like they're not gonna come after individuals. I do think more companies should think twice before doing tons of OpenAI outputs. Also just because the data contamination and what it does to your workflow is probably hard to control at scale.
Swyx [00:44:21]: And we should just mention, at the time of recording, we've seen the first example of OpenAI enforcing their terms of service. ByteDance was caught, reported to be training on GPT-4 data, and they got their access to OpenAI revoked.
So that was one example.
Nathan [00:44:36]: Yeah, I don't expect OpenAI to go too crazy on this, cause they're just gonna, there's gonna be so much backlash against them. And like, everyone's gonna do it anyways.
Swyx [00:44:46]: And what's at stake here, to spell it out, is like, okay, it costs like $10 to collect one data point from a human. It's gonna cost you like a tenth of a cent with OpenAI, right? So like it's just orders of magnitude cheaper. And therefore people-
Nathan [00:44:58]: Yeah, and it's like the signal you get from humans for preferences isn't that high. The signal that you get from humans for instructions is pretty high, but it is also very expensive. So like the human instructions are definitely, by far and away, the best ones out there compared to the synthetic data. But I think like the synthetic preferences are just so much easier to get some sort of signal running with, and you can work in other, I think people will start working in other goals there between safety and whatever. That's something that's taking off and we'll kind of see that. I think in 2024, at some point, people will start doing things like constitutional AI for preferences, which will be pretty interesting. I think we saw how long it took RLHF to get started in open source. Instruction tuning was like the only thing that was really happening until maybe like August, really. I think Zephyr was the first model that showed success with RLHF in the public, but that's a long time from everyone knowing that it was something that people are interested in to having any like check mark. So I accept that and think the same will happen with constitutional AI. But once people show that you can do it once, they continue to explore.
Alessio [00:46:01]: Excellent.
Swyx [00:46:01]: Just in the domain of human preference data suppliers, Scale.ai very happily will tell you that they supplied all that data for Llama 2. The other one is probably interesting, LMSYS from Berkeley. What they're running with Chatbot Arena is perhaps a good store of human preference data.
Nathan [00:46:17]: Yeah, they released some toxicity data. They, I think, are generally worried about releasing data, because they have to process it and make sure everything is safe, and they're a really lightweight operation. I think they're trying to release the preference data. If we make it to evaluation, I'd pretty much say that Chatbot Arena is the best limited evaluation that people have to learn how to use language models. And like, it's very valuable data. They also may share some data with people that they host models from. So like if your model is hosted there and you pay for the hosting, you can get the prompts, because you're pointing the endpoint at it and that gets pinged to you, and any real LLM inference stack saves the prompts that…

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are running an end of year listener survey! Please let us know any feedback you have, what episodes resonated with you, and guest requests for 2024! Survey link here.
NeurIPS 2023 took place from Dec 10–16 in New Orleans. The Latent Space crew was onsite for as many of the talks and workshops as we could attend (and more importantly, hosted cocktails and parties after hours)!
Picking from the 3586 papers accepted to the conference (available online, full schedule here) is an impossible task, but we did our best to present an audio guide with brief commentary on each. We also recommend MLContests.com NeurIPS recap and Seb Ruder's NeurIPS primer. We also found the VizHub guide useful for a t-SNE clustering of papers.
We'll start with the NeurIPS Best Paper Awards, and then go to a selection of non-awarded but highly influential papers, and then arbitrary personal picks to round out the selection. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. We give Chris Ré the last word due to the Mamba and StripedHyena state space models drawing particular excitement but still being too early to assess impact.
Timestamps
* [0:01:19] Word2Vec (Jeff Dean, Greg Corrado)
* [0:15:28] Emergence Mirage (Rylan Schaeffer)
* [0:28:48] DPO (Rafael Rafailov)
* [0:41:36] DPO Poster Session (Archit Sharma)
* [0:52:03] Datablations (Niklas Muennighoff)
* [1:00:50] QLoRA (Tim Dettmers)
* [1:12:23] DataComp (Samir Gadre)
* [1:25:38] DataComp Poster Session (Samir Gadre, Alex Dimakis)
* [1:35:25] LLaVA (Haotian Liu)
* [1:47:21] LLaVA Poster Session (Haotian Liu)
* [1:59:19] Tree of Thought (Shunyu Yao)
* [2:11:27] Tree of Thought Poster Session (Shunyu Yao)
* [2:20:09] Toolformer (Jane Dwivedi-Yu)
* [2:32:26] Voyager (Guanzhi Wang)
* [2:45:14] CogEval (Ida Momennejad)
* [2:59:41] State Space Models (Chris Ré)
Papers covered
* Distributed Representations of Words and Phrases and their Compositionality (Word2Vec) Tomas Mikolov · Ilya Sutskever · Kai Chen · Greg Corrado · Jeff Dean. The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several improvements that make the Skip-gram model more expressive and enable it to learn higher quality vectors more rapidly. We show that by subsampling frequent words we obtain significant speedup, and also learn higher quality representations as measured by our tasks. We also introduce Negative Sampling, a simplified variant of Noise Contrastive Estimation (NCE) that learns more accurate vectors for frequent words compared to the hierarchical softmax. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple and efficient method for finding phrases, and show that their vector representations can be accurately learned by the Skip-gram model.
* Are Emergent Abilities of Large Language Models a Mirage? (Schaeffer et al.). Emergent abilities are abilities that are present in large-scale models but not in smaller models and are hard to predict. Rather than being a product of models' scaling behavior, this paper argues that emergent abilities are mainly an artifact of the choice of metric used to evaluate them.
Specifically, nonlinear and discontinuous metrics can lead to sharp and unpredictable changes in model performance. Indeed, the authors find that when accuracy is changed to a continuous metric for arithmetic tasks where emergent behavior was previously observed, performance improves smoothly instead. So while emergent abilities may still exist, they should be properly controlled and researchers should consider how the chosen metric interacts with the model.
* Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al.) (a minimal sketch of the DPO loss appears after this list)
* While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality of model generations and fine-tune the unsupervised LM to align with these preferences, often with reinforcement learning from human feedback (RLHF). However, RLHF is a complex and often unstable procedure, first fitting a reward model that reflects the human preferences, and then fine-tuning the large unsupervised LM using reinforcement learning to maximize this estimated reward without drifting too far from the original model.
* In this paper, we leverage a mapping between reward functions and optimal policies to show that this constrained reward maximization problem can be optimized exactly with a single stage of policy training, essentially solving a classification problem on the human preference data. The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight, eliminating the need for fitting a reward model, sampling from the LM during fine-tuning, or performing significant hyperparameter tuning.
* Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods. Notably, fine-tuning with DPO exceeds RLHF's ability to control sentiment of generations and improves response quality in summarization and single-turn dialogue while being substantially simpler to implement and train.
* Scaling Data-Constrained Language Models (Muennighoff et al.)
* The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations.
* QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al.).
* This paper proposes QLoRA, a more memory-efficient (but slower) version of LoRA that uses several optimization tricks to save memory. They train a new model, Guanaco, that is fine-tuned only on a single GPU for 24h and outperforms previous models on the Vicuna benchmark. Overall, QLoRA enables using much less GPU memory for fine-tuning LLMs. Concurrently, other methods such as 4-bit LoRA quantization have been developed that achieve similar results.
* DataComp: In search of the next generation of multimodal datasets (Gadre et al.)
* Multimodal datasets are a critical component in recent breakthroughs such as CLIP, Stable Diffusion and GPT-4, yet their design does not receive the same research attention as model architectures or training algorithms. To address this shortcoming in the machine learning ecosystem, we introduce DataComp, a testbed for dataset experiments centered around a new candidate pool of 12.8 billion image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing the resulting model on 38 downstream test sets.
* Our benchmark consists of multiple compute scales spanning four orders of magnitude, which enables the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow leads to better training sets. Our best baseline, DataComp-1B, enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet, outperforming OpenAI's CLIP ViT-L/14 by 3.7 percentage points while using the same training procedure and compute. We release DataComp and all accompanying code at www.datacomp.ai.
* Visual Instruction Tuning (Liu et al)
* Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data.
* By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.
* Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make the GPT-4 generated visual instruction tuning data, our model, and code base publicly available.
* Tree of Thoughts: Deliberate Problem Solving with Large Language Models (Yao et al) (see the search-loop sketch after this list)
* Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role.
* To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving.
* ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices.
* Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.
* Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.
* Toolformer: Language Models Can Teach Themselves to Use Tools (Schick et al)
* LMs exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller specialized models excel.
* In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds.
* We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction.
* This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q&A system, a search engine, a translation system, and a calendar.
* Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
* Voyager: An Open-Ended Embodied Agent with Large Language Models (Wang et al)
* We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention. Voyager consists of three key components:
* 1) an automatic curriculum that maximizes exploration,
* 2) an ever-growing skill library of executable code for storing and retrieving complex behaviors, and
* 3) a new iterative prompting mechanism that incorporates environment feedback, execution errors, and self-verification for program improvement.
* Voyager interacts with GPT-4 via blackbox queries, which bypasses the need for model parameter fine-tuning. The skills developed by Voyager are temporally extended, interpretable, and compositional, which compounds the agent's abilities rapidly and alleviates catastrophic forgetting. Empirically, Voyager shows strong in-context lifelong learning capability and exhibits exceptional proficiency in playing Minecraft. It obtains 3.3x more unique items, travels 2.3x longer distances, and unlocks key tech tree milestones up to 15.3x faster than prior SOTA.
Voyager is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other techniques struggle to generalize. Voyager discovers new Minecraft items and skills continually by self-driven exploration, significantly outperforming the baselines.
* Evaluating Cognitive Maps and Planning in Large Language Models with CogEval (Momennejad et al)
* Recently an influx of studies claims emergent cognitive abilities in large language models (LLMs). Yet, most rely on anecdotes, overlook contamination of training sets, or lack systematic evaluation involving multiple tasks, control conditions, multiple iterations, and statistical robustness tests. Here we make two major contributions.
* First, we propose CogEval, a cognitive science-inspired protocol for the systematic evaluation of cognitive capacities in LLMs. The CogEval protocol can be followed for the evaluation of various abilities.
* Second, here we follow CogEval to systematically evaluate cognitive maps and planning ability across eight LLMs (OpenAI GPT-4, GPT-3.5-turbo-175B, davinci-003-175B, Google Bard, Cohere-xlarge-52.4B, Anthropic Claude-1-52B, LLaMA-13B, and Alpaca-7B). We base our task prompts on human experiments, which offer both established construct validity for evaluating planning, and are absent from LLM training sets.
* We find that, while LLMs show apparent competence in a few planning tasks with simpler structures, systematic evaluation reveals striking failure modes in planning tasks, including hallucinations of invalid trajectories and falling in loops. These findings do not support the idea of emergent out-of-the-box planning ability in LLMs. This could be because LLMs do not understand the latent relational structures underlying planning problems, known as cognitive maps, and fail at unrolling goal-directed trajectories based on the underlying structure. Implications for application and future directions are discussed.
* Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Albert Gu, Tri Dao)
* Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements.
* First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token.
* Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba).
* Mamba enjoys fast inference (5x higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences.
As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-1.4B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
* Get full access to Latent Space at www.latent.space/subscribe
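As flagged in the DPO and Tree of Thoughts entries above, two minimal sketches. First, the DPO loss from Rafailov et al., written over sequence-level log probabilities of the chosen and rejected completions under the policy being trained and the frozen reference model (β is the KL-tradeoff hyperparameter from the paper):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization: a single classification-style loss that
    pushes the policy's log-ratio for the chosen completion above the rejected one."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```

And a skeleton of the breadth-first Tree of Thoughts search; `propose` and `score` stand in for model calls and are assumptions, not the paper's actual API:

```python
def tree_of_thoughts(propose, score, problem: str,
                     width: int = 3, depth: int = 3, beam: int = 2) -> str:
    """BFS variant of ToT: propose `width` next thoughts per partial trace,
    self-evaluate each candidate, and keep the best `beam` traces per step."""
    frontier = [""]
    for _ in range(depth):
        candidates = [
            trace + "\n" + thought
            for trace in frontier
            for thought in propose(problem, trace, width)
        ]
        candidates.sort(key=lambda t: score(problem, t), reverse=True)
        frontier = candidates[:beam]
    return frontier[0]
```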

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

The Latent Space crew will be at NeurIPS on Tuesday! Reach out with any parties and papers of interest. We have also been incubating a smol daily AI Newsletter and Latent Space University is making progress.
Good open models like Llama 2 and Mistral 7B (which has just released an 8x7B MoE model) have enabled their own sub-industry of finetuned variants for a myriad of reasons:
* Ownership & Control - you take responsibility for serving the models
* Privacy - not having to send data to a third party vendor
* Customization - improving some attribute (censorship, multiturn chat and chain of thought, roleplaying) or benchmark performance (without cheating)
Related to improving benchmark performance is the ability to use smaller (7B, 13B) models, by matching the performance of larger models, which have both cost and inference latency benefits.
Core to all this work is finetuning, and the emergent finetuning library of choice has been Wing Lian's Axolotl.
Axolotl
Axolotl is an LLM fine-tuner supporting SotA techniques and optimizations for a variety of common model architectures:
It is used by many of the leading open source models:
* Teknium: OpenHermes, Trismegistus, CollectiveCognition
* OpenOrca: Mistral-OpenOrca, Mistral-SlimOrca
* Nous Research: Puffin, Capybara, NousHermes
* Pygmalion: Mythalion, Pygmalion
* Eric Hartford: Dolphin, Samantha
* DiscoResearch: DiscoLM 120B & 70B
* OpenAccess AI Collective: Manticore, Minotaur, Jackalope, Hippogriff
As finetuning is very formatting dependent, it also provides prompt interfaces and formatters between a range of popular model formats, from Stanford's Alpaca and Steven Tey's ShareGPT (which led to Vicuna) to the more NSFW Pygmalion community.
Nous Research Meetup
We last talked about Nous at the DevDay Recap at the e/acc "banger rave". We met Wing at the Nous Research meetup at the a16z offices in San Francisco, where they officially announced their company and future plans:
Including Nous Forge:
Show Notes
We've already covered the nuances of Dataset Contamination and the problems with "Open Source" in AI, so we won't rehash those topics here but do read/listen to those if you missed it.
* Axolotl GitHub and Discord
* The Flan paper and dataset
* StackLlama model and blogpost
* Multipack paper
* Our episode with Tri Dao
* Mamba state space models - Tri Dao and Albert Gu
Timestamps
* [00:00:00] Introducing Wing
* [00:02:34] SF Open Source AI Meetup
* [00:04:09] What is Axolotl?
* [00:08:01] What is finetuning?
* [00:08:52] Open Source Model Zoo
* [00:10:53] Benchmarks and Contamination
* [00:14:29] The Case for Open Source AI
* [00:17:34] Orca and OpenOrca
* [00:23:36] DiscoLM and Model Stacking
* [00:25:07] Datasets and Evals over Models
* [00:29:15] Distilling from GPT4
* [00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ
* [00:41:55] Axolotl vs HF Transformers
* [00:48:00] 20x efficiency with StackLlama and Multipack
* [00:54:47] Tri Dao and Mamba
* [00:59:08] Roadmap for Axolotl
* [01:01:20] The Open Source AI Community
Transcript
[00:00:00] Introducing Wing Lian
[00:00:00] swyx: Welcome to Latent Space, a special edition with Wing Lian, but also with our new guest host, Alex. Hello, hello. Welcome, welcome. Again, needs no introduction. I think it's like your sixth time on Latent Space already. I think so, yeah. And welcome, Wing. We just met, but you've been very prolific online. Thanks for having me.
[00:00:30] Yeah. So you are in town. You're not local. You're in town. You're from Minneapolis?
[00:00:35] Wing Lian: Annapolis. Annapolis.
It's funny because a lot of people think it's Indianapolis. It's not Minneapolis, but I used to live, at least in the San Francisco Bay Area, years ago, from like 2008 to 2014. So it's fairly familiar here.
[00:00:50] swyx: Yep. You're the maintainer of Axolotl now, which we'll get into. You're very, very prolific in the open source AI community, and you're also the founder of the Open Access AI Collective. Yeah. Cool. Awesome. Maybe we can go over a little bit of your backgrounds into tech and then coming into AI, and then we'll cover what
[00:01:06] Wing Lian: happens and why you're here.
[00:01:08] Yeah. So. Back on tech, so I started years ago, I started way back when I was scraping apartment websites for listings and then, and then building like SEO optimized pages and then just throwing Google AdSense on it.
[00:01:24] And that got me through like college basically. Is
[00:01:27] swyx: that decent money? And what year
[00:01:28] Wing Lian: was this? Like 2004, 2005. Yeah, that's decent money. It's like a thousand bucks a month. But as a college student, that's like gravy. Really good money, right? So, and then there's just too much competition, it just sort of like died off. I was writing stuff in like Perl back then, and nobody hosted anything on Perl anymore, right? Still did a little bit more like computer tech support and then software, and web, more professionally.
[00:01:54] So I spent some time working on applications in the blood industry. I came out to San Francisco for, I was at SGN, so Social Gaming Network, as a startup. They started doing, with Facebook apps, and then they pivoted into doing mobile apps. And then, from there, I spent time.
[00:02:14] I've been at quite a few more startups since then, and in the last few years I've been in the music space. So like I was at United Masters for a while, and then the past year I've been at SoundCloud, but not doing that anymore, and now that I have a lot more time, it's just like, all right, we're going full bore on Axolotl and we're gonna, we're gonna crush AI. So yeah,
[00:02:34] SF Open Source AI Meetup
[00:02:34] swyx: totally. You, so you're here in town for the open source. Yeah, AI meetup that we had yesterday. Yep, yeah, that was amazing. Yeah, it was a big collection. Ollama, Nous Research, Alignment Lab, anyone else that I missed? I mean, Jeremy Howard is his own thing.
[00:02:47] Yeah.
[00:02:49] And Alex, you're also there. You love to bring SF to the world. Your takes?
[00:02:55] Alex Volkov: It's incredible that we recorded a ThursdAI episode after that one. And LDJ, who usually co-hosts ThursdAI, just like briefly mentioned, Oh yeah, I talked about it.
[00:03:04] Like, I saw Karpathy, and then I talked to Jeremy Howard, and the guy from Mistral came in, and it's like, he's talking about all these, titans of industry, basically, that outside of SF, you just don't meet casually hanging out in the same space. You can't, pull somebody. He ran into the fellow from Mistral, he ran into him while, drinking water.
[00:03:20] He didn't even know he was there. It's just, that type of stuff is really hard to find outside of SF. So, absolutely, absolutely great. And also, presentations from Alignment Lab, presentations from Nous Research, who talked about Forge, and some of
[00:03:33] swyx: the other stuff they announced. We can say now they're officially a company.
[00:03:36] I met Teknium.
[00:03:37] He
[00:03:37] Alex Volkov: came over here. He didn't want to get recorded.
But maybe.
[00:03:41] Wing Lian: We'll wear him down at some point. Yeah, I'm excited for Forge. They've positioned it as this agentic sort of framework where it's just drag and drop things and fill in text with where you want to inject different variables, and it opens up all of these potentials for data pipelines now, right?
[00:03:56] And using your own local LLMs and not relying on GPT-4 or anything like that. Yeah, yeah,
[00:04:02] swyx: good stuff. Okay, so let's maybe go into the Axolotl origin story, and then we have, we have some intro or background.
[00:04:09] What is Axolotl?
[00:04:09] swyx: To do on like the open source model universe and also on fine tuning, but maybe just, since you're talking about your personal journey, what was your personal journey into
[00:04:18] Wing Lian: Axolotl?
[00:04:19] Yeah, so my personal journey started like back in mid-March, completely unrelated to AI and Axolotl. And it really started, I fell while skiing, I torqued it. Grade 3 MCL sprain, and being sort of like an active person that can no longer be active, couldn't play soccer, because that requires having working knees until it's healed.
[00:04:42] So I decided I needed to find something to do to take up my free time. And that became, well, let's learn how to train these language models. It was everywhere. So I was like, all right, I'm just going to sit down, learn. I think I was using like Alpaca-LoRA, cause I think the Alpaca paper had just come out then. So I was using the Alpaca-LoRA repo and sort of like learning how to use it. None of us were like GPU rich back then, and none of us, most of us, are still all GPU poor, but I was doing, what was it, like 4-bit Alpaca-LoRA, there was like a 4-bit version where we were doing quant, or 8, no, 8-bit quantizations, and then I think they had released QLoRA a little bit later. And I think right when, before QLoRA came out, I was already starting to do fine tunes, but having this need to sort of like mix datasets together, and if you've ever looked at all the various different datasets available on HuggingFace, they all have various different prompt formats, and, it's sort of a nightmare. And then I think the other piece is, if you've ever tried to fine tune, at least back then, probably the ecosystem's a little better now, everybody required that you say, alright, you put your hyperparameters as command line arguments. And so it's always like, well, I now have to go copy and paste my previous thing and change things out. And I really wanted it to be in a YAML file because it was more portable and reproducible.
[00:06:09] So I was doing that, and then the QLoRA paper came out. Tim Dettmers announced that, and then somebody looked it up for me yesterday, and it's like, between that announcement it took us seven days to get that integrated into Axolotl, right? Which is like, it's not, I wouldn't say it's really fast, but in a manner that is in a, a reusable framework, I think it was quite the accomplishment then.
[00:06:33] And so we started picking up traction with people there. And then it's just been building models, and then just iterating on what my needs are. So, yeah. Excellent. Yeah. I
[00:06:44] Alex Volkov: want to ask, for folks who are listening who never heard of Axolotl, how do you describe how you got there? Can you, how do you summarize this for folks who maybe haven't fine tuned anything? They know open source LLMs exist, they maybe know like LLaMA. What's Axolotl for somebody who doesn't know, who's never heard of dataset curation or
They know about open source LLM exists, they maybe know like LLAML, what's XLR for somebody who doesn't know. I've never heard of a data set curation[00:07:01] Wing Lian: creation before. We sort of have to take a step back and understand that, when you've got these language models, you have what I think most people refer to as like base models, also known as like foundational models, right?[00:07:15] Where some benefactor, whether it's Meta or Mistral or whoever, has gone and spent all this money. To train these models on huge corpuses of text, right? And these, these corpuses, they're generally good across lots of different things, but they're really good at just saying, talking on and on and on, but they're not good at, following instructions or having chats or anything like that.[00:07:40] So, when you think about fine tuning, it's like Saying, all right, we have this really sort of good generalized, text completion thing, and I want to turn it into something that I can talk to or have, follow instructions. So, I think fine tuning is probably best defined in like that.[00:07:58] swyx: Okay, got it.[00:07:59] And we actually[00:08:01] What is finetuning?[00:08:01] swyx: Do want to make sure that we have like an overall introduction to fine tuning for people because again like trying to make sure that we bring everyone along in this, in this journey. We already went into Loras and QLoras without explaining what[00:08:12] Wing Lian: they are. Oh yes, yes, sorry.[00:08:14] swyx: And so I will put things in my words and you can correct me as, as, as my I'll be the village idiot here.[00:08:21] So, so fine tuning is basically sort of grabbing an open source model off the shelf, and then basically doing further training on it with a custom dataset of your own. Primarily, people use it, think about it as fine tuning for JSON output, or fine tuning for a style of response. Let's say you wanted to tell jokes, or be funny, or be short, or whatever.[00:08:43] Just the open source AI community has really fine tuned in all sorts of different manner. I think we'll go over those those things now. Let's go over those things now, and then we'll talk about fine tuning methods.[00:08:52] Open Source Model Zoo[00:08:52] swyx: So there's a universe of people who fine tune stuff. Yesterday in your slides, you had, I'll just list some of these and then we'll maybe go through some of them, right?[00:08:59] So Technium is personally leading Open Hermes, which is I think the sort of premier model out of the news. news community. There's OpenOrca, which you had a hand in. News, the news research itself also has Capybara and Puffin and all the others. There's Pygmalion, which I've never messed with.[00:09:14] Eric Hartford, I am aware of his Uncensored Models and his Samantha Models. Disco Research with Disco LM. And then you personally have done Manticore, Minotaur, Jackalope, and Hippogriff. What should people know about all these names? Being part of AI Twitter is seeing all these things and going dude, I'm being DDoS'ed by all these things and I don't know how different they are.[00:09:32] What should people know? Yeah, so[00:09:34] Wing Lian: I think on a lot of these models, generally, we like to think of those as sort of general models, so If you think about it, what is GPT 4, what is Chad GPT? 
It's a good general model, and then, one of the services I think that OpenAI offers is like these fine tunings where you're a business and you have very specific business use cases and you might fine tune for that use case.
[00:10:00] All of these models are really just general use case models that you can then go and maybe fine tune another LoRA over for your use cases, but they tend to be good. With good being relative, it's open source. Open source AI is still sort of in its infancy. So, good is, it's pretty reasonable.
[00:10:18] It's probably still better than most, high schoolers at answering questions and being able to like figure things out and, and reasoning skills and math and those sorts of things, right?
[00:10:27] swyx: And also as measured on the Hugging
[00:10:29] Wing Lian: Face leaderboard. Yes, well, that's like a whole other discussion, right, there's a whole other group of people who, and I, I mostly agree with them that, benchmarks can be, are pretty bogus these days. LMSYS, I think they published something recently where, even if you think the dataset's not contaminated, you can go and find contamination. And maybe we should step back and say what contamination is, right?
[00:10:53] Benchmarks and Contamination
[00:10:53] Wing Lian: So we have all of these data, when you go and do these benchmarks, there's a specific dataset where there are these questions and usually it's multiple choice. And what can happen is, well, sometimes someone puts the question, maybe maliciously, maybe accidentally, into the training dataset, and now your model knows how to answer the test questions really well, but it hasn't generalized the ability to actually do that.
[00:11:20] Alex Volkov: Right.
[00:11:21] We've seen some folks competitively announce models that are like the best at that leaderboard, but then it's, it's quite obvious that, in open source? Yeah, and in that leaderboard, for Hugging Face specifically, I don't know if LMSYS, if that had suffered, but there's been some models that seem to have been competitively trained and some leakage happened into their training data, supposedly.
[00:11:43] swyx: I understand, once there's been a credible assertion, Hugging Face actually does take them down, right? Yeah, yeah,
[00:11:48] Alex Volkov: which is really hard to know, right?
[00:11:50] swyx: It's really hard to know, sometimes it's like a pure accident,
[00:11:52] Alex Volkov: it's oh, oops. You're going through a mixer. I think a responsible acknowledgement that this kind of happened to you is also important.
[00:11:58] I saw LDJ from Nous Research acknowledge that. Because many of these datasets are collections of other datasets. There's a bunch of people baking, basically. It's alchemy. Right. And so sometimes you don't know. Sometimes you pull an open source dataset and they announce, oh, you know what, actually, the MMLU benchmark, which we used to specifically identify models, did go into this dataset, that then went into that dataset.
[00:12:22] So sometimes it's actually an accident and folks take it down. But I've seen some competitive folks who want to put their name out there because people are starting to notice which is the top
[00:12:30] swyx: model. For those who want a fun take on this, the phi-1 model from Microsoft was accused of being contaminated.
[00:12:37] And I saw this joke paper that was fantastic. It was called, Training on the Test Set Is All You Need.
It's a super small model that just memorizes everything. It was fantastic. So yeah, contamination, I think we've actually covered it in a previous episode before. So we're good. But again, I want to give people a map into the open source AI model universe.
[00:12:57] And Alex, you can also jump in here, because you guys have spent a lot more time with them than I have. So, what should people know about Teknium? What should people know about Nous? And then we can go down the list. Yeah,
[00:13:05] Wing Lian: I think so. I think if we start with Teknium. When you talk to him, he's gonna say, I think, I think his response is that he wants to build GPT-4 on his laptop, right?
[00:13:14] So, very, very good at building general models. I think with Nous, Nous Research, they're looking at more, sort of, more, more research focused things, like their YaRN models. I don't, I don't, they didn't actually train their, they have their own trainer for their YaRN models, but So they did not use Axolotl for that one?
[00:13:30] They didn't use that, but like Is that, you don't have support for it? I think we do support YaRN, I think, I'd have to double check that answer. Yeah, I'm just kind of curious what you can and cannot support, and Yeah, I mean, YaRN is supportable, it's basically, I think it's just replacing, I think, the RoPE part of that, so Yeah, not, not a big deal.
[00:13:48] Yeah, it's not a big deal, it's just I haven't gotten to it, not enough people have asked. I think a lot of people have asked for other things, so it's just, squeaky wheel, right? I think at the end of the day, people are like building these datasets, and I think if you sort of map things chronologically, these make more sense, because it's like, how do we incrementally improve all of these models?
[00:14:07] So a lot of these models are just incremental improvements over the last thing, right? Whether it is sort of through methods of how do we, how did we curate the dataset? How did we improve the quality of the dataset? So, maybe LDJ talked about it, right, on, I think for, for Capybara and Puffin, like how those, those were very specific dataset curation techniques that he works on.
[00:14:29] The Case for Open Source AI
[00:14:29] Alex Volkov: So there's, folks are doing this for dataset curation. Folks are doing this for skillset building as well. Definitely people understand that open source is like very important, especially after the debacle, the OpenAI weekend that we all had. And people started noticing that even after developer day in OpenAI, the APIs went out.
[00:14:48] And then after that, the whole leadership of the company swiftly changed, and people, there were worries about, you know, how can people continue building AI products based on these like shaky grounds? That turned attention definitely to Teknium, at least to OpenHermes. I started seeing this more and more on Twitter, but also other models, and many companies. They're gonna start with OpenAI just to get there quick, and then they, they think about, okay, maybe I don't want to share my knowledge. Maybe I don't want to sign up for Microsoft. Maybe they will change their terms and conditions. So what else is out there? They turned to other companies. Up until yesterday, Google was nowhere to be found. We've talked about Gemini a little bit before in a previous And you can tune in
[00:15:26] swyx: to
[00:15:26] Alex Volkov: ThursdAI.
[00:15:26] Yeah, you can tune in to ThursdAI.
We covered the Gemini release a little bit. But many are turning to the open source community, seeing that Meta released, and continues to release and commit to, open source AI. Mistral came out, and the model is way smaller than Llama and performs significantly better.[00:15:43] People play with OpenHermes, which is currently the Teknium-based, Nous Research-sourced, Axolotl-trained OpenHermes, I assume, right? And then they play with it and see: okay, this is like GPT-3.5 quality. We had ChatGPT's birthday just a week ago; a year ago, we'd never interacted with models of this caliber.[00:16:04] And now there's an open source one, on my laptop, completely offline, that I can keep improving for my use cases. So enterprises and companies are also noticing this. And the open source community folks are building the skillsets, not only the datasets. They're building the actual, here's how we're going to do this: with Axolotl, with these datasets, the curation pieces. Now there are recipes of curation. The actual model training is kind of a competitive thing, where people go and compete on these leaderboards we talked about: the LMSys arena, which recently added OpenHermes and OpenChat and a bunch of other stuff that's super cool, and the Hugging Face open LLM leaderboard.[00:16:37] So there's a competitive aspect to this. There's the open source aspect, like Teknium saying, I want GPT-4 on my laptop. There's the, let me build a skillset that potentially turns into a company, like we saw with Nous. Nous just started organizing a bunch of people on Discord, and suddenly they're announcing their company.[00:16:54] It's happening across all these modalities, and suddenly all these people saw these green pastures and a fairly quick way to say, hey, here's a cool online community I can start doing cool stuff with. You mentioned the same in the beginning, right? Like, after your accident: what's cool, let me try this out.[00:17:08] Suddenly I'm noticing a significant movement of interest from enterprising companies into these areas. And this skillset, these datasets, and this community are now very important. Important enough to create an event which pulls in Andrej Karpathy from OpenAI to come and see what's new, and Jeremy Howard, like the event we just talked about; people are flying over, and this is just a meetup.[00:17:28] So, definitely, the community is buzzing right now, and I think Axolotl is a big piece of it.

[00:17:34] Orca and OpenOrca

[00:17:34] Wing Lian: Cool. Maybe we can talk about Orca real quick. Orca, OpenOrca rather. I think there was a lot of buzz when the first Orca paper came out. And just briefly, what is Orca? Yeah, Orca was basically having traces of chain of thought reasoning, right?[00:17:48] So they, they being Microsoft Research, sort of distill GPT-4. They take a sampling of data from the FLAN dataset. Maybe we can add the FLAN dataset to the show notes. Yeah, but we've covered it. Okay, cool. They use GPT-4 to say: all right, explain this with step by step reasoning, right?[00:18:06] Then they train the model on that, and it showed very good improvements across a lot of benchmarks. So OpenOrca was the open reproduction of that, since Microsoft Research never released that particular dataset.
And going back to the Hugging Face leaderboard thing, those models did really well.[00:18:23] Then the follow-up to that was SlimOrca. In building the OpenOrca dataset, we never really went in and validated the actual answers that GPT-4 gave us. So what we did was, someone from OpenChat cross-referenced the original FLAN responses, the human responses, the correct answers, with the dataset, and then I took both of them, sent them to GPT-4, and asked: is this answer mostly correct, right?[00:18:54] Yeah. And then we were able to filter the dataset down, from about 800,000 GPT-4 answers, or rows, to about 500,000, and then retrain the model. And it had the same performance as the original model to within, I think, 0.1 percent or thereabouts, with 30 percent less data.[00:19:13] So, yeah.

[00:19:15] swyx: Interesting. There's so much there that I want to highlight, but yeah, Orca is interesting; I do want people to know about it. Putting chain of thought into the dataset just makes a ton of sense. One thing I think would be helpful for people scoping these things out: how much data are we talking about when people are fine-tuning, and how much time or resources or money does it take to fine-tune?

[00:19:37] Wing Lian: Yeah, so there's a little bit of overlap there with fine-tuning techniques, but let's take Orca, and I think even Hermes; they're both relatively large datasets. So, large datasets being: the original OpenOrca was 800,000 rows, somewhere in the ballpark of a gigabyte of text data. And I believe Hermes is like a quarter million rows of data; I don't know the actual byte size on that particular one. So, let's say everybody's fine-tuning 7 billion parameter Mistral right now, right?[00:20:15] I believe that to fine-tune 7B Mistral on, let's say, 8 A6000s, which have 48 gigabytes of VRAM, takes about 40 hours. And then, depending on where you get your compute, 40 hours times the hourly rate comes out to something like 500 dollars to fine-tune that model. And that's assuming you get it right the first time, right?[00:20:44] So, you know.
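As a rough illustration of the SlimOrca filtering pass described above, the loop below asks a GPT-4 judge a yes/no "mostly correct?" question per row, keeping only rows that pass. The prompt wording, function names, and row schema are our own guesses at the shape of the pipeline, not the actual OpenOrca code:

```python
# Hypothetical sketch of a GPT-4 "is this answer mostly correct?" filter.
from openai import OpenAI

client = OpenAI()

def keep_row(question: str, human_answer: str, gpt4_answer: str) -> bool:
    prompt = (
        f"Question:\n{question}\n\n"
        f"Reference (human) answer:\n{human_answer}\n\n"
        f"Candidate answer:\n{gpt4_answer}\n\n"
        "Is the candidate answer mostly correct? Reply YES or NO."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# slim = [r for r in rows if keep_row(r["question"], r["flan_answer"], r["gpt4_answer"])]
```

Run over roughly 800,000 rows, a filter like this is what shrank the set to roughly 500,000 while keeping benchmark performance within the 0.1 percent quoted above.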
[00:20:45] swyx: Is that something that Axolotl handles, like getting it right the first time?

[00:20:48] Wing Lian: If you talk to anybody, they'll tell you you've probably tried at least three or four runs or experiments to find the right hyperparameters. And after a while you have a feel for where your hyperparameters need to be.[00:21:04] Usually you might do a partial training run and some benchmarks. So, for Farouk, whether you're going by his actual name or his Twitter handle, he released the Dharma dataset, which is basically a subset of all the benchmarks. And Axolotl actually supports taking that subset and running mini benchmarks across your model every time you're doing an evaluation, so you can see it relatively; it's not going to be the actual benchmark score, but you can get an idea: all right, is this benchmark improving, is this benchmark decreasing?

[00:21:39] swyx: Wait, why don't you run the full benchmark?

[00:21:42] Wing Lian: The full benchmarks take a significant amount of time.

[00:21:48] swyx: Okay, so that's like a mini MMLU.

[00:21:49] Wing Lian: Yeah, mini BigBench or whatever. Yep, exactly.

[00:21:51] Alex Volkov: It's really cool. When I joined Weights & Biases just recently, one of the things I tried to do is, hey, I'm a software engineer by trade, I don't have an MLE background, but I joined a company that does primarily MLE, and I wanted to learn from the community, because a lot of the open source community uses Weights & Biases. And the benchmark that you said Farouk did, remind me of the name, sorry. Dharma? Dharma, yeah, yeah. So Luigi showed me how Dharma shows up inside the Weights & Biases dashboard, and you can actually kind of see the trending run, and per each iteration or epoch you can see the model improving, trending, on top of everything else.[00:22:29] Weights & Biases gives you hyperparameter tracking, which you used to do on the command line, and that's really hard to remember. Also the Dharma dataset, the mini MMLU, the mini everything, many different things; it's pretty cool to visualize them as well. And I heard that he's working on a new version of Dharma, a Dharma 2, et cetera, so hopefully we'll see that soon. But definitely it's hard, right? You start this training run, like you said, 40, 50 hours. Sometimes you're SSHing into this machine, you kick off a process, and you just go about your day collecting datasets, and then you have to return.[00:23:04] And the whole process of instrumenting this is still a little bit squeaky. But definitely, grabbing performance in the middle of a run, with Dharma and some other tools, is very helpful to know you're not wasting precious resources going somewhere you shouldn't go.[00:23:21] Yeah.
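The Dharma-style "mini benchmark during training" idea can be sketched as a Trainer callback that scores a small fixed subset at each evaluation step and logs the trend to Weights & Biases. The callback and scoring function here are illustrative, not Axolotl's actual implementation:

```python
# Illustrative mini-benchmark callback; score_fn is a placeholder you supply.
import wandb
from transformers import TrainerCallback

class MiniBenchCallback(TrainerCallback):
    def __init__(self, subset, score_fn):
        self.subset = subset      # small fixed list of (prompt, answer) pairs
        self.score_fn = score_fn  # returns accuracy of `model` on `subset`

    def on_evaluate(self, args, state, control, model=None, **kwargs):
        acc = self.score_fn(model, self.subset)
        # The relative trend across steps is the signal, not the absolute score.
        wandb.log({"mini_bench/accuracy": acc}, step=state.global_step)
```

As Wing says above, this won't reproduce the real benchmark number; it just tells you whether a run is trending up or down before you burn forty more GPU-hours.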
[00:23:22] swyx: Yeah, very cool. Maybe, before we go into more details on fine-tuning stuff, I just wanted to round out the rest of the Axolotl universe. There's still the Eric Hartford stuff. I don't know if you want to talk about Pygmalion, Disco, anything you know about those things.

[00:23:36] DiscoLM and Model Stacking

[00:23:36] Wing Lian: Yeah, I think definitely one of the more interesting ones was the Disco 120B, right? Yeah, I know nothing about it. So, Alpin from Pygmalion AI, right; Pygmalion has their own community, a lot of it based around roleplay models, those sorts of things. And Alpin merged together Llama 2 70Bs; I don't remember exactly how he stacked them together, whether he merged the layers in between. There's a whole toolkit for that by Charles Goddard, where you can take a single model and stack its layers, or merge multiple models.[00:24:18] That's a whole other talk and a whole other toolset, but he was able to create this 120 billion parameter model out of Llama 2 70B. And then Disco is, I believe, a fine-tune of that base 120B, which is Goliath 120B.

[00:24:37] swyx: And what are the headline results that people should know about Disco?

[00:24:39] Wing Lian: For the headline results, I haven't played with it personally, because it's a very large model and it needs a lot of GPU, right? But from what I've heard anecdotally, it performs really well; the responses are very good. Even the base model is a lot better than Llama 2 70B.[00:24:57] And I think generally everybody's like, we would all love to fine-tune Llama 70B, but it's just so much memory, so much compute, right?
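For readers who haven't seen a "frankenmerge" like Goliath 120B before, the layer-stacking idea reduces to slicing decoder-layer ranges out of donor models and concatenating them into a deeper stack. This toy sketch shows the concept only; the real tooling is Charles Goddard's mergekit, and Goliath's actual slice boundaries differ from the invented ones here:

```python
# Toy layer-stacking ("frankenmerge") sketch, with lists standing in for
# transformer decoder layers.
def frankenmerge(layers_a, layers_b, slices):
    merged = []
    for source, start, end in slices:          # e.g. ("a", 0, 40)
        donor = layers_a if source == "a" else layers_b
        merged.extend(donor[start:end])
    return merged

# Two 80-layer donors (Llama 2 70B has 80 decoder layers) -> a deeper model.
merged = frankenmerge(list(range(80)), list(range(80)),
                      [("a", 0, 40), ("b", 20, 60), ("a", 40, 80)])
print(len(merged))  # 140 layers stacked from the two donors
```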
[00:25:07] Datasets and Evals over Models

[00:25:07] Alex Volkov: I want to touch on this point, because an interesting thing that comes up from being in this ecosphere, and being friends with open source folks, is tracking week-to-week state of the art performance across different models. First of all, a lot of the stuff folks did a couple of weeks ago, and then something like Mistral comes out, and a lot of the stuff from back then doesn't technically make sense anymore. The artifacts of that work, the actual artifacts, no longer make sense; they're lower on the Hugging Face leaderboard, or lower on the LMSys leaderboard.[00:25:36] But some of the techniques people use, and definitely the datasets, keep traveling, right? So OpenHermes, for example, is the dataset Teknium cleaned up to include only open-sourceable data; previously it was just Hermes. It was previously used to train Llama, and then once Mistral came out, it was used to train Mistral, and it became significantly better on the 7B Mistral base.[00:25:54] So the datasets keep traveling, keep getting better a little bit here and there, and the techniques improve as well. It looks like both things are simultaneously true. The artifacts of a month and a half ago, the actual models themselves: it's great that Hugging Face has them, because not every company can keep up with next week's, oh, I'll install this model instead, serve this model instead. But the techniques and the datasets keep improving as we go, and I think that's really cool. However, the outcome of this is that for a long time, for many, many people, including us, who do this every week, literally talking with the people who release these models every week, it's really hard to know what's good.[00:26:36] There are a few aspects to this. One, like you said, for the bigger models, the 70B models, you actually have to have somebody like Perplexity, for example, giving you access to the 70B really fast, or you have to actually find some compute, and it's expensive, especially for the bigger models. For example, Falcon 180B came out, the hugest open source model.[00:26:56] How do you evaluate it if you can't run it? So first of all, nobody liked it, but secondly, only the people able to find enough compute to run inference on it could judge it; everyone else was stuck at, I can't run this on my laptop. That's why something like OpenHermes 7B is much easier: you can run it on your MacBook.[00:27:14] It's much easier to evaluate, much easier to figure out the vibes, right? Everybody talks about vibes as an evaluation check. If you're plugged in enough, if you follow the right people, and they say pretty much the same things all independently, then you run into the question of whether they're stochastic parrots repeating the same thing, or whether they actually evaluated it themselves.[00:27:31] Yeah, you never know. But on a large enough scale on Twitter, you start getting the feel. And we all know that OpenHermes is one of the top performing models, on benchmarks, but also on vibes. And I just wanted to highlight this vibes check thing, because you can have the benchmarks, you can have the evaluations; they potentially have contamination in them, and they don't necessarily tell you the whole story, because some models are good on benchmarks, but then you talk to them and they're not super helpful.[00:28:00] And I think it's a combination of the benchmarks, the leaderboards, and the chatbot arena, because LMSys, remember, their ranking is not only based on benchmarks; it's also people playing with models in their arena. I think they completely ignore benchmarks; yeah, they only do Elo. Oh, they do Elo completely, right?[00:28:18] So that, for example, is just people playing with both models and saying, hey, I prefer this one, I prefer that one. But there's also some selection bias: the type of people who go to LMSys to play with the models are a little bit specific in terms of who they are.[00:28:33] It's very interesting. There are so many models. People are doing this in this way, that way. Some people do it for academic rigor only, to test out new ideas. Some people do it like the Intel fine-tunes of Mistral: Intel wanted to come out and show that their hardware approach is possible, with Mistral, etc.[00:28:51] And it's really hard to know what to pick, what to use, especially for the bigger models, like you said, the Llama 70B, the Falcon 180B, because who has the compute to validate those? So I would say: use with caution. Go and research, and see if the biggest model that just released was actually worth the tokens and the money you'd spend on it,[00:29:12] if you're a business, to try and integrate it.
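Since the arena ranking comes up here, the "they only do Elo" point can be made concrete with the standard Elo update applied to each pairwise human vote. The K-factor and starting rating below are conventional chess-style choices, not LMSys's exact parameters (they have since moved to a Bradley-Terry-style fit, which this approximates):

```python
# Standard Elo update for one "I prefer model A" vote.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    return r_winner + k * (1.0 - expected), r_loser - k * (1.0 - expected)

ratings = {"model_a": 1000.0, "model_b": 1000.0}
ratings["model_a"], ratings["model_b"] = elo_update(ratings["model_a"], ratings["model_b"])
print(ratings)  # the winner gains exactly what the loser sheds
```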
[00:29:15] Distilling from GPT4

[00:29:15] swyx: Since you said use with caution, I'll bring in one issue that has always been in the back of my mind whenever I look at the entire universe of open source AI models, which is that 95 percent of the data is derived from GPT-4, correct? Which, technically, you can't use for commercial purposes, right?[00:29:35] What is the community's stance on this kind of stuff?

[00:29:40] Wing Lian: I think from the community's stance, a lot of us are just experimenting. For us, it's like, we're not going and building a product that we're trying to sell, right?[00:29:49] We're building a product because we think it's interesting and we want to use it in our day to day lives. Personal use, yeah. So, as long as we're not selling it, yeah, it's fine.

[00:30:01] swyx: But I, as a company, cannot just take OpenHermes and start serving it and make money on it.

[00:30:06] Alex Volkov: OpenHermes you can, because the "open" in OpenHermes, I think, is the cleanup that was done after the regular Hermes. Please, folks, check your licenses yourselves; don't just go off what you hear on podcasts. I will tell you, though, you could say the same thing about OpenAI. It kind of makes sense: OpenAI or Stability AI train their models on a bunch of pictures from the internet, and then the court kind of doesn't side with Sarah Silverman, I think, or somebody else who came and said, hey, this has my work in it, because of the way the model processes it; the model eventually builds that knowledge into its weights, and it doesn't actually reproduce one-to-one what was in the dataset.[00:30:45] You could claim the same thing for open source. Like, we're using, and by we I mean the open source community that I happily report on, GPT-4 to rank, for example, which is the better answer. That's how you build one type of dataset, right? For DPO or something like this, you basically generate a dataset of a question and four answers, for example, and then you go to GPT-4 and say: hey, smartest model in the world right now, up until Gemini Ultra, which we should mention as well, which one of those choices is better? But the choices themselves are not necessarily written by GPT-4. Some of them may be; there are fully synthetic datasets. But there are also datasets that are just ranked with GPT-4, while being generated with a sillier, less capable model.[00:31:25] The lines are very blurry as to what's possible or not possible. And again, when you use a model that's up on Hugging Face, the license says you can use it. OpenAI is not going to come after you, the user. If anything, OpenAI will try to say, hey, let's prevent this type of thing from happening, but I honestly don't think they could even know. Not that that makes it okay; it's just that they also kind of did this with the internet's archive, and I think some of it is fair use.[00:31:55] You use models to help you augment tasks, which is what GPT-4 lets you do.

[00:32:00] swyx: Yeah, the worst thing that OpenAI can do is just kick you off OpenAI.
That's because it's only enforced in their terms of service.

[00:32:05] Alex Volkov: Sure, but just to clarify who they would kick out: they could kick out Nous, for example, if Nous were abusing their service. A user of the fully Apache 2.0 open source model won't get kicked out just because they use both.[00:32:22] I don't believe OpenAI has a claim for that.

[00:32:25] swyx: Well, we're not lawyers, but I just want to mention it so people know it's an issue.

[00:32:30] Wing Lian: And one more thing: I talked to someone recently who I think is also interested in this, but to the point of, right, if I use a model trained on GPT-4 data, but I use that model to then regenerate new data, is that data okay? So you start going down this whole rabbit hole.

[00:32:53] swyx: Fantastic. Cool. Well, I think that roughly highlights most of the open source universe. You also have your own models. Do you want to shout out any one of them?

[00:33:01] Wing Lian: Yeah, I mean, I think early on, Manticore got a lot of love. It was mostly popular in the roleplay communities. It tended to be pretty truthful, tended to have relatively good answers, depending on who you ask, right? But for me, releasing models was a way to continue to build out the product, figure out what I needed to put into it, how to make it faster. And if you've got to go and debug your product, you may as well have it do something useful.[00:33:29] Awesome. So, yeah.

[00:33:31] Finetuning - LoRA, QLoRA, ReLoRA, GPTQ

[00:33:31] swyx: Okay, then maybe we'll talk about fine-tuning techniques. This is going to be a little bit more technical than just talking about model names and datasets. So we started off talking about LoRA and QLoRA; I just learned from your readme there's ReLoRA, which I'd never heard about.[00:33:45] Could you talk about parameter efficient fine-tuning, that whole journey, what people should know?

[00:33:50] Wing Lian: Yeah, so with parameter efficient fine-tuning, the popular ones, again, we'll start with LoRA, right? Usually what you do is you freeze all the layers of the base model, and then, at the same time, you introduce
[00:34:08] another set of layers over it, and you train those. It's done in a way that is mathematically sound; with LoRAs specifically, when you train the model, you run your inputs through the base model, whose weights are frozen, but you also run them through the additional weights, and you combine the two to get your outputs. When you're done training, you're left with this other set of weights that's completely independent.[00:35:03] And from that, some person smarter than I am figured out you can merge those weights back into the original model without changing the architecture of the model, right? So that tends to be the go-to. And you're training much fewer parameters, so while you still need all of the original weights, you have a smaller gradient and a smaller optimizer state, and you're just training fewer weights, which means you can train those models on much smaller GPUs.

[00:35:27] swyx: Yeah. And roughly, from what I've seen out there, it's about 1 percent of the number of parameters that you're training. Yeah, that sounds about right. Which is that much cheaper. So Axolotl supports full fine-tune, LoRA, QLoRA...

[00:35:40] Wing Lian: Yes. So, QLoRA is very similar to LoRA. Traditionally, most people who did LoRAs were putting the model weights in 8-bit and then doing parameter efficient fine-tuning over the LoRA weights; with QLoRA, they instead quantized the weights down to 4-bit, and I believe they were also training on all of the linear layers in the model.[00:36:15] And then ReLoRA, that was an interesting paper, and it got implemented. Some people in the community tried it out, and it showed that it didn't really have the impact the paper indicated it would. And from what I was told recently, they re-released something for ReLoRA a few weeks ago, and it's possibly better; I personally haven't had the time to check.

[00:36:46] swyx: What was the main difference, apart from quantization?

[00:36:50] Wing Lian: QLoRA's thing was, we'll just drop off some bits. With ReLoRA, what they did was: you define some number of steps that you train your LoRA, or your QLoRA, with, and then you merge those weights into your base model, and then you start over.[00:37:01] By starting over, the optimizer has to re-optimize and find the best direction to move in again, then you merge it in, then you do it all again. And theoretically, according to the paper, doing ReLoRA you can do parameter efficient fine-tuning but still get sort of the performance gains of a full fine-tune.
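A minimal PyTorch sketch of the freeze-train-merge mechanics just described (dimensions and init scale are conventional LoRA choices, not any particular library's). Per the ReLoRA description above, that method would repeatedly call merge() every N steps and then re-initialize A, B, and the optimizer state:

```python
# Minimal LoRA layer: frozen base weight plus a trainable low-rank adapter.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # freeze the original weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        # Inputs go through the frozen base AND the adapter; outputs are summed.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    @torch.no_grad()
    def merge(self):
        # Fold the adapter back into the base weight; the architecture is unchanged.
        self.base.weight += (self.B @ self.A) * self.scale
```

Only A and B get gradients and optimizer state, which is why the memory footprint drops to roughly the 1 percent of parameters mentioned above.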
[00:37:38] swyx: Yeah, and GPTQ?

[00:37:39] Wing Lian: I think GPTQ is more similar to QLoRA, in that it's mostly a quantization of the weights down to 4-bit, where GPTQ is a specific methodology, or implementation, of quantization. Got it.

[00:37:57] Alex Volkov: Wing, for folks who use Axolotl, your users, some people who maybe want to try it out: do they need to know the differences, the implementation details, of QLoRA versus ReLoRA? Or is it okay for them to just know that Axolotl is the place that has already integrated them? And if that's all they need to know, how do they choose which method to use?

[00:38:22] Wing Lian: So I think most people aren't going to be using ReLoRA. I think most people are going to be using either LoRA or QLoRA, and they should have an understanding of why they might want to use one over the other. Most people will say that with QLoRA, the quality of the final model is not quite as good as a LoRA or a full fine-tune, right?[00:38:44] Just because you've quantized the weights down, your accuracy is probably a little off, so by the time you've done the QLoRA, you're not moving the weights the way you would in a full fine-tune with the full parameter weights.

[00:38:57] swyx: Interesting. Okay, cool. For people who are more interested, obviously, read the papers; I just wanted to give people a high level overview of what these things are. And you've done people a service by making it easy to try them out. I'm going to ask a question which I know to be wrong, but I'm curious because I get asked this all the time: what is the difference between all these kinds of fine-tunes and RLHF?

[00:39:17] Wing Lian: Okay, between all of these sorts of fine-tunes and RLHF. So all of these sorts of fine-tunes are, ideally, taking knowledge that the base model already knows about and presenting it in a way that has the model use what it already knows to answer in a particular way, whether you're extracting general knowledge or a particular task, right?[00:39:44] Instruct tune, chat tune, those sorts of things. And then with RLHF, let's go back: what is it? Reinforcement Learning from Human Feedback. If we start with the human feedback part: you generally have a given prompt and then maybe one, maybe two, or, I think with Starling, up to seven different possible responses, and you rank those responses on some metric, whether the metric is how much you like that answer, or, in Starling's case, how helpful the answer was, how accurate it was, how toxic it was, those sorts of things, on some scale. And then you use that to take a model and nudge it in the direction of that feedback, so it can answer questions based on those preferences.
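Since DPO comes up just below as the usual post-fine-tune preference step, here is a hedged sketch of the core of the published DPO objective (Rafailov et al.), assuming you've already summed per-token log-probs for the chosen and rejected responses under both the policy and a frozen reference model:

```python
# DPO loss over one batch of preference pairs (all inputs are 1-D tensors
# of summed response log-probs).
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # How much more the policy prefers the chosen answer than the reference does.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()
```

Its appeal versus PPO-style RLHF is that this is just a classification-like loss over ranked pairs; there's no reward-model rollout loop.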
[00:40:42] swyx: Yeah. And is it commutative? Can you apply fine-tuning after and onto an RLHF model, or should the RLHF come in afterwards, after the fine-tune?

[00:40:54] Wing Lian: Um, yeah, I don't know that there's been enough research one way or the other. That's a question that's been asked on Discord. I definitely would say I don't know the answer; go and try it and report back to me, so I can answer for the next guy.

[00:41:10] swyx: It's shocking how much is still unknown about all these things. Well, I mean, that's what research is for, right?

[00:41:16] Wing Lian: So actually, I think I saw, at the top of a leaderboard, a Mistral base model where they didn't actually fine-tune it first; they just did an RLHF fine-tune on it, using, I don't recall which dataset, and it benchmarked really well.[00:41:37] You'd have to go and look at it. But it is interesting: traditionally, most people will fine-tune the model and then do a DPO, PPO, some sort of reinforcement learning over that, but that particular model seemed to skip the supervised fine-tuning.

[00:41:55] Axolotl vs HF Transformers

[00:41:55] swyx: Cool. One thing I also wanted to comment on is the overall competitive landscape. Hugging Face Transformers, I think, has a PEFT module.

[00:42:05] Wing Lian: Yeah, the PEFT, the Parameter Efficient Fine-Tuning library. Is that a competitor to us? No, no, we actually use it; we're just a wrapper over the Hugging Face stuff.[00:42:15] It's their own module where they've taken responsibility for the parameter efficient fine-tuning methods, while Transformers is mostly responsible for the modeling code and the trainer, right, and there's an integration between the two. And there's a variety of other fine-tuning packages, like TRL and trlX; trlX is the Carper AI one, the Stability-affiliated one, and TRL is a Hugging Face trainer. Even that one is just another wrapper over the Transformers library and the PEFT library, right?[00:43:00] But what we do is, yes, we also use those, but we also have more validation. There are some of us who have done enough fine-tunes to know, oh, this and this just don't go together, right? But most people don't know that. Example? Like, what one and one doesn't go together? I don't have an example offhand, but if you turn this knob and this knob, right? You would think, all right, maybe this will work, but you don't know until you try. And by the time you find out it doesn't work, it's maybe five minutes later: it's failed in the middle of training, or it's failed during the evaluation step. So we've added a lot more validation, so that when you've created your configuration and you run it through, the validation code says: this is probably not right, or probably not what you want.

[00:43:53] swyx: So you do some linting of your YAML file?

[00:43:56] Wing Lian: I guess you could call it linting; it's sort of like that.

[00:44:00] swyx: Is there a set of rules out there somewhere? Yeah, there's a set of rules in there.
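To give a flavor of what such a rule set can look like, here is an invented example in the spirit Wing describes: cheap structural checks that fail in seconds instead of five minutes into a run. These specific rules and config keys are illustrative, not Axolotl's actual validation code:

```python
# Illustrative config "linting": return human-readable warnings before training.
def validate(cfg: dict) -> list[str]:
    warnings = []
    if cfg.get("load_in_4bit") and not cfg.get("adapter"):
        warnings.append("4-bit base weights usually imply a (Q)LoRA adapter")
    if cfg.get("gradient_accumulation_steps", 1) < 1:
        warnings.append("gradient_accumulation_steps must be >= 1")
    if cfg.get("sample_packing") and not cfg.get("flash_attention"):
        warnings.append("packing without a masked attention kernel can leak across sequences")
    return warnings

for w in validate({"load_in_4bit": True}):
    print("warning:", w)
```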
That's amazing. You should write documentation like: this rule exists because this user, at this time, ran into this bug, and that's what we invested in. It's a good collection of knowledge.

[00:44:11] Wing Lian: Yeah, it is. And I guess if you really wanted to figure it out, you could git blame everything. But yeah, I think that's always a useful thing, because people want to experiment, and people get frustrated when they're experimenting and it breaks and they don't know why, or they know why and they've just gone down the rabbit hole, right?[00:44:37] So I think that's one of the big features that I find important: it prevents you from doing things you probably shouldn't, and sometimes we will let you do those things, but we'll try to warn you that you've done them.

[00:44:51] Alex Volkov: I have a follow-up question on this, actually, because yesterday we hung out at this open source event, and I stood by you a couple of times when people told you, oh, Axolotl, I use Axolotl, it's super cool, and the first thing you asked, immediately, was: what can we improve? And yes, from multiple folks. And I think we talked about this a little bit: it's a developer tool, a machine learning slash developer tool. Your purpose in this is to help people as much as possible, like: hey, here's the best set of things you can use right now, versus the bare libraries, the bare trainer.[00:45:28] And also, maybe we should talk about how fast you're implementing these things. You mentioned the first implementation took a week or so. Now there's a core maintainer group, right? Features are landing, like QLoRA, for example. NEFTune, I don't know, that's one example of something people said was going to be cool, and then eventually it was one of those things that didn't really shake out; people quickly tested it out. Wait, NEFTune is cancelled? I don't know if it's fully cancelled, but based on vibes, I heard it's not that great. But the whole point I'm trying to make with NEFTune is that existing in the community of Axolotl, following the GitHub issues or following the Discord, is a fairly good way to learn these kinds of gut feelings you just mentioned, right?[00:46:14] Like where this knob and that knob don't work together. Some of these are not written down; some of these are tribal knowledge that passes from place to place, and Axolotl is a great collection of many of them. And so, do you get that back from the community of folks who use it? How do you know who uses it? I think that's still an issue, knowing whether they trained with Axolotl. Talk about how you get feedback, and how else you should get feedback.
[00:46:38] Wing Lian: Yeah, I mean, most of the feedback comes from the Discord. People come in, they can't get a training run going, they run into obscure errors, errors that, as a product, we could maybe catch, but there's a lot of things that at some point we need to go do, and they're just on the list somewhere.[00:46:58] That's why, when people come up, I ask: what were your pain points? Because as a developer tool, if you're not happy with it, or you come in and in the first 30 minutes you're still not happy, you leave the tool, and you might move on, maybe to a better tool, maybe to one with less frustration, but it may not be as good, right?[00:47:17] So I'm trying to figure out: all right, how can I reduce all this frustration? Because I use it every day, for the most part, and so I'm blind to that. I just go do this, this, and this, and it pretty much mostly works. So I don't have that learning curve other people are seeing, and I don't understand their pain points.

[00:47:40] Alex Volkov: Yeah, you don't have the ability to onboard yourself as a new user, completely new to the whole paradigm, to get in the door and not even know how to ask about a problem or error.

[00:47:53] swyx: Cool. The last few things I wanted to cover are the more advanced stuff you covered yesterday.

[00:48:00] 20x efficiency with StackLlama and Multipack

[00:48:00] swyx: I'll caution this as, yeah, this is more advanced. But you mentioned StackLlama and Multipack. What are they, and what should people know?

[00:48:06] Wing Lian: Yeah, so StackLlama, that paper, they being Hugging Face, announced two separate concepts. The first one was this idea of packing sequences together. If we think about training data: let's say, to keep the math easy, your training data is 500, we'll use the terminology "words". Let's say your training data is 500 words long, and your context length, how much data your model can accept, or that you want to feed into your model, is 4,000 tokens. So if you're training at 4K context and you're only using 500 of it, you're sitting there with the other 3,500 words that you're not using, right? And typically that gets filled with these PAD tokens. I think I made the analogy last night that it's like having a glass: you fill it with a shot of liquor, and that's your training data, and then you fill the rest up with water, and those are your PAD tokens. They don't do much, right?[00:49:27] It's still the same thing, but you still have to go through all of that to get through all your training data.
And then what StackLlama showed was that you could just take your training data and keep appending the next row of training data until you've filled that entire 4K context. In this example, with 500-word rows and a 4K context, that's 8 rows of training data.[00:49:48] But the problem with that is that a lot of these transformer models rely heavily on attention. So if you now have this sequence of words, the model has seen all of these other words before, right? Then it sees another set of words, and another set of words, but it's learning everything in the context of all the words it's seen before, and we haven't corrected the attention for that. And just real quickly, since I said that paper was two concepts: the other one was, I believe, a reinforcement learning thing, but that's outside the scope of this.[00:50:29] So going from there, I implemented that early on, because I thought, oh wow, this is really great: it saves you a bunch of time, with the trade-off of a little bit of accuracy, ultimately, but it still did pretty well. I think when I did Manticore, it used that concept from StackLlama of just appending these sequences together, right?[00:50:51] And then the next evolution of that is Multipack. There was a separate paper on that; I believe it got referenced in the Orca paper. With it, you can properly mask those sequences out, using, I think, a lower block-triangular attention mask. So I did try implementing that, manually recreating the mask. But then the person from OpenChat, who was helping with OpenOrca as well, had done an implementation of Multipack using FlashAttention.[00:51:35] FlashAttention was released by Tri Dao, and it was this huge performance gain. Everybody uses it now; even the Transformers library has been taking all of these models and making them compatible with FlashAttention. And in FlashAttention, there's one particular implementation that lets you say: well, I'm sending you all of these sequences, like you would in StackLlama, but let me also send you another set of information about where this set of sequences is, where the second set of sequences is.[00:52:06] So if your rows were 500 words long and you stacked them all together, you would send it a row of information like 0, 500, 1000, 1500, etc., out to 4000. And it knows: all right, I need to break this up and run the forward pass with that. And it was much, much more performant.[00:52:29] And I think you end up seeing 10x, 20x improvements over, I mean, I think FlashAttention alone was like a 2x improvement, and then adding Multipack on top, depending on how much data you have, up to a 20x improvement sometimes. 20x? 20x. Wow. Yeah.
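The packing half of this is easy to sketch; the clever part Wing describes is the boundary bookkeeping handed to FlashAttention. Below, pack() greedily fills a context window and cu_seqlens() produces exactly the kind of 0, 500, 1000, ... boundary row mentioned above. FlashAttention's variable-length entry point consumes arrays like this, but exact function names vary by version, so treat this as a sketch:

```python
# Greedy sequence packing plus cumulative-boundary bookkeeping.
def pack(examples, max_len=4096):
    """examples: lists of token ids. Returns bins of whole examples, each bin <= max_len."""
    bins, current, used = [], [], 0
    for ex in examples:
        if used + len(ex) > max_len and current:
            bins.append(current)
            current, used = [], 0
        current.append(ex)
        used += len(ex)
    if current:
        bins.append(current)
    return bins

def cu_seqlens(packed_bin):
    """Four 500-token rows -> [0, 500, 1000, 1500, 2000]."""
    out = [0]
    for ex in packed_bin:
        out.append(out[-1] + len(ex))
    return out
```

With those boundaries, the kernel runs attention per sub-sequence, so no example attends to the ones packed before it, which is the correction the manual block-triangular mask was approximating.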
[00:52:48] Wing Lian: And I only know the 20x because, before last night, I re-ran Alpaca. I looked up the Alpaca paper, because I just needed a frame of reference where somebody did it, and I think they used eight A100s for three hours, and they said it cost them $100. I don't know how much eight A100s cost right now. But I ended up re-running it. Usually a dollar an hour, right?

[00:53:18] Alex Volkov: Yeah, the cheapest is like a dollar an hour for one.

[00:53:20] Wing Lian: Yeah, so for eight, that's still like $24, $25. But maybe if you're going on Azure, maybe it's $100 on Azure. I mean, it used to be more expensive, like a year ago.[00:53:31] So I re-ran it with all of the optimizations turned on, just to see what it would be. And usually Multipack is the biggest optimization, so Multipack with FlashAttention. I think I spun it up on 8 L40s, and I didn't let it run all the way through; I just grabbed the estimated completion time, and it was like 30 minutes. So it would have cost like $4 or $5 to reproduce the entire Alpaca paper, right?[00:54:00] Which is crazy. It's crazy. 20x, yeah.

[00:54:02] Alex Volkov: I want to ask about, like, you said you turned on all the optimizations. Is that the YAML file with Axolotl? You just go and check off, I want this, I want that?

[00:54:10] Wing Lian: Yeah, there's one particular YAML file in there, under examples, llama2, fft, optimize. I think someone created one where they just put in all of the optimizations and turned them on. And it actually runs, which is sort of surprising sometimes, because sometimes you optimize this and optimize that and they just don't work together. But, yeah.[00:54:36] Just turn the knobs on, and fine-tuning should really just be that easy, right? I just want to flip the knob and move on with my life, not figure out how to implement it.

[00:54:47] Tri Dao and Mamba

[00:54:47] Alex Volkov: Specifically, the guy behind FlashAttention came up with something new. You want to talk about this a little bit? You want to briefly cover Mamba?[00:54:53] Yeah, let's talk about Mamba. So, what is Mamba?

[00:54:57] Wing Lian: Oh, gosh. I

AI in Education Podcast
Rapid Rundown : A summary of the week of AI in education and research

AI in Education Podcast

Play Episode Listen Later Nov 10, 2023 15:37


This week's episode was our new format shortcast - a rapid rundown of some of the news about AI in Education. And it was a hectic week! Here are the links to the topics discussed in the podcast.

Australian academics apologise for false AI-generated allegations against big four consultancy firms
https://www.theguardian.com/business/2023/nov/02/australian-academics-apologise-for-false-ai-generated-allegations-against-big-four-consultancy-firms?CMP=Share_iOSApp_Other

New UK DfE guidance on generative AI
The UK's Department for Education guidance on generative AI looks useful for teachers and schools. It has good advice about making sure that you are aware of students' use of AI, and of the need to ensure that their data - and your data - is protected, including not letting it be used for training. The easiest way to do this is to use enterprise-grade AI - education or business services - rather than consumer services (the difference between using Teams and Facebook).
You can read the DfE's guidelines here: https://lnkd.in/eqBU4fR5
You can check out the assessment guidelines here: https://lnkd.in/ehYYBktb

"Everyone Knows Claude Doesn't Show Up on AI Detectors"
Not a paper, but an article from an academic: https://michellekassorla.substack.com/p/everyone-knows-claude-doesnt-show
The article discusses an experiment conducted to test AI detectors' ability to identify content generated by AI writing tools. The author used different AI writers, including ChatGPT, Bard, Bing, and Claude, to write essays which were then checked for plagiarism and AI content using Turnitin. The tests revealed that while the other AIs were detected, Claude's submissions consistently bypassed the AI detectors.

New AI isn't like Old AI - you don't have to spend 80% of your project and budget up front gathering and cleaning data
Ethan Mollick on Twitter: "The biggest confusion I see about AI from smart people and organizations is conflation between the key to success in pre-2023 machine learning/data science AI (having the best data) & current LLM/generative AI (using it a lot to see what it knows and does, worry about data later)."
Ethan's tweet, 4th November. His blog post: https://www.oneusefulthing.org/p/on-holding-back-the-strange-ai-tide

OpenAI's Dev Day
We talked about the OpenAI announcements this week, including the new GPTs - which are a way to create and use assistants.
The OpenAI blog post is here: https://openai.com/blog/new-models-and-developer-products-announced-at-devday
The blog post on GPTs is here: https://openai.com/blog/introducing-gpts
And the keynote video is here: OpenAI DevDay, Opening Keynote

Research Corner

Gender Bias
Quote: "Contrary to concerns, the results revealed no significant difference in gender bias between the writings of the AI-assisted groups and those without AI support. These findings are pivotal as they suggest that LLMs can be employed in educational settings to aid writing without necessarily transferring biases to student work."

Tutor Feedback tool
Summary of the Research: This paper presents two longitudinal studies assessing the impact of AI-generated feedback on English as a New Language (ENL) learners' writing. The first study compared the learning outcomes of students receiving feedback from ChatGPT with those receiving human tutor feedback, finding no significant difference in outcomes. The second study explored ENL students' preferences between AI and human feedback, revealing a nearly even split. The research suggests that AI-generated feedback can be incorporated into ENL writing assessment without detriment to learning outcomes, recommending a blended approach to capitalize on the strengths of both AI and human feedback.

Personalised feedback in medical learning
Summary of the Research: The study examined the efficacy of ChatGPT in delivering formative feedback within a collaborative learning workshop for health professionals. The AI was integrated into a professional development course to assist in formulating digital health evaluation plans. Feedback from ChatGPT was considered valuable by 84% of participants, enhancing the learning experience and group interaction. Despite some participants preferring human feedback, the study underscores the potential of AI in educational settings, especially where personalized attention is limited.

High Stakes answers
Your Mum was right all along - ask nicely if you want things! And, in the case of ChatGPT, tell it your boss/Mum/sister is relying on you for the right answer!
Summary of the Research: This paper explores the potential of Large Language Models (LLMs) to comprehend and be augmented by emotional stimuli. Through a series of automatic and human-involved experiments across 45 tasks, the study assesses the performance of various LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4. The concept of "EmotionPrompt", which integrates emotional cues into standard prompts, is introduced and shown to significantly improve LLM performance. For instance, the inclusion of emotional stimuli led to an 8.00% relative performance improvement in Instruction Induction and a 115% increase in BIG-Bench tasks. A human study further confirmed a 10.9% average enhancement in generative tasks, validating the efficacy of emotional prompts in improving the quality of LLM outputs.
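For the curious, the EmotionPrompt manipulation summarized above is mechanically trivial: the entire intervention is appending an emotional stake to an otherwise unchanged prompt. The stimuli wording below paraphrases the paper's examples rather than quoting them:

```python
# Minimal EmotionPrompt-style wrapper (stimuli are paraphrased examples).
STIMULI = [
    "This is very important to my career.",
    "My boss is relying on me for the right answer.",
]

def emotion_prompt(task: str, stimulus: int = 0) -> str:
    return f"{task} {STIMULI[stimulus]}"

print(emotion_prompt("Summarise this study's findings in two sentences."))
```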

The Nonlinear Library
AF - Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation by Soroush Pour

The Nonlinear Library

Play Episode Listen Later Nov 7, 2023 3:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation, published by Soroush Pour on November 7, 2023 on The AI Alignment Forum.

Paper coauthors: Rusheb Shah, Quentin Feuillade--Montixi, Soroush J. Pour, Arush Tagade, Stephen Casper, Javier Rando.

Motivation
Our research team was motivated to show that state-of-the-art (SOTA) LLMs like GPT-4 and Claude 2 are not robust to misuse risk and can't be fully aligned to the desires of their creators, posing risk for societal harm. This is despite significant effort by their creators, showing that the current paradigm of pre-training, SFT, and RLHF is not adequate for model robustness. We also wanted to explore and share findings around "persona modulation"[1], a technique where the character-impersonation strengths of LLMs are used to steer them in powerful ways.

Summary
We introduce an automated, low-cost way to make transferable, black-box, plain-English jailbreaks for GPT-4, Claude 2, and fine-tuned Llama. We elicit a variety of harmful text, including instructions for making meth and bombs. The key is *persona modulation*: we steer the model into adopting a specific personality that will comply with harmful instructions. We introduce a way to automate jailbreaks by using one jailbroken model as an assistant for creating new jailbreaks for specific harmful behaviors. It takes our method less than $2 and 10 minutes to develop 15 jailbreak attacks. Meanwhile, a human-in-the-loop can efficiently make these jailbreaks stronger with minor tweaks. We use this semi-automated approach to quickly get instructions from GPT-4 about how to synthesise meth.

Abstract
Despite efforts to align large language models to produce harmless responses, they are still vulnerable to jailbreak prompts that elicit unrestricted behaviour. In this work, we investigate persona modulation as a black-box jailbreaking method to steer a target model to take on personalities that are willing to comply with harmful instructions. Rather than manually crafting prompts for each persona, we automate the generation of jailbreaks using a language model assistant. We demonstrate a range of harmful completions made possible by persona modulation, including detailed instructions for synthesising methamphetamine, building a bomb, and laundering money. These automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is 185 times larger than before modulation (0.23%). These prompts also transfer to Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%, respectively. Our work reveals yet another vulnerability in commercial large language models and highlights the need for more comprehensive safeguards.

Full paper
You can find the full paper here on arXiv: https://arxiv.org/abs/2311.03348

Safety and disclosure
We have notified the companies whose models we attacked. We did not release prompts or full attack details. We are happy to collaborate with researchers working on related safety work - please reach out via the correspondence emails in the paper.

Acknowledgements
Thank you to Alexander Pan and Jason Hoelscher-Obermaier for feedback on early drafts of our paper.

^ Credit goes to @Quentin FEUILLADE--MONTIXI for developing the model psychology and prompt engineering techniques that underlie persona modulation.
Our research built upon these techniques to automate and scale them as a red-teaming method for jailbreaks. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The top AI news from the past week, every ThursdAI

ThursdAI November 2nd

Hey everyone, welcome to yet another exciting ThursdAI. This week we have a special announcement: my co-host and I will be hosting a shared X space live from OpenAI Dev Day on Monday next week (and we will likely follow up with interviews, analysis, and potentially a shared episode!). Make sure you set a reminder on X (https://thursdai.news/next); we're going to open the live stream early, at 8:30am on Monday, and we'll live stream all throughout the keynote. It'll be super fun!

Back to our regular schedule: we covered a LOT of stuff today, and again, we were lucky enough to have BREAKING NEWS and the authors of said breaking news (VB from Hugging Face and Emozilla from Yarn-Mistral-128K) join us and talk a little bit in depth about their updates!

[00:00:34] Recap of Previous Week's Topics
[00:00:50] Discussion on AI Embeddings
[00:01:49] Gradio Interface and its Applications
[00:02:56] Gradio UI Hosting and its Advantages
[00:04:50] Introduction of Baklava Model
[00:05:11] Zenova's Input on Distilled Whisper
[00:10:32] AI Regulation Week Discussion
[00:24:14] ChatGPT new All Tools mode (aka MMIO)
[00:35:45] Discussion on Multimodal Input and Output Models
[00:36:55] BREAKING NEWS: Mistral YaRN 7B - 128K context window
[00:37:02] Announcement of Mistral YaRN Release
[00:46:47] Exploring the Limitations of Current AI Models
[00:47:25] The Potential of Vicuna 16k and Memory Usage
[00:49:43] The Impact of Apple's New Silicon on AI Models
[00:51:23] Introduction to New Models from Nous Research
[00:51:39] The Future of Long Context Inference
[00:53:42] Exploring the Capabilities of Obsidian
[00:54:29] The Future of Multimodality in AI
[00:58:48] The Exciting Developments in CodeFusion
[01:06:49] The Release of the RedPajama V2 Dataset
[01:12:07] The Introduction of Luma's Genie
[01:16:37] Discussion on 3D Models and Stable Diffusion
[01:17:08] Excitement about AI Art and Diffusion Models
[01:17:48] Regulation of AI and OpenAI Developments
[01:18:24] Guest Introduction: VB from Hugging Face
[01:18:53] VB's Presentation on Distil-Whisper
[01:21:54] Discussion on Distillation Concept
[01:27:35] Insanely Fast Whisper Framework
[01:32:32] Conclusion and Recap

Show notes and links:
* AI Regulation
* Biden Executive Order on AI was signed (Full EO, Deep dive)
* UK AI regulation forum (King AI speech, no really, Arthur from Mistral)
* Mozilla - Joint statement on AI and openness (Sign the letter)
* Open Source LLMs
* Together AI releases RedPajama 2, a 25x larger dataset (30T tokens) (Blog, X, HF)
* Alignment Lab - OpenChat-3.5, a ChatGPT-beating open source model (HF)
* Emozilla + Nous Research - Yarn-Mistral-7b-128k (and 64K), the longest context window (Announcement, HF)
* LDJ + Nous Research release Capybara 3B & 7B (Announcement, HF)
* LDJ - Obsidian 3B - the smallest open source multimodal model (HF, Quantized)
* Big CO LLMs + APIs
* ChatGPT "all tools" MMIO mode - combines vision, browsing, ADA and DALL-E into one model (Thread, Examples, System prompt)
* Microsoft CodeFusion paper - a tiny (75M parameter) model beats the 20B GPT-3.5-turbo (Thread, ArXiv)
* Voice
* Hugging Face - Distil-Whisper - a 2x smaller, English-only version of Whisper (X, paper, code)
* AI Art & Diffusion & 3D
* Luma - text-to-3D Genie bot (Announcement, Try it)
* Stable 3D & Sky changer

AI Regulation IS HERE

Look, to be very frank, I want to focus ThursdAI on all the news that we get from week to week, and to bring a positive outlook, so politics, doomerism, and regulation weren't on the roadmap. However, with weeks like these, it's really hard to ignore, so let's talk about this.

President Biden signed an Executive Order, citing the old, wartime-era Defense Production Act (looks like the US government also has "one weird trick" to move faster), and it wasn't as bombastic as people thought. X being X, there were countless takes before the executive order was even released about regulatory capture by the big AI labs and about how open source would no longer be possible, and if you visit Marc Andreessen's feed you'll see he's only reposting AI-generated memes to the tune of "don't tread on me" about GPU and compute rights.

However, at least on the face of it, this executive order was mild; it discussed many AI risks and focused on regulating models from huge compute runs (~28M H100 hours, roughly $50M worth of compute). Here's the relevant section.

Many in the open source community reacted to the FLOPs limitation by arguing that it is very much a lobbyist-driven decision, and that the application should be regulated, not only the compute.

There's much more to say about the EO; if you want to dig deeper, I strongly recommend this piece from AI Snake Oil, and check out Yann LeCun's whole feed.

UK AI safety summit in Bletchley Park

Look, did I ever expect to add the King of England to an AI weekly recap newsletter? Surely only if he were AI Art generated or something, not the real king, addressing the topic of AI safety!

This video was played for the attendees of a multi-day AI safety summit in Bletchley Park, where AI luminaries (Yann LeCun, Elon Musk, Mistral CEO Arthur Mensch, Naveen Rao) attended and talked about the risks and benefits of AI and regulation. I think Naveen Rao had a great recap here, but additionally, there were announcements about an AI Safety Institute in the UK, and they outlined what actions the government can take.

In other regulation-related news, Mozilla published a joint statement on AI safety and openness (link) that many signed, which makes the case for openness and open source as the way to AI safety. Kudos to Mozilla; we stand by the letter.
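If you want to poke at the 128K-context breaking news yourself, here is a minimal sketch of loading the Yarn-Mistral checkpoint with Hugging Face transformers. The model id reflects the Nous Research release as announced above; treat the exact name and flags as assumptions and check the model card before running.

```python
# Minimal sketch: load Yarn-Mistral-7b-128k and run a long-context prompt.
# Assumes the NousResearch/Yarn-Mistral-7b-128k id from the announcement;
# the YaRN release shipped custom modeling code, hence trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Yarn-Mistral-7b-128k"  # assumption: id from the HF release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # fits a 7B model on a single large GPU
    device_map="auto",
    trust_remote_code=True,       # pulls in the YaRN attention-scaling code
)

prompt = "Summarize the following document:\n" + "..."  # up to ~128K tokens of context
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```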

Due Diligence by Doc Jones, Resource Investor, Hunting for Exceptional returns.
SENDERO RESOURCES $SEND in the shadow of GIANTS in the VICUNA DISTRICT

Due Diligence by Doc Jones, Resource Investor, Hunting for Exceptional returns.

Play Episode Listen Later Oct 11, 2023 28:55


SEND.V is a new listing that quietly IPO'd last week; see the investor deck for the capital structure. Real simple: great team, a tier-one property in the Vicuña District, cashed up, a very tight share structure, and an ultra-low market cap, 50% of which is in cash. Sendero Resources is a dynamic exploration company focused on unlocking the vast mineral potential of the Vicuña district in Argentina. Through our wholly owned subsidiary, Barton SAS, we hold a 100% interest in the Peñas Negras Project, encompassing a sprawling 120 km2 of prime exploration territory. Situated in the Vicuña District, renowned for significant copper discoveries, the Peñas Negras Project is strategically surrounded by world-class super-giant discoveries owned by Filo Mining, Lundin Mining, and NGEx Minerals. The Peñas Negras Project exhibits close geological similarities to neighboring deposits, including the prestigious Josemaria copper-gold porphyry system and the gold-rich Maricunga porphyries. With a cluster of identified porphyry and epithermal targets, we are leveraging our experience and operational knowledge to advance exploration. https://senderoresources.com/presentations/Sendero-Resources-Corporate-Presentation.pdf

About me: I am a shareholder of SEND, so consider me biased. I'm a PRIVATE INVESTOR, enjoying early retirement after a successful and fulfilling career managing investment portfolios with a focus on the resource sector (including oil and gas). I am completely independent. My only client is me. No one pays me, I don't sell subscriptions or offer sponsorship deals or special access to my trading info. Anything I write is done for free. I put my money where my due diligence is, and I explain why. I employ common-sense, fundamental, bottom-up analysis that incorporates, but is not limited to: currency exchange rates, cost of labor, raw materials cost, geology, metallurgy, cost of capital, infrastructure, macro influencing factors, capital discipline by management, etc. If the data changes (company, sector or macroeconomic environment), then so do I; no emotion about it. Let the data guide your investments. All materials are for educational purposes, not investment advice. I don't work for anyone or get paid for the research I do. I give my research away for free to help educate the next generation about the wealth creation this sector can provide. It's not investment advice. If you want to invest alongside me, it's your money, your responsibility; buy and sell for your own reasons, not mine. It's your money. The best investment you can make is in your own education. I am driven by the hunt for value and truth. This is my passion in life. I'm a big research nerd. Always double-check and only trust the numbers that you have vetted. Commit to memory: "NOT all ozs, pounds & barrels in the ground are created equal"; understanding this basic principle will increase your wealth and your ability to sleep at night...

Best,
Doc Jones

--- Send in a voice message: https://podcasters.spotify.com/pod/show/docjonesresourceinvestor/message

Life of Mine
Multi-Billion Dollar Capex, 4000m Elevation, No Power … and still Attractive? Explaining Filo Mining

Life of Mine

Play Episode Listen Later Oct 9, 2023 59:54


Today we went deep on Filo Mining with CEO Jamie Beck. Filo have discovered & are advancing the Filo Del Sol copper-gold-silver project high up in the Andes. They fly under the radar in Australia, but given their scale and our lack of copper opportunities we thought they'd be worth a chat. We grilled Jamie on all the challenges that come along with working at 4000m+ elevation, on an isolated project spread across two countries with a rumoured capex figure in the many billions.

All Money of Mine episodes are for informational purposes only and may contain forward-looking statements that may not eventuate. The co-hosts are not financial advisers and any views expressed are their opinion only. Please do your own research before making any investment decision or alternatively seek advice from a registered financial professional.

Thank you to our Podcast Partners:
Terra Capital – Specialist investment manager in the natural resources sector
Anytime Exploration Services – Exploration workers, equipment, core cutting/storage + much more
JP Search – Recruitment specialists for the financial world
K-Drill – Safe, reliable, and productive surface RC drilling
SMEC Power & Technology – Electrical specialists for the mining industry

Join our exclusive Facebook Group for the Money Miners and request access to the Hooteroo chat group. Subscribe to our weekly newsletter HOOTEROO HERALD.
Follow Money of Mine on YouTube
Follow Money of Mine on Twitter
Follow Money of Mine on LinkedIn
Follow Money of Mine on Instagram

Timestamps:
(0:00) Preview
(1:16) Intro
(4:46) Jamie Beck on Money of Mine
(6:28) What is the relationship with the Lundin Family?
(12:10) Story of Vicuna
(14:35) How does Argentina differ from neighbouring South American countries?
(16:40) Is working in Argentina viable?
(18:33) The money outflow challenge in Argentina
(21:13) Discovery to production, will it be like other Latin American projects?
(23:13) What is an epithermal porphyry deposit?
(24:56) When is "enough enough" in terms of progressing from exploration to production?
(26:54) Drill spacings for defining porphyry deposits
(28:19) Open Pit vs Underground Caving for massive deposits
(31:25) Will the Sulphide resource stack up compared to Oyu Tolgoi, Cascabel and Wafi Golpu?
(33:33) High Arsenic levels
(39:22) Water constraints and will it need a De-Sal plant?
(43:47) We've seen the demerge… will we see the remerge?
(47:47) Financing options considering BHP are on the register
(52:09) Would BHP and RIO approach developing an Escondida style deposit differently in an electrification style Mining world?
(54:27) What keeps Jamie awake at night
(55:07) Jamie shares his thoughts on Money of Mine

Papers Read on AI
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs

Papers Read on AI

Play Episode Listen Later Aug 3, 2023 41:13


Despite the advancements of open-source large language models (LLMs) and their variants, e.g., LLaMA and Vicuna, they remain significantly limited in performing higher-level tasks, such as following human instructions to use external tools (APIs). This is because current instruction tuning largely focuses on basic language tasks instead of the tool-use domain. This is in contrast to state-of-the-art (SOTA) LLMs, e.g., ChatGPT, which have demonstrated excellent tool-use capabilities but are unfortunately closed source. To facilitate tool-use capabilities within open-source LLMs, we introduce ToolLLM, a general tool-use framework of data construction, model training and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is created automatically using ChatGPT. Specifically, we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub, then prompt ChatGPT to generate diverse human instructions involving these APIs, covering both single-tool and multi-tool scenarios. Finally, we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To make the searching process more efficient, we develop a novel depth-first search-based decision tree (DFSDT), enabling LLMs to evaluate multiple reasoning traces and expand the search space. We show that DFSDT significantly enhances the planning and reasoning capabilities of LLMs. For efficient tool-use assessment, we develop an automatic evaluator: ToolEval. We fine-tune LLaMA on ToolBench and obtain ToolLLaMA. Our ToolEval reveals that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. To make the pipeline more practical, we devise a neural API retriever to recommend appropriate APIs for each instruction, negating the need for manual API selection. 2023: Yujia Qin, Shi Liang, Yining Ye, Kunlun Zhu, Lan Yan, Ya-Ting Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Marc H. Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun https://arxiv.org/pdf/2307.16789v1.pdf
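The paper's DFSDT is essentially a depth-first search over a tree of partial API-call chains, with the LLM proposing child nodes and abandoning branches it judges unpromising. Here is a toy sketch of that control flow; the propose_next_calls and is_solved helpers are hypothetical stand-ins for LLM calls, not the paper's actual code.

```python
# Toy sketch of depth-first search over API-call chains, in the spirit of
# ToolLLM's DFSDT. propose_next_calls() and is_solved() are hypothetical
# stand-ins for the LLM-driven expansion and evaluation steps.
from typing import List, Optional

MAX_DEPTH = 5

def propose_next_calls(instruction: str, trace: List[str]) -> List[str]:
    """Stand-in for prompting the LLM to suggest ranked candidate API calls."""
    return []  # in the real system: parse suggestions from the model's output

def is_solved(instruction: str, trace: List[str]) -> bool:
    """Stand-in for the LLM judging whether the trace fulfills the instruction."""
    return False

def dfsdt(instruction: str, trace: Optional[List[str]] = None) -> Optional[List[str]]:
    trace = trace or []
    if is_solved(instruction, trace):
        return trace                      # found a valid chain of API calls
    if len(trace) >= MAX_DEPTH:
        return None                       # give up on this branch and backtrack
    for call in propose_next_calls(instruction, trace):
        result = dfsdt(instruction, trace + [call])
        if result is not None:
            return result                 # propagate the first successful path
    return None                           # all children failed: backtrack
```

The point of the tree structure is that a dead-end API call costs only one branch, not the whole episode, which is what lets the model "evaluate multiple reasoning traces" as the abstract describes.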

Papers Read on AI
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning

Papers Read on AI

Play Episode Listen Later Jul 13, 2023 18:14


Recently, the release of INSTRUCTEVAL has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging VICUNA, a large language model based on LLAMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned VICUNA using a customized instruction dataset collection called FLANMINI. This collection includes a subset of the large-scale instruction dataset known as FLAN, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, FLACUNA, are obtained through fine-tuning VICUNA on the FLAN dataset, leading to significant improvements across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. 2023: Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria https://arxiv.org/pdf/2307.02053v1.pdf
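Since the checkpoint is public at the URL given in the abstract, trying FLACUNA should be a standard transformers load. A minimal sketch follows; the dtype and prompt format are assumptions, so check the model card before copying.

```python
# Minimal sketch: load FLACUNA from the release link in the abstract.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "declare-lab/flacuna-13b-v1.0"  # from the paper's release link
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative problem-solving style prompt; the exact template is an assumption.
prompt = "Q: A train travels 60 km in 40 minutes. What is its speed in km/h?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```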

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Generative AI vs. Predictive AI; 14 LLMs that aren't ChatGPT; How to create videos inside ChatGPT?; AI is already linked to layoffs in the industry that created it; NVIDIA launches a cloud service for designing generative proteins

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Jul 5, 2023 22:48


Generative AI vs Predictive AI
14 LLMs that aren't ChatGPT: Llama, Alpaca, Vicuna, NodePad, Orca, Jasper, Claude, Cerebras, Falcon, ImageBind, Gorilla, Ora.ai, AgentGPT, FrugalGPT
Did you know you can create videos inside ChatGPT? With music, voice over, footage, and script - all done within a few seconds!
NVIDIA launches a cloud service for designing generative proteins
AI is already linked to layoffs in the industry that created it
Daily AI Update News:
AI Predicts CRISPR's RNA-Targeting Effects, Revolutionizing Gene Therapy
OpenAI Sued by Authors Alleging ChatGPT Trained on Their Writing
Cutting-edge research: machine learning identifies early predictors of type 1 diabetes
This podcast is generated using the Wondercraft AI platform, a tool that makes it super easy to start your own podcast by enabling you to use hyper-realistic AI voices as your host. Like mine!
Attention AI Unraveled podcast listeners! Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book "AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence" by Etienne Noumen, now available at Google, Apple and Amazon! This engaging read answers your burning questions and provides valuable insights into the captivating world of AI. Don't miss this opportunity to elevate your knowledge and stay ahead of the curve. Get your copy from Apple, Google, or Amazon today!

Papers Read on AI
Enhancing Chat Language Models by Scaling High-quality Instructional Conversations

Papers Read on AI

Play Episode Listen Later Jul 5, 2023 28:32


Fine-tuning on instruction data has been widely validated as an effective practice for implementing chat language models like ChatGPT. Scaling the diversity and quality of such data, although straightforward, stands a great chance of leading to improved performance. This paper aims to improve the upper bound of open-source models further. We first provide a systematically designed, diverse, informative, large-scale dataset of instructional conversations, UltraChat, which does not involve human queries. Our objective is to capture the breadth of interactions that a human might have with an AI assistant, and we employ a comprehensive framework to generate multi-turn conversations iteratively. UltraChat contains 1.5 million high-quality multi-turn dialogues and covers a wide range of topics and instructions. Our statistical analysis of UltraChat reveals its superiority in various key metrics, including scale, average length, diversity, coherence, etc., solidifying its position as a leading open-source dataset. Building upon UltraChat, we fine-tune a LLaMA model to create a powerful conversational model, UltraLLaMA. Our evaluations indicate that UltraLLaMA consistently outperforms other open-source models, including Vicuna, the previously recognized state-of-the-art open-source model. The dataset and the model will be publicly released (https://github.com/thunlp/UltraChat). 2023: Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, Bowen Zhou https://arxiv.org/pdf/2305.14233v1.pdf
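The generation framework the abstract describes boils down to two model roles talking to each other: one LLM simulates the human and the other plays the assistant, iterating for several turns per topic. A schematic sketch is below; the chat() helper is a hypothetical wrapper around whatever chat-completion API is used, not UltraChat's actual code.

```python
# Schematic sketch of UltraChat-style iterative multi-turn generation:
# one model simulates the user, another answers as the assistant.
# chat() is a hypothetical wrapper around a chat-completion API.
from typing import Dict, List

def chat(system: str, messages: List[Dict[str, str]]) -> str:
    """Hypothetical: send messages to an LLM and return its reply text."""
    raise NotImplementedError

def generate_dialogue(topic: str, n_turns: int = 4) -> List[Dict[str, str]]:
    dialogue: List[Dict[str, str]] = []
    for _ in range(n_turns):
        # User-simulator model: ask a follow-up grounded in the topic so far.
        question = chat(
            system=f"You are a curious human discussing: {topic}. Ask one question.",
            messages=dialogue,
        )
        dialogue.append({"role": "user", "content": question})
        # Assistant model: produce the answer that becomes training data.
        answer = chat(system="You are a helpful AI assistant.", messages=dialogue)
        dialogue.append({"role": "assistant", "content": answer})
    return dialogue
```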

Shadow Warrior by Rajeev Srinivasan
Ep. 104: If India does not encourage and regulate Artificial Intelligence innovation, it could be game over

Shadow Warrior by Rajeev Srinivasan

Play Episode Listen Later Jun 12, 2023 46:53


A version of this essay has been published by Open Magazine at https://openthemagazine.com/essays/the-new-knowledge-war/

Generative AI, as exemplified by chatGPT from Microsoft/OpenAI and Bard from Google, is probably the hottest new technology of 2023. Its ability to provide answers to all sorts of questions, as well as to create readable text, poetry, and images with universal appeal, has mesmerised consumers. These generative AI products purport to model the human brain ('neural networks') and are 'trained' on large amounts of text and images from the Internet. Large Language Models, or 'LLMs', is the technical term for the tools underlying generative AI. They use probabilistic statistical models to predict words in a sequence, or generate images, based on user input. For most practical purposes, this works fine. However, in an earlier column in Open Magazine, "Artificial Intelligence is like Allopathy", we pointed out that in both cases, statistical correlation is being treated by users as though it were causation. In other words, just because two things happened together, you can't assume one caused the other. This flaw can lead to completely wrong or misleading results in some cases: the so-called 'AI hallucination'. To test our hypothesis, we asked chatGPT to summarise that column. It substantially covered most points. Surprisingly, though, it completely ignored the term 'Ayurveda', although we had used it several times in the text to highlight the 'theory of disease'. This is thought-provoking, because it implies that in the vast corpus of data that chatGPT trained on, there is nothing about Ayurveda.

The erasure of Indic knowledge

Epistemology is the study of knowledge itself: how we acquire it, and the relationship between knowledge and truth. There is a persistent concern that Indic knowledge systems are severely under-represented or mis-represented in epistemology in the Anglosphere. Indian intellectual property is 'digested', to use Rajiv Malhotra's evocative term. For that matter, India does not receive credit for innovations such as Indian numerals (misnamed Arabic numerals), vaccination (attributed to the British, though there is evidence of prior knowledge among Bengali vaidyas), or the infinite series for mathematical functions such as pi or sine (ascribed to Europeans, though Madhava of Sangamagrama discovered them centuries earlier). The West (notably, the US) casually captures and repackages it even today. Meditation is rebranded as 'mindfulness', and the Huberman Lab at Stanford calls Pranayama 'cyclic sighing'. A few years ago, the attempts of the US to patent basmati rice and turmeric were foiled by the provision of 'prior art', such as the Hortus Malabaricus, written in 1676 about the medicinal plants of the Western Ghats. Judging by current trends, Wikipedia, and presumably Google, LinkedIn, and other text repositories, are not only bereft of Indian knowledge, but also full of anti-Indian and specifically anti-Hindu disinformation. Any generative AI relying on this 'poisoned' knowledge base will, predictably, produce grossly inaccurate output. This has potentially severe consequences: considering that Sanskrit, Hindi, Tamil, Bengali (and non-Latin scripts generally) are underrepresented on the Internet, generative AI models will not learn from or generate text in these languages. For all intents and purposes, Indic knowledge will disappear from the discourse.
These issues will exacerbate the bias against non-English speakers, who will be discouraged from thinking about their own identity and culture, reducing diversity and killing innovation.

More general problems with epistemology: bias, data poisoning and AI hallucinations

Generative AI models are trained on massive datasets of text and code. This means they are susceptible to inherent biases. A case in point: if a dataset is biased against non-white females, then the generative AI model will be more likely to generate text that is also biased against non-white women. Additionally, malicious actors can poison generative AI models by injecting false or misleading data into the training dataset. For example, a coordinated effort to introduce anti-India biases into Wikipedia articles (in fact this is the case today) will produce output that is notably biased. An example of this is a query about Indian democracy to Google Bard: it produced a result that suggested this is a Potemkin construct (i.e., one that is merely a facade); Hindu nationalism and tight control of the media, "which has become increasingly partisan and subservient to the government", were highlighted as concerns. This is straight from 'toolkits', which have poisoned the dataset and are helped, in part, by US hegemonic economic dominance. More subtly, generative AI models are biased towards Western norms and values (or have a US-centric point of view). For example, the Body Mass Index (BMI), a measure of body fat, has been used in Western countries to determine obesity, but is a poor measure for the Indian population, as we tend to have a higher percentage of body fat than our Western counterparts. An illustration of AI hallucination came to the fore in an India Today story entitled "Lawyer faces problems after using ChatGPT for research. AI tool comes up with fake cases that never existed." It reported how a lawyer who used ChatGPT-generated precedents had his case dismissed because the court found the references were fabricated by AI. Similar risks in the medical field for patient treatment will be exacerbated if algorithms are trained on non-curated datasets. While these technologies promise access to communication, language itself becomes a barrier. For instance, due to the dominant prevalence of English literature, a multilingual model might link the word dove with peace, but the Basque word for dove ('uso') is used as a slur. Many researchers have encountered the limitations of these LLMs for other languages like Spanish or Japanese. ChatGPT struggles to mix languages fluently in the same utterance, such as English and Tamil, despite claims of 'superhuman' performance.

The death of Intellectual Property Rights

Intellectual property rights are a common concern. Already, generative AIs can produce convincing imitations of the tone and tenor of creative works by certain authors (for example, J K Rowling's Harry Potter series). This is also true of works of art. Two things are happening in the background: any copyright inherent in these works has been lost, and creators will cease to create original works for lack of incentives (at least according to current intellectual property theory). A recent Japanese decision to ignore copyrights in datasets used for AI training (from the blog technomancers.ai, "Japan Goes All In: Copyright Doesn't Apply to AI Training") is surprisingly bold for that nation, which moves cautiously by consensus.
The new Japanese law allows AI to use any data "regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise." Other governments will probably follow suit. This is a land-grab or a gold rush: India cannot afford to sit on the sidelines.

India has dithered on a strict Data Protection Bill, which would mandate Indian data to be held locally; indirectly, it would stem the cavalier capture and use of Indian copyrighted material. The implications are chilling; in the absence of economic incentives, nobody will bother to create new works of fiction, poetry, non-fiction, music, film, or art. New fiction and art produced by generative AI will be Big Brother-like. All that we would be left with as a civilisation will be increasingly perfect copies of extant works: perfect but soulless. The end of creativity may mean the end of human civilisation. With AIs doing 'creation', will people even bother? Maybe individual acts of creation will continue, but creators still need distribution channels so that their works reach the public. In the past in India, kings or temples supported creative geniuses while they laboured over their manuscripts, and perhaps this will be the solution: state sponsorship for creators.

Indian Large Language Models: too few yet, while others are moving ahead

Diverse datasets will reduce bias and ensure equitable Indic representation, addressing the concerns about generative AI. Another way is to use more rigorous training methods to reduce the risk of data poisoning and AI hallucinations. Progressive policy formulations, without hampering technological developments, are needed to govern the safe and responsible use of LLMs across disciplines, while addressing issues of copyright infringement and epistemological biases. Of course, there is the question of creating 'guardrails': some experts call for a moratorium, or strict controls, on the growth of generative AI systems. We must be alive to the geopolitical connotations as well. The Chinese approach to comprehensive data collection is what cardiologists refer to as a 'coronary steal phenomenon': one segment of an already well-perfused heart 'steals' from another segment to its detriment. The Chinese, for lack of a better word, plunder (and leech) data while actively denying market access to foreign companies. Google attempted to stay on in China with Project Dragonfly, while Amazon, Meta, and Twitter were forced to exit the market. Meanwhile, ByteDance, owner of TikTok, is trying to obscure its CCP ties by moving to a 'neutral jurisdiction' in Singapore, while siphoning off huge amounts of user data from Europe and the US (and wherever else it operates) for behavioural targeting and capturing personal-level data, including from children and young adults. The societal implications of the mental health 'epidemic' (depression, low self-esteem, and suicide) remain profound and seem like a reversal of the Opium Wars the West had unleashed on China. India can avoid Chinese exclusivism by keeping open access to data flows while insisting on data localisation. The Chinese have upped the ante. Reuters reported that "Chinese organisations have launched 79 AI large language models since 2020", citing a report from their Ministry of Science and Technology. Many universities, especially in Southeast Asia, are creating new datasets to address spoken dialects.
West Asia, possibly realizing the limitations of "peak oil", has thrown its hat in the ring. The United Arab Emirates (UAE) claims to have created the world's "first capable, commercially viable open-source general-purpose LLM, which beats all Big Tech LLMs". According to the UAE's Technology Innovation Institute, the Falcon 40B is not only royalty-free, but also outperforms "Meta's LLaMA and Stability AI's StableLM". This suggests that different countries recognise the importance of investing resources to create software platforms and ecosystems for technological dominance. This is a matter of national security and industrial policy.

"We have no moat" changes everything: welcome to tiny LLMs

Chiranjivi from IIT Bombay, IndiaBERT from IIT Delhi and Tarang from IIT Madras are a few LLMs from India. India needs to get its act together to bring out many more LLMs: these can focus on, and be trained on, specialised datasets representing specific domains, which can also help avoid data poisoning. The Ministries concerned should provide support, guidance, and funding. The obstacle has been the immense hardware and training requirements: GPT-3, the earlier-generation LLM, required 16,384 Nvidia chips at a cost of over $100 million. Furthermore, it took almost a year to train the model on 500 billion words, at a cost of hundreds of millions of dollars. There was a natural assumption: the larger the dataset, the better the result, with 'emergent' intelligence. This sheer scale of investment was considered beyond Indian purview. A remarkable breakthrough was revealed in a leaked internal Google memo, timed with Bard's release, titled "We have no moat, and neither does OpenAI", a veritable bombshell. It spoke about Meta's open sourcing of its algorithmic platform, LLaMA, and the implications for generative AI research. Although there is no expert consensus, the evidence suggests smaller datasets can produce results almost as good as the large datasets. This caused a flutter among the cognoscenti. Although Meta released its crown jewels to a wider audience (developers), there was an uptick in its stock value, despite the failures of its multiple pivots beyond social media. To understand this better, Geoffrey Hinton, the 'godfather' of deep learning, explains in detail: all large language model (LLM) copies can learn separately, but share their knowledge instantly. That's how chatbots know more than an average person. The performance trajectory of different LLMs has skyrocketed; for example, consider this: using LLaMA as a base, researchers were able to quickly (in weeks) and cheaply (for a few hundred dollars) produce Alpaca and Vicuna, which, despite having fewer parameters, compete well with Google's and OpenAI's models. The graph shows that the answers from their chatbots are comparable in quality (per GPT-4). A fine-tuning technique called LoRA (Low-Rank Adaptation) is the secret behind this advance. This abruptly levels the playing field. Open-source models can be quickly scaled and run even on laptops and phones! Hardware is no longer a constraint. Let a thousand Indian LLMs bloom!

The way forward

Given the astonishing amounts being invested by venture capitalists and governments in generative AI, there will be an explosion in startup activity. There are already a few in India, such as Gan, Kroopai, Peppertype.ai, Rephrase.ai, TrueFoundry, and Cube.
Still, TechCrunch quoted Sanford Bernstein analysts who painted a gloomy picture: "While there are over 1500 AI-based startups in India with over $4 billion of funding, India is still losing the AI innovation battle". Without exaggeration, it can be argued that this is an existential threat for India, and it needs to be addressed on a war footing. The AI4Bharat initiative at IIT Madras is a start, but much more is needed. A sharply focused set of policies and regulations needs to be implemented by the government immediately, one that will both prevent the plunder of our intellectual property and data, and encourage the creation of large numbers of models that make good use of Indian ingenuity and Indic knowledge.

2245 words, 4 June 2023

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit rajeevsrinivasan.substack.com

HMZE
#044 AI ... schon wieder

HMZE

Play Episode Listen Later May 26, 2023 61:58


The topic of AI won't let us go. So we take (another) look at the developments of the past few days, muse about how they can be put to use, and check which products and features tech companies have announced. These are the developments … Google: We Have No Moat, Neither Does OpenAI [1]; an explosion of open models, starting with Meta's leaked LLaMA, Databricks' Dolly, and Cerebras-GPT, plus fine-tunes like Alpaca, Vicuna, and Koala; LLMs on smartphones and Raspberry Pis; ecosystems like LangChain and Hugging Face … LLMs … is bigger == better? "Scaling Laws for Neural Language Models" [2][3][4]; Sebastian's AI experiment at Tonies [5]; regulation of AI. Enjoy listening! Feel free to give us feedback on Twitter (@hmzePodcast) or by e-mail (webmaster@hmze.io). Links [1] https://www.semianalysis.com/p/google-we-have-no-moat-and-neither [2] Scaling Laws for Neural Language Models [3] https://arxiv.org/pdf/2203.15556.pdf [4] https://the-decoder.com/large-language-models-openai-ceo-sees-end-of-an-era-in-number-of-parameters/ [5] https://www.linkedin.com/feed/update/urn:li:activity:7064619563778240514/

Papers Read on AI
QLoRA: Efficient Finetuning of Quantized LLMs

Papers Read on AI

Play Episode Listen Later May 26, 2023 58:16


We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLoRA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights, (b) double quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) paged optimizers to manage memory spikes. We use QLoRA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). 2023: Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer https://arxiv.org/pdf/2305.14314v1.pdf
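The three memory tricks in the abstract map directly onto configuration flags in today's bitsandbytes/PEFT stack. Here is a minimal sketch of a QLoRA-style setup; the base model id and target_modules list are illustrative assumptions, not from the paper.

```python
# Minimal QLoRA-style setup: 4-bit NF4 base weights with double quantization,
# plus trainable LoRA adapters on top of the frozen quantized model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 from the paper
    bnb_4bit_use_double_quant=True,       # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                # illustrative base model id
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative choice of layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapters are trainable
```

Paged optimizers, the third trick, are enabled separately at training time (for example via the paged AdamW variants exposed through transformers' TrainingArguments).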

Papers Read on AI
LLM-Pruner: On the Structural Pruning of Large Language Models

Papers Read on AI

Play Episode Listen Later May 24, 2023 28:20


Large language models (LLMs) have shown remarkable capabilities in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which presents significant challenges in the deployment, inference, and training stages. With LLMs being general-purpose task solvers, we explore their compression in a task-agnostic manner, which aims to preserve the multi-task solving and language generation ability of the original LLM. One challenge to achieving this is the enormous size of the training corpus of an LLM, which makes both data transfer and model post-training over-burdensome. Thus, we tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset. Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures based on gradient information, maximally preserving the majority of the LLM's functionality. To this end, the performance of pruned models can be efficiently recovered through tuning techniques such as LoRA in merely 3 hours, requiring only 50K samples of data. We validate LLM-Pruner on three LLMs, including LLaMA, Vicuna, and ChatGLM, and demonstrate that the compressed models still exhibit satisfactory capabilities in zero-shot classification and generation. 2023: Xinyin Ma, Gongfan Fang, Xinchao Wang https://arxiv.org/pdf/2305.11627v1.pdf
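The core of gradient-based structural pruning is a first-order importance score: after a backward pass on a small calibration batch, each group of weights is scored by roughly |weight * gradient|, and the lowest-scoring groups are removed. A simplified sketch for scoring the output channels of one linear layer follows; this illustrates the idea and is not the paper's actual coupled-structure grouping logic.

```python
# Simplified sketch of gradient-based structural importance scoring, in the
# spirit of LLM-Pruner: saliency = |weight * gradient|, summed per output
# channel, then the lowest-scoring channels are marked for removal.
import torch
import torch.nn as nn

def channel_importance(layer: nn.Linear) -> torch.Tensor:
    # Assumes loss.backward() has already run on a small calibration batch,
    # so layer.weight.grad is populated.
    saliency = (layer.weight * layer.weight.grad).abs()  # first-order Taylor term
    return saliency.sum(dim=1)  # one score per output channel (weight row)

def prune_mask(layer: nn.Linear, ratio: float = 0.2) -> torch.Tensor:
    scores = channel_importance(layer)
    k = int(ratio * scores.numel())
    drop = torch.topk(scores, k, largest=False).indices  # least important channels
    mask = torch.ones_like(scores, dtype=torch.bool)
    mask[drop] = False
    return mask  # the real method removes rows plus their coupled structures
```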

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Guaranteed quality and structure in LLM outputs - with Shreya Rajpal of Guardrails AI

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later May 16, 2023 62:28


Tomorrow, 5/16, we're hosting Latent Space Liftoff Day in San Francisco. We have some amazing demos from founders at 5:30pm, and we'll have an open co-working starting at 2pm. Spaces are limited, so please RSVP here!

One of the biggest criticisms of large language models is their inability to tightly follow requirements without extensive prompt engineering. You might have seen examples of ChatGPT playing a game of chess and making many invalid moves, or adding new pieces to the board. Guardrails AI aims to solve these issues by adding a formalized structure around inference calls, which validates both the structure and quality of the output. In this episode, Shreya Rajpal, creator of Guardrails AI, walks us through the inspiration behind the project, why it's so important for models' outputs to be predictable, and why she went with an XML-like syntax.

Guardrails TLDR

Guardrails AI rules are created as RAILs, which have three main "atomic objects":
* Output: what should the output look like?
* Prompt: template for requests that can be interpolated
* Script: custom rules for validation and correction

Each RAIL can then be used as a "guard" when calling an LLM. You can think of a guard as a wrapper for the API call. Before returning the output, it will validate it, and if it doesn't pass, it will ask the model again. Here's an example of a bad SQL query being returned, and what the ReAsk query looks like.

Each RAIL is also model-agnostic. This allows for output consistency across different models, even if they have slight differences in how they are prompted. Guardrails can easily be used with LangChain and other tools to structure your outputs!

Show Notes
* Guardrails AI
* Text2SQL
* Use Guardrails and GPT to play valid chess
* Shreya's AI Tinkerers demo
* Hazy Research Lab
* AutoPR
* Ian Goodfellow
* GANs (Generative Adversarial Networks)

Timestamps
* [00:00:00] Shreya's Intro
* [00:02:30] What's Guardrails AI?
* [00:05:50] Why XML instead of YAML or JSON?
* [00:10:00] SQL as a validation language?
* [00:14:00] RAIL composability and package manager?
* [00:16:00] Using Guardrails for agents
* [00:23:50] Guardrails "contracts" and guarantees
* [00:31:30] SLAs for LLMs
* [00:40:00] How to prioritize as a solo founder in open source
* [00:43:00] Guardrails open source community involvement
* [00:46:00] Working with Ian Goodfellow
* [00:50:00] Research coming out of Stanford
* [00:52:00] Lightning Round

Transcript

Alessio: [00:00:00] Hey everyone. Welcome to the Latent Space Podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners. I'm joined by my cohost Swyx, writer and editor of Latent Space.

Swyx: And today we have Shreya Rajpal in the studio. Welcome Shreya.

Shreya: Hi. Hi. Excited to be here.

Swyx: Excited to have you too. This has been a long time coming; you and I have chatted a little bit, and I'm excited to learn more about Guardrails. We'll do a little intro for you and then we have you fill in the blanks. So you got your bachelor's at IIT Delhi with a minor in computer science and a focus on AI, which is super relevant now. I bet you didn't think about that in undergrad.

Shreya: Yeah, I think it's interesting because I started working in AI back in 2014, and back then I was like, oh, it's here. This is almost changing the world already. So it feels like that took nine years, that meme of almost, almost arriving at the thing. So yeah, it's felt this way, where [00:01:00] it's almost changed the world, for as long as I've been working in it.

Swyx: Yeah. That's awesome. Maybe we can explore the origins of your interests, because then you went on to UIUC to do your master's, also in AI. And then it looks like you went to Drive.ai to work on perception, and then to Apple SPG, as the cool kids call the Special Projects Group, working with Ian Goodfellow.

Shreya: Yeah, that's right.

Swyx: And then you were at Predibase up until recently? Actually, I don't know if you've quit yet.

Shreya: I have, yeah.

Swyx: Okay, good, good, good. You haven't updated LinkedIn, but we're getting the breaking news that you're working on Guardrails full-time. Yeah, well, that's the professional history. We can double back to fill in the blanks on anything. But what's the personal side? You know, what's not on your LinkedIn that people should know about you?

Shreya: I think the most obvious thing, this is still professional, but the most obvious thing that isn't on my LinkedIn yet is Guardrails. So, yeah. Like you mentioned, I haven't updated my LinkedIn yet, but I quit some time ago and I've been devoting all of my energy, full-time, to working on Guardrails and growing the open source package and building out exciting features, et cetera. So that's probably the thing that's missing the most. I think another, more personal skill, which I [00:02:00] think I'm kind of okay at for an amateur, and that isn't on my LinkedIn, is pottery. So I really enjoy pottery, and yeah, I don't know how to slot that in amongst all of the AI. So that's not in there.

Swyx: Well, you like shaping things into containers where unstructured things kind of flow in, so, yeah, yeah, yeah. See, I can spin it for you.

Shreya: I should, I should use that. Yeah. Yeah.

Alessio: Maybe for the audience, you wanna give a little bit of intro on Guardrails AI, what it is, and why you wanted to start it?

Shreya: Yeah, yeah, for sure. So Guardrails, or the need for Guardrails, really came up as I was building some of my own projects in the space and really solving some of my own problems. So this was back at the end of last year. I was building some applications, like everybody else was, very excited about the space. And I built some stuff and I quickly realized that, yeah, it works pretty well a bunch of times, but a lot of other times it really does not work as I, the developer of this tool, want my tool to work. And then as a developer, I can tell that there are very few tools available for me to get this to, you know, cooperate [00:03:00] with me, like get it to follow directions, etc. And the only tool I really have is this prompt. And there's only so far you can go with putting instructions in caps, adding a bunch of exclamations, and being like, follow my instructions, give me this output this way. And so I think part of it was, you know, that it's not reliable, et cetera. But also, as a user, if I'm building an application for a user, I just want the user to have a certain experience using it. And there's just not enough control for me, not enough knobs for me to tune, you know, as a developer, to do that. So Guardrails kind of came up as a way to just manage this better. The tool basically was, I was like, okay, as I'm building this, I know from the ground up what the experience is I want the user to have, what a great LLM output looks like for me.

And so I wanted a tool that allows me to specify that and enforce those constraints. As I was thinking of this, I was like, this should be very extensible, very flexible, so that there's a bunch of use cases that can be handled, et cetera. But the need really came up from my own pain points; I was basically solving for my own pain points. [00:04:00] So that's a little bit of the history, but what the tool does is that it allows you to specify things in this two-part system, where there's a specification framework and then there's code that enforces that specification on the LLM outputs. The specification framework allows you to be as coarse or as fine-grained as you care about. So you can essentially think about, on a very first-order basis, where the structure is and what the types, etc., of the output that I want are, if you want structured outputs from LLMs. But you can also go very deep into semantic correctness with this. I just released something this morning, which is that if you're summarizing a bunch of documents, make sure that it's a very faithful summary. Make sure that there's coherence amongst what the output is, et cetera. So you can have all of these semantic guarantees as well. And Guardrails created RAIL, a reliable AI markup language, that allows you to specify that. And along with that, there's code that backs up that specification, and it makes sure that, a, you're just generating prompts that are more likely to get you the output in the right manner to start out with, and then, once you get that output, all of the specification criteria you entered are [00:05:00] systematically validated and corrected. And there's a bunch of tools in there that allow you a lot of control to handle failures much more gracefully. So that's, in a nutshell, what Guardrails does.

Swyx: Awesome.

Alessio: And this is model agnostic? People can use it on any model?

Shreya: Yeah, that's right. When I was doing my prototyping, I was developing with OpenAI, as I'm sure a bunch of other developers were. But since then I've added support where you can basically plug in essentially any function or any callable, as long as it has a string input and string output. You can plug it in there, and I've had people test it out with a bunch of other models and get pretty good results. Yeah.

Alessio: That's awesome. Why did you start from XML instead of YAML or JSON?

Shreya: Yeah. Yeah. I think it's a good question. It's also the question I get asked the most.

Swyx: Yes. I remember we chatted about this in our first chat as well, and I was like, wait, okay, let's get it out of the way, cause I'm sure you get asked this a lot.

Shreya: So, I didn't start out with it, is the truth. Like, I think I started out with a code-first framework initially, like Python classes, et cetera. And I was like, wait, this is too verbose. As I'm thinking about what I want, I truly just [00:06:00] want: this is what this dictionary should look like for me, right? And having to create classes on top of that just seemed like a higher upfront cost. Obviously there's a balance there; there's some flexibility that classes and code afford you that maybe isn't there in a declarative markup language.
But that was my initial kind of balance there. And then within markup languages, I experimented with a bunch, but a few aesthetic things about XML really appealed to me, as unusual as that may sound. I think one is this idea of properties of any field that you're getting back from an LLM, right? One of the initial ones that I was experimenting with was TypeScript, et cetera. And with TypeScript, all of the control you have is to try to stuff as much information as possible into the name of the key, right? But that's not really sufficient, because in XML, or what Guardrails allows you to do, is maybe add descriptions for each field that you're getting, which is really very helpful, because that almost acts as a proxy prompt, you know, and it gets you better outputs. You can add in what the correctness criteria or the validity criteria are for this field, et [00:07:00] cetera. That also gets passed through to the prompt, et cetera. And these are all properties of a single field, right? But fields themselves can be containers and can have other nested fields within them. And so the separation of what's a property of a field versus what's a child of a field, et cetera, was nice to me. And having all of this metadata contained within this one tag was kind of elegant. It also mapped very well to this idea of error handling, or event handling, because each field may fail in weird ways. It's very inspired by HTML in that way, in that you have these event handlers: oh, if the validity criteria for this field fail, maybe I wanna re-ask the large language model, and here are my re-asking parameters, et cetera. Whereas if other criteria fail, there are maybe other ways to handle that; maybe I don't care about it as much. Right? So that seemed pretty elegant to me. That said, I've talked to a lot of people who are very opinionated about it. The thing that I was optimizing for was essentially that it seemed clean to me compared to other things I tried out, and it seemed as close to English as [00:08:00] possible. I tested it out with a bunch of friends, you know, who did not have tech backgrounds, or worked in tech but weren't engineers, and it resonated, and they were able to pick it up. But I think you'll see updates in the works where I meet people where they are, in terms of people who especially really hate XML. There's something in the works where there'll be a code-first version of this, and also other markup languages, which I'm actively exploring, like, what is a joyful experience to have for other markup languages. Yeah.

Swyx: Do you think that non-technical people would use RAIL? Because I was just surprised by your mention that you tested it on non-technical people. Is that a design goal?

Shreya: Yeah, yeah, for sure. Wow. Okay. We're seeing this big influx of people who are building tools with these applications who are, kind of, not machine learning people. And I think that's truly the big explosion that we're seeing. Right. And a lot of them are getting so much value out of LLMs, because, like, earlier, if you were to, I don't know, build a web scraper, you would need to do this via code. [00:09:00] But now you can get, not all the way, but a decent amount of the way there, with just English. And that is very, very powerful. So it is a design goal; essentially, "low floor, high ceiling" was absolutely a design goal. So if you're used to plain English and prompting ChatGPT with plain English, then it should be very easy for you to pick this up, and there's not a lot of a gap there, but you can also build pretty complex workflows with Guardrails, and it's very adaptable in that way.

Swyx: The thing about having a custom language is essentially other people can build stuff that compiles to you. Mm-hmm. Which is also super nice, and visual layers on top. Like, essentially HTML is XML, mm-hmm, and people then build the WordPress that is for non-technical people to interface with HTML.

Shreya: Yeah, yeah. No, absolutely. I think in the very first week that Guardrails was out, somebody reached out to me, and they were a PM, and they essentially were like, you know, there are a lot of people on my team who would love to use this, but just do not write code. [00:10:00] Like, where is a visual interface for building something like this? But I feel like that's another reason why XML was appealing, because it's essentially document structuring; it's a way to think about documents as trees, right? And so again, if you're thinking about what a visual interface would be, that maps nicely to XML. But yeah. So those are some of the design considerations. Yeah.

Swyx: Oh, I was actually gonna ask this at the end, but I'm gonna bring it up now. Did you explore SQL-like syntax? And obviously there's a project now, LMQL, which I'm sure you've looked at. Yeah. Just compare, contrast, anything.

Shreya: Yeah. I think, for my use case, how I wanted to build this package was essentially very, very focused on developer ergonomics. And so I didn't want to add a lot of overhead, or a lot of high friction. Essentially, learning a whole new dialect of SQL, or a SQL-like language, seems like a much bigger overhead to me compared to doing things in XML, or doing things in a markup language, which is much more intuitive in some ways. So I think that was part of the inspiration for not exploring SQL. I'd looked into it very briefly, but I mean, I think for my own workflows, [00:11:00] I wanted to make it as easy as possible to wrap whatever LLM API calls you make. And to me, that design was in markup, or in XML, where you just define your desired structures.

Swyx: For what it's worth, I agree with you. I would be able to argue for LMQL because SQL is the proven language for business analysts. Right. Like, less technical... like, let's not have technical versus non-technical; there are also medium-technical people, yeah, who learn SQL. Yeah. Yeah. But I agree with you.

Shreya: Yeah. I think it depends. So the "why XML" question, like I mentioned, is one of the things I get asked most, but I also hear this feedback from other people, which is that essentially enterprises are also very comfortable with XML, right? So I guess even within the medium-technical people, there are different cohorts of, yeah, technologies people are used to, and, you know, what they would find most comfortable, et cetera. Yeah.

Swyx: Well, you have a good shot at establishing the standard, which is pretty exciting. I'm someone who has come from a long background with React, the JavaScript framework, I don't know if you know it, and it kind of has that approach of [00:12:00] taking a templating, XML-like language to describe something that was typically previously described in code. I wonder if you took any inspiration from that? If you want to just exchange notes on anything from that that made React successful. Cuz I spent a few years studying that. Yeah.
I'm trying to think about like, use cases where people have done that, but I think that kind of maps into like chaining or like building complex workflows generally. Right. So how I think about guardrails is that like, I.If you're doing something like chaining, you essentially are composing together these like multiple LLM API calls and you have these like different atomic units of each LLM API calls, right? So where guardrails kind of slots in is add like one of those nodes. It essentially adds guarantees, et cetera, and make sure that you know, that that one node is like water tied, et cetera, in terms of the, the output that is, that it has.So each node in your graph or tree or in your dag would essentially have like a guardrails config associated with it. And you can kind of like use your favorite chaining libraries, like nine chain, et cetera, to like then compose this further together. [00:15:00] I think I've seen like one of the first actually community projects that was like built using guardrails, like had chaining and then had like different rails for each node of that chain.Essentially,Alessio: I'm building an agent internally for us. And Guardrails are obviously very exciting because once you set the initial prompt, like the model creates its own prompts. Can the models create rails for themselves? Like, have you tried this out? Like, can they understand what the output is supposed to be and like where their ownShreya: specs?Yeah. Yeah. I think this is a very interesting question. So I haven't personally tried this out, but I've ha I've received this request you know, a few different times. So on the roadmap like seeing how this can be done, but I think in general, like in all of the prompt engineering experiments I've done, et cetera, I don't see like why with, especially with like few short examples that shouldn't be possible.But that's, that's a fun like experiment. I wanna try out,Alessio: I was just thinking about this because if you think about Baby a gi mm-hmm. And some of these projects mm-hmm. A lot of them are just loops of prompts. Yeah. You know so I can see a future [00:16:00] in which. A lot of these loops are kind off the shelf thing and then you bring your own rails mm-hmm.To make sure that they work the way you expect them to be instead of expecting the model to do everything for you. Yeah. What are your thoughts on agents and kind of like how this plays together? I feel like when you start it, people were mostly just using this for a single prompt. You know, now you have this like automated chainShreya: happening.Yeah. I think agents are like absolutely fascinating in how. Powerful they are, but also how unruly they are sometimes. Right? And how hard to control they are. But I think in general, this kind of like ties into even with machine learning or like all of the machine learning applications that I worked on there's a reason like you don't have like fully end-to-end ML applications even in you know, so I, I worked in self-driving for example, like a driveway.I at driveway you don't have a fully end-to-end deep learning driving system, right? You essentially have like smaller components of it that are deep learning and then you have some kind of guarantees, et cetera, at those interfaces of those boundaries. 
Shreya: So essentially, [00:17:00] the interesting thing about the agent framework for me is how we will break this up into smaller tasks and then assign those guarantees at each output. It's a problem that I've been thinking about, but it's also frankly a hard problem to solve, because the goals are auto-generated. The correctness criteria for those goals also need to be auto-generated, right? Which is a little bit antithetical to knowing ahead of time what a correct output for you as a developer, or for your application, looks like. So I think that's the interesting crossroads. But with that said, I do think guardrails are absolutely essential for agent frameworks. Partially for making sure they're constrained and safe, et cetera, but also, frankly, to just make sure that they're doing what you want them to do and you get the right output from them. So it is a problem I'm thinking a bunch about. Just this idea of how you make sure that it's not just models checking each other, but there's some more determinism, some more notion of guarantees that can be baked in there. I think that [00:18:00] would be super compelling to me, and that is the kind of solution that I would be interested in putting out. But yeah, it's something that I'm thinking about for sure.

Swyx: I'm curious about the scope of the problem. I think a lot of people, when they hear about AI progress, assume that if it's not good now, just wait a year. And obviously that's something you have to think about as well, right? How much of what Guardrails is gonna do is going to be threatened or competed with by GPT-4 having 32,000 context tokens? What do you think are the invariants in model capabilities that you're betting on, versus stuff that you would not bet on because you expect it to get better?

Shreya: Yeah, I think that's a great question, and this way of thinking about invariants is very core to how I've been thinking about this problem, and why I chose to work on it. Again, this is guided by some of my past experience in machine learning, and by looking at how some of the ML challenges have been solved in other applications that I've had a lot [00:19:00] of interest in. So I think longer context length is going to arrive for sure. We're already seeing some academic papers, and we're gonna start seeing a lot more of them translated into actual applications.

Swyx: This is the new transformer thing that was being sent around, with like a million tokens of context.

Shreya: Yeah. Also, my husband is a PhD student at Stanford, and his lab does research in some of the more efficient architectures for this.

Swyx: Oh, that's a secret weapon for Guardrails. Oh my god. What? Tell us more.

Shreya: Yeah, I think their lab is pretty exciting. This is a shout-out to the Hazy Research lab at Stanford.
Shreya: And yeah, there's basically some active research there looking into newer architectures, not just transformers. More architectural research that allows for longer context length. So longer context length is arriving for sure. Lower latency, better memory efficiency, et cetera, too. That is actually some of my background; I worked on that in my previous jobs, so it's something I'm familiar with, and I think there are known recipes for making [00:20:00] this work. It's essentially a problem of a lot of experimentation and finding exactly which configurations get you there. So that will also arrive, and both of those things combined will drive down the cost of running inference on these models. So all of those trends are coming for sure. The problem that is not solved by these trends is the problem of determinism in machine learning models. Fundamentally, machine learning models, deep learning models specifically, are impossible to add guarantees to, even with temperature zero.

Swyx: Oh, absolutely.

Shreya: Even with temperature zero, it's not the same as seed equals zero, or seed equals some fixed value. So even with temperature zero and the same inputs, if you run it multiple times, you'll see that you don't get the same output every time. Combine that with a system where you don't even actually own the model yourself, so the models are updated from under you all the time. For building Guardrails, I had to do a bunch of prompt engineering so that users get really great structured outputs right off the bat, [00:21:00] without having to do any work. And I had a case where I developed something and it worked, and then after some internal model version update it ended up not being functional anymore, and I had to go back to the drawing board and do that prompt engineering again. This is a bit of a digression, but I do see that as a strength of Guardrails, in that the contract I'm providing is not between the user and the LLM. The user has a contract with me, essentially, and then I am making sure that we are able to do the prompt engineering to get the right output from the LLM. So it takes away a lot of that burden of figuring it out from the user. Anyway, a little bit of a digression, but: these models change all the time, and temperature zero does not equal seed zero, or a fixed seed rather. And so even with all of the trends that we're gonna see arriving pretty soon, over the next year if not sooner, this idea of determinism and reproducibility is not gonna change. And beyond reproducibility, there's a whole other problem: the really, really, really long tail of inputs and outputs that are not covered by tests and by training data, [00:22:00] et cetera. It is virtually impossible to cover that. This is not simply a problem that throwing more data at the model is going to solve. Right? Yeah.
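The temperature-zero point above is easy to check for yourself. A minimal sketch, assuming a generic hosted-completion client; `complete` is a placeholder for whichever provider's call you actually use:

```python
# Temperature 0 reduces sampling randomness but is not a fixed seed:
# hosted models can still return different outputs for identical inputs,
# and behavior can also shift when the provider updates the model.
def complete(prompt: str, temperature: float = 0.0) -> str:
    ...  # stand-in for your provider's completion API call

outputs = {complete("List three prime numbers.") for _ in range(5)}

# A truly deterministic system would make this set have exactly one element.
# Against real hosted models you will sometimes observe len(outputs) > 1.
print(len(outputs), outputs)
```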
Shreya: Because people are building genuinely fascinating, really amazing, complex applications, and those are just the developers. Users are then using those applications in many diverse, complex ways. And so it's hard to anticipate the weird, wayward prompts you didn't account for, et cetera. There's no amount of scaling laws, essentially, that accounts for those problems. There can be internal guardrails, et cetera, of course, and I would be very surprised if OpenAI, for example, doesn't have their own internal guardrails. You can already see it in some differences. For example, URLs tend to be valid URLs now.

Swyx: Huh, I didn't notice that.

Shreya: It's kind of my job to keep track of it. So if that's the case, that there are some internal guardrails, I'm sure that's a trend we'll keep seeing. But even with that, there's a ton of use cases and a [00:23:00] ton of application areas with different requirements, where different types of guardrails are valuable. So this is a problem that would be harder, or next to impossible, to solve with just data, with just scaling up the models. You would need this ensemble, basically: LLMs, these really powerful models, along with deterministic guarantees, rule-based heuristics, et cetera, more traditional machine learning tools. You ensemble all of these together, and you end up getting something that is greater than the sum of its parts in terms of what it's able to do. So I think that is the invariant I'm betting on, in the way people will be developing these applications.

Swyx: I will follow up on that, because I'm super excited. So you mentioned people have a contract with Guardrails. I'm actually looking at the validators page on your docs; you have something like 20 different contracts that people can have. I'll name some of them, just so that people can have an idea, but I also highly encourage people to check it out. Is-profanity-free is a good one. Bug-free Python, that's also pretty [00:24:00] cool. You have similar-to-document, and extracted-summary-sentences-match, which I think is, like, don't hallucinate, right?

Shreya: Yeah, it's essentially making sure that if you're generating summaries, the summary should be very faithful. It should be citable, attributable, et cetera, to the source text.

Swyx: Right. Valid-URL, which we talked about. Maybe OpenAI is doing a little bit more of that internally. Maybe OpenAI uses Guardrails. You don't know. It'd be a great endorsement. What is surprisingly popular, and what do you think is underrated, out of all your contracts?

Shreya: Mm-hmm. Okay. I think the most obviously popular ones I've seen are structure, types, et cetera. Anything that guarantees that. So this isn't specifically in the validators; this is essentially part of the core proposition. I think that is very popular, but that's also the first-order problem that people are solving.
Shreya: I think the SQL thing, for example, is very exciting, because I had just released it two days ago, and I've already gotten some inbound from people building these products, [00:25:00] swapping it in internally, and getting a lot of value out of what bug-free SQL provides. I think bug-free SQL is a great example, because you can see how complex these validators can really get. What it does is take a connection string, or maybe a schema file, et cetera, and create a sandboxed SQL environment for you from that. And it does that at startup, so that every time you get a text-to-SQL query, you're not paying that cost time and time again. It takes the query, executes it in that sandbox environment, and sees whether the query is executable or not. If there are any errors, it packages those errors very nicely, and if you've configured re-asking, it sends them back to the model and basically tries to get corrected SQL. I think I have an example up in the docs where you can see how it corrects weird table names, weird predicates, et cetera. You can build pretty complex systems with this. Other things in there: it takes [00:26:00] information about your database and injects it into the prompt, with, here's the schema of this table. Given a natural language query, it automatically finds the most similar examples from the history of serving this model and injects those into the prompt, et cetera. So you end up getting this very well-thought-out validator, this very well-thought-out contract, that is just way, way, way better than asking the large language model in plain English to give you something. That is the kind of experience I wanna provide, and you'll see more of that in the package.

Swyx: My immediate response is: that's cool. It does more than I thought it was gonna do, which is just checking the SQL syntax. But you're actually checking against the schema, which is highly, highly variable. It's slow, though.

Shreya: I love that question. Yeah. So here's where this idea comes in: you don't have to send every request back to your LLM. For example, what Guardrails does is provide corrective actions, and re-asking is one of those corrective actions, [00:27:00] but there are a ton of other ways to handle it. There are deterministic, programmatic fixes; there are default values. This one doesn't quite work for SQL, but if you're generating a bunch of structured data and you know there's an invalid value, you can just filter it, or you can just refrain from asking, et cetera. So there are a ton of ways to handle errors more gracefully. And the one I wanna point out here is programmatically fixing something that is wrong on the client side, instead of just sending over another request to the large language model.
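A toy version of the flow she describes: build the sandbox once at startup, execute candidate queries against it to catch errors, and prefer a cheap local fix (here, fuzzy table-name repair, which she elaborates on next) before falling back to a re-ask. This is a from-scratch sketch using SQLite, not Guardrails' actual implementation.

```python
import difflib
import re
import sqlite3

class SqlSandbox:
    def __init__(self, schema_sql: str):
        # Built once at startup so each query doesn't pay setup cost.
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript(schema_sql)
        rows = self.conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
        self.tables = [r[0] for r in rows]

    def check(self, query: str):
        """Return None if the query compiles against the schema, else the error."""
        try:
            self.conn.execute("EXPLAIN QUERY PLAN " + query)
            return None
        except sqlite3.Error as e:
            return str(e)

    def fix_or_flag(self, query: str):
        """Try a programmatic fix first; fall back to re-asking the model."""
        err = self.check(query)
        if err is None:
            return query, "ok"
        m = re.search(r"no such table: (\w+)", err)
        if m:  # cheap local fix: fuzzy-match the bad table name
            close = difflib.get_close_matches(m.group(1), self.tables, n=1)
            if close:
                return self.fix_or_flag(query.replace(m.group(1), close[0]))
        return query, f"re-ask the model with error: {err}"

sandbox = SqlSandbox("CREATE TABLE order_items (id INTEGER, sku TEXT);")
print(sandbox.fix_or_flag("SELECT sku FROM order_itms"))  # fixed locally
```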
Shreya: So for SQL, take the example from earlier with the incorrect table name: to correct it, you end up sending another request. But you can think about other ways to handle it gracefully, like fuzzy matching against the existing table names in the database, and mapping any incorrect name onto those. So you can merge this re-asking thing with other error handling, where the smaller, easier errors get handled programmatically. Doing it in the more classical ML way, essentially, [00:28:00] not the super fancy deep learning, which I guess is ML 2.0. I've been calling this whole thing ML 3.0, but even ML 1.0 techniques work here. So you're not making these really expensive calls when you don't need to. And that builds a very powerful system, where depending on what your error is, you don't always hit GPT-3 or your favorite LLM API; you combine these other error-handling techniques gracefully, so that you get correct, validated outputs, and you get them cheaper and faster, et cetera. There are some other SQL validation things in there too. There's exclude-SQL-predicates, and then there's one about columns, if some columns are sensitive.

Swyx: Oh, just check if it's there.

Shreya: Check if it's there, and if there are only certain columns you wanna show the user, and maybe other columns have private or sensitive data, you can exclude those. You can do this at the table level too. And this is very [00:29:00] easy to do just locally. So there are different ways, essentially, to handle this, which makes for a more compelling way to build these systems.

Swyx: Yeah. By the way, I think we're proving out why XML was a better choice than SQL, cause now you're wrapping SQL. It's pretty cool. Cause you're talking about the text-to-SQL application example that you put out, and it actually puts a design choice in center focus that isn't talked about very much, which is your logs. Your logs are gorgeous. I'm sure that took work. I'm sure that's a strong opinion of yours. Why do you spend so much time on logs? How do you think about designing these things? Should everyone do it this way? What are the drawbacks?

Shreya: Yeah, I'm so excited about this idea of logs, because all of this data is in there for free. Any validation that is run happens in memory, and then I also write it out to file, et cetera. You essentially get a history: this was the prompt that was run, this was the raw LLM output, these were the validations that were run, this was the output of those validations, and these [00:30:00] were any corrective actions, et cetera, that were taken.
Shreya: And as a developer, I'm so happy to see that. I use these logs personally as well.

Swyx: Yeah, they're colored, they're nicely formatted, there are double borders on the logs. I've never seen this in any ML tooling at all.

Shreya: Oh, thanks. I appreciate it. I think this was mostly, once again, solving my own problems. I was building a lot of these things, doing a lot of dogfooding, and doing a lot of application building in notebooks. And so in a notebook, I wanted to see what the easiest way to interact with it was, and that's what I ended up building. I really appreciate that; it's very nice to hear. I'm also thinking about interesting ways to whittle down very deeply into what went wrong, or what is going right, when you're running an application, and what the nice interface to design for that would be. So yeah, I'm thinking about that problem. I don't have anything on there yet, but I do really like this idea that as a developer, you just want all the visibility you can get into what's [00:31:00] happening under the hood, and I wanna be able to provide that.

Swyx: I mean, the downside I'll point out quickly, cause we should move on, is that this is not machine-readable. So how does it work with a Datadog or, you know?

Shreya: Yeah, yeah. Well, we can deal with that later; I think that's basically my answer as well. A problem for future Shreya, basically.

Alessio: Yeah. You call Guardrails "SLAs for LLM outputs". Historically, SLAs are pretty objective: there's five-nines availability, things like that. How do you build them for a stochastic system when, say, my query is "draft me a marketing article"? How do you write an SLA for something like that? And in terms of quality, and what's slow, and latency: sometimes I would rather wait more and have better copy. Have you thought about what the axes of measurement are for some of these things, and how people should think about them?

Shreya: Yeah, the copy example is interesting, because [00:32:00] for any of these things, the SLAs are purely on content and output, not on time. Guardrails, I don't think, can even make guarantees on the time it'll take to make these external API calls. But even within quality, there's this idea that if you're able to communicate what you desire, either programmatically or by using a model in the loop, then that is something that can be enforced, something that can be validated and checked. So for example, for writing content copy, what's interesting is that if you can break down the copy you wanna write into: this is a title, this is maybe a TL;DR description, this is a more detailed take on the changes or the product announcement, et cetera, and you wanna hit some set of points in there, then you already start thinking of what was a monolith of copy in terms of smaller building blocks, et cetera.
Shreya: And then on those building blocks, you can add certain guarantees. So you can say that, let's say, length or readability is a [00:33:00] guarantee. Some of the updates I pushed today on summarization, with specific guards for summarization: one of them was that the reading time for the summary should be within some certain amount (a toy version of that check follows below). So you can start enforcing all of those guarantees on each individual block. Some of those things are naturally harder to automate. Like, is this copy witty? That's something the model doesn't have a good handle on. But other things, as long as you can enforce them and check them, either via a model or programmatically, are something you can start building a notion of guarantees around. So that's how I think about it.

Alessio: Yeah. This is super interesting, because right now a lot of products are kind of the same, because all they do is call the model, and some are prompted a little differently, but there's only so much delta between them. In the future, it'll be really interesting to have products differentiate themselves by the amount of guardrails they give you. You already [00:34:00] see that with OpenAI today, when some people complain that too many of the responses have too much "well, actually" in them, where you ask a question and it's like, but you should remember that's actually not good, and remember this other side of the story, and all of that. And some people don't want that in their automated generation. So I'm really curious, and to Sean's point before about importing guardrails into products: if there's a default amount of guardrails that you have, and you're the provider of it, that's really powerful. And then maybe there's a faction that is against guardrails, and they wanna break out, they wanna be free. So it's interesting times.

Shreya: To that point, I was actually chatting with someone who was building an application for content creators, where authenticity was a big requirement of what they cared about in the output. And within authenticity, the reason conventional models were not good for them is that the models already have a lot of, quote-unquote, guardrails, to [00:35:00] appeal to certain sections of the audience, to be very cleaned up. And that was an undesirable trait for them, because it almost took away from that authenticity. So there's this idea that what a guardrail means is so different for different applications. I guess there are about 20 or so things in there; I think there are a few more that I added this morning, which are not in the docs yet. But for a lot of the common workflows, you do have an understanding of what an appropriate constraint is, right? Of course, for things like summarization, for things like text-to-SQL.
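The reading-time guard she mentions is simple to picture. A minimal sketch, assuming an average reading speed of 200 words per minute; the threshold and the words-per-minute figure are illustrative, not Guardrails' actual constants.

```python
WPM = 200  # assumed average reading speed

def reading_time_ok(text: str, max_seconds: float):
    """Check one copy block's reading time; return (ok, error_message)."""
    seconds = len(text.split()) / WPM * 60
    if seconds <= max_seconds:
        return True, None
    # On failure, a framework could re-ask: "shorten this summary".
    return False, f"summary takes ~{seconds:.0f}s to read, limit {max_seconds:.0f}s"

ok, err = reading_time_ok("word " * 500, max_seconds=60)
print(ok, err)  # False: ~150s of reading against a 60s limit
```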
Shreya: But there's also this wide variety of applications, which are so fascinating to learn about, where you'd wanna build something in-house, something that is your secret sauce. And so how Guardrails is designed, or my intention in designing it, is: here's a way of breaking down this problem of getting some determinism, getting some guarantees, from your LLM outputs. [00:36:00] And you can use this framework and go crazy with it; build whatever you want. If you want the output to be more authentic, or less clean, or whatever, you can add that in there, like making sure that it does have maybe some profanity, if that's a desirable output for you. So the framework side of it is very exciting to me as a way of solving the problem. And then you can build your custom validators, or use the ones that I provide out of the box.
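As a sketch of what "build your custom validators" can look like in practice: a plain function that checks a value and proposes a programmatic fix. Real frameworks wrap this in a registration decorator or class; the shape below is framework-agnostic, and the competitor-redaction rule is a hypothetical in-house policy (it also happens to preview the plugin question that comes next).

```python
import re

def no_competitor_mentions(value: str, competitors: list[str]):
    """Custom validator: returns (ok, fixed_value). Redact rather than re-ask."""
    pattern = re.compile(
        "|".join(map(re.escape, competitors)), re.IGNORECASE
    )
    if not pattern.search(value):
        return True, value
    # Corrective action: a cheap programmatic fix on the client side.
    return False, pattern.sub("[redacted]", value)

ok, fixed = no_competitor_mentions(
    "Book with us, not with TravelCo!", competitors=["TravelCo"]
)
print(ok, fixed)  # False / "Book with us, not with [redacted]!"
```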
So this framework, I think, like applies very well within those settings.Swyx: I'll go right into, we're gonna broaden out a little bit into commentary on other parts of the ecosystem that might, that might be interesting.So I think you and I. Talks briefly about this, but I think the, the broader population should know about it, which is that you also have an LLM API wrapper. Mm-hmm. So, such that the way, part of the way that guardrails works is you in, inject part of the few shot example into the prompt.Mm-hmm. And then you also do re-asking in all the other stuff post, I dunno what the pipeline is in, in, in your terminology. So essentially you have an API wrapper for open ai.completion.com dot create. But so does LangChain, so does Hellicone so does everyone I can name like five other people who are all fighting essentially for [00:39:00] the base layer, LLM API wrapper.Mm-hmm. I think this is valuable real estate, but I don't know how you like, think about working with other people or do you wanna be the base layer, likeShreya: I feel pretty collaboratively about it. I also feel like there's, like lang chain is doing like, it's so flexible as a framework, right?Like you can solve so many of your problems in there. And I think like it's, I, I have like a lang chain integration. I have a GPT Index / Llama integration, et cetera. And I think my view on this is that I wanna integrate with everybody. I think it is valuable real estate. It's not personally real estate that I'm interested in.Like you can essentially bring the LLM callable or the LLM API that's in there. It's just like some stub of a function that you can just add your favorite thing in there, right? It just, the only requirement is that string in first string output, that is all the requirement. And then you can bring in your own favorite component from your own favorite library in order to do that.And so, yeah, it's, I think like I'm pretty focused on this problem of like what is the guardrail that you would wanna build for a certain applications? So it's valuable real estate. I'm sure that people don't own [00:40:00] it.Swyx: It's, as long as people give you a way to insert your stuff, you're good.Shreya: Yeah, yeah. Yeah. I do think that, like I've chat with a bunch of people and then different applications and I do think that the abstractions that I have haven't failed me yet. Like it is very flexible. It is very easy to slot in into any workflow. Yeah.Swyx: I would love to ask about the meta elements of working on guardrails.This is your first company, but you launched five things this morning. The pace of the good AI projects that I've seen out there, like LangChain launches 10 things a week or whatever, I don't know. Surely that's something that you prioritize. How do you, how do you think about like, shipping versus like going going back and like testing and working in community and all the other stuff that you're managing?How do you prioritize? Shreya: That's such a wonderful question. Yeah. A very hard question as well. I don't know if I would have a good answer for this. I think right now it's instinctive. Like I have a whole kind of stack ranked list of like things I wanna do and features I wanna build and like, support, et cetera.Combined with that is like a feature request I get or maybe some bugs, et cetera, that folks report. So I'm pretty focused on like any failures, any [00:41:00] feature requests from the community. So if those come up, I th those tend to Trump like anything else that I'm working on. 
Shreya: But outside of that, I have this whole pool of ideas and features I wanna build, and I constantly keep stack-ranking them and pushing things out. I'm thinking about this problem constantly, and as a function of that, I have a ton of ideas for what would be cool to build, and what the right way to do certain things would be. I keep jotting them down, and every time I cross something off the list, I think about the next exciting thing to work on. Simultaneously with that, and we mentioned this at the beginning of the conversation, there's this question of what the right interface for RAIL is. Is it the XML? Is it code? So those are fundamental design questions, and I'm collaborating with folks and trying to figure that out now. That's a parallel project, and I'm hoping it'll be out soon.

Swyx: In terms of the levers: let's just say in a typical week, is it like 50% [00:42:00] calls with partners and potential users, just understanding your use cases, and 50% building? Would you move that percentage anywhere? Would you add in something significant?

Shreya: I think it's frankly very variable week to week. Early on, when I released Guardrails, I was like, here's how I'm thinking about this problem, right?

Swyx: Don't need anyone else.

Shreya: No, actually to the contrary. It was: I'm very opinionated about what the right way to solve this is, and these are all the problems I've thought about, and I know this framework maps well to this set of problems. So, what are your problems? There's this whole other big population of people that are building, and I basically wanna make sure that I have user empathy, that I'm able to understand what people are doing, and make sure the framework maps well. So I did a lot of that immediately after the release, talking to a lot of teams and a lot of users. Since then, I feel I have a fair idea of what's great about it, what's mediocre about it, and what's not good about it, and that helps guide my prioritization list of what I [00:43:00] wanna ship and build. So now it's back to being, I would say, more balanced.

Alessio: All the companies we work with that are in open source, I always try to have them think through open source as a distribution model, or a development model. I was looking at the contributors list, and you have by far the most code; the second-largest contributor is your husband, and after that it drops by an order of magnitude. What have you found working in open source, on a very fast-moving project, for the first time? Is the community, not just your husband, like a superpower to you? Do you feel like having to explain why you're doing things a certain way, getting people's buy-in, maybe slows you down when things move so quickly? I'm always interested to hear people's thoughts.

Shreya: Oh, that's a good question.
Shreya: I think Guardrails is at that stage where [00:44:00] I have feature requests and I have contributors, but right now I'm doing the bulk of supporting those feature requests, et cetera. So a goal for me, and I remember we chatted about this when we spoke last, is getting to the point where you start nurturing and getting more contributions from the open source community. I think that's one of the next goals for me. It's been pretty fun up until now, because I haven't made any big breaking API changes, et cetera, so I haven't needed that community input. One of the big ones coming right now is the code-first API for creating rails, and that's important for nailing the user experience, et cetera. So with the collaborators I'm working with, there's basically an RFC and community input on what the best way to do that would be. And that's frankly been pretty fun as well, seeing the community be opinionated: here's how I'm doing it, this works for me, this doesn't work for me, et cetera. So that's been new for me too. At my [00:45:00] previous company we also had an open source project, and it was built on open source, but this is the first time that I've created an open source project with that level of engagement, so that's been pretty fun.

Swyx: I'm always curious about potential future business models, monetization, anything like that.

Shreya: Yeah, I think I'm interested in entrepreneurship generally, honestly. Trying to figure out all of those questions, like business model...

Swyx: I think a lot of people are in your shoes, right? They're developers, they see a lot of energy, and they would like to start working on open source projects. What is the deciding factor? What do you think people should think about when deciding whether, hey, this is just a project that I maintain, versus, nope, I'm going to do the whole thing: get funding and all that?

Shreya: I think for me, well, I'm already working on the open source full time. The motivating thing for me was that, okay, this is a problem that will need to get solved one way or another. This ties back to the invariants we talked about earlier. And I do think that if there's a contraction or a correction and [00:46:00] these LLMs don't have the kind of impact that we're all hoping they will, it would be because of this problem: people finding that they're not as useful when running at very large scales, when running in production, et cetera. So that gave me a lot of conviction that it's something I wanted to work on, and that was the switch for me. It gave me the conviction to, for example, quit my job. Yeah. Also, yeah, slightly confidential. Off the record.

Swyx: Off the record, yeah.

Alessio: Yeah, we're not gonna talk about the special project at Apple. That's very secret.
Alessio: But you overlapped at Apple with Ian Goodfellow, who is obviously a very public figure in the AI space.

Swyx: Actually, not that many people know what he did, so maybe you can introduce Ian Goodfellow as well.

Shreya: Yeah, so Ian Goodfellow is the creator of GANs, generative adversarial networks. This was, I'm gonna mess up the year, 2014, 2015-ish, if I remember correctly. He basically created GANs as a PhD student, and he has a pretty interesting story of how he thought of them and how [00:47:00] he built them. I'm sure there are interviews and podcasts, et cetera, with him where he talks about how he got the idea for it, and how he wrote the paper and did the experiments. So GANs were essentially the first wave of generative images, where you would see fake, auto-generated images conditioned on certain distributions. And there were many variants of GANs, like DCGAN, and, I'm gonna mess up the pronunciation, so I'm just gonna call it WGAN, where you would see this really wonderful generative art. And I got the chance to work with him while at Apple. He had just moved to Apple from Google Brain and was building the cross-functional machine learning team within SPG, and I got the chance to work with him, which was very exciting. I learned so much, he is a fantastic manager, and yeah, I really, really enjoyed working with him.

Alessio: And then he quit his job when they forced him to go back to the office, right?

Swyx: Oh, really? I didn't see that.

Alessio: I think basically Apple was like, you gotta go [00:48:00] back to the office, and he said peace.

Swyx: I'm curious, what are some things that you learned from Ian, or maybe some stories that...

Shreya: Could be interesting? So there's one that's maybe machine-learning-specific, and one that's maybe not, just general career stuff. The ML-specific one was that, at a very high level, working with him, you just truly see the creativity. After I worked with him, I was like, yeah, I totally get it: it's so obvious that this is the guy who made GANs work, basically. When he does machine learning, and when he thinks about problems to solve, he thinks about them in a very creative, out-of-the-box way. And we saw that with some of the problems he was working on; anytime he had feedback or suggestions on the approaches I was taking, I was like, wow, this is really exciting and very creative. Yeah, it was very cool to work with him. So that was the very high-level machine learning one.

Swyx: I think Apple's standing by with a blow dart if you say any more.

Shreya: The non-technical stuff, which [00:49:00] I think truly made him such a fantastic manager: when I went to Apple, I was maybe a year out of school, into my job at that point. And I remember that, like most new grads, I had this attitude of, okay, I need to solve this problem on my own before I get external help. Yeah.
Shreya: And in one of my first, I think probably my first or second, weeks, Ian and I were pair programming, and I remember that we were working together and some setup issues were happening. And he would wait like ex

Let's Talk AI
#122 - AI for Word and Excel, leaked Google memo, ImageBind, LLaVA, robot soccer, Midjourney 5.1

Let's Talk AI

Play Episode Listen Later May 13, 2023 122:14


Our 122nd episode with a summary and discussion of last week's big AI news!

Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai
Check out the No Priors podcast: https://link.chtbl.com/lastweekinainopriors
Check out Jeremie's new book Quantum Physics Made Me Do It

Outline:
(00:00) Intro
(05:20) Response to listener comments / corrections
(07:20) News Preview

Tools & Apps
(07:50) Microsoft 365's AI-powered Copilot is getting more features and paid access
(11:26) Informatica goes all in on generative AI with Claire GPT
(15:11) LinkedIn's new AI will write messages to hiring managers
(17:30) Waymo One doubles service area in Phoenix and continues growing in San Francisco

Applications & Business
(20:58) "We Have No Moat, And Neither Does OpenAI"
(27:30) AI will create 'a serious number of losers', DeepMind co-founder warns
(31:37) IBM takes another shot at Watson as A.I. boom picks up steam
(34:11) IBM to Pause Hiring for Jobs That AI Could Do
(36:30) Peter Thiel's Palantir is seeing 'unprecedented' demand for its military A.I. that its CEO calls 'a weapon that will allow you to win'
(38:42) Palantir Demos AI to Fight Wars But Says It Will Be Totally Ethical Don't Worry About It
(40:42) Chegg CEO calls 48% stock plunge over ChatGPT fears 'extraordinarily overblown'
(43:03) Microsoft Working With AMD on Expansion Into AI Processors
(45:35) Generative AI startup Runway just raised $100 million at a $1.5 billion valuation from a cloud service provider
(47:50) Top ex-Google AI researchers raise funding from Thrive Capital

Projects & Open Source
(51:22) Meta open-sources multisensory AI model that combines six types of data
(54:45) No Cloud Required: Chatbot Runs Locally on iPhones, Old PCs
(57:15) Hugging Face and ServiceNow release a free code-generating model

Research & Advancements
(59:27) Meet LLaVA: A Large Language Multimodal Model and Vision Assistant that Connects a Vision Encoder and Vicuna for General-Purpose Visual and Language Understanding
(01:04:29) Language models can explain neurons in language models
(01:11:55) A.I. Is Getting Better at Mind-Reading
(01:14:27) AI could run a million microbial experiments per year
(01:15:54) Scurrying centipedes inspire many-legged robots that can traverse difficult landscapes
(01:17:38) Little Robots Learn to Drive Fast in the Real World
(01:20:03) Latest pitch for AI: DeepMind-trained soccer robots

Policy & Safety
(01:21:47) China's AI industry barely slowed by US chip export rules
(01:26:10) Anthropic thinks 'constitutional AI' is the best way to train models + Claude's Constitution
(01:32:45) An AI Scraping Tool Is Overwhelming Websites With Traffic
(01:36:16) 'Mom, these bad men have me': She believes scammers cloned her daughter's voice in a fake kidnapping
(01:39:18) Bill would require disclosure of AI-generated content in political ads

Art & Fun Stuff
(01:40:46) Unions Representing Hollywood Writers and Actors Seek Limits on A.I. and Chatbots
(01:44:56) Inside the Discord Where Thousands of Rogue Producers Are Making AI Music
(01:46:45) Spotify removes thousands of AI-generated songs
(01:49:19) Amnesty International Uses AI-Generated Images of Colombian Human Rights Abuses
(01:53:20) Midjourney 5.1 Arrives - And It's Another Leap Forward For AI Art
'Indiana Jones 5' will feature a de-aged Harrison Ford for the first 25 minutes
(01:55:20) Listener Question - AI as a career + what to do in college

Marketing Against The Grain
Leaked Google Memo Reveals A Huge Opportunity For Entrepreneurs (#117)

Marketing Against The Grain

Play Episode Listen Later May 11, 2023 25:10


What Meta is doing on the A.I. front is going to impact all marketers and businesses. Kipp and Kieran dive into what the leaked Google memo means and why Meta is secretly winning the A.I. wars. Learn why open source is going to win the next generation of the internet, the power of open source, and why A.I. will be democratized.

Mentions
Google's leaked memo: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Vicuna: https://lmsys.org/blog/2023-03-30-vicuna/

We're on Social Media! Follow us for everyday marketing wisdom straight to your feed
YouTube: https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg
Twitter: https://twitter.com/matgpod
TikTok: https://www.tiktok.com/@matgpod

Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934

If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support.

Host Links:
Kipp Bodnar, https://twitter.com/kippbodnar
Kieran Flanagan, https://twitter.com/searchbrat

'Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Produced by Darren Clarke.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The AI Founder Gene: Being Early, Building Fast, and Believing in Greatness — with Sharif Shameem of Lexica

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later May 8, 2023 50:37


Thanks to the over 42,000 Latent Space explorers who checked out our Replit episode! We are hosting/attending a couple more events in SF and NYC this month. See you if you're in town!

Lexica.art was introduced to the world 24 hours after the release of Stable Diffusion as a search engine for prompts, gaining instant product-market fit as a world discovering generative AI also found they needed to learn prompting by example.

Lexica is now 8 months old, serving 5B image searches/day, and just shipped V3 of Lexica Aperture, their own text-to-image model! Sharif Shameem breaks his podcast hiatus with us for an exclusive interview covering his journey building everything with AI!

The conversation is nominally about Sharif's journey through his three startups VectorDash, Debuild, and now Lexica, but really a deeper introspection into what it takes to be a top founder in the fastest moving tech startup scene (possibly ever) of AI. We hope you enjoy this conversation as much as we did!

Full transcript is below the fold. We would really appreciate if you shared our pod with friends on Twitter, LinkedIn, Mastodon, Bluesky, or your social media poison of choice!

Timestamps
* [00:00] Introducing Sharif
* [02:00] VectorDash
* [05:00] The GPT3 Moment and Building Debuild
* [09:00] Stable Diffusion and Lexica
* [11:00] Lexica's Launch & How it Works
* [15:00] Being Chronically Early
* [16:00] From Search to Custom Models
* [17:00] AI Grant Learnings
* [19:30] The Text to Image Illuminati?
* [20:30] How to Learn to Train Models
* [24:00] The future of Agents and Human Intervention
* [29:30] GPT4 and Multimodality
* [33:30] Sharif's Startup Manual
* [38:30] Lexica Aperture V1/2/3
* [40:00] Request for AI Startup - LLM Tools
* [41:00] Sequencing your Genome
* [42:00] Believe in Doing Great Things
* [44:30] Lightning Round

Show Notes
* Sharif's website, Twitter, LinkedIn
* VectorDash (5x cheaper than AWS)
* Debuild: Insider, Fast Company, MIT Review, tweet, tweet
* Lexica
* Introducing Lexica
* Lexica Stats
* Aug: "God mode" search
* Sep: Lexica API
* Sept: Search engine with CLIP
* Sept: Reverse image search
* Nov: teasing Aperture
* Dec: Aperture v1
* Dec: Aperture v2
* Jan 2023: Outpainting
* Apr 2023: Aperture v3
* Same.energy
* AI Grant
* Sharif on Agents: prescient AirPods tweet, Reflection
* MiniGPT4 - Sharif on Multimodality
* Sharif Startup Manual
* Sharif Future
* 23andMe Genome Sequencing Tool: Promethease
* Lightning Round
* Fave AI Product: Cursor.so. Swyx: ChatGPT Menubar App.
* Acceleration: Multimodality of GPT4. Animated Drawings
* Request for Startup: Tools for LLMs, Brex for GPT Agents
* Message: Build Weird Ideas!

Transcript

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners. I'm joined by my co-host swyx, writer and editor of Latent Space. And today we have Sharif Shameem. Welcome to the studio.

Sharif: Awesome. Thanks for the invite.

Swyx: Really glad to have you.

[00:00] Introducing Sharif

Swyx: You've been a dream guest, actually, since we started drafting guest lists for this pod. So glad we could finally make this happen. So what I like to do is usually introduce people, offer their LinkedIn, and then prompt you for what's not on your LinkedIn, to get a little bit of the person behind the awesome projects. So you graduated University of Maryland in CS.

Sharif: So I actually didn't graduate, but I did study.

Swyx: You did not graduate. You dropped out.

Sharif: I did drop out.

Swyx: What was the decision behind dropping out?
Sharif: So first of all, I wasn't doing too well in any of my classes. I was working on a side project that took up most of my time. Then I spoke to this guy who ended up being one of our investors, and he was like, actually, I ended up dropping out. I did YC, my company didn't end up working out, and I returned to school and graduated along with my friends. I was like, oh, it's actually a reversible decision. And that was that. And then I read this book called The Case Against Education by Bryan Caplan. So those two things kind of sealed the deal for me on dropping out.

Swyx: Are you still on hiatus? Could you still theoretically go back?

Sharif: Theoretically, probably. Yeah. Still on indefinite leave.

Swyx: Then you did some work at MITRE?

Sharif: MITRE, yeah. So they're lesser known. They're technically an FFRDC, a federally funded research and development center, so they're kind of like a large government contractor, but nonprofit. Yeah, I did some computer vision work there as well.

[02:00] VectorDash

Swyx: But it seems like you always have an independent founder bone in you, because then you started working on VectorDash, which is distributed GPUs.

Sharif: Yes. Yeah. So VectorDash was a really fun project that we ended up working on for a while. So while I was at MITRE, I had a friend who was mining Ethereum. This was, I think, 2016 or 2017.

Swyx: Oh my God.

Sharif: Yeah. And he was mining on his NVIDIA 1080 Ti, making around five or six dollars a day. And I was trying to train a character recurrent neural network, a char-RNN, on my iMessage text messages, to make it like a chatbot, because I was just curious if I could do it. Because iMessage stores all your past messages from years ago in a SQL database, which is pretty nifty (a sketch of reading that database follows below). But I wanted to train it, and I needed a GPU. And it was, I think, $60 to $80 for a T4 on AWS, which is really slow compared to a 1080 Ti. If you normalize the cost and performance versus the 1080 Ti when someone's mining Ethereum, it's like a 20x difference. So I was like, hey, his name was Alex: Alex, I'll give you 10 bucks per day if you let me borrow your 1080 Ti for a week. And it was like 70 bucks, and I used it to train my model. And it worked great. The model was really bad, but the whole trade worked really great: I got a really high-performance GPU to train my model on, and he got much more than he was making by mining Ethereum. So we had this idea: what if we built a marketplace where people mining cryptocurrency could rent out their GPUs, and machine learning researchers could rent them and pay a lot less than they would pay AWS? And it worked pretty well. We launched in a few months. We had over 120,000 NVIDIA GPUs on the platform, and then we were the cheapest GPU cloud provider for a solid year or so. You could rent a pretty solid GPU for like 20 cents an hour, and cryptocurrency miners were making more than they would make mining crypto, because this was after the Ethereum crash. And yeah, it was pretty cool. It just turns out that a lot of our customers were college students and researchers who didn't have much money, and they weren't necessarily the best customers to have as a business. Startups had a ton of credits, and larger companies were like, actually, we don't really trust you with our data, which makes sense. So we ended up pivoting to becoming a cloud GPU provider for video games. We would stream games from our GPUs.
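Earlier in this answer Sharif mentions that iMessage keeps your history in a local SQL database. For the curious, here is a hedged sketch of reading it with Python's sqlite3; the macOS path and the `message` table schema are assumptions that have changed across OS versions, so treat this as illustrative only.

```python
import sqlite3
from pathlib import Path

# Assumed macOS location of the iMessage store (varies by OS version).
db_path = Path.home() / "Library" / "Messages" / "chat.db"

conn = sqlite3.connect(db_path)
rows = conn.execute(
    "SELECT text FROM message "
    "WHERE text IS NOT NULL ORDER BY date LIMIT 10"
).fetchall()
for (text,) in rows:
    print(text)  # raw material for a toy char-RNN chatbot
```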
Sharif: Oftentimes they were located just a few blocks away from you, because we had the lowest latency of any cloud GPU provider, even lower than AWS and sometimes Cloudflare. And we decided to build a cloud gaming platform where you could pretty much play your own games on the GPU and then stream it back to your Mac or PC.

Swyx: So Stadia before Stadia.

Sharif: Yeah, Stadia before Stadia. Like a year or so before Stadia.

Swyx: Wow. Weren't you jealous of, I mean, I don't know, it sounds like Stadia could have bought you, or Google could have bought you for Stadia, and that never happened?

Sharif: It never happened. Yeah, it didn't end up working out, for a few reasons. The biggest thing was internet bandwidth. A lot of the GPU hosts had lots of GPUs, but average upload bandwidth in the United States is only 35 megabits per second, I think, and a 4K stream needs a minimum of like 15 to 20 megabits per second. So you could really only utilize one of those GPUs, even if they had 60 or 100.

[05:00] The GPT3 Moment and Building Debuild

Swyx: And then you went to Debuild; July 2020 is the date that I have. I'm actually kind of just curious, what was your GPT-3 aha moment? When were you GPT-3-pilled?

Sharif: Okay, so I first heard about it because I was also working on another chatbot; everything ties back to this chatbot I'm trying to make. This was after working on VectorDash. I was just hacking on random projects. I wanted to make the chatbot using not really GPT-2, but rather something pre-programmed. It was pretty much: you would give it a goal, and then it would ask you throughout the week how much progress you're making toward that goal. It would take your unstructured response, usually a reply to a text message, and plot it for you in a table, and you could see your progress over time. It could be for running, or tracking calories. But I wanted to use GPT-3 to make it seem more natural. Because I remember someone on Bookface, which is YC's internal forum, posted, and they were like: OpenAI just released AGI and it's GPT-3. I asked it a bunch of logic puzzles and it solved them all perfectly. And I was like, what? How is no one else talking about this? Like, this is either the greatest thing ever that everyone is missing, or it's not that good. So I tweeted out asking if anyone could get me access to it. A few hours later, Greg Brockman responded.

Swyx: He is everywhere.

Sharif: He's great. Yeah, he's on top of things. And by that afternoon, I was messing around with the API, and I was like, wow, this is incredible. You could chat with fake people, or people who have passed away. I remember the first conversation I did was, "this is a chat with Steve Jobs," and it was like, "Interviewer: Hi, what are you up to today, Steve?" And then you could talk to Steve Jobs, and it was somewhat plausible. Oh, but the thing that really blew my mind was when I tried to generate code with it. I'd write the header for a JavaScript function, and it would complete the rest of the function. I was like, whoa, does this code actually work? I copied it and ran it, and it worked. And I tried it again.
I gave it more complex things, and I kind of understood where it would break, which was if it was something you couldn't easily describe, and contain all the logic for, in a single sentence. So I wanted to build a way where I could visually test whether these functions were actually working. And what I was doing was generating the code in the playground, copying it into my VS Code editor, running it, and then reloading the React development page. And I was like, okay, cool, that works. So I was like, wait, let me just put this all in the same page, so I can compile in the browser, run it in the browser, and then submit it to the API in the browser as well. So I did that. And it was really just a simple loop where you type in the prompt, it would generate the code and then compile it directly in the browser, and it showed you the response. And I did this for very basic JSX React components. I mean, it worked. It was pretty mind blowing. I remember staying up all night working on it. It was the coolest thing I'd ever worked on at the time. And it was so mind blowing that no one was talking about this whole GPT-3 thing. I was like, why is this not on everyone's minds? So I recorded a quick 30 second demo and posted it on Twitter, and I went to bed after staying awake for like 20 hours straight. When I woke up the next morning, I had like 20,000 likes and like 100,000 people had viewed it. I was like, oh, this is so cool. And then I just kept putting demos out for the next week. And yeah, that was my GPT-3 spark moment. Swyx: And you got featured in Fast Company, MIT Tech Review, a bunch of stuff, right? Sharif: Yeah. Yeah. I think a lot of it was just that the API had already been there for like a month prior. Swyx: Not everyone had access. Sharif: That's true. Not everyone had access. Swyx: So you just had the gumption to tweet it out. And obviously, Greg, you know, on top of things as always. Sharif: Yeah. I think it also makes a lot of sense when you share things in a way that's easily consumable for people to understand. If you had shown a terminal screenshot of it generating code, that'd be a lot less compelling, whereas seeing it get rendered and compiled directly in front of you is a lot more interesting. There's also that human aspect to it, where you want to relate things to the end user. No one really cares about evals when you can create a much more compelling demo than explaining how it does on certain tasks.

[09:00] Stable Diffusion and Lexica

Swyx: Okay. We'll round it out soon. But in 2022, you moved from Debuild to Lexica, which was the search engine. I assume this was inspired by Stable Diffusion, but can you give me the history there a little bit? Sharif: Yeah. So I was still working on Debuild. We were growing at a modest pace, and I was in the Stable... Swyx: I was on the signup list. I never got off. Sharif: Oh yeah. Well, we'll get you off. It's not getting many updates anymore. But yeah, I was in the Stable Diffusion Discord, and I was in it for many hours a day. It was the most exciting thing I'd ever done in a Discord. It was so cool. People were generating so many images, but I didn't really know how to write prompts, and people were writing really complicated things.
They would be like, 'a modern home, trending on ArtStation, by Greg Rutkowski, 4K, Unreal Engine.' There's no way all of that actually makes the images look better. But everyone was just kind of copying everyone else's prompts and changing the first few words. Swyx: Yeah. Yeah. Sharif: So I was using the Discord search bar, and it was really bad because it showed like five images at a time. And I was like, you know what? I could build a much better interface for this. So I ended up scraping the entire Discord. It was like 10 million images. I put them in a database, and I pretty much built a very basic search engine where you could just type a word, and it returned all the prompts that had that word. And I built the entire website for it in about two days. And I shipped it the day after the Stable Diffusion weights were open sourced, so about 24 hours later. And it took off in a way that I never would have expected. I thought it'd be this cool utility that hardcore Stable Diffusion users would find useful. But it turns out that almost anyone who mentioned Stable Diffusion would also kind of mention Lexica in conjunction with it. I think it's because it captured the zeitgeist in an easy to share way, where it's this URL and there's this gallery and you can search. Whereas running the model locally was a lot harder. You'd have to deploy it on your own GPU, set up your own environment, and do all that stuff. Swyx: My takeaway: I have two more to add to the reasons why Lexica worked at the time. One is, lower latency is all you need. So in other words, instead of waiting a minute for your image, you could just search and find stuff that other people have done. That's good. And then two is, everyone knew how to search already, but people didn't know how to prompt. So you were the bridge. Sharif: That's true. Yeah. You would get a lot better looking images by typing a one word search versus prompting with that one word. Yeah. Swyx: Yeah. That is interesting.

[11:00] Lexica's Explosion at Launch

Alessio: The numbers kind of speak for themselves, right? 24 hours post launch: 51,000 queries, 2.2 terabytes in bandwidth. Going back to the bandwidth problem that you had before, you would have definitely run into that. Day two, you doubled that: 111,000 queries, four and a half terabytes in bandwidth, 22 million images served. So it's pretty crazy. Sharif: Yeah. I think we're doing over 5 billion images served per month now. It's pretty crazy how much things have changed since then. Swyx: Yeah. I'm still showing people, even today, you know, it's been a few months now. This is where you start to learn image prompting, because they don't know. Sharif: Yeah, it is interesting. And it's weird, because I didn't really think it would be a company. I thought it would just be a cool utility or a cool tool that I would use for myself. And I really was just building it for myself, just because I didn't want to use the Discord search bar. But yeah, it was interesting that a lot of other people found it pretty useful as well.

[11:00] How Lexica Works

Swyx: So there's a lot of things that you released in a short amount of time.
The god mode search was obviously the first thing. I guess, maybe to talk about some of the underlying technology: you're using CLIP to kind of go from image to description and then let people search it. Maybe talk a little bit about what it takes to actually make the search magic happen. Sharif: Yeah. So the original search was just using Postgres' full text search, and it would only search the text contents of the prompt. But I was inspired by another website called Same Energy, which is like a visual search engine. It's really cool. Do you know what happened to that guy? Swyx: I don't. He released it and then he disappeared from the internet. Sharif: I don't know what happened to him, but I'm sure he's working on something really cool. He also worked on Tabnine, which was like the very first version of Copilot, or even before Copilot was Copilot. But yeah, inspired by that, I thought being able to search images by their semantics, the contents of the image, was really interesting. So I pretty much decided to create a search index on the CLIP image embeddings of all the images. And when you would search, we would just do KNN search on the image embedding index. I mean, we had way too many embeddings to store in a regular database, so we ended up using FAISS, which is a Facebook library for really fast KNN and embedding search. That was pretty fun to set up. It actually runs only on CPUs, which is really cool. It's super efficient. You compute the embeddings on GPUs, but you can serve it all on an eight core server, and it's really, really fast. Once we released the semantic search on the CLIP embeddings, people were using the search way more. And you could do other cool things. You could do similar image search, where if you found a specific image you liked, you could upload it and it would show you relevant images as well. Swyx: And then right after that, you raised your seed money from AI Grant, Nat Friedman, and Daniel Gross. Sharif: Yeah, we raised about $5 million from Daniel Gross. And then we also participated in AI Grant. That was pretty cool. That was kind of the inflection point. Not much before that point, Lexica was kind of still a side project. And I told myself that I would consider focusing on it full time if we broke like a million users. I was like, oh, that's going to be years away for sure. And then we ended up doing that in the first week and a half. I was like, okay, there's something here. And at the time Debuild was growing pretty slowly and pretty linearly, and Lexica was just this thing that kept going up and up and up. And I was so confused. I was like, man, people really like looking at pictures. This is crazy. Yeah. And then we decided to pivot the entire company and just focus on Lexica full time at that point. And then we raised our seed round.

[15:00] Being Chronically Early

Swyx: Yeah. So one thing that you casually let slip: you said you were working on Lexica before the launch of Stable Diffusion, such that you were able to launch Lexica one day after Stable Diffusion. Sharif: Yeah. Swyx: How did you get so early into Stable Diffusion? Cause I didn't hear about it. Sharif: Oh, that's a good question. Where did I first hear about Stable Diffusion? I'm not entirely sure. It must've been somewhere on Twitter or something.
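A minimal sketch of the CLIP-embeddings-plus-FAISS search described above, assuming the openai/CLIP and faiss-cpu packages; the image paths are hypothetical, and Lexica's real pipeline is only described at this level of detail:

```python
# Minimal sketch: semantic image search with CLIP embeddings + FAISS KNN.
# Assumes: pip install faiss-cpu torch pillow git+https://github.com/openai/CLIP
import clip
import faiss
import numpy as np
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # 512-dim embeddings

def embed_images(paths):
    # Encode each image into a CLIP vector, normalized so that
    # inner product search behaves like cosine similarity.
    vecs = []
    with torch.no_grad():
        for p in paths:
            img = preprocess(Image.open(p)).unsqueeze(0).to(device)
            vecs.append(model.encode_image(img).cpu().numpy())
    x = np.vstack(vecs).astype("float32")
    faiss.normalize_L2(x)
    return x

paths = ["img1.png", "img2.png"]   # hypothetical image files
index = faiss.IndexFlatIP(512)     # exact inner-product KNN, CPU only
index.add(embed_images(paths))

# A text query lands in the same latent space as the images.
with torch.no_grad():
    q = model.encode_text(clip.tokenize(["a cozy cabin at night"]).to(device))
q = q.cpu().numpy().astype("float32")
faiss.normalize_L2(q)
scores, ids = index.search(q, 2)
print([paths[i] for i in ids[0]])
```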
Swyx: That changed your life. Sharif: Yeah, it was great. And I got into the Discord because I'd used DALL-E 2 before, but there were a lot of restrictions in place, where you couldn't generate human faces at the time. You can do that now, but when I first got access to it, you couldn't do any faces. The list of words you couldn't use was quite long. I had a friend from Pakistan, and it couldn't generate anything with the word Pakistan in it for some reason. But Stable Diffusion was kind of the exact opposite, where there were very, very few rules. So that was really fun and interesting, especially seeing the chaos of a bunch of other people also using it right in front of you. That was just so much fun. And I just wanted to do something with it. I thought it was honestly really fun. Swyx: Oh, well, I was just trying to get tips on how to be early on things, cause you're pretty consistently early to things, right? You were Stadia before Stadia, and then obviously you were early on GPT-3. Sharif: Well, Stadia is kind of shut down now, so I don't know if being early to that was a good one. Swyx: I think, you know, just being consistently early to things that have a lot of potential, one of them is going to work out, and that's how you got Lexica.

[16:00] From Search to Custom Models

Alessio: How did you decide to go from search to running your own models for generation? Sharif: That's a good question. So we kind of realized that the way people were using Lexica was they would have Lexica open in one tab, and then in another tab they'd have a Stable Diffusion interface. It would be either a Discord or a locally run interface, like the AUTOMATIC1111 web UI, or something else. I would watch people use it, and they would alt-tab back and forth between Lexica and their other UI. They would scroll through Lexica, click on an image, copy the prompt, and then paste it and maybe change a word or two. And I was like, this should really just be all within Lexica. It'd be so cool if you could just click a button in Lexica, get an editor, and generate your images. And I found myself also doing the alt-tab thing, and it was really frustrating. I was like, man, this is kind of tedious. I really wish it was much simpler. So we just built generations directly within Lexica. I don't remember when we first launched it, I think it was November or December. And yeah, people love generating directly within it.

[17:00] AI Grant Learnings

Swyx: I was also thinking that this was coming out of AI Grant, which, I think, was a very special program. I was just wondering if you learned anything from that special week where everyone was in town. Sharif: Yeah, that was a great week. I loved it. Swyx: Yeah. Bring us in a little bit, cause it was awesome there. Sharif: Oh, sure. Yeah. It's really cool. All the founders in AI Grant are fantastic people. I think the main takeaway from AI Grant was: you have this massive overhang in compute, or in capabilities in terms of these latest AI models, but to the average person, there's really not that many products that are that cool or useful to them.
The latest one that hit the zeitgeist was ChatGPT, which arguably used the same GPT-3 model plus RLHF, but you could have arguably built a decent ChatGPT product just using the original GPT-3 model. And no one really did it. Now, there were some restrictions in place, and OpenAI did only slowly lift them over the months and years after they released the original API. But the core premise behind AI Grant is that there are way more capabilities than there are products. So focus on building really compelling products and get people to use them. And focus less on things like hitting state of the art on evals, and more on getting users to use something. Swyx: Make something people want. Sharif: Exactly. Host: Yeah, we did an episode on LLM benchmarks, and we kind of talked about how the benchmarks constrain what people work on, because if your model is not going to do well on the well-known benchmarks, it's not going to get as much interest and funding. So going at it from a product lens is cool.

[19:30] The Text to Image Illuminati?

Swyx: My hypothesis, when I was seeing the sequence of events for AI Grant and then for Lexica Aperture, was that you had some kind of magical dinner with Emad and David Holz, and then they taught you the secrets of training your own model. Is that how it happens? Sharif: No, there's no secret dinner. The Illuminati of text to image. We did not have a meeting. I mean, even if we did, I wouldn't tell you. But it really boils down to just having good data. If you think about diffusion models, really the only thing they do is learn a distribution of data. So if you have high quality data, it will learn that high quality distribution. Or if you have low quality data, it will learn to generate images that look like they're from that distribution. So really it boils down to the data, the amount of data you have, and the quality of that data. Which means a lot of the work in training high quality models, at least diffusion models, is not really in the model architecture, but rather just in filtering the data in a way that makes sense. So for Lexica, we do a lot of aesthetic scoring on images, and we use the rankings we get from our website, because we get tens of millions of people visiting it every month, so we can capture a lot of rankings. Oh, this person liked this image when they saw this one right next to it; therefore, they probably preferred this one over that. You can do pairwise ranking to rank images and then compute Elo scores. You can also just train aesthetic models that learn to classify an image, whether or not someone will like it, or rank it on a scale of one to ten, for example. So we mostly use a lot of the traffic we get from Lexica to filter our data sets and to train better aesthetic models.

[20:30] How to Learn to Train Models

Swyx: You hadn't been a machine learning engineer before. You've been more of an infrastructure guy. At Debuild, you were more of a prompt engineer with a bit of web design. This was the first time that you were basically training your own model. What was the ramp up like? You know, not to give away any secret sauce, but I think a lot of people who are traditional software engineers are feeling a lot of, I don't know, fear when encountering these kinds of domains. Sharif: Yeah, I think it makes a lot of sense. And to be fair, I didn't have much experience training massive models at this scale before I did it.
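As a toy illustration of the pairwise-ranking-to-Elo idea Sharif just described (the K-factor, base rating, and click data here are all made up):

```python
# Toy sketch: Elo ratings from pairwise image preferences.
# Each (winner, loser) record means a user preferred one image over
# the one shown right next to it. K and the 1500 base are conventional.
from collections import defaultdict

K = 32
ratings = defaultdict(lambda: 1500.0)

def update(winner, loser):
    # Expected win probability from the current rating gap.
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    delta = K * (1.0 - expected_win)
    ratings[winner] += delta
    ratings[loser] -= delta

# Hypothetical click data: (preferred_image, passed_over_image)
for winner, loser in [("img_a", "img_b"), ("img_c", "img_a"), ("img_a", "img_b")]:
    update(winner, loser)

# Highest-rated images float to the top of the training-data filter.
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```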
A lot of times it's really just like when you're first learning to program: you take the problem you're having, Google it, and go through the Stack Overflow posts, and then you figure it out. Ultimately you will get to the answer. It might take you a lot longer than someone who's experienced, but I think there are enough resources out there that it's possible to learn how to do these things. Even just reading through GitHub issues for relevant models. Swyx: Oh God. Sharif: Yeah. You might be slower, but it's definitely still possible. And there are really great courses out there. The fast.ai course is fantastic. There's the deep learning book, which is great for fundamentals. And then Andrej Karpathy's online courses are also excellent, especially for language modeling. You might be a bit slower for the first few months, but ultimately, I think if you have the programming skills, you'll catch up pretty quickly. It's not like this magical dark science that only three people in the world know how to do well. It probably was 10 years ago, but now it's becoming much more open. You have open source collectives like EleutherAI and LAION, where they share the details of their large scale training runs, so you can learn from a lot of those people. Swyx: Yeah. I think what is different for programmers is having to estimate significant costs upfront before they hit run. Because it's not a thing that you normally consider when you're coding, but yeah, burning through your credits is a fear that people have. Sharif: Yeah, that does make sense. In that case, fine tuning larger models gets you really, really far. Even using things like low rank adaptation to fine tune, where you can fine tune much more efficiently on a single GPU. Yeah, I think people are underestimating how far you can really get just using open source models. I mean, before Lexica, I was working on Debuild and we were using the GPT-3 API, but I was also really impressed at how well you could get open source models to run by just using the API, collecting enough samples from real world user feedback or real world user data using your product, and then just fine tuning the smaller open source models on those examples. And now you have a model that's pretty much state of the art for your specific domain, whereas the runtime cost is 10 times or even 100 times cheaper than using an API. Swyx: And was that like GPT-J, or are you talking BERT? Sharif: I remember we tried GPT-J, but I think FLAN-T5 was the best model we were able to use for that use case. FLAN-T5 is awesome. If your prompt is small enough, it's pretty great. And I'm sure there are much better open source models now. Like Vicuna, the LLaMA variant fine tuned on shared ChatGPT outputs. Yeah, they're just going to get better, and they're going to get better much, much faster. Swyx: Yeah. We were just talking in a previous episode to the creator of Dolly, Mike Conover, which is actually commercially usable, instead of Vicuna, which is a research project. Sharif: Oh, wow. Yeah, that's pretty cool.

[24:00] Why No Agents?

Alessio: I know you mentioned being early. Obviously, agents are one of the hot things here. In 2021, you had this 'please buy me AirPods' demo that you tweeted with the GPT-3 API. Obviously, one of the things about being early in this space, you can only do one thing at a time, right?
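Circling back to the low rank adaptation approach Sharif mentioned a moment ago: a minimal sketch of what a LoRA fine tune of FLAN-T5 looks like with the Hugging Face PEFT library. The model choice, hyperparameters, and training pair are illustrative, not what Debuild or Lexica actually used:

```python
# Minimal sketch: LoRA fine-tuning FLAN-T5 with Hugging Face PEFT.
# Assumes: pip install transformers peft torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Low-rank adapter matrices are injected into the attention projections;
# only those small matrices train, so this fits on a single consumer GPU.
config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM,
                    r=8, lora_alpha=32, lora_dropout=0.05)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# One illustrative training step on a made-up (prompt, answer) pair.
batch = tok("Summarize: the demo went great", return_tensors="pt")
labels = tok("great demo", return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss
loss.backward()
print(float(loss))
```

In a real setup you would loop this over samples collected from your product's user data, then serve the small adapted model instead of paying per-token API prices.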
And you had one tweet recently where you said you hoped that that demo would open Pandora's box for a bunch of weird GPT agents, but all we got were docs powered by GPT. Can you maybe talk a little bit about things that you wish you would see? In the last few weeks, we've had HuggingGPT, BabyAGI, AutoGPT, all these different agent projects that maybe now are getting closer to, what did you say, 50% of internet traffic being GPT agents. What are you most excited about with these projects, and what's coming? Sharif: Yeah, so for Debuild, we wanted a way for users to be able to paste in a link to the documentation page for a specific API, and then describe how to call that API. And the way we would need to do that was to get an agent to browse the docs page, read through it, summarize it, and then maybe even do things like create an API key and register it for that user. To do that, we needed a way for the agent to read the web page and interact with it. So I spent about a day working on that demo, where we just took the web page and serialized it into a more compact form that fit within the 2048 token limit of GPT-3 at the time, and then had it decide what action to do. If the page was too long, it would break it down into chunks, and then you would have a sub prompt decide which chunk had the best action, and then at the top node you would just take that action and run it in a loop, as in the sketch below. It was really, really expensive. I think that one 60 second demo cost like a hundred bucks or something. It was wildly impractical. But you could clearly see that agents were going to be a thing, especially ones that could read and write and take actions on the internet. It was just prohibitively expensive at the time, and the context limit was way too small. But yeah, it seems like a lot of people are taking it more seriously now, mostly because GPT-4 is way more capable. The context limit is four times larger at 8,000 tokens, soon 32,000. And I think the only problem that's left to solve is finding a really good representation for a webpage that allows it to be consumed by a text only model. So some examples are: you could just take all the text and pass it in, but that's probably too long. You could take only the interactive elements, like buttons and inputs, but then you miss a lot of the relevant context. One interesting approach, which I really like, is you could run the webpage in a terminal based browser. There are some browsers that run in your terminal, which serialize everything into text. And what you can do is just take that frame from the terminal based browser and pass it directly to the model. And it's a really, really good representation of the webpage, because for graphical elements they render it using ASCII blocks, but for text, they render it as actual text. So you could just remove all the weird graphical elements and keep all the text. And that works surprisingly well. And then there are other problems to solve, which is: how do you get the model to take an action? So for example, if you have a booking page and there's a calendar and there are 30 days on the calendar, how do you get it to specify which button to press? It could say 30, and you can string match and find the 30.
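The chunk-and-decide loop Sharif describes can be sketched roughly like this. llm() is a stand-in for any completion API, and whitespace splitting stands in for a real tokenizer, just to keep the example self-contained:

```python
# Sketch of the chunked decide-an-action loop described above.

BUDGET = 2048  # rough token budget of GPT-3 at the time

def llm(prompt: str) -> str:
    # Stand-in: wire this to the completion API of your choice.
    raise NotImplementedError

def chunks(text, size):
    words = text.split()
    for i in range(0, len(words), size):
        yield " ".join(words[i:i + size])

def choose_action(page_text: str, goal: str) -> str:
    if len(page_text.split()) < BUDGET:
        # Small enough: ask for the next action directly.
        return llm(f"Goal: {goal}\nPage:\n{page_text}\nNext action:")
    # Too long: a sub prompt scores each chunk, then we recurse into the
    # most promising one; the top node ultimately picks the action.
    scored = []
    for c in chunks(page_text, BUDGET // 2):
        # Assumes the model replies with a bare number.
        score = float(llm(f"Goal: {goal}\nChunk:\n{c}\nRate 0-10 how relevant:"))
        scored.append((score, c))
    best = max(scored, key=lambda sc: sc[0])[1]
    return choose_action(best, goal)
```

Since each chunk is half the budget, the recursion always bottoms out, but every level costs one model call per chunk, which is why the 60 second demo ran to a hundred dollars.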
But for example, what if it's a list of friends on Facebook, and you're trying to delete a friend? There might be like 30 delete buttons. How do you specify which one to click on? The model might say, oh, click on the one for Mark. But then you'd have to figure out the delete button in relation to Mark. And there are some ways to solve this. One is, there's a cool Chrome extension called Vimium, which lets you use Vim bindings in your Chrome browser. What it does is, you can press F, and over every interactive element it gives you a character or two characters, and if you type those characters, it presses that button, or it opens or focuses on that input. So you could combine a lot of these ideas and get a really good representation of the web browser in text, and then also give the model a really, really good way to control the browser as well. And I think those two are the core part of the problem. The reasoning ability is definitely there. If a model can score in the top 10% on the bar exam, it can definitely browse a web page. It's really just how you represent the page to the model, and how you get the model to perform actions back on the web page. Really, it's just an engineering problem. Swyx: I have one doubt, which I'd love your thoughts on. How do you get the model to pause when it doesn't have enough information, and ask you for additional information because you under specified your original request? Sharif: This is interesting. I think the only way to do this is to have a corpus where your training data is these sessions of agents browsing the web, and you pretty much figure out where the agents went wrong and replace that with, hey, I need some help. And then if you were to fine tune a larger model on that data set, you would get it to say, hey, I need help, on the instances where it didn't know what to do next. Or if you're using a closed source model like GPT-4, you could probably tell it: if you're uncertain about what to do next, ask the user for help. And it probably would be pretty good at that. I've had to write a lot of integration tests in my engineering days, and like the DOM... Alessio: They might be over. Yeah, I hope so. I hope so. I don't want to deal with that anymore. I don't want to write them the old way. Yeah. But I'm just thinking, you know, we had robots.txt for crawlers. I can definitely see the DOM being reshaped a little bit in terms of accessibility. Sometimes you have to write XPaths that are so long just to get to a button. There should be a better way to do it. And maybe this will drive the change, you know, making it easier for these models to interact with your website. Sharif: There is the Chrome accessibility tree, which is used by screen readers, but a lot of times it's missing a lot of useful information. In a perfect world, everything would be perfectly annotated for screen readers, and we could just use that. That's not the case.

[29:30] GPT4 and Multimodality

Swyx: GPT-4 multimodal: has your buddy Greg hooked you up, and do you think that would solve essentially browser agents or desktop agents? Sharif: Greg has not come through yet, unfortunately. But it would make things a lot easier, especially for graphically heavy web pages. So for example, if you were using Yelp with the map view, it would make a lot of sense to use something like that versus a text based input.
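A toy version of the Vimium-style labeling Sharif described a couple of turns back: every interactive element gets a short code the model can reference unambiguously. The element list here is hypothetical:

```python
# Toy sketch: Vimium-style hint labels for interactive elements.
from itertools import product
from string import ascii_lowercase

def hint_labels(n):
    # aa, ab, ac, ... two-letter codes (676 available, enough for most pages)
    labels = ("".join(p) for p in product(ascii_lowercase, repeat=2))
    return [next(labels) for _ in range(n)]

# Hypothetical extracted DOM elements
elements = ["<button>Delete (Mark)</button>", "<button>Delete (Alice)</button>"]
for label, el in zip(hint_labels(len(elements)), elements):
    # The model can now answer "click ab" instead of guessing which Delete.
    print(label, el)
```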
How do you serialize a map into text? It's kind of hard to do that. So for more complex web pages, that would make it a lot easier. You get a lot more context to the model. I mean, it seems like that multimodal input is very dense, in the sense that it can read text really, really well. So you could probably give it a PDF and it would be able to extract all the text and summarize it. So if it can do that, it could probably do anything on any webpage. Swyx: Yeah. And given that you have some experience integrating CLIP with language models, how would you describe how different GPT-4 is compared to that stuff? Sharif: Yeah. CLIP is entirely different, in the sense that it's really just good at putting images and text into the same latent space. And really the only thing it's useful for is similarity and clustering. Swyx: Like literally Same Energy, right? Sharif: Yeah. Swyx: Yeah. And then there's BLIP and BLIP-2. I don't know if you like those. Sharif: Yeah. BLIP-2 is a lot better. There's actually a new project called, I think, MiniGPT-4. Swyx: Yes. It was just out today. Sharif: Oh, nice. Yeah. It's really cool. It's actually really good. I think that one is based on the LLaMA model. Host: It's BLIP plus LLaMA, right? They run it through BLIP and then have LLaMA interpret your questions, so that you can do visual QA. Sharif: Oh, that's cool. That's really clever. Yeah. Ensemble models are really useful. Host: Well, so I was trying to articulate, cause there's two things people are talking about today. The moment you wake up, you open Hacker News and go, all right, what's the new thing today? One is RedPajama, and the other one is MiniGPT-4. So I was trying to articulate, why is this not GPT-4? What is missing? And my only conclusion was, it just doesn't do OCR yet. But I wonder if there's anything core to this concept of multimodality that means you have to train these things together. What does one model doing all these things do that is different from an ensemble of models that you just kind of duct tape together? Sharif: It's a good question. This is pretty related to interpretability. Like, why do models trained on different modalities within the same model perform better than two models trained separately? I can kind of see why that is the case. It's kind of hard to articulate, but when you have two different models, you get the reasoning abilities of a language model, but also the text or vision understanding of something like CLIP, whereas CLIP clearly lacks the reasoning abilities. But if you could somehow just put them both in the same model, you get the best of both worlds. There were even cases where I think the vision version of GPT-4 scored higher on some tests than the text only version. So there might even be some additional learning from images as well. Swyx: Oh yeah. Well, the easy answer for that was there were some charts in the tests that weren't translated into text. When I read that, I was like, oh yeah, okay, that makes sense. Sharif: That makes sense. I thought it'd just be like, it sees more of the world, therefore it has more tokens. Swyx: So my equivalent of this is, I think it's a well-known fact that adding code to a language model's training corpus increases its ability to do language, not just code.
So the diversity of datasets that represent some kind of internal logic, and code, which is obviously very internally consistent, helps the language model learn some internal structure. So, you know, my ultimate test for GPT-4 is to show it the 'this is not a pipe' image, ask it whether it's a pipe or not, and see what it does. Sharif: Interesting. That is pretty cool. Yeah. Or just give it a screenshot of your VS Code editor and ask it to fix the bug. Yeah, that'd be pretty wild if it could do that. Swyx: That would be adult AGI. That would be the grownup form of AGI.

[33:30] Sharif's Startup Manual

Swyx: On your website, you have this startup manual where you give a bunch of advice. This is fun. One of them was that you should be shipping to production like every two days, every other day. This seems like a great time to do it, because things change every other day. But maybe, yeah, tell some of our listeners a little bit more about how you got to some of these heuristics. You obviously built different projects and iterated on a lot of things. Do you want to reference this? Sharif: Um, sure. Yeah, I'll take a look at it. Swyx: And we'll put this in the show notes, but I just wanted you to have the opportunity to riff on this list, because I think it's a very good list. And which one of them helped you for Lexica, if there's anything interesting. Sharif: So this list is pretty funny. It's mostly just me yelling at myself based on all the mistakes I've made in the past, and me trying to not make them again. Yeah. So the first one, and I think the most important one, is: when you're building a product, try to build the smallest possible version. And I mean, for Lexica, it was literally one screen in a React app with a Postgres database, and it just showed you images. And I don't even know if the first version had search. I think it did, but I'm not sure. I think it was really just a grid of images that were randomized. But yeah, build the absolute smallest thing that can be considered a useful application, and ship it. For Lexica, that was: it helps me write better prompts. That's pretty useful. It's not that useful, but it's good enough. Don't fall into the trap of intellectual indulgence with over-engineering. I think that's a pretty important one for myself, and also for anyone working on new things. Oftentimes you fall into the trap of thinking you need to add more and more things, when in reality, the moment it's useful, you should probably get it in the hands of your users, and they'll kind of set the roadmap for you. I know this has been said millions of times prior, but I think it's really, really important. And I think if I'd spent like two months working on Lexica, adding a bunch of features, it wouldn't have been anywhere near as popular as it was if I had just released the really, really boiled down version alongside the Stable Diffusion release. Yeah. And then there are a few more, like: product development doesn't start until you launch. Think of your initial product as a means to get your users to talk to you. It's also related to the first point, where you really just want people using something as quickly as you can get that to happen. And then a few more are pretty interesting. Create a product people love before you focus on growth.
If your users are spontaneously telling other people to use your product, then you've built something people love. Swyx: It sounds like you've internalized Paul Graham's stuff a lot. Yeah. Because I think he said stuff like that. Sharif: A lot of these are probably just me taking notes from books I found really interesting, or PG essays that were really relevant at the time, and then trying to not forget them. I should probably read this list again. There's some pretty personalized advice for me here. Oh yeah. One of my favorite ones is: don't worry if what you're building doesn't sound like a business. Nobody thought Facebook would be a $500 billion company. It's easy to come up with a business model once you've made something people want. You can even make pretty web forms and turn that into a 200 person company. And then if you click the link, it's to the LinkedIn for Typeform, which is now, I think, like an 800 person company or something like that. So they've grown quite a bit. There you go. Yeah. Pretty web forms are a pretty good business, even though it doesn't sound like it. Yeah. It's worth a billion dollars.

[38:30] Lexica Aperture V1/2/3

Swyx: One way I would like to tie that to the history of Lexica, which we didn't go over, is to just walk us through Aperture V1, V2, V3, which you just released last week, and how maybe some of those principles helped you in that journey. Sharif: Yeah. So V1 was us trying to create a very photorealistic version of our model of Stable Diffusion. V1 actually didn't turn out to be that popular. It turns out people loved generating things that weren't photorealistic. Swyx: Your marketing tweets were popular. Sharif: They were quite popular. So I think at the time you couldn't get Stable Diffusion to generate photorealistic images that were consistent with your prompt that well. It was more like you were sampling from this distribution of images, and you could slightly pick where you sampled from using your prompt. This was mostly just because the CLIP text encoder is not the best text encoder. If you use a real language model, like T5, you get much better results. The T5 XXL model is like a hundred times larger than the CLIP text encoder for Stable Diffusion 1.5. So you could kind of steer it in the general direction, but for more complex prompts, it just didn't work. So a lot of our users actually complained that they preferred the Stable Diffusion 1.5 model over the Aperture model. And it was just because a lot of people were using it to create art and really weird abstract looking pictures, which didn't really work well with a photorealistic model trained solely on photographs. And then for V2, we took that into consideration and just trained it more on a lot of the art images on Lexica. So we took a lot of images that were on Lexica that were art, used that to train aesthetic models that ranked art really well, and then filtered larger sets to train V2. And then V3 is kind of just an improved version of that with much more data. I'm really glad we didn't spend too much time on V1. I think we spent about one month working on it, which is a lot of time, but a lot of the things we learned were useful for training future versions. Swyx: How do you version them? Like, where do you decide, okay, this is V2, this is V3? Sharif: The versions are kind of weird, where you can't really use semantic versioning, because if you have a small update, you wouldn't just bump it to V2.
Versions are kind of used for different base models, I'd say. Each of the versions was a different base model. But we've done fine tunes of the same version and then just released an update without incrementing the version. I think when there's a clear change, where running the same prompt on a model gets you a different image, that should probably be a different version.

[40:00] Request for AI Startup - LLM Tools

Alessio: So the startup manual was about what you can actually do today to make things better. And then you have a whole future page that has tips on everything from what the Siri successor is going to be like, to why everyone's genome should be sequenced, to why we need to develop stimulants with shorter half-lives so that we can sleep better. There's a lot of cool stuff in there. Maybe talk a bit about: when you're a founder, you need to be focused, right? So sometimes there's a lot of things you cannot build, and I feel like this page is a bit of a collection of these. Are there any of these things where you're like, if I were not building Lexica today, this is a very interesting thing? Sharif: Oh man. Yeah. There's a ton of things that I want to build. Off the top of my head, the most exciting one would be better tools for language models. And I mean, not tools that help us use language models, but rather tools for the language models themselves. So things like giving them access to browsers, giving them access to things like payments and credit cards, giving them access to real world robots. It'd be cool if you could have a Boston Dynamics Spot powered by a language model reasoning module, and it would do things for you, like go and pick up your order, stuff like that, entirely autonomously, given high level commands. That'd be the number one thing if I wasn't working on Lexica.

[40:00] Sequencing your Genome

And then there are some other interesting things, like genomics, which I find really cool. There are some pretty cool things you can do with consumer genomics. You can export your genome from 23andMe as a text file, literally a text file of your entire genome. And there is another tool called Promethease, where you upload your 23andMe text file genome, and they map specific SNPs that you have in your genome to studies that have been done on those SNPs. And it tells you really, really useful things about yourself. For example, I have the SNP for this thing called delayed sleep phase syndrome, which makes me go to sleep about three hours later than the general population. I used to always be a night owl and I never knew why. But after using Promethease, it pretty much tells you: oh, you have the specific SNP for DSPS. It's a really tiny percentage of the population, and it's something you should probably know about. And there's a bunch of other things. It tells you your likelihood of getting certain diseases, certain cancers, oftentimes even weird personality traits. I have one of the SNPs for increased risk taking and optimism, which is pretty weird. That's an actual thing. I don't know how. Swyx: This is the founder gene. You should sequence everybody. Sharif: It's pretty cool. And it's like $10 for Promethease and 70 bucks for 23andMe.
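The 23andMe raw export Sharif mentions is just a tab-separated text file of SNPs, so looking one up takes only a few lines. The file name and rsid below are hypothetical, and interpreting what a genotype means is exactly the part that services like Promethease handle:

```python
# Minimal sketch: look up a SNP in a 23andMe raw-data export.
# The file is tab-separated lines of: rsid, chromosome, position, genotype,
# with comment lines starting with '#'.

def lookup_snp(path: str, rsid: str):
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue  # skip header comments
            fields = line.rstrip("\n").split("\t")
            if fields and fields[0] == rsid:
                return {"rsid": fields[0], "chromosome": fields[1],
                        "position": fields[2], "genotype": fields[3]}
    return None  # SNP not covered by this chip

print(lookup_snp("genome_export.txt", "rs12345"))  # hypothetical inputs
```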
And it explains to you how your body works, and the things that are different about you versus the general population. Wow. Highly recommend everyone do it. If you're concerned about privacy, just purchase a 23andMe kit with a fake name. You don't have to use your real name. I didn't use my real name. Swyx: It's just my genes. Worst you can do is clone me. It ties in with what you were talking about with, you know, we want the future to be like this. And people are building uninspired B2B SaaS apps, and you and I had an exchange about this.

[42:00] Believe in Doing Great Things

How can we get more people to believe they can do great things? Sharif: That's a good question. A lot of the things I've been working on with GPT-3 have been trying to solve this by getting people to think about more interesting ideas. I don't really know. I think the low effort version of this is just putting out really compelling demos and getting people inspired. And then the higher effort version is actually building the products yourself and getting people to realize this is even possible in the first place. I think the BabyAGI project and the GPT agent projects on GitHub, in practice today they're not super useful, but I think they're doing an excellent job of getting people incredibly inspired by what can be possible with language models as agents. And also the Stanford paper where they had the mini version of The Sims. Yeah. That one was incredible. That was awesome. Swyx: It was adorable. Did you see the part where they invented day drinking? Sharif: Oh, they did? Swyx: Yeah. You're not supposed to go to these bars in the afternoon, but they were like, we're going to go anyway. Nice. Sharif: That's awesome. Yeah. I think we need more stuff like that. That one paper is probably going to inspire a whole bunch of teams to work on stuff similar to it. Swyx: And that's great. I can't wait for NPCs to actually be something that you talk to in a game and, you know, have their own lives, and you can check in, and they would have their own personalities as well. Sharif: Yeah. This is kind of off topic, but I was playing The Last of Us Part II, and the NPCs in that game are really, really good. If you point a gun at them, they'll beg for their life: please, I have a family. And when you kill people in the game, they're like, oh my God, you shot Alice. They're just NPCs, but they refer to each other by their names and they plead for their lives. And this is just using regular conditional rules on NPC behavior. Imagine how much better it'd be if it was a small GPT-4 agent running in every NPC, and they had the agency to make decisions and plead for their lives. I don't know, you'd feel way more guilty playing that game. Alessio: I'm scared it's going to be too good. I played a lot of hours of Fallout, so I feel like if the NPCs were a lot better, you would spend a lot more time playing the game. Yeah.

[44:30] Lightning Round

Let's jump into the lightning round. First question is your favorite AI product. Sharif: Favorite AI product. The one I use the most is probably ChatGPT. The one I'm most excited about is actually a company in AI Grant. They're working on a version of VS Code that's entirely AI powered.
Cursor, yeah. Cursor, where you give it a prompt and iterate on your code, not by writing code, but rather by just describing the changes you want to make. And it's tightly integrated into the editor itself, so it's not just another plugin. Swyx: As a founder of a low code prompting-to-code company that pivoted, would you advise them to explore some things or stay away from some things? What's your learning there that you would give to them? Sharif: I would focus on one specific type of user. So if I was building a low code tool, I would try not to focus too much on appealing to developers, whereas if I was building an alternative to VS Code, I would focus solely on developers. So in that sense, I think they're doing a pretty good job focusing on developers. Swyx: Are you using Cursor right now? Sharif: I've used it a bit. I haven't converted fully, but I really want to. Okay, it's getting better really, really fast. I can see myself switching over sometime this year if they continue improving it. Swyx: Hot tip for ChatGPT: people always say, you know, they love ChatGPT. Biggest upgrade to my life right now is, I forked a menu bar app I found on GitHub, and now I just have it running in a menu bar app. I do Command-Shift-G and it pops up as a single use thing, and there's no latency because it's always live. I just type in the thing I want, and then it goes away after I'm done. Sharif: Wow. That's cool. Big upgrade. I'm going to install that. That's cool. Alessio: Second question. What is something you thought would take much longer, but it's already here? What's your acceleration update? Sharif: Ooh, something I thought would take much longer, but it's already here. This is your question. Yeah, I know. I wasn't prepared. So I would say it would probably be text to video. Swyx: Yeah. What's going on with that? Sharif: I think by the end of this year, we'll have the jump between the original DALL-E 1 and something like Midjourney. We're going to see that leap in text to video within the span of this year. It's not already here yet, so I guess the thing that surprised me the most was probably the multimodality of GPT-4, the fact that it can technically see things, which is pretty insane. Swyx: Yeah. Is text to video something that Aperture would be interested in? Sharif: It's something we're thinking about, but it's still pretty early. Swyx: There was one project with, um, animation with human poses. It was also coming out of Facebook. I thought that was a very nice way to accomplish text to video while having a high degree of control. I forget the name of that project. It was like, I think it was like drawing anything. Sharif: Yeah. It sounds familiar. Swyx: Well, you already answered 'a year from now, what will people be most surprised by?' So maybe the usual request for startup: what's one thing you would pay for if someone built it? Sharif: One thing I would pay for if someone built it? So many things, honestly. I really want people to build more tools for language models, like useful tools. Give them access to Chrome. And I want to be able to give it a task, and then it just goes off and spins up a hundred agents that perform that task. And sure, 80 of them might fail, but 20 of them might kind of succeed. That's all you really need. And they're agents.
You can spin up thousands of them. It doesn't really matter; the law of large numbers is on your side. So I would pay a lot of money for that, even if it was capable of only doing really basic tasks, like signing up for a SaaS tool and booking a call or something. If it could do even more things, where it could handle the email thread and get the person on the other end to do something, where I don't even have to book the demo, they just give me access to it, that'd be great. Yeah. Really weird language model tools would be really fun. Swyx: Are ChatGPT plugins a step in the right direction, or are you envisioning something else? Sharif: I think ChatGPT plugins are great, but they seem to only have read-only access right now. I want these theoretical agents to have write access to the world too. So they should be able to perform actions on web browsers, have their own email inbox, and have their own credit card with their own balance. Let them send emails to people that might be useful in achieving their goal, ask them for help, be able to sign up and register for accounts on tools and services, be able to use graphical user interfaces really, really well, and also phone home if they need help. Swyx: You just described virtual employees. You want to give them a Brex card, right? Sharif: I wouldn't be surprised if a year from now there was BrexGPT, or like Brex cards for your GPT agents. Swyx: I mean, okay. I'm excited by this. Yeah. Kind of want to build it. Sharif: You should. Yeah. Alessio: Well, just to wrap up, we always have one big takeaway for people, like a message to display on a signboard for everyone to see. What is the big message to everybody? Sharif: Yeah. I think the big message to everybody is: you might think that a lot of the time the ideas you have have already been done by someone. And that may be the case, but a lot of the time the ideas you have are actually pretty unique, and no one's ever tried them before. So if you have weird and interesting ideas, you should actually go out and just do them, and make the thing, and then share that with the world. Cause I feel like we need more people building weird ideas, and fewer people building better GPT search for your documentation. Host: There are like 10 of those in the recent YC batch. Well, thank you so much. You've been hugely inspiring, and we're excited to see where Lexica goes next. Sharif: Appreciate it. Thanks for having me. Get full access to Latent Space at www.latent.space/subscribe

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

It's now almost 6 months since Google declared Code Red, and the results — Jeff Dean's recap of 2022 achievements and a mass exodus in January of the top research talent that contributed to them, Bard's rushed launch in February, a slick video showing Google Workspace AI features and confusing doubly linked blogposts about the PaLM API in March, and the merging of Google Brain and DeepMind in April — have not been inspiring. Google's internal panic is on full display now with the surfacing of a well written memo by software engineer Luke Sernau from early April, revealing internal distress not seen since Steve Yegge's infamous Google Platforms Rant. Similar to 2011, the company's response to an external challenge has been to mobilize the entire company to go all-in on a (from the outside) vague vision.

Google's misfortunes are well understood by now, but the last paragraph of the memo, "We have no moat, and neither does OpenAI", was a banger of a mic drop.

Combine this with news this morning that OpenAI lost $540m last year and will need as much as $100b more funding (after the complex $10b Microsoft deal in January), and the memo's assertion that both Google and OpenAI have "no moat" against the mighty open source horde has gained some credibility in the past 24 hours.

Many are criticising this memo privately:

* A CEO commented to me yesterday that Luke Sernau does not seem to work in AI related parts of Google and "software engineers don't understand moats".
* Emad Mostaque, himself a perma-champion of open source and open models, has repeatedly stated that "Closed models will always outperform open models" because closed models can just wrap open ones.
* Emad has also commented on the moats he does see: "Unique usage data, Unique content, Unique talent, Unique product, Unique business model", most of which Google does have, and OpenAI less so (though it is winning on the talent front).
* Sam Altman famously said that "very few to no one in Silicon Valley has a moat - not even Facebook" (implying that moats don't actually matter, and you should spend your time thinking about more important things).
* It is not actually clear what race the memo thinks Google and OpenAI are in vs Open Source. Neither is particularly concerned about running models locally on phones, and they are perfectly happy to let "a crazy European alpha male" run the last mile for them while they build actually monetizable cloud infrastructure.

However, moats are of intense interest to everybody keen on productized AI, cropping up in every Harvey, Jasper, and general AI startup vs incumbent debate. It is also interesting to take the memo at face value and discuss the searing hot pace of AI progress in open source. We hosted this discussion yesterday with Simon Willison, who apart from being an incredible communicator also wrote a great recap of the No Moat memo. 2,800 have now tuned in on Twitter Spaces, but we have taken the audio and cleaned it up here.
Enjoy!

Timestamps

* [00:00:00] Introducing the Google Memo
* [00:02:48] Open Source > Closed?
* [00:05:51] Running Models On Device
* [00:07:52] LoRA part 1
* [00:08:42] On Moats - Size, Data
* [00:11:34] Open Source Models are Comparable on Data
* [00:13:04] Stackable LoRA
* [00:19:44] The Need for Special Purpose Optimized Models
* [00:21:12] Modular - Mojo from Chris Lattner
* [00:23:33] The Promise of Language Supersets
* [00:28:44] Google AI Strategy
* [00:29:58] Zuck Releasing LLaMA
* [00:30:42] Google Origin Confirmed
* [00:30:57] Google's existential threat
* [00:32:24] Non-Fiction AI Safety ("y-risk")
* [00:35:17] Prompt Injection
* [00:36:00] Google vs OpenAI
* [00:41:04] Personal plugs: Simon and Travis

Transcripts

[00:00:00] Introducing the Google Memo

[00:00:00] Simon Willison: So, yeah, this is a document which I first saw at three o'clock this morning, I think. It claims to be leaked from Google. There are good reasons to believe it is leaked from Google, and to be honest, if it's not, it doesn't actually matter, because the quality of the analysis, I think, stands alone. [00:00:15] If this was just a document by some anonymous person, I'd still think it was interesting and worth discussing. And the title of the document is We Have No Moat and Neither Does OpenAI. And the argument it makes is that while Google and OpenAI have been competing on training bigger and bigger language models, the open source community is already starting to outrun them, given only a couple of months of really, really serious activity. [00:00:41] You know, Facebook LLaMA was the thing that really kicked this off. There were open source language models like BLOOM before that, and GPT-J, and they weren't very impressive. Nobody was really thinking that they were ChatGPT equivalent. Facebook LLaMA came out in March, I think March 15th, and was the first one that really sort of showed signs of being as capable, maybe, as ChatGPT. [00:01:04] I think the analysis of all of these models has tended to be a bit hyped. I don't think any of them are even quite up to GPT-3.5 standards yet, but they're within spitting distance in some respects. So anyway, LLaMA came out, and then two weeks later Stanford Alpaca came out, which was fine tuned on top of LLaMA and was a massive leap forward in terms of quality. [00:01:27] And then a week after that, Vicuna came out, which is to this date the best model I've been able to run on my own hardware. I run it on my mobile phone now; it's astonishing how little resources you need to run these things. But anyway, the argument that this paper made, which I found very convincing, is that it only took open source two months to get this far. [00:01:47] Now every researcher in the world is kicking it around on new things, and there are problems that Google has been trying to solve that the open source models are already addressing. And really, how do you compete with that with your closed ecosystem? How are you going to beat these open models with all of this innovation going on? [00:02:04] But then the most interesting argument in there is it talks about the size of models, and says that maybe large isn't a competitive advantage; maybe actually a smaller model,
With lots of like different people fine tuning it and having these sort of, these LoRA l o r a stackable fine tuning innovations on top of it, maybe those can move faster.[00:02:23] And actually having to retrain your giant model every few months from scratch is, is way less useful than having small models that you can tr you can fine tune in a couple of hours on laptop. So it's, it's fascinating. I basically, if you haven't read this thing, you should read every word of it. It's not very long.[00:02:40] It's beautifully written. Like it's, it's, I mean, If you try and find the quotable lines in it, almost every line of it's quotable. Yeah. So, yeah, that's that, that, that's the status of this[00:02:48] Open Source > Closed?[00:02:48] swyx: thing. That's a wonderful summary, Simon. Yeah, there, there's so many angles we can take to this. I, I'll just observe one, one thing which if you think about the open versus closed narrative, Ima Mok, who is the CEO of Stability, has always been that open will trail behind closed, because the closed alternatives can always take.[00:03:08] Learnings and lessons from open source. And this is the first highly credible statement that is basically saying the exact opposite, that open source is moving than, than, than closed source. And they are scared. They seem to be scared. Which is interesting,[00:03:22] Travis Fischer: Travis. Yeah, the, the, the, a few things that, that I'll, I'll, I'll say the only thing which can keep up with the pace of AI these days is open source.[00:03:32] I think we're, we're seeing that unfold in real time before our eyes. And. You know, I, I think the other interesting angle of this is to some degree LLMs are they, they don't really have switching costs. They are going to be, become commoditized. At least that's, that's what a lot of, a lot of people kind of think to, to what extent is it Is it a, a rate in terms of, of pricing of these things?[00:03:55] , and they all kind of become roughly the, the, the same in, in terms of their, their underlying abilities. And, and open source is gonna, gonna be actively pushing, pushing that forward. And, and then this is kind of coming from, if it is to be believed the kind of Google or an insider type type mentality around you know, where is the actual competitive advantage?[00:04:14] What should they be focusing on? How can they get back in into the game? When you know, when, when, when, when currently the, the, the external view of, of Google is that they're kind of spinning their wheels and they have this code red,, and it's like they're, they're playing catch up already.[00:04:28] Like how could they use the open source community and work with them, which is gonna be really, really hard you know, from a structural perspective given Google's place in the ecosystem. But a, a lot, lot, a lot of jumping off points there.[00:04:42] Alessio Fanelli: I was gonna say, I think the Post is really focused on how do we get the best model, but it's not focused on like, how do we build the best product around it.[00:04:50] A lot of these models are limited by how many GPUs you can get to run them and we've seen on traditional open source, like everybody can use some of these projects like Kafka and like Alaska for free. 
But the reality is that not everybody can afford to run the infrastructure needed for it.[00:05:05] So I, I think like the main takeaway that I have from this is like, A lot of the moats are probably around just getting the, the sand, so to speak, and having the GPUs to actually serve these models. Because even if the best model is open source, like running it at large scale for an end is not easy and like, it's not super convenient to get a lot, a lot of the infrastructure.[00:05:27] And we've seen that model work in open source where you have. The opensource project, and then you have a enterprise cloud hosted version for it. I think that's gonna look really different in opensource models because just hosting a model doesn't have a lot of value. So I'm curious to hear how people end up getting rewarded to do opensource.[00:05:46] You know, it's, we figured that out in infrastructure, but we haven't figured it out in in Alans[00:05:51] Running Models On Device[00:05:51] Simon Willison: yet. I mean, one thing I'll say is that the the models that you can run on your own devices are so far ahead of what I ever dreamed they would be at this point. Like Vicuna 13 b i i, I, I think is the current best available open mo model that I've played with.[00:06:08] It's derived from Facebook Lama, so you can't use it for commercial purposes yet. But the point about MCK 13 B is it runs in the browser directly on web gpu. There's this amazing web l l M project where you literally, your browser downloaded a two gigabyte file. And it fires up a chat g D style interface and it's quite good.[00:06:27] It can do rap battles between different animals and all of the kind of fun stuff that you'd expect to be able to do the language model running entirely in Chrome canary. It's shocking to me that that's even possible, but that kind of shows that once, once you get to inference, if you can shrink the model down and the techniques for shrinking these models, the, the first one was the the quantization.[00:06:48] Which the Lama CPP project really sort of popularized Matt can by using four bits instead of 16 bit floating point numbers, you can shrink it down quite a lot. And then there was a paper that came out days ago suggesting that you can prune the models and ditch half the model and maintain the same level of quality.[00:07:05] So with, with things like that, with all of these tricks coming together, it's really astonishing how much you can get done on hardware that people actually have in their pockets even.[00:07:15] swyx: Just for completion I've been following all of your posts. Oh, sorry. Yes. I just wanna follow up, Simon. You're, you said you're running a model on your phone. Which model is it? And I don't think you've written it up.[00:07:27] Simon Willison: Yeah, that one's vina. I did, did I write it up? I did. I've got a blog post about how it it, it, it knows who I am, sort of, but it said that I invented a, a, a pattern for living called bear or bunny pattern, which I definitely didn't, but I loved that my phone decided that I did.[00:07:44] swyx: I will hunt for that because I'm not yet running Vic on my phone and I feel like I should and, and as like a very base thing, but I'll, okay.[00:07:52] Stackable LoRA Modules[00:07:52] swyx: Also, I'll follow up two things, right? 
Like one I'm very interesting and let's, let's talk about that a little bit more because this concept of stackable improvements to models I think is extremely interesting.[00:08:00] Like, I would love to MPM install abilities onto my models, right? Which is really awesome. But the, the first thing thing is under-discussed is I don't get the panic. Like, honestly, like Google has the most moats. I I, I was arguing maybe like three months ago on my blog. Like Google has the most mote out of a lot of people because, hey, we have your calendar.[00:08:21] Hey, we have your email. Hey, we have your you know, Google Docs. Like, isn't that a, a sufficient mode? Like, why are these guys panicking so much? I don't, I still don't get it. Like, Sure open source is running ahead and like, it's, it's on device and whatev, what have you, but they have so much more mode.[00:08:36] Like, what are we talking about here? There's many dimensions to compete on.[00:08:42] On Moats - Size, Data[00:08:42] Travis Fischer: Yeah, there's like one of, one of the, the things that, that the author you know, mentions in, in here is when, when you start to, to, to have the feeling of what we're trailing behind, then you're, you're, you're, you're brightest researchers jump ship and go to OpenAI or go to work at, at, at academia or, or whatever.[00:09:00] And like the talent drain. At the, the level of the, the senior AI researchers that are pushing these things ahead within Google, I think is a serious, serious concern. And my, my take on it's a good point, right? Like, like, like, like what Google has modes. They, they, they're not running outta money anytime soon.[00:09:16] You know, I think they, they do see the level of the, the defensibility and, and the fact that they want to be, I'll chime in the, the leader around pretty much anything. Tech first. There's definitely ha ha have lost that, that, that feeling. Right? , and to what degree they can, they can with the, the open source community to, to get that back and, and help drive that.[00:09:38] You know all of the llama subset of models with, with alpaca and Vicuna, et cetera, that all came from, from meta. Right. Like that. Yeah. Like it's not licensed in an open way where you can build a company on top of it, but is now kind of driving this family of, of models, like there's a tree of models that, that they're, they're leading.[00:09:54] And where is Google in that, in that playbook? Like for a long time they were the one releasing those models being super open and, and now it's just they, they've seem to be trailing and there's, there's people jumping ship and to what degree can they, can they, can they. Close off those wounds and, and focus on, on where, where they, they have unique ability to, to gain momentum.[00:10:15] I think is a core part of my takeaway from this. Yeah.[00:10:19] Alessio Fanelli: And think another big thing in the post is, oh, as long as you have high quality data, like you don't need that much data, you can just use that. The first party data loops are probably gonna be the most important going forward if we do believe that this is true.[00:10:32] So, Databricks. We have Mike Conover from Databricks on the podcast, and they talked about how they came up with the training set for Dolly, which they basically had Databricks employees write down very good questions and very good answers for it. Not every company as the scale to do that. 
And I think products like Google, they have millions of people writing Google Docs.[00:10:54] They have millions of people using Google Sheets, then millions of people writing stuff, creating content on YouTube. The question is, if you wanna compete against these companies, maybe the model is not what you're gonna do it with because the open source kind of commoditizes it. But how do you build even better data?[00:11:12] First party loops. And that's kind of the hardest thing for startups, right? Like even if we open up the, the models to everybody and everybody can just go on GitHub and. Or hugging face and get the waste to the best model, but get enough people to generate data for me so that I can still make it good. That's, that's what I would be worried about if I was a, a new company.[00:11:31] How do I make that happen[00:11:32] Simon Willison: really quickly?[00:11:34] Open Source Models are Comparable on Data[00:11:34] Simon Willison: I'm not convinced that the data is that big a challenge. So there's this PO project. So the problem with Facebook LAMA is that it's not available for, for commercial use. So people are now trying to train a alternative to LAMA that's entirely on openly licensed data.[00:11:48] And that the biggest project around that is this red pajama project, which They released their training data a few weeks ago and it was 2.7 terabytes. Right? So actually tiny, right? You can buy a laptop that you can fit 2.7 terabytes on. Got it. But it was the same exact data that Facebook, the same thing that Facebook Lamb had been trained on.[00:12:06] Cuz for your base model. You're not really trying to teach it fact about the world. You're just trying to teach it how English and other languages work, how they fit together. And then the real magic is when you fine tune on top of that. That's what Alpaca did on top of Lama and so on. And the fine tuning sets, it looks like, like tens of thousands of examples to kick one of these role models into shape.[00:12:26] And tens of thousands of examples like Databricks spent a month and got the 2000 employees of their company to help kick in and it worked. You've got the open assistant project of crowdsourcing this stuff now as well. So it's achievable[00:12:40] swyx: sore throat. I agree. I think it's a fa fascinating point. Actually, so I've heard through the grapevine then red pajamas model.[00:12:47] Trained on the, the data that they release is gonna be releasing tomorrow. And it's, it's this very exciting time because the, the, there, there's a, there's a couple more models that are coming down the pike, which independently we produced. And so yeah, that we, everyone is challenging all these assumptions from, from first principles, which is fascinating.[00:13:04] Stackable LoRA[00:13:04] swyx: I, I did, I did wanted to, to like try to get a little bit more technical in terms of like the, the, the, the specific points race. Cuz this doc, this doc was just amazing. Can we talk about LoRA. I, I, I'll open up to Simon again if he's back.[00:13:16] Simon Willison: I'd rather someone else take on. LoRA, I've, I, I know as much as I've read in that paper, but not much more than that.[00:13:21] swyx: So I thought it was this kind of like an optimization technique. So LoRA stands for lower rank adaptation. But this is the first mention of LoRA as a form of stackable improvements. Where he I forget what, let, just, let me just kind of Google this. 
But obviously anyone's more knowledgeable please.[00:13:39] So come on in.[00:13:40] Alessio Fanelli: I, all of Lauren is through GTS Man, about 20 minutes on GT four, trying to figure out word. It was I study computer science, but this is not this is not my area of expertise. What I got from it is that basically instead of having to retrain the whole model you can just pick one of the ranks and you take.[00:13:58] One of like the, the weight matrix tests and like make two smaller matrixes from it and then just two to be retrained and training the whole model. So[00:14:08] swyx: it save a lot of Yeah. You freeze part of the thing and then you just train the smaller part like that. Exactly. That seems to be a area of a lot of fruitful research.[00:14:15] Yeah. I think Mini GT four recently did something similar as well. And then there's, there's, there's a, there's a Spark Model people out today that also did the same thing.[00:14:23] Simon Willison: So I've seen a lot of LoRA stable, the stable diffusion community has been using LoRA a lot. So they, in that case, they had a, I, the thing I've seen is people releasing LoRA's that are like you, you train a concept like a, a a particular person's face or something you release.[00:14:38] And the, the LoRA version of this end up being megabytes of data, like, which is, it's. You know, it's small enough that you can just trade those around and you can effectively load multiple of those into the model. But what I haven't realized is that you can use the same trick on, on language models. That was one of the big new things for me in reading the the leaks Google paper today.[00:14:56] Alessio Fanelli: Yeah, and I think the point to make around on the infrastructure, so what tragedy has told me is that when you're figuring out what rank you actually wanna do this fine tuning at you can have either go too low and like the model doesn't actually learn it. Or you can go too high and the model overfit those learnings.[00:15:14] So if you have a base model that everybody agrees on, then all the subsequent like LoRA work is done around the same rank, which gives you an advantage. And the point they made in the, that, since Lama has been the base for a lot of this LoRA work like they own. The, the mind share of the community.[00:15:32] So everything that they're building is compatible with their architecture. But if Google Opensources their own model the rank that they chose For LoRA on Lama might not work on the Google model. So all of the existing work is not portable. So[00:15:46] Simon Willison: the impression I got is that one of the challenges with LoRA is that you train all these LoRAs on top of your model, but then if you retrain that base model as LoRA's becoming invalid, right?[00:15:55] They're essentially, they're, they're, they're built for an exact model version. So this means that being the big company with all of the GPUs that can afford to retrain a model every three months. That's suddenly not nearly as valuable as it used to be because now maybe there's an open source model that's five years old at this point and has like multiple, multiple stacks of LoRA's trained all over the world on top of it, which can outperform your brand new model just because there's been so much more iteration on that base.[00:16:20] swyx: I, I think it's, I think it's fascinating. It's I think Jim Fan from Envidia was recently making this argument for transformers. Like even if we do come up with a better. 
Architecture, then transformers, they're the sheer hundreds and millions of dollars that have been invested on top of transformers.[00:16:34] Make it actually there is some switching costs and it's not exactly obvious that better architecture. Equals equals we should all switch immediately tomorrow. It's, it's, it's[00:16:44] Simon Willison: kinda like the, the difficulty of launching a new programming language today Yes. Is that pipeline and JavaScript have a million packages.[00:16:51] So no matter how good your new language is, if it can't tap into those existing package libraries, it's, it's not gonna be useful for, which is why Moji is so clever, because they did build on top of Pips. They get all of that existing infrastructure, all of that existing code working already.[00:17:05] swyx: I mean, what, what thought you, since you co-create JAO and all that do, do we wanna take a diversion into mojo?[00:17:10] No, no. I[00:17:11] Travis Fischer: would, I, I'd be happy to, to, to jump in, and get Simon's take on, on Mojo. 1, 1, 1 small, small point on LoRA is I, I, I just think. If you think about at a high level, what the, the major down downsides are of these, these large language models. It's the fact that they well they're, they're, they're difficult to, to train, right?[00:17:32] They, they tend to hallucinate and they are, have, have a static, like, like they were trained at a certain date, right? And with, with LoRA, I think it makes it a lot more amenable to Training new, new updates on top of that, that like base model on the fly where you can incorporate new, new data and in a way that is, is, is an interesting and potentially more optimal alternative than Doing the kind of in context generation cuz, cuz most of like who at perplexity AI or, or any of these, these approaches currently, it's like all based off of doing real-time searches and then injecting as much into the, the, the local context window as possible so that you, you try to ground your, your, your, your language model.[00:18:16] Both in terms of the, the information it has access to that, that, that helps to reduce hallucinations. It can't reduce it, but helps to reduce it and then also gives it access to up-to-date information that wasn't around for that, that massive like, like pre-training step. And I think LoRA in, in, in mine really makes it more, more amenable to having.[00:18:36] Having constantly shifting lightweight pre-training on top of it that scales better than than normal. Pre I'm sorry. Fine tune, fine tuning. Yeah, that, that was just kinda my one takeaway[00:18:45] Simon Willison: there. I mean, for me, I've never been, I want to run models on my own hard, I don't actually care about their factual content.[00:18:52] Like I don't need a model that's been, that's trained on the most upstate things. What I need is a model that can do the bing and bar trick, right? That can tell when it needs to run a search. And then go and run a search to get extra information and, and bring that context in. And similarly, I wanted to be able to operate tools where it can access my email or look at my notes or all of those kinds of things.[00:19:11] And I don't think you need a very powerful model for that. 
Like that's one of the things where I feel like, yeah, vicuna running on my, on my laptop is probably powerful enough to drive a sort of personal research assistant, which can look things up for me and it can summarize things for my notes and it can do all of that and I don't care.[00:19:26] But it doesn't know about the Ukraine war because the Ukraine war training cutoff, that doesn't matter. If it's got those additional capabilities, which are quite easy to build the reason everyone's going crazy building agents and tools right now is that it's a few lines of Python code, and a sort of couple of paragraphs to get it to.[00:19:44] The Need for Special Purpose Optimized Models[00:19:44] Simon Willison: Well, let's, let's,[00:19:45] Travis Fischer: let's maybe dig in on that a little bit. And this, this also is, is very related to mojo. Cuz I, I do think there are use cases and domains where having the, the hyper optimized, like a version of these models running on device is, is very relevant where you can't necessarily make API calls out on the fly.[00:20:03] and Aug do context, augmented generation. And I was, I was talking with, with a a researcher. At Lockheed Martin yesterday, literally about like, like the, the version of this that's running of, of language models running on, on fighter jets. Right? And you, you talk about like the, the, the amount of engineering, precision and optimization that has to go into, to those type of models.[00:20:25] And the fact that, that you spend so much money, like, like training a super distilled ver version where milliseconds matter it's a life or death situation there. You know, and you couldn't even, even remotely ha ha have a use case there where you could like call out and, and have, have API calls or something.[00:20:40] So I, I do think there's like keeping in mind the, the use cases where, where. There, there'll be use cases that I'm more excited about at, at the application level where, where, yeah, I want to to just have it be super flexible and be able to call out to APIs and have this agentic type type thing.[00:20:56] And then there's also industries and, and use cases where, where you really need everything baked into the model.[00:21:01] swyx: Yep. Agreed. My, my favorite piece take on this is I think DPC four as a reasoning engine, which I think came from the from Nathan at every two. Which I think, yeah, I see the hundred score over there.[00:21:12] Modular - Mojo from Chris Lattner[00:21:12] swyx: Simon, do you do you have a, a few seconds on[00:21:14] Simon Willison: mojo. Sure. So Mojo is a brand new program language you just announced a few days ago. It's not actually available yet. I think there's an online demo, but to zooming it becomes an open source language we can use. It's got really some very interesting characteristics.[00:21:29] It's a super set of Python, so anything written in Python, Python will just work, but it adds additional features on top that let you basically do very highly optimized code with written. In Python syntax, it compiles down the the main thing that's exciting about it is the pedigree that it comes from.[00:21:47] It's a team led by Chris Latner, built L L V M and Clang, and then he designed Swift at Apple. So he's got like three, three for three on, on extraordinarily impactful high performance computing products. 
And he put together this team and they've basically, they're trying to go after the problem of how do you build.[00:22:06] A language which you can do really high performance optimized work in, but where you don't have to do everything again from scratch. And that's where building on top of Python is so clever. So I wasn't like, if this thing came along, I, I didn't really pay attention to it until j Jeremy Howard, who built Fast ai put up a very detailed blog post about why he was excited about Mojo, which included a, there's a video demo in there, which everyone should watch because in that video he takes Matrix multiplication implemented in Python.[00:22:34] And then he uses the mojo extras to 2000 x. The performance of that matrix multiplication, like he adds a few static types functions sort of struck instead of the class. And he gets 2000 times the performance out of it, which is phenomenal. Like absolutely extraordinary. So yeah, that, that got me really excited.[00:22:52] Like the idea that we can still use Python and all of this stuff we've got in Python, but we can. Just very slightly tweak some things and get literally like thousands times upwards performance out of the things that matter. That's really exciting.[00:23:07] swyx: Yeah, I, I, I'm curious, like, how come this wasn't thought of before?[00:23:11] It's not like the, the, the concept of a language super set hasn't hasn't, has, has isn't, is completely new. But all, as far as I know, all the previous Python interpreter approaches, like the alternate runtime approaches are like they, they, they're more, they're more sort of, Fit conforming to standard Python, but never really tried this additional approach of augmenting the language.[00:23:33] The Promise of Language Supersets[00:23:33] swyx: I, I'm wondering if you have many insights there on, like, why, like why is this a, a, a breakthrough?[00:23:38] Simon Willison: Yeah, that's a really interesting question. So, Jeremy Howard's piece talks about this thing called M L I R, which I hadn't heard of before, but this was another Chris Latner project. You know, he built L L VM as a low level virtual machine.[00:23:53] That you could build compilers on top of. And then M L I R was this one that he initially kicked off at Google, and I think it's part of TensorFlow and things like that. But it was very much optimized for multiple cores and GPU access and all of that kind of thing. And so my reading of Jeremy Howard's article is that they've basically built Mojo on top of M L I R.[00:24:13] So they had a huge, huge like a starting point where they'd, they, they knew this technology better than anyone else. And because they had this very, very robust high performance basis that they could build things on. I think maybe they're just the first people to try and build a high, try and combine a high level language with M L A R, with some extra things.[00:24:34] So it feels like they're basically taking a whole bunch of ideas people have been sort of experimenting with over the last decade and bundled them all together with exactly the right team, the right level of expertise. And it looks like they've got the thing to work. But yeah, I mean, I've, I've, I'm. Very intrigued to see, especially once this is actually available and we can start using it.[00:24:52] It, Jeremy Howard is someone I respect very deeply and he's, he's hyping this thing like crazy, right? His headline, his, and he's not the kind of person who hypes things if they're not worth hyping. 
He said Mojo may be the biggest programming language advanced in decades. And from anyone else, I'd kind of ignore that headline.[00:25:09] But from him it really means something.[00:25:11] swyx: Yes, because he doesn't hype things up randomly. Yeah, and, and, and he's a noted skeptic of Julia which is, which is also another data science hot topic. But from the TypeScript and web, web development worlds there has been a dialect of TypeScript that was specifically optimized to compile, to web assembly which I thought was like promising and then, and, and eventually never really took off.[00:25:33] But I, I like this approach because I think more. Frameworks should, should essentially be languages and recognize that they're language superset and maybe working compilers that that work on them. And then that is the, by the way, that's the direction that React is going right now. So fun times[00:25:50] Simon Willison: type scripts An interesting comparison actually, cuz type script is effectively a superset of Java script, right?[00:25:54] swyx: It's, but there's no, it's purely[00:25:57] Simon Willison: types, right? Gotcha. Right. So, so I guess mojo is the soup set python, but the emphasis is absolutely on tapping into the performance stuff. Right.[00:26:05] swyx: Well, the just things people actually care about.[00:26:08] Travis Fischer: Yeah. The, the one thing I've found is, is very similar to the early days of type script.[00:26:12] There was the, the, the, the most important thing was that it's incrementally adoptable. You know, cuz people had a script code basis and, and they wanted to incrementally like add. The, the, the main value prop for TypeScript was reliability and the, the, the, the static typing. And with Mojo, Lucia being basically anyone who's a target a large enterprise user of, of Mojo or even researchers, like they're all going to be coming from a, a hardcore.[00:26:36] Background in, in Python and, and have large existing libraries. And the the question will be for what use cases will mojo be like a, a, a really good fit for that incremental adoption where you can still tap into your, your, your massive, like python exi existing infrastructure workflows, data tooling, et cetera.[00:26:55] And, and what does, what does that path to adoption look like?[00:26:59] swyx: Yeah, we, we, we don't know cuz it's a wait listed language which people were complaining about. They, they, the, the mojo creators were like saying something about they had to scale up their servers. And I'm like, what language requires essential server?[00:27:10] So it's a little bit suss, a little bit, like there's a, there's a cloud product already in place and they're waiting for it. But we'll see. We'll see. I mean, emojis should be promising in it. I, I actually want more. Programming language innovation this way. You know, I was complaining years ago that programming language innovation is all about stronger types, all fun, all about like more functional, more strong types everywhere.[00:27:29] And, and this is, the first one is actually much more practical which I, which I really enjoy. This is why I wrote about self provisioning run types.[00:27:36] Simon Willison: And[00:27:37] Alessio Fanelli: I mean, this is kind of related to the post, right? Like if you stop all of a sudden we're like, the models are all the same and we can improve them.[00:27:45] Like, where can we get the improvements? You know, it's like, Better run times, better languages, better tooling, better data collection. Yeah. 
So if I were a founder today, I wouldn't worry as much about the model, maybe, but I would say, okay, what can I build into my product and like, or what can I do at the engineering level that maybe it's not model optimization because everybody's working on it, but like you said, it's like, why haven't people thought of this before?[00:28:09] It's like, it's, it's definitely super hard, but I'm sure that if you're like Google or you're like open AI or you're like, Databricks, we got smart enough people that can think about these problems, so hopefully we see more of this.[00:28:21] swyx: You need, Alan? Okay. I promise to keep this relatively tight. I know Simon on a beautiful day.[00:28:27] It is a very nice day in California. I wanted to go through a few more points that you have pulled out Simon and, and just give you the opportunity to, to rant and riff and, and what have you. I, I, are there any other points from going back to the sort of Google OpenAI mode documents that, that you felt like we, we should dive in on?[00:28:44] Google AI Strategy[00:28:44] Simon Willison: I mean, the really interesting stuff there is the strategy component, right? The this idea that that Facebook accidentally stumbled into leading this because they put out this model that everyone else is innovating on top of. And there's a very open question for me as to would Facebook relic Lama to allow for commercial usage?[00:29:03] swyx: Is there some rumor? Is that, is that today?[00:29:06] Simon Willison: Is there a rumor about that?[00:29:07] swyx: That would be interesting? Yeah, I saw, I saw something about Zuck saying that he would release the, the Lama weights officially.[00:29:13] Simon Willison: Oh my goodness. No, that I missed. That is, that's huge.[00:29:17] swyx: Let me confirm the tweet. Let me find the tweet and then, yeah.[00:29:19] Okay.[00:29:20] Simon Willison: Because actually I met somebody from Facebook machine learning research a couple of weeks ago, and I, I pressed 'em on this and they said, basically they don't think it'll ever happen because if it happens, and then somebody does horrible fascist stuff with this model, all of the headlines will be Meg releases a monster into the world.[00:29:36] So, so hi. His, the, the, the, a couple of weeks ago, his feeling was that it's just too risky for them to, to allow it to be used like that. But a couple of weeks is, is, is a couple of months in AI world. So yeah, it wouldn't be, it feels to me like strategically Facebook should be jumping right on this because this puts them at the very.[00:29:54] The very lead of, of open source innovation around this stuff.[00:29:58] Zuck Releasing LLaMA[00:29:58] swyx: So I've pinned the tweet talking about Zuck and Zuck saying that meta will open up Lama. It's from the founder of Obsidian, which gives it a slight bit more credibility, but it is the only. Tweet that I can find about it. So completely unsourced,[00:30:13] we shall see. I, I, I mean I have friends within meta, I should just go ask them. But yeah, I, I mean one interesting angle on, on the memo actually is is that and, and they were linking to this in, in, in a doc, which is apparently like. Facebook got a bunch of people to do because they, they never released it for commercial use, but a lot of people went ahead anyway and, and optimized and, and built extensions and stuff.[00:30:34] They, they got a bunch of free work out of opensource, which is an interesting strategy.[00:30:39] There's okay. 
I don't know if I.[00:30:42] Google Origin Confirmed[00:30:42] Simon Willison: I've got exciting piece of news. I've just heard from somebody with contacts at Google that they've heard people in Google confirm the leak. That that document wasn't even legit Google document, which I don't find surprising at all, but I'm now up to 10, outta 10 on, on whether that's, that's, that's real.[00:30:57] Google's existential threat[00:30:57] swyx: Excellent. Excellent. Yeah, it is fascinating. Yeah, I mean the, the strategy is, is, is really interesting. I think Google has been. Definitely sleeping on monetizing. You know, I, I, I heard someone call when Google Brain and Devrel I merged that they would, it was like goodbye to the Xerox Park of our era and it definitely feels like Google X and Google Brain would definitely Xerox parks of our, of our era, and I guess we all benefit from that.[00:31:21] Simon Willison: So, one thing I'll say about the, the Google side of things, like the there was a question earlier, why are Google so worried about this stuff? And I think it's, it's just all about the money. You know, the, the, the engine of money at Google is Google searching Google search ads, and who uses Chachi PT on a daily basis, like me, will have noticed that their usage of Google has dropped like a stone.[00:31:41] Because there are many, many questions that, that chat, e p t, which shows you no ads at all. Is, is, is a better source of information for than Google now. And so, yeah, I'm not, it doesn't surprise me that Google would see this as an existential threat because whether or not they can be Bard, it's actually, it's not great, but it, it exists, but it hasn't it yet either.[00:32:00] And if I've got a Chatbook chatbot that's not showing me ads and chatbot that is showing me ads, I'm gonna pick the one that's not showing[00:32:06] swyx: me ads. Yeah. Yeah. I, I agree. I did see a prototype of Bing with ads. Bing chat with ads. I haven't[00:32:13] Simon Willison: seen the prototype yet. No.[00:32:15] swyx: Yeah, yeah. Anyway, I I, it, it will come obviously, and then we will choose, we'll, we'll go out of our ways to avoid ads just like we always do.[00:32:22] We'll need ad blockers and chat.[00:32:23] Excellent.[00:32:24] Non-Fiction AI Safety ("y-risk")[00:32:24] Simon Willison: So I feel like on the safety side, the, the safety side, there are basically two areas of safety that I, I, I sort of split it into. There's the science fiction scenarios, the AI breaking out and killing all humans and creating viruses and all of that kind of thing. The sort of the terminated stuff. And then there's the the.[00:32:40] People doing bad things with ai and that's latter one is the one that I think is much more interesting and that cuz you could u like things like romance scams, right? Romance scams already take billions of dollars from, from vulner people every year. Those are very easy to automate using existing tools.[00:32:56] I'm pretty sure for QNA 13 b running on my laptop could spin up a pretty decent romance scam if I was evil and wanted to use it for them. So that's the kind of thing where, I get really nervous about it, like the fact that these models are out there and bad people can use these bad, do bad things.[00:33:13] Most importantly at scale, like romance scamming, you don't need a language model to pull off one romance scam, but if you wanna pull off a thousand at once, the language model might be the, the thing that that helps you scale to that point. 
And yeah, in terms of the science fiction stuff and also like a model on my laptop that can.[00:33:28] Guess what comes next in a sentence. I'm not worried that that's going to break out of my laptop and destroy the world. There. There's, I'm get slightly nervous about the huge number of people who are trying to build agis on top of this models, the baby AGI stuff and so forth, but I don't think they're gonna get anywhere.[00:33:43] I feel like if you actually wanted a model that was, was a threat to human, a language model would be a tiny corner of what that thing. Was actually built on top of, you'd need goal setting and all sorts of other bits and pieces. So yeah, for the moment, the science fiction stuff doesn't really interest me, although it is a little bit alarming seeing more and more of the very senior figures in this industry sort of tip the hat, say we're getting a little bit nervous about this stuff now.[00:34:08] Yeah.[00:34:09] swyx: So that would be Jeff Iton and and I, I saw this me this morning that Jan Lacoon was like happily saying, this is fine. Being the third cheer award winner.[00:34:20] Simon Willison: But you'll see a lot of the AI safe, the people who've been talking about AI safety for the longest are getting really angry about science fiction scenarios cuz they're like, no, the, the thing that we need to be talking about is the harm that you can cause with these models right now today, which is actually happening and the science fiction stuff kind of ends up distracting from that.[00:34:36] swyx: I love it. You, you. Okay. So, so Uher, I don't know how to pronounce his name. Elier has a list of ways that AI will kill us post, and I think, Simon, you could write a list of ways that AI will harm us, but not kill us, right? Like the, the, the non-science fiction actual harm ways, I think, right? I haven't seen a, a actual list of like, hey, romance scams spam.[00:34:57] I, I don't, I don't know what else, but. That could be very interesting as a Hmm. Okay. Practical. Practical like, here are the situations we need to guard against because they are more real today than that we need to. Think about Warren, about obviously you've been a big advocate of prompt injection awareness even though you can't really solve them, and I, I worked through a scenario with you, but Yeah,[00:35:17] Prompt Injection[00:35:17] Simon Willison: yeah.[00:35:17] Prompt injection is a whole other side of this, which is, I mean, that if you want a risk from ai, the risk right now is everyone who's building puts a building systems that attackers can trivially subvert into stealing all of their private data, unlocking their house, all of that kind of thing. So that's another very real risk that we have today.[00:35:35] swyx: I think in all our personal bios we should edit in prompt injections already, like in on my website, I wanna edit in a personal prompt injections so that if I get scraped, like I all know if someone's like reading from a script, right? That that is generated by any iBot. I've[00:35:49] Simon Willison: seen people do that on LinkedIn already and they get, they get recruiter emails saying, Hey, I didn't read your bio properly and I'm just an AI script, but would you like a job?[00:35:57] Yeah. It's fascinating.[00:36:00] Google vs OpenAI[00:36:00] swyx: Okay. Alright, so topic. I, I, I think, I think this this, this mote is is a peak under the curtain of the, the internal panic within Google. I think it is very val, very validated. 
I'm not so sure they should care so much about small models or, or like on device models.[00:36:17] But the other stuff is interesting. There is a comment at the end that you had by about as for opening open is themselves, open air, doesn't matter. So this is a Google document talking about Google's position in the market and what Google should be doing. But they had a comment here about open eye.[00:36:31] They also say open eye had no mode, which is a interesting and brave comment given that open eye is the leader in, in a lot of these[00:36:38] Simon Willison: innovations. Well, one thing I will say is that I think we might have identified who within Google wrote this document. Now there's a version of it floating around with a name.[00:36:48] And I look them up on LinkedIn. They're heavily involved in the AI corner of Google. So my guess is that at Google done this one, I've worked for companies. I'll put out a memo, I'll write up a Google doc and I'll email, email it around, and it's nowhere near the official position of the company or of the executive team.[00:37:04] It's somebody's opinion. And so I think it's more likely that this particular document is somebody who works for Google and has an opinion and distributed it internally and then it, and then it got leaked. I dunno if it's necessarily. Represents Google's sort of institutional thinking about this? I think it probably should.[00:37:19] Again, this is such a well-written document. It's so well argued that if I was an executive at Google and I read that, I would, I would be thinking pretty hard about it. But yeah, I don't think we should see it as, as sort of the official secret internal position of the company. Yeah. First[00:37:34] swyx: of all, I might promote that person.[00:37:35] Cuz he's clearly more,[00:37:36] Simon Willison: oh, definitely. He's, he's, he's really, this is a, it's, I, I would hire this person about the strength of that document.[00:37:42] swyx: But second of all, this is more about open eye. Like I'm not interested in Google's official statements about open, but I was interested like his assertion, open eye.[00:37:50] Doesn't have a mote. That's a bold statement. I don't know. It's got the best people.[00:37:55] Travis Fischer: Well, I, I would, I would say two things here. One, it's really interesting just at a meta, meta point that, that they even approached it this way of having this public leak. It, it, it kind of, Talks a little bit to the fact that they, they, they felt that that doing do internally, like wasn't going to get anywhere or, or maybe this speaks to, to some of the like, middle management type stuff or, or within Google.[00:38:18] And then to the, the, the, the point about like opening and not having a moat. I think for, for large language models, it, it, it will be over, over time kind of a race to the bottom just because the switching costs are, are, are so low compared with traditional cloud and sas. And yeah, there will be differences in, in, in quality, but, but like over time, if you, you look at the limit of these things like the, I I think Sam Altman has been quoted a few times saying that the, the, the price of marginal price of intelligence will go to zero.[00:38:47] Time and the marginal price of energy powering that intelligence will, will also hit over time. And in that world, if you're, you're providing large language models, they become commoditized. Like, yeah. What, what is, what is your mode at that point? I don't know. 
I think they're e extremely well positioned as a team and as a company for leading this space.[00:39:03] I'm not that, that worried about that, but it is something from a strategic point of view to keep in mind about large language models becoming a commodity. So[00:39:11] Simon Willison: it's quite short, so I think it's worth just reading the, in fact, that entire section, it says epilogue. What about open ai? All of this talk of open source can feel unfair given open AI's current closed policy.[00:39:21] Why do we have to share if they won't? That's talking about Google sharing, but the fact of the matter is we are already sharing everything with them. In the form of the steady flow of poached senior researchers until we spent that tide. Secrecy is a moot point. I love that. That's so salty. And, and in the end, open eye doesn't matter.[00:39:38] They are making the same mistakes that we are in their posture relative to open source. And their ability to maintain an edge is necessarily in question. Open source alternatives. Canned will eventually eclipse them. Unless they change their stance in this respect, at least we can make the first move. So the argument this, this paper is making is that Google should go, go like meta and, and just lean right into open sourcing it and engaging with the wider open source community much more deeply, which OpenAI have very much signaled they are not willing to do.[00:40:06] But yeah, it's it's, it's read the whole thing. The whole thing is full of little snippets like that. It's just super fun. Yes,[00:40:12] swyx: yes. Read the whole thing. I, I, I also appreciate that the timeline, because it set a lot of really great context for people who are out of the loop. So Yeah.[00:40:20] Alessio Fanelli: Yeah. And the final conspiracy theory is that right before Sundar and Satya and Sam went to the White House this morning, so.[00:40:29] swyx: Yeah. Did it happen? I haven't caught up the White House statements.[00:40:34] Alessio Fanelli: No. That I, I just saw, I just saw the photos of them going into the, the White House. I've been, I haven't seen any post-meeting updates.[00:40:41] swyx: I think it's a big win for philanthropic to be at that table.[00:40:44] Alessio Fanelli: Oh yeah, for sure. And co here it's not there.[00:40:46] I was like, hmm. Interesting. Well, anyway,[00:40:50] swyx: yeah. They need, they need some help. Okay. Well, I, I promise to keep this relatively tight. Spaces do tend to have a, have a tendency of dragging on. But before we go, anything that you all want to plug, anything that you're working on currently maybe go around Simon are you still working on dataset?[00:41:04] Personal plugs: Simon and Travis[00:41:04] Simon Willison: I am, I am, I'm having a bit of a, so datasets my open source project that I've been working on. It's about helping people analyze and publish data. I'm having an existential crisis of it at the moment because I've got access to the chat g p T code, interpreter mode, and you can upload the sequel light database to that and it will do all of the things that I, on my roadmap for the next 12 months.[00:41:24] Oh my God. So that's frustrating. So I'm basically, I'm leaning data. My interest in data and AI are, are rapidly crossing over a lot harder about the AI features that I need to build on top of dataset. Make sure it stays relevant in a chat. G p t can do most of the stuff that it does already. 
But yeah the thing, I'll plug my blog simon willis.net.[00:41:43] I'm now updating it daily with stuff because AI move moved so quickly and I have a sub newsletter, which is effectively my blog, but in email form sent out a couple of times a week, which Please subscribe to that or RSS feed on my blog or, or whatever because I'm, I'm trying to keep track of all sorts of things and I'm publishing a lot at the moment.[00:42:02] swyx: Yes. You, you are, and we love you very much for it because you, you are a very good reporter and technical deep diver into things, into all the things. Thank you, Simon. Travis are you ready to announce the, I guess you've announced it some somewhat. Yeah. Yeah.[00:42:14] Travis Fischer: So I'm I, I just founded a company.[00:42:16] I'm working on a framework for building reliable agents that aren't toys and focused on more constrained use cases. And you know, I I, I look at kind of agi. And these, these audigy type type projects as like jumping all the way to str to, to self-driving. And, and we, we, we kind of wanna, wanna start with some more enter and really focus on, on reliable primitives to, to start that.[00:42:38] And that'll be an open source type script project. I'll be releasing the first version of that soon. And that's, that's it. Follow me you know, on here for, for this type of stuff, I, I, I, everything, AI[00:42:48] swyx: and, and spa, his chat PT bot,[00:42:50] Travis Fischer: while you still can. Oh yeah, the chat VT Twitter bot is about 125,000 followers now.[00:42:55] It's still running. I, I'm not sure if it's your credit. Yeah. Can you say how much you spent actually, No, no. Well, I think probably totally like, like a thousand bucks or something, but I, it's, it's sponsored by OpenAI, so I haven't, I haven't actually spent any real money.[00:43:08] swyx: What? That's[00:43:09] awesome.[00:43:10] Travis Fischer: Yeah. Yeah.[00:43:11] Well, once, once I changed, originally the logo was the Chachi VUI logo and it was the green one, and then they, they hit me up and asked me to change it. So it's now it's a purple logo. And they're, they're, they're cool with that. Yeah.[00:43:21] swyx: Yeah. Sending take down notices to people with G B T stuff apparently now.[00:43:26] So it's, yeah, it's a little bit of a gray area. I wanna write more on, on mos. I've been actually collecting and meaning to write a piece of mos and today I saw the memo, I was like, oh, okay. Like I guess today's the day we talk about mos. So thank you all. Thanks. Thanks, Simon. Thanks Travis for, for jumping on and thanks to all the audience for engaging on this with us.[00:43:42] We'll continue to engage on Twitter, but thanks to everyone. Cool. Thanks everyone. Bye. Alright, thanks everyone. Bye. Get full access to Latent Space at www.latent.space/subscribe

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Mapping the future of *truly* Open Models and Training Dolly for $30 — with Mike Conover of Databricks

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 29, 2023 75:59


The race is on for the first fully GPT3/4-equivalent, truly open source Foundation Model! LLaMA's release proved that a great model could be released and run on consumer-grade hardware (see llama.cpp), but its research license prohibits businesses from running it and all it's variants (Alpaca, Vicuna, Koala, etc) for their own use at work. So there is great interest and desire for *truly* open source LLMs that are feasible for commercial use (with far better customization, finetuning, and privacy than the closed source LLM APIs).The previous leading contenders were Eleuther's GPT-J and Neo on the small end ( OpenAI GPT[00:54:07] And I think like there's the big distinction between what is open and what is like, Open in a way that is commercially usable.[00:54:13] Yeah. After that, I know the Dolly two post, you mentioned that you had a lot of inbound with Dolly. Yeah. 1.0. But a lot of businesses could not use it. Yeah. Because of where the data training data came from. Yes. What are some of the use cases that people have? There is, uh, a lot of it kind of like talking to your data.[00:54:30] Are there like, uh, other things that are maybe people are not thinking about using it for?[00:54:34] Yeah, so I mean, we have a number of customers who have reached out with really concrete use cases around customer support ticket resolution. One of the things that a lot of business open AI's models are incredibly powerful, and Databricks wants to be a business where you can use the right tool for the job.[00:54:55] Like if you have information from the public web, let's say you have forum posts, right, that you need to synthesize and process, that's just not sensitive information. You should be able to use truly whatever model. That might be a fine-tuned model that is like laser focused on your problem. It might be a general instruction following model and, and sort of whatever kind of intelligence GPT4 is, it's, you know, it's quite powerful.[00:55:20] You should be able to use those tools. There are definitely use cases in the enterprise where it's like, I either just, I'm not interested in sharing this ip. You know, these are effectively our state secrets. Or from a regulatory and compliance standpoint. I just can't send this data to a third party sub-process or something.[00:55:38] Even as quotidian is like, I just really don't want to go through procurement on this. You know, like it's kind of around those, um, I have some reasons to keep this in house. A lot of use cases like that and that, you know, I'm not a lawyer and so I won't speculate on the sort of actual licensing considerations or the actual obligations, but it's just like people like to be able to move confidently and what we've done with Dolly is make it super clear.[00:56:09] This model and this data set are licensed for commercial use. You can build a business on the back of this. And that, I think is a big part of why the response has been so positive.[00:56:19] Open Source Licensing for AI Models[00:56:19] Hugging face has, uh, the rail license responsible, um mm-hmm. AI license, which isn't recognized as open source yet. So that was the whole problem with stable diffusion, that it's just unclear cuz this, this is completely new license that is, uh, unproven.[00:56:32] But I just find it interesting that the existing open source licensing regime is mostly around code. And right now, you know, the, the value has shifted from code to the waits.[00:56:43] Yes. 
I think we can go on a three hour rant about the open source initiative and like who decides what an open source license is.[00:56:51] But I think there's a, I think the approach of like, hey, We know what commercial uses. Like this is good for it. Yes, it's good. You're not gonna have to worry about us suing you. It's like, you know, the semantics of it. Clear is always better. Exactly. It's like we don't need to be approved by the osi. Yeah.[00:57:07] You're gonna be okay. Just[00:57:09] Why Open Source Models?[00:57:09] to kind of like continue, like why open source? Yeah. I think that like it is with many eyes, all bugs are shallow. I think the reality is that like we do not know what the challenges we face with AI systems will be. Mm-hmm. And that the likelihood that we can get it a representative and comprehensive solution to the challenges they present by putting it in public and creating research artifacts that people who deal with ethics bias, ai, safety, security, these really sort of thorny issues, that they can take a hard look at how the actual thing is built and how it works and study it comprehensively rather than, Hey, we've got a team for that.[00:57:50] You're gonna mm-hmm. Just, you're just, we're just gonna need you to trust our work. I think I wanna be in that the former future rather than sort of like, I, I hope that people have done this correctly. I hope that this is somebody is taking care of this.[00:58:05] Moving Models[00:58:05] When people[00:58:06] evaluate this, how do you think about moving between models?[00:58:10] You know, obviously we talked about how the data set kind of shapes how the model behaves. Hmm. There's obviously people that might start on open AI and now they wanna try dollies. Yeah. Like what are some of the infrastructure there that maybe needs to be built to allow people to move their prompts from model to model?[00:58:26] Like to figure out, uh, how that works.[00:58:28] That's really interesting. Um, because you see even like moving between GPT3.5 and GPT4 that the behavior, like some things that were not possible on three five are No, I mean, many, many things that were not possible on three five are not possible on four, but you kind of want like slightly different problem formula, like slightly different prompt formulations or.[00:58:51] It's kind of like you want regression tests for prompts, and you could see like an automated system, which is uh, helps design a prompt such that the output of this new model is isomorphic to the outputs of the previous model. And sort of like using a language model to iterate on the prompt. So it just kind of evolves it to like adapt to the new model.[00:59:13] I have two beautiful boys who are, they're just incredible humans and my friend Ben and I built them a, an interactive choose your own adventure storytelling book that uses ChatGPT to generate stories and then options within those stories, and then uses open AI's image generation model Dolly to illustrate.[00:59:36] Those options. And then the kids can kind of choose their way through these stories. And the thing that you really like when you start to really push these things for more than just like single turn prompt response and I'm, I'm, you know, it's fine if it's language and you really need it to be like an api.[00:59:52] Is that like 19 times out, 20 it's like an p i and then the 20th generation. It's like just a totally different format. And he just like, you really like try to in the system prompt really seriously. 
[01:00:34] Learning in a Simulation

[01:00:34] When you mentioned the backcountry trip example before, I was like, yeah, it would be so cool if you had a simulation where, okay, this is the list you had -

[01:00:44] now I have this game where I'm putting a character with that inventory and seeing if they survive in the backcountry. Because, you know, the first time I went to Yellowstone to camp, I forgot to pack a fly for my tent and obviously it rained. That's because, you know, you get punished

[01:00:58] right away.

[01:00:59] Yeah. That's the environment providing you with a gradient. Exactly - update your model weights. You should be grateful to have such an excellent... yeah.

[01:01:06] The evolutionary piece that is missing with these models is that they cannot die. They cannot break an arm. When they make suggestions, there's no actual repercussion for them. So I'm really curious if, in the future, it's: okay, you wanna make a poem - lovely poem - now we're gonna send this to actual people, and if you get rejected, your model's gonna

[01:01:28] Why Model Reflexion and Self Criticism Works

[01:01:28] die. So I think one of the things that's cool about LangChain, for example - we all know they're doing awesome work and building useful tools - is that these models can tell if they're wrong.

[01:01:38] You can ask a model to generate an utterance, and that next-token prediction loss function may not capture factuality - you may hallucinate something, you may make something up - but then you can show that generation to the same model and ask it to tell you if it's correct or not. And it can recognize that it's not. I think that is directly a function of the attention weights, and that you can attend to the entire generation.

[01:02:03] Whereas for next-token prediction, all the model can see is the prefix, and it's choosing stochastically - frequently it's a weighted sample from the distribution over that softmax output vector, which does not have any reference to factuality. But when you resubmit to the model and you give it the entire generated passage - judge it in its completeness -

[01:02:25] well, now it can attend to all of the tokens simultaneously, and it's just a much, much easier problem to solve. Oh, that's a cool insight. Yeah - this is reflection. You can just see what you said, and the model may contain enough information to judge it.

[01:02:41] And so it's kind of like subjecting your plan to an environment and seeing how it performs. You could probably ask the model - I mean, we can try this today - here's my plan for a trip: critique it. What are the things that could go wrong with this inventory? And I think there's one trajectory for this class of technologies, which would be self-reflexive models where the gains are not superlinear:

[01:03:10] you don't get anything more than what is already contained in the models, and you just kind of saturate, and it's like, okay, you need human feedback. There's another scenario, which is the AlphaGo scenario, where models can play themselves, and in observing their behavior and interactions they get stronger and better and more capable.

[01:03:31] That's a much more interesting scenario, and this idea that in considering the entire generated sample I have more insight than when I'm just sampling the next token suggests that there may be that escape potential, in terms of getting unsaturated returns on quality.
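A minimal sketch of the generate-then-judge loop being described: draft with one call, then resubmit the whole draft so the model can attend to the full passage at once. `generate` is a placeholder for any chat-model call; the prompts and retry policy are illustrative assumptions, not a fixed recipe:

```python
from typing import Callable

def generate_with_self_critique(generate: Callable[[str], str],
                                task: str, max_rounds: int = 3) -> str:
    """Draft an answer, show the complete draft back to the same model,
    and revise until the model-as-judge is satisfied (or we give up)."""
    draft = generate(task)
    for _ in range(max_rounds):
        verdict = generate(
            f"Task: {task}\nDraft answer:\n{draft}\n"
            "Judge the draft in its completeness. Reply with the single word "
            "CORRECT, or explain what is wrong."
        )
        if verdict.strip().upper().startswith("CORRECT"):
            break
        # The judge saw the whole generated passage at once - an easier problem
        # than next-token prediction, which only conditions on the prefix.
        draft = generate(f"Task: {task}\nPrevious draft:\n{draft}\n"
                         f"Critique:\n{verdict}\nWrite an improved answer.")
    return draft
```

Whether loops like this saturate (the self-reflexive scenario) or compound (the AlphaGo scenario) is exactly the open question raised above.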
[01:03:51] Lightning Round

[01:03:51] Yeah, this was great, Mike. We're kind of out of time - maybe we can jump into the lightning round next.

[01:03:55] We'll read you the questions again if you wanna think about it. So, okay: favorite AI

[01:04:00] product? This is a boring answer, but it's true: Google Maps. And how is it AI? A, they're recently doing stuff with NeRF so that, using multiple different photos, you can explore the interior of a business.

[01:04:15] They are also undoubtedly - I mean, I don't know the team at Google doing this - digesting the sum total of human knowledge about each entity in their graph, to process that language and make judgments about what each business is. And listen, maybe it's not an AI product, but it is categorically a machine learning product, and it's also an amazing product.

[01:04:37] You forget how much you use it. I was at the coffee shop around the corner; I used it to figure out where to come. It was literally a 150-meter walk - it's just that reflexive. But it's also great from an information visualization standpoint. I opened our conversation saying that I think a lot about maps: it is adaptive at multiple scales, it will coarsen and refine the information that's displayed - which requires many, many judgments to be made simultaneously about what is relevant - and it's personalized.

[01:05:08] It will take your intent: are you driving? Okay, show me parking garages preferentially. So it's adaptive in such subtle ways that we don't notice it. And I think that's what great product design is - like good editing, you don't notice it when it's good. And so I think Google Maps is an incredible AI/ML

[01:05:28] product accomplishment. Google Maps, yeah - it's a great pick. Well, and they need the help.

[01:05:36] It is actually the best ad real estate, right? There should be a ton of people buying ads specifically on Google Maps, so they just show up. I don't know how big that business is, but it's gotta be huge.

[01:05:45] Yeah. And then my subsequent thing is: there should be Google Maps optimization, where you would name your business "Best Barbershop" and it would show up as "Best Barbershop" when you look at it.

[01:05:55] Of course, right? It's like "AAA Lock Picks", right at the front of the Yellow Pages.

[01:06:01] Favorite AI people and communities you wanna shout out?

[01:06:03] You know, I don't think that I have necessarily anything super original to say on this front.
To the best of my understanding, this is an all-volunteer effort, and it's incredible what they have been able to accomplish, along with the whole constellation of projects around it.

[01:06:20] Additionally - and I think this is what you would expect as an answer to this question - the Hugging Face group is kind of like Google Maps, in a way, in the sense that you forget how complicated the thing it's doing is. And you see specific people - I was thinking of Stas, who works on a lot of the DeepSpeed stuff - who are just super conscientious and engaged with the community. The entire team at Hugging Face is incredible, and they have made a lot of what is happening in the industry at large possible.

[01:06:53] And I think this is the power of open source, ultimately: the Transformers library, Diffusers, all of it. It's just great - a delightful product experience.

[01:07:03] I once had Hugging Face explained to me as "free Git LFS hosting". I think they've moved beyond that in recent years.

[01:07:11] Yeah, a bit. It's quite strong work.

[01:07:14] A year from now, what will people be the most surprised by in AI? You already

[01:07:19] hinted at one. Yeah, but I think that won't be surprising - I think we're on a ballistic trajectory to having an open LLaMA reproduction. So here's something I think will happen that we don't, socially, have a lot of priors for how to deal with: this Ghostwriter track just came out, this Drake and The Weeknd

[01:07:40] AI collaboration. Drake has thoughts? Yeah, he has thoughts. It's not really my thing - I like a different breed of hip-hop - but as an example of the class, it does sound like a thing I might hear on the radio. So there's a world here - SkipFlag, for instance, was this knowledge graph that builds itself from your workplace communication.

[01:08:02] Think about all of the times that you have expressed your position and intent around a given topic, in workplace communication or on the internet at large. I think Character AI is going in this direction, where you're going to be able to talk to high-fidelity avatars that represent the beliefs and intents of people around you, and it will be both useful and convincing.

[01:08:27] I don't know that society has good models for how to adapt to that existing, and I suspect, just on the basis of what people are already doing, it will happen rather quickly.

[01:08:41] Listen, you can definitely tell it's really good. I'm really curious what the long-term results are gonna be, because once you listen to it once or twice, you can tell that it's not really a coherent, written song.

[01:08:55] But to me the funniest thing is that Drake and The Weeknd actually never made a song together again, because they kind of had a falling out between them, and The Weeknd had one song where he said, "if you made me then replace me" - because Drake had basically been hinting that if he hadn't put The Weeknd on his album, he would've never become popular.

[01:09:13] Okay. So it's funny that now there's this AI-generated song from The Weeknd.
It just kind of puts the "if you made me then replace me" line in a different context. But I think this will be super interesting for the labels - a lot of them own the masters to a lot of this music, a lot of the rights. So at some point it's much easier to generate music this way than to do it in person. But I still think you need the artist's touch.

[01:09:39] Just: what is it that is unique? I think artists frequently - I know in my own writing and sort of creative process - you sometimes feel like you're just going through the motions.

[01:09:50] And it's funny how we have ways of talking about this: "a phrase rolls off the tongue" - that's very much like a causal language model. We talk about talk tracks: "I have a whole spiel." You talk to a startup founder and you're like, oh my God, how many times have you said that - very tight variance on this three minutes.

[01:10:10] That's good. Yeah, it's fine; it's just a thing that we do. And so, touching on this idea that some of what we consider creative acts may not actually be creative acts: is there a market pressure to favor things that are truly creative, versus just formulaic rehashing of the same essence?

[01:10:29] I think art that transcends boundaries is often the most interesting art to engage with, where it truly does confront you with something you haven't considered before. I hope that's the place where humans play. And for the "oh, I just need some lo-fi study beats" crowd - just gimme an infinite stream, I'm fine.

[01:10:52] You've seen that chart of pop songs declining in terms of key changes, key changes and

[01:10:58] octave ranges. Completely. Completely. And, I mean, we used to have

[01:11:02] Bohemian Rhapsody, and

[01:11:03] yeah, it's a great example of something that would not be priced appropriately.

[01:11:08] This is why I think Perplexity AI is just very well named, because we want more perplexity in our lives. Yes - by the way, shout out for Replika AI. I don't know if you've come across them, but absolutely, they are working on the digital twin stuff. Okay: AI request for startups. An AI thing you would pay for if someone

[01:11:21] built it.

[01:11:22] Well, the LLM ops stuff for sure. Just make it easy to generate and evaluate samples using multiple modalities - and by that I mean not images and text, but rather human judgments, quantitative benchmarks, and qualitative samples that I am able to evaluate myself. But for other AI startups: your wife has family that works in the park system.

[01:11:49] Everybody has access to effectively the same information about what's interesting in the outdoors, so you get to a lot of trailheads with very, very tight parking lots, and it's difficult to get to a lot of these beautiful places. Muir Woods is another example: you've gotta reserve a parking spot in the woods - that's a bummer.

[01:12:12] But I think the US in particular is so unique in that we have such expansive public lands, and I think there are a lot of really majestic and beautiful places in the world that are not written about. And so I think, from a geospatial standpoint, you could imagine representing each tile on a map like a word2vec

[01:12:39] embedding, where you look at the context in which a location exists and the things people have said about it, and you kind of distill the essence of a place. And then you can, given a statement about how I wanna spend my day, route traffic more evenly on the surface of the earth, so that we are not all competing for the same fixed pool of resources.
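A minimal sketch of that tile-embedding idea: distill what people have said about each map tile into a vector, embed a statement of intent, and rank tiles by cosine similarity. The `embed` callable is a placeholder for any text-embedding model, and the tile names and texts would come from your own geospatial corpus - both are assumptions for illustration:

```python
import math
from typing import Callable, Dict, List

def rank_tiles(embed: Callable[[str], List[float]],
               tile_texts: Dict[str, str], intent: str, k: int = 3) -> List[str]:
    """Rank map tiles by how well their accumulated descriptions
    match a user's statement of how they want to spend the day."""
    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    query = embed(intent)  # e.g. "quiet dirt roads, no crowds, good views"
    ranked = sorted(tile_texts,
                    key=lambda tile: cosine(embed(tile_texts[tile]), query),
                    reverse=True)
    return ranked[:k]
```

Routing people across the top-k tiles, instead of sending everyone to the same trailhead, is the "spread traffic more evenly" part.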
[01:13:03] I don't know that that's something monetizable in a "is this gonna be the next $10 billion business" sort of way. But there's so much public land and there are so many back roads, and the days where I am rumbling down a dirt road with my brother are just the best days of my life.

[01:13:22] And I want more of those. I want systems that help us live as fully as possible as humans. Yeah, there's definitely

[01:13:29] a lot of that - you've got the most popular trails, and everybody wants to be there. And then there are the less-known ones. And, back to the text-to-backpacking idea, a lot of people don't know what they're gonna find, you know?

[01:13:41] There's not, like, YouTube reviews of all these trails. Totally. But you could see it. So I think a way to better understand that would be cool.

[01:13:49] I mean, just to riff on this a little more before we wrap: I do think there's AI technology as a swarm management

[01:13:59] tool - being able to perceive sensor and camera inputs from multiple different agents in a system. And I think about ultra-low-powered gliders as an example. I mean, there are tools now where you can, for 180 bucks, get a satellite to take a picture today of like a five-by-five-kilometer area.

[01:14:21] I just wanna be able to run recon fleets on the backcountry and get up-to-date trail conditions. I don't know that anybody's gonna make any real money doing this, but if it existed, I would use it. So maybe I should build it. Maybe. Yeah, exactly - open source it. It's part of Databricks' longstanding commitment to open source for diversifying new markets.

[01:14:44] Awesome. Mike, it was great

[01:14:45] to have you. Oh, this was a... yeah. Get full access to Latent Space at www.latent.space/subscribe

KI und Mensch
Elon Musk announces TruthGPT, Google accelerates AI development, new integrations from Adobe, Blackmagic for video AI, and much more... | AI News and Updates | Episode 6, Part 1

KI und Mensch

Play Episode Listen Later Apr 20, 2023 73:21


In the new episode we talk about the latest AI announcements from Elon Musk, Google, Adobe, Amazon, Stability AI and other open source providers.
Follow us on:
Twitter: https://twitter.com/KIundMensch
Twitch: https://www.twitch.tv/kiundmensch
Youtube: https://www.youtube.com/@kiundmensch
Link list:
#News
Google CEO Sundar Pichai Says Search to Include Chat AI - WSJ
https://www.wsj.com/articles/google-ceo-sundar-pichai-says-search-to-feature-chat-ai-2fa0f54c
https://www.cnbc.com/2023/04/17/google-ceo-sundar-pichai-warns-society-to-brace-for-impact-of-ai-acceleration.html
https://www.nytimes.com/2023/04/16/technology/google-search-engine-ai.html
Google planning new search engine while working on new search features under Project Magi
https://searchengineland.com/google-planning-new-search-engine-while-working-on-new-search-features-under-project-magi-395661
Google's recommendations for AI regulation:
https://ai.google/static/documents/recommendations-for-regulating-ai.pdf
Google merges the DeepMind and Google Brain teams:
https://www.deepmind.com/blog/announcing-google-deepmind
https://www.googlewatchblog.de/2023/04/google-deepmind-brain-ki/
China plans its own AI regulation:
https://www.heise.de/news/China-plant-KI-Regulierung-KI-generierter-Content-soll-wahrheitsgetreu-sein-8970181.html
Italy's demands to OpenAI/ChatGPT for ending the ban:
https://www.tagesspiegel.de/wirtschaft/mit-einschrankungen-italien-offen-fur-ruckkehr-von-chatgpt-9675269.html
https://techcrunch.com/2023/04/12/chatgpt-italy-gdpr-order/
Elon Musk says he will launch rival to Microsoft-backed ChatGPT
https://www.reuters.com/technology/musk-says-he-will-start-truthgpt-or-maximum-truth-seeking-ai-fox-news-2023-04-17/
https://www.heise.de/news/X-AI-Elon-Musk-gruendet-eigenes-KI-Unternehmen-8009545.html
https://www.ft.com/content/2a96995b-c799-4281-8b60-b235e84aefe4
Elon Musk tells Tucker potential dangers of hyper-intelligent AI
https://www.youtube.com/watch?v=a2ZBEC16yH4&feature=youtu.be
Adobe and Blackmagic announce new AI integrations for Premiere and DaVinci Resolve:
https://nofilmschool.com/blackmagic-davinci-resolve-18.5
https://blog.adobe.com/en/publish/2023/04/17/reimagining-video-audio-adobe-firefly
Bloomberg announces its own AI model for finance and company information:
https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/
Amazon Bedrock: this platform is meant to keep up with GPT-4
https://t3n.de/news/amazon-bedrock-diese-plattform-soll-mit-gpt-4-mithalten-koennen-1547339/
AI Readiness Report from Scale AI:
https://scale.com/ai-readiness-report
OpenAI's CEO confirms the company isn't training GPT-5 and 'won't for some time'
https://www.theverge.com/2023/4/14/23683084/openai-gpt-5-rumors-training-sam-altman
*News on open source LLMs:*
Open Assistant: https://open-assistant.io/
Code: https://github.com/LAION-AI/Open-Assistant
Red Pajama: https://www.together.xyz/blog/redpajama
Dolly 2.0: https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm
Announcement of StableLM:
https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models
StableLM demo on Hugging Face: https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat
StableLM for download: https://github.com/stability-AI/stableLM/
GPT4ALL: https://github.com/nomic-ai/gpt4all
Playgrounds for LLMs (for comparison):
Chat with Open Large Language Models (with LLaMA, Alpaca, Dolly, Vicuna & others): https://chat.lmsys.org/
Playground for LLM comparisons: https://play.vercel.ai/
Build your own playground: https://github.com/nat/openplayground

Single Season Record
Bunheads - Episode 15 - "Take The Vicuna" (with Sulia Altenberg)

Single Season Record

Play Episode Listen Later Aug 30, 2022 59:15


Sulia Altenberg is back for the first time. In an alternate universe, you heard her talk about episode 10 of Bunheads, but in this one, it had to be 15, which maybe isn't as pleasant, but as you learned last week, in every universe, it could have been worse.

Off-Farm Income
OFI 1407: Stabilizing Your Market By Raising Luxury Products | Jane Lavene | Jefferson Farms Natural Fibers

Off-Farm Income

Play Episode Listen Later Jun 10, 2022 46:35


In today's episode I get to learn about a species of livestock that I had never heard of before: the Pacovicuna. A Vicuna is the wild ancestor of the alpaca, and it has characteristics in its fiber that just cannot be replicated by llamas or alpacas. Jane Lavene and her husband, Arthur, raise the largest herd of Pacovicunas in the U.S. on their farms in the Denver, Colorado and Salida, Colorado areas. In today's episode Jane teaches us about the Vicuna, the Pacovicuna, and the unbelievable process of starting a herd of these animals some 27 years ago. In addition, she discusses her fiber business and goes into great detail about the sophistication of maintaining the genetic integrity of this animal, as well as selling the best and most consistent fiber possible. Jane also explains a lot about the fiber and how it is graded and sorted. In addition to this part of the business, Jane talks about the benefit of selling into a luxury market. The fiber and felt from a Vicuna is of such quality that it commands top dollar, which means it is used in luxury items. Selling into this market creates stable demand for Jane. And if all the information about this highly sophisticated, value-added business were not enough, Jane also discusses her multiple-streams-of-revenue approach, which includes horse boarding, vacation rentals and much, much more. Enjoy this episode! More Places You Can Listen to Off-Farm Income And Matt Brechwald:

Real Vision Crypto
Prime Trust Has the Ultimate Set of Fintech Tools

Real Vision Crypto

Play Episode Listen Later May 24, 2022 11:52


Blockchain startups and TradFi companies face many challenges when they choose to enter the cryptoverse. To build an ecosystem capable of supporting mass adoption requires companies with the proper fintech tools and the knowledge to use them properly. One of those companies is Prime Trust, which offers an all-in-one financial infrastructure service to solve issues like custody, compliance, payment settlement, API documentation, and many more. Prime Trust CFO Rodrigo Vicuna joins Real Vision's Ash Bennington to talk about how such functionalities are essential to scaling successful crypto companies. Vicuna also shares his thoughts on liquidity, best execution, and retail misconceptions. Recorded on May 17, 2022. Learn more about your ad choices. Visit megaphone.fm/adchoices

A Fiber Life | Our Guanaco Adventure
The Darker Side of Our Guanaco Adventure

A Fiber Life | Our Guanaco Adventure

Play Episode Listen Later Dec 31, 2021 42:22


If you are going to change your life, you are going to encounter obstacles. And some of those obstacles might even be dangerous. Welcome to A Fiber Life. In episode two, Lisa and Greg get real about the darker parts of their journey - the parts that should have made them quit. From shoveling loads of hard-packed poop, to the death of their mentor, to difficulties with the guanacos themselves that left them physically and mentally scarred, Lisa and Greg leaned into their decision to be farmers. You see, the way you react to these scary things can mean the difference between actually making the life you want, or going back to where you came from and never making a change at all. If you don't let the fear take over and instead lean in - if you take a breath and keep going - you might be surprised at how good things can become. Click here to read the full episode and transcript. Learn more about Lisa, her guanacos, and more at afiberlife.com

A Fiber Life | Our Guanaco Adventure
Our Guanaco Adventure: How it Started

A Fiber Life | Our Guanaco Adventure

Play Episode Listen Later Dec 31, 2021 23:46


There are these moments in life - let's call them 'YES moments'. They happen when we stumble across a really good thing and say 'YES'. Welcome to A Fiber Life. In episode one, owners of Aliento Luxury Fiber Farm, Lisa and Greg, share the story of how they quit their busy, successful and stressful jobs, moved to an island, and started their lives over as fiber farmers. They share how they began raising llama-like creatures called guanacos and learned to harvest their fiber to create garments. How did this happen? Why did this happen? Did they know what they were doing? Lisa answers all these questions while also challenging listeners to tune into that nagging part of your brain that tells you to say "YES" to crazy ideas... and how this may actually be the sanest thing you can do. Click here to read the full episode and transcript. Learn more about Lisa, her guanacos and more at afiberlife.com

Nature Magic
Episode 33 Ramiro Crego is face to face with a wild puma

Nature Magic

Play Episode Listen Later May 17, 2021 64:33


Today Mary is talking to Ramiro D. Crego. Ramiro is a conservation biologist from Argentina, living on the farm at Burren Nature Sanctuary with his wife Nora and their rescue cat, Alba, and currently working remotely as a postdoctoral fellow at the Smithsonian Conservation Biology Institute, USA. Ramiro has been studying birds and mammals inhabiting semi-arid grasslands of Argentina and Kenya, and temperate forests in southern Chile. In his studies, Ramiro investigates how our human actions are affecting ecosystems, to find conservation solutions that can benefit both wildlife and the people that live on and depend on them. Ramiro is also interested in addressing the interconnection between ecology and society from a biocultural conservation approach, using metaphors that integrate ecological research with philosophy and Native American worldviews.
www.ramirodcrego.com
Recommended books:
The Elephant Whisperer, by Lawrence Anthony
Are We Smart Enough to Know How Smart Animals Are?, by Frans de Waal
Contact Mary at mary@burrennaturesanctuary.ie
Check out Burren Nature Sanctuary at www.burrennaturesanctuary.ie
If you are interested in leasing our award-winning cafe and shop, please check it out at https://www.daft.ie/commercial-property-for-rent/burren-nature-sanctuary-cloonasee-kinvara-co-galway/3222122

Alpaca Tribe
What do you know about Alpacas? Take the quiz

Alpaca Tribe

Play Episode Listen Later Apr 28, 2021 18:47


We all like a challenge - so here is the Alpaca Tribe Quiz! See what you know already.
Questions:
How many toes does a camel have?
What is the group name for llamas, alpacas, guanacos and vicunas?
What are the two types of alpacas?
What is the nature of the land that is native to alpacas?
What language do alpacas speak?
How heavy is a bag of candy-floss?
What are baby alpacas called?
Answers in the podcast and on the website: https://alpacatribe.com/?p=2195
Have a great day!

The Pearl of Great Price
Apr 5 Laura Vicuna patron saint of abused children

The Pearl of Great Price

Play Episode Listen Later Apr 4, 2021 7:51


The sad story of Laura Vicuna, born into a wealthy Chilean military family that fell from grace. Rescued by nuns, all she wanted was for her widowed mum to escape from an abusive relationship.

In The Scene: Indie Arcade Wave
In The Scene Ep #43: The Story of Black Emperor with The Creator Tomas Vicuna

In The Scene: Indie Arcade Wave

Play Episode Listen Later Apr 2, 2021 30:52


This week on In The Scene we speak about an awesome indie arcade game called Black Emperor. This game was created by Tomas Vicuna and has been doing very well in arcades. The game was inspired by Japanese Bosozoku culture: you drive your motorcycle through the desert, trying to stay on the road as long as you can. We hear the story of how Tomas transitioned from film to game development to gain a new perspective for filmmaking, the challenges he went through to create the game, and how Bumblebear Games found the game and helped make it what it is today. If you like what we are doing here at Indie Arcade Wave, don't forget to Subscribe, Comment, and Share the show so we can shine more light on these great developers. We appreciate you! Black Emperor: Instagram: https://www.instagram.com/goblackemperor/ Website: https://www.goblackemperor.com/ Twitter: https://twitter.com/goblackemperor Facebook: https://www.facebook.com/goblackemperor Indie Wave Arcade Facebook: https://www.facebook.com/indiearcadewave/ Instagram: https://www.instagram.com/indiearcadewave/ Youtube: https://bit.ly/3e3WQWO Twitter: https://twitter.com/indiearcadewave Discord: https://discord.gg/6GntJQN Podcast: https://anchor.fm/indiearcadewave --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/indiearcadewave/support

God's Own Football
Kibu Vicuna sacked by Kerala Blasters - Episode 06

God's Own Football

Play Episode Listen Later Feb 22, 2021 44:20


The past week was a VERY eventful one in the ISL - and top of that list would be the parting of ways between Kerala Blasters and Kibu Vicuna. In the latest episode of God's Own Football's Malayalam podcast, we discuss whether it was the right move considering the circumstances, and what it means for the Yellow Army in the future.

Gentlemanualen
133. Kash is King - All about knitwear for autumn

Gentlemanualen

Play Episode Listen Later Sep 17, 2020 39:36


In this week's episode we cover what might be autumn's coziest garment. We talk about knitted qualities in cashmere, and about vicuña, the ultimate luxury quality. We also talk about the history behind this week's partner, Longines, and hint at their new Spirit collection. See acast.com/privacy for privacy and opt-out information.

Cafeteria Catholics
Blessed Laura Vicuna

Cafeteria Catholics

Play Episode Listen Later Jun 24, 2020 16:16


www.catholiticking.com

How To Fail With Elizabeth Day
S3, Ep6 How to Fail: James O'Brien

How To Fail With Elizabeth Day

Play Episode Listen Later Feb 6, 2019 44:20


Today's guest is quite possibly one of the most intimidating people to interview given his own reputation as a fearsome interrogator. But fine, I'll give it a go. James O'Brien is an LBC radio presenter who hosts a daily phone-in discussion show which attracts a million listeners a week. Clips of him dismantling the ignorance and prejudice of politicians, bigots and racists with his customary cool, hard logic have gone viral on more than one occasion. His first book, How To Be Right In A World Gone Wrong, shot into the Sunday Times top 10 bestseller list last year and tackles everything from Islam to feminism via Brexit and Donald Trump. It is a brilliant and entertaining read, and in it O'Brien is not afraid to admit when he has been wrong (an infrequent occurrence, admittedly). He joins me to talk about failing at Physics GCSE, being adopted, going to a posh boarding school (and being expelled), how he got turned down by every single graduate traineeship on Fleet Street and ended up working as a tailor's assistant and selling Ronnie Wood a £15,000 Vicuna coat (true story). Along the way we also chat about the time Sean Connery hit him on the nose, his failure to record a CBeebies bedtime story, and believing 'the angriest people are the ones who are most in need of help.' How To Fail With Elizabeth Day is hosted by Elizabeth Day, produced by Chris Sharp and sponsored by 4th Estate Books. The book of the podcast, How To Fail: Everything I've Ever Learned From Things Going Wrong by Elizabeth Day, is available to pre-order here. How To Be Right In A World Gone Wrong by James O'Brien is published by WH Allen. Social Media: Elizabeth Day @elizabday James O'Brien @mrjamesob Chris Sharp @chrissharpaudio 4th Estate Books @4thEstateBooks

The Sonic Society
Episode 585- Late Presents

The Sonic Society

Play Episode Listen Later Jan 8, 2019 46:00


Tonight, in a late Christmas present, Sassquach Radio with Ashley Quach and Paula Deming bring us episode one of season three of their acclaimed "Deck the Halls". That is followed by an Electric Vicuña Productions Darker Musings adaptation of the John Collier short story "The Chaser", written by Jack J. Ward and produced by Scott Mosher.

Moonlight Audio Theatre
THE MOST DANGEROUS GAME

Moonlight Audio Theatre

Play Episode Listen Later Dec 31, 2018 60:27


THE MOST DANGEROUS GAME (MYSTERY) Adapted from the classic short story by Richard Connell. ELECTRIC VICUNA PRODUCTIONS

PØLCAST
PØLCAST #5

PØLCAST

Play Episode Listen Later Nov 18, 2018 70:26


Pølcast #5 - Melodic Techno / House
"Praise You (Purple Disco Machine Extended Remix)" - Fatboy Slim
"Vicuna" - Kiwi
"The Brighter The Light (DJ Tennis Remix)" - The Juan Maclean
"Trinidad Dreams (Extended Mix)" - Analog Sol
"..."
"Surumu (Original Mix) [You And Your Hippie Friends]" - Premiere: Sascha Funke
"White BMW (Legowelt remix)" - V.U.L.V.A.
"Reflection" - The Organism
"Sensory" - André Hommen
"Studies In Dance Theory" - Adriatique
"Atacama" - Audiofly & Patrice Bäumel
"As Above So Below (PAMPA032 B)" - &ME
"Purge (KM045)" - Rampa

The Sonic Society
Episode 576- Virtual Votes

The Sonic Society

Play Episode Listen Later Nov 6, 2018 58:37


Tonight, with David off, Jack is looking forward to the American mid-term elections... maybe a little too much! He's got a new Electric Vicuna Productions Wavefront Anthology show entitled "Voting is Anonymous", featuring Jeffrey Billard, Lothar Tuppan, Jan Didur, Tanja Milojevic and Pete Lutz thinking back on Bill Hollweg. A second short from Pete Lutz and the 11th Hour Productions group called "A Real Bedtime Gory" and a SPECIAL Bells in the Batfry with John Bell round out the post-Halloween fun!

Stumbling Ballerinas
Stumbling Ballerinas: A Bunheads Podcast - Take the Vicuna

Stumbling Ballerinas

Play Episode Listen Later Jul 8, 2018


It took 6 months but we're back! In this episode we discuss Warren Buffett's position on the world's richest people list, Megan 1's extensive Friends knowledge (which some find odd, but she thinks is perfectly normal thank you very much), cat genitalia, and at some point in there we also manage to watch Take the Vicuna. As always you can contact us through Facebook (Stumbling Ballerinas), Twitter (@drunkballet), tumblr (stumblingballerinas), or gmail (stumblingballerinas@gmail.com).

The BradCast w/ Brad Friedman
'BradCast' 6/29/2018: (Guest: Dan Vicuna of Common Cause)

The BradCast w/ Brad Friedman

Play Episode Listen Later Jun 30, 2018 58:47


Das Kalenderblatt
#01 Laura Vicuna born, protégée of the Salesians

Das Kalenderblatt

Play Episode Listen Later Apr 4, 2018 3:19


Laura Vicuña was born in Chile on April 5, 1891. Civil war and arbitrary rule made life hard for the child; she found protection - at least at first - with the Don Bosco Sisters.

Gilmore Guys
Bunheads 115 - Take the Vicuna (with Margaret H. Willison)

Gilmore Guys

Play Episode Listen Later May 8, 2017 150:07


Margaret H. Willison (@MrsFridayNext) joins the Bunhead Bros to gab all about Episode 15 - Take the Vicuna This episode is sponsored by Squarespace

Moonlight Audio Theatre
THE TELL-TALE HEART

Moonlight Audio Theatre

Play Episode Listen Later Dec 27, 2015 13:16


THE TELL-TALE HEART (HORROR) Summerstock Playhouse is a seasonal series in which fans of old time radio produce new presentations of OTR scripts. In the Summer of 2015, David Ault and Jack Ward brought this classic Edgar Allan Poe short to the Summerstock Playhouse. Co-starring Jack Ward, and special guest stars from The No Sleep Podcast, and showcasing the heart-stopping terror-filled music of Sharon Bee. Electric Vicuna Productions

Moonlight Audio Theatre
SPACEWAYS – EPISODE 5 OF 5

Moonlight Audio Theatre

Play Episode Listen Later Dec 23, 2015 15:00


THE MAN CALLED METHUSELAH (SCI-FI) When Stephen "Biff" Straker takes part in a secret experiment to prove Einstein's theory of relative time, he is thrown past the 1990's into the 31st century! Pulp action and adventure in the classic adventuring tradition of Buck Rogers, Flash Gordon and Dan Dare. Biff and Captain Bara of the Rat Men fight over the Pool of Death and our heroes finally escape only to discover an incredible island in the clouds, ruled by the man called Methuselah! Electric Vicuna Productions

Moonlight Audio Theatre
SPACEWAYS – EPISODE 4

Moonlight Audio Theatre

Play Episode Listen Later Dec 17, 2015 16:36


THE POOL OF DEATH (SCI-FI) When Stephen "Biff" Straker takes part in a secret experiment to prove Einstein's theory of relative time, he is thrown past the 1990's into the 31st century! Pulp action and adventure in the classic adventuring tradition of Buck Rogers, Flash Gordon and Dan Dare.  Commander Degan, Doctor Thermopoli and Doyen are arrested along with Biff and Varger the Patrol Hound by the Rahoad. Desperate to acquire the power of flight the Rat Men challenge Commander Degan in the Pool of Death! Electric Vicuna Productions

Moonlight Audio Theatre
SPACEWAYS – EPISODE 3

Moonlight Audio Theatre

Play Episode Listen Later Dec 16, 2015 13:43


THE FALLEN ANGEL (SCI-FI) When Stephen "Biff" Straker takes part in a secret experiment to prove Einstein's theory of relative time, he is thrown past the 1990's into the 31st century! Pulp action and adventure in the classic adventuring tradition of Buck Rogers, Flash Gordon and Dan Dare.  Commander Angela Degan and Professor Ildeen Thermopoli with a secret "cargo" crash lands on Terra Prime and are pursued, along with Biff, by the Rat Men of Gueln Sector! Electric Vicuna Productions

Moonlight Audio Theatre
SPACEWAYS – EPISODE 2

Moonlight Audio Theatre

Play Episode Listen Later Dec 15, 2015 15:03


THE HAUNTED WORLD (SCI-FI) When Stephen "Biff" Straker takes part in a secret experiment to prove Einstein's theory of relative time, he is thrown past the 1990's into the 31st century! Pulp action and adventure in the classic adventuring tradition of Buck Rogers, Flash Gordon and Dan Dare. Waking up in the 31st century, Biff Straker is discovered and arrested by the Patrol Hound, Varger, and pursued by the Rat Men gangs from the decayed ancient city of Guelph. Electric Vicuna Productions

Moonlight Audio Theatre
SPACEWAYS – EPISODE 1

Moonlight Audio Theatre

Play Episode Listen Later Dec 14, 2015 18:24


THE FUTURE IS NOW (SCI-FI) When Stephen "Biff" Straker takes part in a secret experiment to prove Einstein's theory of relative time, he is thrown past the 1990's into the 31st century! Pulp action and adventure in the classic adventuring tradition of Buck Rogers, Flash Gordon and Dan Dare. Electric Vicuna Productions

The Sonic Society
Sonic Speaks- 0112- The Musical Voice of Sharon Bee

The Sonic Society

Play Episode Listen Later Nov 23, 2014 38:40


The incredible singer, musician, and composer Sharon Bee is our special guest this week on Sonic Speaks. Sharon talks to us about writing the music for Electric Vicuña productions like The Dead Line, Gate, Wavefront, and Firefly - Old Wounds.

Moonlight Audio Theatre

MESSAGES (SUSPENSE) MATURE AUDIENCE When Stanley Coakley tries to escape a secret, he finds it and murder follow him after a weekend away with messages. Electric Vicuna Productions  

Moonlight Audio Theatre
GALAXY MASTER VERSUS THE VARN

Moonlight Audio Theatre

Play Episode Listen Later May 23, 2014 9:10


GALAXY MASTER VERSUS THE VARN (HORROR) Ed Gwynn is tired of playing the greatest villain "Galaxy Master" in the movies. Unfortunately for him, so are the Varn! . From the Wavefront Anthology Series.  Starring Cayenne Chris Conroy, John Bell and M. Sieirio Garcia. Electric Vicuna Productions  

Moonlight Audio Theatre
ALONE IN THE NIGHT

Moonlight Audio Theatre

Play Episode Listen Later Apr 21, 2014 57:34


ALONE IN THE NIGHT (SCI-FI) Left alone on a Listening Post, Andrew tries to survive as the last living soul in an empty universe. From the Wavefront Anthology Series. Starring Jack Ward, Genevieve Jones, Tanja Milojevic, David Ault, and John Bell.  Electric Vicuna Productions   2013 SILVER MARK TIME AWARD WINNER FOR SCIENCE FICTION!  

Moonlight Audio Theatre

ONE BY ONE (HALLOWEEN/HORROR) Stars John Bell as Dan Cummings, the 'Old-Timey Man', just ready for retirement after thirty years as a DJ and local celebrity. This Halloween, however, an accidental crash in the Halifax harbor sends the city and Dan into a spiral of terror! Written and directed by Jack J. Ward. From the Darker Musings Anthology series of Mystery and Terror. Electric Vicuna Productions

The Sonic Society
Episode 319- Stranger Still

The Sonic Society

Play Episode Listen Later Jan 22, 2013 60:20


David is stuck in a cold cave with a blizzard roaring through the mountains, as he thinks warmly about Harry Strange and some more of CKDU's King's College Infringement Festival, including a new version from Electric Vicuna Productions of an old favourite, "Night Driving", starring Colleen MacIssac, Sonja Gould, and Dorian Lang. M. Sieiro Garcia continues as Bumper Girl.

Moonlight Audio Theatre
BORROWED TIME

Moonlight Audio Theatre

Play Episode Listen Later Nov 9, 2012 37:01


BORROWED TIME (SCI-FI) After an accident with a time travelling machine, Louis Blunt finds himself in more trouble than he could imagine in a thousand lifetimes! Starring Joe Stofko, John Bell, Lyn Cullen, and Colleen MacIssac. Electric Vicuna Productions  

Moonlight Audio Theatre
SORRY WRONG NUMBER

Moonlight Audio Theatre

Play Episode Listen Later Nov 5, 2012 36:03


SORRY WRONG NUMBER (SUSPENSE) Sonic Summerstock Playhouse 2012 presents the classic “Sorry, Wrong Number” starring Schoel Strang and the Electric Vicuna Players. Directed by Jack J. Ward and produced by John Bell. Electric Vicuna Productions  

Moonlight Audio Theatre
COACH NUMBER 6

Moonlight Audio Theatre

Play Episode Listen Later Nov 2, 2012 23:26


COACH NUMBER 6 (SUSPENSE) Frank has one time a year with his daughter and son to make up for being the father he's not. They take an annual trip across Canada on the rail. You never know who you're going to meet on the lonely tracks - even terror and death. Starring David Ault and John Bell. Electric Vicuna Productions

The Sonic Society
Sonic Summerstock Theatre Presents: Sorry Wrong Number

The Sonic Society

Play Episode Listen Later Aug 28, 2012 36:03


Tonight's finale performance of the Summerstock 2012 summer season brings us EVP's "Sorry, Wrong Number" starring Schoel Strang and the Electric Vicuna Players. Directed by Jack J. Ward and produced by John Bell from Bell's in the Batfry! Your host this week is Joe Stofko!

The Sonic Society
Episode 298- Boxing up Season 7!

The Sonic Society

Play Episode Listen Later Jun 29, 2012 51:24


We complete Season 7 with our feature from Decoder Ring Theatre's "Marvellous Boxes" Anthology and have a gander at EVP's newest Wavefront Short "Nanites" starring David Ault and Mindy Rast-Keenan! Guest starring in the finale are Bill Hollweg, Gregg Taylor and John Bell! Hold on to your lawn chairs and tanning oil, because Sonic Summerstock Theatre 2012 begins next week! And back in September for Season 8, we're just two shows away from our 300th regular season show. WOW! Thanks so much for an awesome season everyone!

Moonlight Audio Theatre
GRAVES’ SHIFT OPEN FOR BUSINESS

Moonlight Audio Theatre

Play Episode Listen Later Jun 25, 2011 57:14


GRAVES’ SHIFT OPEN FOR BUSINESS (MYSTERY) Written and produced by Canadian radio dramatist Jack Ward (The Sonic Society). Follow no-nonsense noir detective Phillipa Graves as she takes on her first big case. Electric Vicuna Productions

Moonlight Audio Theatre

HILL MANOR (COMEDY) Jonah Hill runs a Bed and Breakfast and is a most disagreeable man. The only thing worse than running a B&B is trying to make his mother, his wife and the guests from Vancouver happy! Electric Vicuna Productions    

Moonlight Audio Theatre
SOUL SURVIVOR

Moonlight Audio Theatre

Play Episode Listen Later Jun 24, 2011 34:04


SOUL SURVIVOR  (DRAMA) Soul Survivor is a dramatic mythic tale in which an angel and a demon vie for the soul of an innocent young girl. Electric Vicuna Productions    

The Sonic Society
Dueling with the Past

The Sonic Society

Play Episode Listen Later Aug 25, 2009


Another great Dead Line anthology episode. This time it's "Duel", written by Jack J. Ward and post-produced by the amazing Matt Leong from Atomic Clockwork Media. Afterwards, it's another fantastic look into the past with two half-hour Suspense episodes in the Shadowlands, and a brief remembrance of Biff Straker's earlier incarnation.

The Sonic Society
The Live Jack and Shannon Show Party!

The Sonic Society

Play Episode Listen Later Jun 30, 2009 76:59


It's the last episode of The Jack and Shannon Show for season two! Following that, join us for a live discussion show with all the actors! Did you get a question in? Amazingly, only ONE swear between six actors. Go Jack, Go!

The Sonic Society
Jack and Shannon Gone Batty

The Sonic Society

Play Episode Listen Later Jun 23, 2009


Jack and Shannon have a chat with John Bell- writer and character actor for most of the many voices in the comedy series- Bell's in the Batfry. After the interview, listen to part one of the season finale of The Jack and Shannon Show, "Garage Sale"!

The Sonic Society
Jack and Shannon Texas Style

The Sonic Society

Play Episode Listen Later Jun 18, 2009


Jack and Shannon take another summer show to interview Texas Radio Theater's Rich Frolich and premiere Episode 4 of Season 2's "The Jack and Shannon Show - On Location". Jack and Shannon try to escape the foul clutches of evil mastermind Ken (Ken MacAlpine), but wherever they go, the show follows - and so do Ken, Moonglow, and the guy who's replaced Jake: Moh.

The Sonic Society
Summer Begins with a Chat….

The Sonic Society

Play Episode Listen Later Jun 2, 2009 45:49


Jack and Shannon interview the fabulous Matt, and then air the second episode of The Jack and Shannon show!

The Sonic Society
Final Messages

The Sonic Society

Play Episode Listen Later May 26, 2009 59:22


WARNING: This week's show deals with adult situations. Listener Discretion is advised. It's the last episode of Season four! We wrap up a huge Zug call session (So skip ahead about thirty minutes if you hate our chatter), and finish up with our own Messages!

The Sonic Society
The Replacement Killers

The Sonic Society

Play Episode Listen Later May 19, 2009 52:20


It's a double header of Electric Vicuna shows! The Replacement Show and The Jack and Shannon Show! Could I say "Show" any more often? Enjoy the show!

The Sonic Society
Twin Stars Part Deux

The Sonic Society

Play Episode Listen Later Apr 28, 2009 54:40


It's a triple-header as we air the second part of Twin Stars from Kung Fu Action Theatre, a Frequency of Fear mini-episode, and an Electric Vicuna original, Anniversary.

The Sonic Society
Twin Peeps

The Sonic Society

Play Episode Listen Later Apr 21, 2009 57:37


It's part one of Kung Fu Action Theatre's Twin Stars, a double header of Misfit Audio's George and Gracie and an Electric Vicuna original, Choice!

The Sonic Society
20,000 Leagues Texas Style

The Sonic Society

Play Episode Listen Later Apr 14, 2009


Jack and Shannon go diving into Texas Radio Theater's 20,000 Leagues Under the Sea by Jules Verne and end the show with a short of their own from the Dead Line series, "Deathbed Confessions", from Electric Vicuna.

Das Kalenderblatt
#01 Kalenderblatt (Laura Vicuna born)

Das Kalenderblatt

Play Episode Listen Later Apr 4, 2007 5:23


Salesian protégée Laura Vicuna born