Podcasts about Lepton

  • 28 PODCASTS
  • 40 EPISODES
  • 49m AVG DURATION
  • ? INFREQUENT EPISODES
  • Oct 25, 2024 LATEST

POPULARITY

Popularity trend, 2017–2024


Best podcasts about Lepton

Latest podcast episodes about Lepton

Adafruit Industries
EYE ON NPI - Teledyne FLIR Lepton® 3.1R Pocket-Sized Thermal Camera

Adafruit Industries

Oct 25, 2024 · 9:59


This week's EYE ON NPI knows where New York's Hottest Club is at: it's the Teledyne FLIR Lepton® 3.1R Pocket-Sized Thermal Camera (https://www.digikey.com/en/product-highlight/t/teledyne-flir/pocket-sized-thermal-camera), a bite-sized, full-featured video camera for remote thermal measurements. With a resolution of 160x120 pixels, remote temperature measurements of -40°C to +300°C, and the size of a coin, this camera can be embedded into any kind of product, whether it's running Linux, an RTOS, or a plain old microcontroller. Thermal cameras are multi-purpose, with uses in the medical, industrial, construction, maintenance, and security industries. Use them to make sure equipment is running at the right temperature and not overheating, to check that a room's insulation is performing adequately, to locate people or animals, or to detect fevers without touching.

FLIR makes the best low-cost, small-size thermal cameras, and they're available off-the-shelf at DigiKey for quick integration. Each camera outputs either a simple grayscale-valued frame or one with a false-color RGB888 palette - the palette can be configured over I2C. The Lepton 3.1R is one of a series of cameras available from FLIR, including the Lepton 2 and 3.5. What's great is they all have the same physical pinout and shape, so any of them can plug into the same socket. This is great for manufacturing yield and field repair: the expensive module is placed last in the manufacturing line so earlier yield issues don't affect it. You can also swap different resolution/FOV modules to customize for the end user. For example, the Lepton 2 (https://www.digikey.com/en/products/detail/flir-lepton/500-0763-01/6250105) is a little less expensive but has only 80x60 pixels. Or you can upgrade to the Lepton 3.5 (https://www.digikey.com/en/products/detail/flir-lepton/500-0771-01/7606616) with similar resolution but a narrower FOV. Note that the FOV greatly affects distortion: a wider FOV requires a lens to focus the IR emissions, but it will fisheye the middle and compress the edges. There's software from Teledyne FLIR (https://www.flir.com/developer/lepton-integration/lepton-3.1r-dewarping-application-note/) that will "de-warp" the 3.1R's output, using OpenCV, to give you more realistic imagery.

To learn how to work with these modules, we recommend the Lepton engineering integration guide (https://flir.netx.net/file/asset/13333/original/attachment). Unlike the simplest thermal camera modules and sensors, which use only I2C, or the most complex USB-video output devices, the Leptons use a combination of I2C for configuration - called the CCI, or Command and Control Interface - and SPI for VoSPI - a.k.a. video over SPI. This makes it possible to integrate them with a wide range of microcontrollers or microcomputers. As mentioned before, you don't solder the cameras to the PCB. Instead they are plugged into a common Molex 1050281001 socket (https://www.digikey.com/en/products/detail/molex/1050281001/3045223) which is only $1 at DigiKey and comes on a pick-and-place reel.
If you want to get started very quickly, DigiKey and GroupGets (https://www.digikey.com/en/supplier-centers/groupgets) have partnered up to offer a wide range of breakout boards, USB adapters, and dev boards that feature the Teledyne FLIR Leptons (https://www.digikey.com/short/2djrnzpr). GroupGets has also published firmware and example code (https://github.com/orgs/groupgets/repositories?type=all) to get you started with their products, so you can quickly evaluate the Lepton, make sure it will work, and determine what resolution/FOV is ideal: simply swap the different models in and out of the Molex socket. GroupGets also works with makers to get their prototypes to market, working with DigiKey for part sourcing (https://www.digikey.com/en/blog/digikey-partners-with-groupgets-to-help-startups-getmade), so if you have an idea and need help making it to production, check them out! If you need a high-quality thermal camera that is plug-and-play, easy to integrate, and available at a great price, the Teledyne FLIR Lepton 3.1R Pocket-Sized Thermal Camera is hot hot HOT and in stock right now for immediate purchase from DigiKey (https://www.digikey.com/short/0zr8w59q). Order today, pick up an eval board too, and you can be measuring the world around you by tomorrow afternoon. See it on DigiKey at https://www.digikey.com/short/0zr8w59q and see the manufacturer's video at https://www.youtube.com/watch?v=9xsDuiq8eZc
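As the show notes describe, the Lepton streams video out over SPI (VoSPI) while configuration travels over a separate I2C control interface (the CCI). For a rough idea of what reading that stream looks like from a Linux single-board computer, here is a minimal Python sketch; the bus numbers, SPI settings, and the 164-byte packet framing for the Lepton 3.x series are assumptions taken from typical breakout wiring and the integration guide linked above, so verify them against your own hardware.

```python
# A minimal sketch (not the vendor's reference code) of pulling one VoSPI
# segment from a Lepton 3.x over SPI on a Linux SBC such as a Raspberry Pi.
# Configuration (e.g. selecting the RGB888 false-color palette) happens over
# the separate I2C CCI link and is not shown here.
import spidev

PACKET_BYTES = 164         # assumed framing: 2-byte ID, 2-byte CRC, 160 bytes of pixels
PACKETS_PER_SEGMENT = 60   # a 160x120 frame arrives as 4 segments of 60 packets

spi = spidev.SpiDev()
spi.open(0, 0)             # SPI bus 0, chip select 0 (assumed wiring)
spi.mode = 0b11            # VoSPI uses SPI mode 3 (CPOL=1, CPHA=1)
spi.max_speed_hz = 20_000_000

def read_segment():
    """Collect one segment's worth of valid packets, skipping discard packets."""
    packets = []
    while len(packets) < PACKETS_PER_SEGMENT:
        pkt = spi.readbytes(PACKET_BYTES)
        if (pkt[0] & 0x0F) == 0x0F:      # discard packet: camera has no data ready
            continue
        packets.append(bytes(pkt[4:]))   # keep the 160-byte pixel payload
    return packets

segment = read_segment()
print(f"received {len(segment)} packets of raw thermal data")
```

On a microcontroller the same framing applies; only the platform's native SPI driver replaces spidev.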

OnBoard!
EP 52. A Conversation Between Front-Line Practitioners: A Year of Generative AI — Differences, Opportunities, and the Future in the US and China Markets

OnBoard!

May 9, 2024 · 126:18


The two hosts' long-awaited research conversation is back! In the year and a half since ChatGPT appeared, exciting changes have been happening in generative AI almost daily: from large models to applications, from software to robotics, from text to images, video, and audio, from brand-new business models to augmenting existing businesses. Since that conversation episode long ago, it isn't just AI that has moved on — both hosts have started new journeys of their own, and over the past year we have had many chances to shuttle between the front lines of the US and China markets. We finally have the opportunity to share some of the observations and thinking we have accumulated. Hello world, who is OnBoard!? Monica joined another USD VC last year, focusing more closely on early-stage investment opportunities overseas. GN left a USD fund and founded the SaaS/AI community Linkloud (WeChat public account of the same name), helping more and more Chinese software and tech companies go global. AI is undoubtedly one of the biggest variables of this era. Over nearly two hours, having spent the past year traveling frequently between China and the US, we dug into the questions you care about: Is the real-world adoption of AI applications really falling short of expectations? What are some interesting deployment cases, from applications down to infra? How should we view AI progress in China and the opportunities to leapfrog? What explains the differences between the US and China? What are the best practices and advice for AI companies going global? These are humble thoughts offered to spark discussion — we hope they give you a bit of inspiration! Enjoy!

What we talked about
03:11 The hosts introduce themselves and the AI products they have used day to day over the past six months.
15:54 Over the past year, which AI products or deployments have exceeded or fallen short of expectations?
20:24 Why are growth-stage SaaS companies the ones that find it easiest to put AI into production?
23:11 How does AI adoption differ in other regions of the world?
26:00 Why has progress on large models and the infra layer in the US exceeded expectations?
30:16 Expectations for Apple's Siri, and where the limits might be.
35:31 How SoundHound used voice AI in food-ordering scenarios and commercialized it.
40:42 How EvolutionIQ combined AI with insurance to drive business growth.
49:08 How a startup Monica missed is weaving AI into salespeople's workflows.
55:47 Why AI code generation is flourishing this year.
65:38 How AI progress in China differs from the US, and why more consumer products are appearing there.
76:07 Where the US and China capital markets differ, and how founders can build a long-term vision in a down market.
81:58 Why the biggest US–China difference is AI on the B2B side, and whether robotics is a variable.
92:55 Why "doing one thing extremely well" may be the most important route for Chinese AI companies going global.
97:33 Why the first step of going global is to physically go abroad and immerse yourself in the open ecosystem.
100:55 As investors, how do we view the moats and competitiveness of startups in the shadow of the large-model companies?
106:41 The hosts' "bold" predictions and hopes for AI this year.
119:02 Finally, the podcasts and newsletters we discovered this year — we hope they are useful to listeners!

Companies mentioned: Devin (by Cognition Lab): cognitionlab.com SWE-agent: swe-agent.com DBRX by Databricks: github.com Jamba: A Hybrid Transformer-Mamba Language Model Hume AI: www.hume.ai Monica.im: www.youtube.com Gemini Advanced: www.cnn.com Perplexity: www.perplexity.ai Kimi Chat: asianwiki.com Six assistant (still in limited beta; not accepting new WeChat users for now) Workstream: www.workstream.us Klarna: www.klarna.com Speak: https://www.speak.com/ Lepton.ai: www.lepton.ai Soundhound: www.soundhound.com EvolutionIQ: evolutioniq.com Siro: siro.ai Magic.dev: magic.dev Codium: www.roboleary.net Cursor: www.cursor.app Augment: www.augment.co Sweep: www.sweep.io Typeface: www.typeface.ai Sierra AI: www.siera.ai Physical intelligence: www.bloomberg.com Skild: www.skild.ai Covariant: covariant.ai Figure: www.figure.ai Cobot: www.tm-robot.com Deepmind RT-X: deepmind.google

Recommended podcasts and newsletters: Latent Space | swyx & Alessio | Substack Bg2 Pod Interconnected | Where Tech, Investing, Geopolitics Come ... Elad Gil First Round Review What's

OnBoard!
EP 48. A Conversation with Lepton AI Founder Yangqing Jia: What Infrastructure Does AI Need, and the Future Landscape of Models and Applications

OnBoard!

Mar 12, 2024 · 86:47


After a long break, our one-on-one interviews are back! This episode's guest is a true heavyweight: Yangqing Jia — anyone following the AI field will have heard his name. During his PhD at UC Berkeley he created the deep learning framework Caffe, which quickly became a de facto industry standard. He went on to do frontier AI research at Google Brain and Facebook AI, and later served as VP of Technology at Alibaba, leading its big data computing platform. In 2023 he set out on a new journey, founding Lepton AI in Silicon Valley. Hello World, who is OnBoard!? As a leading figure in the AI and infra industry, how does Yangqing think about the direction of his own AI startup? How does he understand AI's future infrastructure needs, and where do they overlap with or differ from how cloud computing has developed over the years? Over the past year, back in Silicon Valley, the center of global AI innovation, how has his thinking evolved on AI and entrepreneurship, the value of developer tools and applications, open versus closed models, and more? Before we knew it we had again talked for nearly two hours, and the conversation is packed with substance. You can feel how clear, sharp, and at the same time gentle and gracious Yangqing is — a genuinely enjoyable conversation. That is probably the charm of podcasts: beyond the written word, we get to meet a more real, more vivid person. Our guest has long lived and worked in the US, so some English is unavoidable — no complaints accepted! Enjoy!

About the guest: Yangqing Jia (Twitter: @jiayq), founder of Lepton.ai. He studied Automation at Tsinghua University for his bachelor's and master's degrees, then went to UC Berkeley for a PhD in computer science. During his PhD he created and open-sourced Caffe, the deep learning framework now familiar across the industry, which was adopted by Microsoft, Yahoo, NVIDIA, Adobe, and others. After graduating in 2013 he joined Google, where he was one of the authors of Google Brain's TensorFlow. In February 2016 he joined Facebook, where he developed deep learning frameworks including Caffe2Go, Caffe2, and PyTorch. In 2019 he joined Alibaba as Group Vice President and President of the Alibaba Cloud Intelligent Computing Platform business unit.

Guest co-host: Yusen Dai, partner at ZhenFund, a 2004 alumnus of Tsinghua University's Industrial Engineering department who later studied in Stanford's Management Science and Engineering department. At 22 he co-founded the well-known listed internet company Jumei, where he led internet product, operations, marketing, and category management. Since joining ZhenFund he has focused on AI investments.

OnBoard! host: Monica — USD VC investor, formerly on the AWS Silicon Valley team and at an AI startup, curator of the WeChat public account M小姐研习录 (ID: MissMStudy) | Jike: 莫妮卡同学

What we talked about
02:14 Introductions from the host and guest, and why Lepton's recent paper is worth paying attention to.
06:00 What does Lepton AI do, and why call it an AI cloud company?
10:02 Why start Lepton AI?
11:50 What is hard about designing infrastructure for AI? How does it differ from traditional cloud vendors and HPC?
19:46 Why don't we need to worry about AI inference cost right now? How much room for improvement remains? What breakthroughs might still come in hardware and software?
25:27 How should developers choose AI infrastructure and the corresponding development tools? Why is a leaderboard not enough?
28:49 Will Nvidia face new challengers? What is the "impossible triangle"?
33:48 Is MLOps a false premise?! What should the development tools AI needs look like?
39:01 As the barrier to building applications keeps falling, how should we think about the value of AI applications? What can a Microsoft poster from 20 years ago teach us?
44:47 What does an AI-native organization look like?
54:51 What is the future relationship between open and closed, specialized and general models? Will one model rule them all?
64:24 What has he felt and learned since founding the company? How have the "three basic assumptions" he proposed at the beginning of last year changed over the year?
67:56 How will the market landscape for AI applications and platforms change?
70:01 Why do we underestimate how hard disruption is? What does he hope AI can accomplish in five years?
76:59 Quick-fire round: favorite AI products, recommended books, ways to unwind, and the question he would most like to ask an AI.

What we mentioned: DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models, Paper, Code; Meta research: Training ImageNet in 1 Hour; AI inference leaderboard; Lepton Search, Code; Perplexity; Recommended book: The Chrysanthemum and the Sword (菊与刀)

Reference articles: Yangqing Jia's personal website; Yangqing Jia: Three Basic Assumptions; Yangqing Jia: ChatGPT, and Designing Infra Intelligently; Twitter discussion: Are LLM APIs losing money?; Does One Large Model Rule Them All?

Follow Miss M's WeChat public account for more substantive content on US and China software, AI, and startup investing: M小姐研习录 (ID: MissMStudy). Feel free to leave your thoughts in the comments and interact with other listeners. If you like OnBoard!, you can also send a tip and buy us a coffee! If you listen on Apple Podcasts, please give us a five-star review — it means a lot to us.

OnBoard! finally has a listener group! New year, new energy: join the OnBoard listener group to meet other high-quality listeners; we also organize offline themed meetups, open live sit-ins on podcast recordings, guest interactions, and other new experiments. Add either assistant on WeChat, onboard666 or Nine_tunes, send your name, company, and title, and they will add you to the group. We look forward to seeing you!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!

Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta.

Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.

Life in the GPU-Rich Lane

Back in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.

Meta AI's Epic LLM Run

Before Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how "open" it was - right down to the logbook! They used only 16 NVIDIA V100 GPUs and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.

In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.

July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours for all pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B sizes, all trained with 500B extra tokens of code and code-related data, except for the 70B, which was trained on 1T.

All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless, and in one year, Meta transformed from a company people made fun of for its "metaverse" investments to one of the key players in the AI landscape, and its stock has almost tripled since (about $830B in market value created in the past year).

Why Open Source AI

The obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements. But for Soumith, the motivation is even more personal:

"I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. 
And I think that was a strong reason why I ended up where I am. So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side……I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me……Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.

We like the way Soumith put it last year: Closed AI "rate-limits against people's imaginations and needs"!

What It Takes For Open Source AI to Win

However Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of the open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.

"Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. 
So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI, I think like there's a clear chance we can take at truly winning open source."

If you're working on solving open source coordination, please get in touch!

Show Notes
* Soumith Chintala Twitter
* History of PyTorch episode on Gradient Podcast
* The Llama Ecosystem
* Apple's MLX
* Neural ODEs (Ordinary Differential Equations)
* AlphaGo
* LMSys arena
* Dan Pink's "Drive"
* Robotics projects:
* Dobb-E
* OK Robot
* Yann LeCun
* Yangqing Jia of Lepton AI
* Ed Catmull
* George Hotz on Latent Space
* Chris Lattner on Latent Space
* Guillaume Lample
* Yannic Kilcher of OpenAssistant
* LMSys
* Alex Atallah of OpenRouter
* Carlo Sferrazza's 3D tactile research
* Alex Wiltschko of Osmo
* Tangent by Alex Wiltschko
* Lerrel Pinto - Robotics

Timestamps
* [00:00:00] Introductions
* [00:00:51] Extrinsic vs Intrinsic Success
* [00:02:40] Importance of Open Source and Its Impact
* [00:03:46] PyTorch vs TinyGrad
* [00:08:33] Why PyTorch is the Switzerland of frameworks
* [00:10:27] Modular's Mojo + PyTorch?
* [00:13:32] PyTorch vs Apple's MLX
* [00:16:27] FAIR / PyTorch Alumni
* [00:18:50] How can AI inference providers differentiate?
* [00:21:41] How to build good benchmarks and learnings from AnyScale's
* [00:25:28] Most interesting unexplored ideas
* [00:28:18] What people get wrong about synthetic data
* [00:35:57] Meta AI's evolution
* [00:38:42] How do you allocate 600,000 GPUs?
* [00:42:05] Even the GPU Rich are GPU Poor
* [00:47:31] Meta's MTIA silicon
* [00:50:09] Why we need open source
* [00:59:00] Open source's coordination problem for feedback gathering
* [01:08:59] Beyond text generation
* [01:15:37] Osmo and the Future of Smell Recognition Technology

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.Soumith [00:00:17]: Thanks for having me.Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I just was marveling at your luck, or I don't know if it's your luck or your drive to find AI early and then find the right quality mentor because I guess Yann really sort of introduced you to that world.Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think is fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But if I happen, the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy. 
It's just like maybe like the perception of, oh, is this person successful or not might be different. I think like after a baseline, like your happiness is probably more correlated with your intrinsic stuff.Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.Soumith [00:02:01]: I mean, in a very convoluted way.Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed or didn't get accepted by Disney and then went and created Pixar and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the Gradient. I think maybe people don't know that you were also involved in more sort of hardware and cluster decision affairs. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.

PyTorch vs TinyGrad tradeoffs

Alessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right? 
And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep it in the easy, happy path and where the more niche advanced use cases, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be, like we would be limited to a thousand lines of code and I think now it's at 5,000. So I think there is no real magic to why PyTorch has the kind of complexity it has. I think it's probably partly necessitated and partly because we built with the technology available under us at that time, PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure. But a lot of that complexity comes from the fact that in a very simple, explainable way, you have memory hierarchies. The CPU has three levels of caches and then you have DRAM and SSD and then you have network. Similarly, the GPU has several levels of memory and then you have different levels of network hierarchies, NVLink plus InfiniBand or RoCE or something like that, right? And the way the flops are available on your hardware, they are available in a certain way and your computation is in a certain way and you have to retrofit your computation onto both the memory hierarchy and like the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, like you find the optimal thing. And finding the optimal thing is, what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a Sparstar product or something like that. The shape of each of these input tensors will vastly change how you optimally place this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time. So you're searching for that optimal performance at runtime, but that's the trade-off. There's no, like, I don't think, unless we have great breakthroughs, George's vision is achievable; he should be thinking about a narrower problem, such as I'm only going to make this work for self-driving car convnets or I'm only going to make this work for LLM transformers of the llama style. Like if you start narrowing the problem down, you can make a vastly simpler framework. 
But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and in all these other factors, I think it's not easy to avoid the complexity.

PyTorch vs Mojo

Alessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target. They have the hardware target. They have different things to think about. He mentioned when he was at Google, TensorFlow trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially the AMD stack be better than ROCm. How come PyTorch has been such a Switzerland versus just making Meta hardware go brr?Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is we're funding it because it's net good for Meta to fund PyTorch because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. Actually, the way we articulate it to all hardware vendors and software vendors who come to us wanting to build a backend in core for PyTorch and ship it by default is that we just only look at our user side of things. Like if users are using a particular piece of hardware, then we want to support it. We very much don't want to kingmake the hardware side of things. So as the MacBooks have GPUs and as that stuff started getting increasingly interesting, we pushed Apple to push some engineers and work on the MPS support and we spent significant time from Meta funded engineers on that as well because a lot of people are using the Apple GPUs and there's demand. So we kind of mostly look at it from the demand side. We never look at it from like oh which hardware should we start taking opinions on.Swyx [00:10:27]: Is there a future in which, because Mojo or Modular Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that just is low friction, we would definitely look into that. Like in the same way PyTorch now depends on Triton, OpenAI Triton, and we never had a conversation that was like huh, that's like a dependency. Should we just build a Triton of our own or should we use Triton? It almost doesn't, like those conversations don't really come up for us. The conversations are more well does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view, like is it easy to install? Is it smoothly integrated and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but like maybe I phrased it wrongly. 
Maybe it's more like what problems would you look to solve that you have right now?Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was like, we're going to be performant even if you have a lot of custom stuff, you're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how do we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for say Torch.compile to smoothly also consume Mojo subgraphs and like, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.

PyTorch vs MLX

Swyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?Soumith [00:13:32]: I mean, MLX is early and I know the folks well, Awni used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now. It has a happy path because it's defined its product in a narrow way. At some point MLX either says we will only be supporting Apple and we will just focus on enabling, you know, there's a framework if you use your MacBook, but once you like go server side or whatever, that's not my problem and I don't care. Or MLX enters, like, the server-side set of things as well. Like one of these two things will happen, right? If the first thing happens, like MLX's overall addressable market will be small, but it'll probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.Swyx [00:14:44]: Like having to deal with distributed compute?Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, like just like having a generalization of the concept of a backend, how they treat compilation with plus overheads. Right now they're deeply assumed like the whole MPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side and they'll probably build something like PyTorch as well, right? Like eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. Like it wouldn't be obvious to people why they would want to use it.Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. 
I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, like then it can become a thing.Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel is that they have the interconnect that no one else has, like AMD GPUs are pretty good. I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink is uniquely awesome. I'm sure the other hardware providers are working on it, but-Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just like, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.

PyTorch Mafia

Alessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, you knew since college when he was building Caffe?Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, PyTorch, I mean, we were all a very small close-knit community back then. Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And I would actually like, you know, one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remembered who was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called ConvNet benchmarks. They're just benchmarking all the convolution kernels that are available at that time. It hilariously became big enough that at that time AI was getting important, but not important enough that industrial strength players came in to do these kinds of benchmarking and standardization. Like we have MLPerf today. So a lot of the startups were using ConvNet benchmarks in their pitch decks as like, oh, you know, on ConvNet benchmarks, this is how we fare, so you should fund us. I remember Nervana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton, Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch/Caffe2 cohort of things and now end up at various other places.Swyx [00:18:50]: I think, both as an investor and as people looking to build on top of their services, it's an uncomfortable, like, I don't know what I don't know pitch. Because I've met Yangqing and I've met Lin Qiao. Yeah, I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great. 
Like, what should I be wary of or cautious of when these things happen? Because I'm like, obviously this experience is extremely powerful and valuable. I just don't know what I don't know. Like, what should people know about like these sort of new inference as a service companies?Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as what these people bring to the table is that they're really good at like GPU programming or understanding the complexity of serving models once it hits a certain scale. You know, various expertise like from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money or, you know, various things like that.Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. Like, it's more okay, you know, you have PyTorch gods, of course. Like, what else should I know?Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...Swyx [00:20:44]: Benchmarks.Soumith [00:20:44]: Yeah, I use it and its usability and reliability and speed, right?Swyx [00:20:51]: Quality as well.Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and said, use our stuff, it's great. Like, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, like I'll just use it.

Benchmark drama

Swyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up ConvNet benchmarks. There was some recent drama around AnyScale. AnyScale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One, which is they didn't test apples to apples on the kind of endpoints that the other providers, that they are competitors with, on their benchmarks and that is due diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was AnyScale built these benchmarks for end users to just understand what they should pick, right? And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. Like they just gave them a very narrow slice of understanding. I think they just gave them latency numbers and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not oh, one API call is one cent, but a thousand API calls are 10 cents. Like people can misprice to cheat on those benchmarks. So you want to understand, okay, like how much is it going to cost me if I actually subscribe to you and do like a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over several various times of the day and times of the week. 
And the nature of the workloads, is it just some generic single paragraph that you're sending that is cacheable? Or is it like testing of real world workload? I think that kind of rigor, like in presenting that benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if before they released it, they showed it to their other stakeholders who would be caring about this benchmark because they are present in it, they would have easily just pointed out these gaps. And I think they didn't do that and they just released it. So I think those were the two main criticisms. I think they were fair and Robert took it well.Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important for, I think the market being maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is because otherwise everyone's going to play dirty.Soumith [00:23:55]: Yeah, absolutely. My view of the LLM inference market in general is that it's the laundromat model. Like the margins are going to be driven down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for and then how much you sell the API and how much latency your customers are willing to let go. You need to figure out how to squeeze your margins. Like what is your unique thing here? Like I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster, you know, hardware kernels in general. But those moats only last for a month or two. These ideas quickly propagate.Swyx [00:24:38]: Even if they're not published?Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama style LLM models. And we're going to beat those to death on a few different hardware SKUs, right? Like it's not even we have a huge diversity of hardware you're going to aim to run it on. Now when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.Alessio [00:25:28]: Any ideas instead of things that are not being beaten to death that people should be paying more attention to?

Novel PyTorch Applications

Swyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right? Like what's the most interesting usage of PyTorch that you're seeing maybe outside of this little bubble?Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways, like from the ML angle, what kind of models are being built? And you get all the way from state space models and all of these things to stuff like nth-order differentiable models, like neural ODEs and stuff like that. I think there's one set of interestingness factor from the ML side of things. And then there's the other set of interesting factor from the applications point of view. 
It's used in Mars Rover simulations, to drug discovery, to Tesla cars. And there's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many interesting things that are also very critical and really important it is used in. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting. How many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. Like the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models, like the whole AlphaGo style models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole alpha geometry thing was interesting because again, it's an example of combining the symbolic models with the gradient based ones. But there are stuff like alpha geometry that PyTorch is used at, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.Swyx [00:28:03]: Yeah. People are very excited about the alpha geometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympiad questions. I'm not sure how to make that bridge over into the real world applications, but I'm sure people smarter than me will figure it out.

Synthetic Data vs Symbolic Models

Soumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data will be the next rage in LLMs is a thing?Swyx [00:28:27]: Already is a rage.Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30 parameter model. And it's just very hard to compute as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees and just understanding how language can be broken down and into a formal symbolism is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects, either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly. The only way we have to teach them is generating a bunch of inputs and outputs and gradient descending over them. 
So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is we're generating a bunch of synthetic data, a bunch of input output pairs, and then giving that to the neural network and asking it to learn the same thing that we already have a better low rank model of in gradient descent in a much more over-parameterized way. Outside of this, like where we don't have good symbolic models, like synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand where it'll work in all cases in every case or whatever. It's just where we as humans already have good symbolic models of. We need to impart that knowledge to neural networks and we figured out that synthetic data is a vehicle to impart this knowledge to. So, but people, because maybe they don't know enough about synthetic data as a notion, but they hear, you know, the next wave of data revolution is synthetic data. They think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond. One is, you know, I have this joke that it's, you know, it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what Nous Research is doing. Like they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2 and then fine tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else? I don't know.Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is you're creating synthetic data with the goal of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels like disingenuous because your goal is I need to copy the behavior of GPT-4 and-Swyx [00:32:25]: It's also not just behavior, but data set. So I've often thought of this as data set washing. Like you need one model at the top of the chain, you know, unnamed French company that has that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic.Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as society have ever been in this conundrum where we have to be like, where is the boundary of copyright or where is the boundary of socially accepted understanding of copying someone else? We haven't been tested this mathematically before,Swyx [00:33:38]: in my opinion. Whether it's transformative use. Yes. 
So yeah, I think this New York Times OpenAI case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before. And then finally, for synthetic data, the thing that I'm personally exploring is solving this great stark paradigm difference between RAG and fine-tuning, where you can kind of create synthetic data off of your retrieved documents and then fine tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine tune on. And then you can fine tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.Soumith [00:34:13]: I think you're basically trying to, what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model or like some implicit symbolic model of language.Swyx [00:34:48]: Okay.Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math heavy questions versus not. I don't have a good calibration of whether I know the answer or not. I, you know, there's common criticisms that are, you know, transformers will just fail at X. But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers called like the science of deep learning or something. So we'll get to know more. I don't know the answer.

Meta AI and Llama 2/3

Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest with Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering on this, in this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling models for open source models or smaller models or whatever that design decision was when you guys were doing it?Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really in gory detail published.Swyx [00:36:50]: The logs.Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. 
I think OPT was cool.Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time or?Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team. Guillaume is fantastic and went on to build Mistral. I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling laws and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. And we needed more data and we needed more to train the models for longer. And we made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author is fantastic. And I think he did play a reasonably big role in Llama 1 as well.Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34B and 70B parameter models, they still seem kind of steep. Like they could go lower. How, from an infrastructure level, how do you allocate resources? Could they have just gone longer or were you just, hey, this is all the GPUs that we can burn and let's just move on to Llama 3 and then make that one better?Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, we're, I mean, Mark really said some numbers, right?Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.Soumith [00:39:24]: That is by the end of this year and 600K H100 equivalents. With 250K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.Swyx [00:39:38]: That's a lot of GPUs.Soumith [00:39:39]: We'll talk about that separately. But the way we think about it is we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one and when do you start training the next one? And how do you make those decisions? The data, do you have net new data, better clean data for the next one in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.Alessio [00:40:31]: So one of the things with the scaling laws, like Chinchilla is optimal to balance training and inference costs. 
I think at Meta's scale, you would rather pay a lot more at training and then save on inference. How do you think about that from an infrastructure perspective? I think in your tweet, you say you can try and guess how we're using these GPUs. Can you just give people a bit of understanding? Because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs and that's obviously not true, I'm sure. How do you allocate between the research, FAIR and the Llama training, the inference on Instagram suggestions that get me to scroll, like AI-generated stickers on WhatsApp and all of that?
Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kind at any company. You run a VC portfolio, how do you allocate your investments between different companies or whatever? You kind of make various trade-offs and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project? It's very much a zero-sum set of trade-offs. And it also comes into play how your clusters are configured, like overall, what you can fit of what size in what cluster and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.
Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.
Soumith [00:42:11]: Yeah, I mean, at some point, I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think, would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.
Eleuther
Swyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.
Soumith [00:42:57]: I mean, that's a cool, high-conviction opinion that might pay out.
Swyx [00:43:01]: Why?
Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take in this climate, and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff, and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not because, oh yeah, someone else did the same thing you did. It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.
Making Bets
Alessio [00:43:53]: And how do you reconcile that with how we started the discussion about intrinsic versus extrinsic kind of accomplishment or success? How should people think about that, especially when they're doing a PhD or early in their career?
I think in Europe, I walked through a lot of the posters and whatnot, and there seems to be mode collapse in a way in the research, a lot of people working on the same things. Is it worth it for a PhD student to not take a bet, and work on something that is maybe not as interesting, just because of funding and visibility and whatnot? Or yeah, what suggestions would you give?
Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like whatever reasonable normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like you wouldn't wanna be doing something so obscure that people are like, I don't know, like you can work on it.
Swyx [00:44:59]: Would a limit on fundability be, I'm just observing, something like three months of compute, right? That's the top line, that's like the max that you can spend on any one project.
Soumith [00:45:09]: But like, I think that's very ill-specified, like how much compute, right? I think that the notion of fundability is broader. It's more like, hey, is this family of models within the acceptable set of, you're not crazy or something, right? Even something like neural ODEs, which is a very boundary-pushing thing, or state-space models or whatever. Like all of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable. Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be at the baseline level of fundability just so that you can live. And then after that, really focus on intrinsic motivation, and it depends on your strengths, like how you can play to your strengths and your interests at the same time. Like I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, the way I think about it is somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.
Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion, that's for all of Meta. So including your own inference needs, right? It's not just about training.
Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.
Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in like your own hardware?
MTIA
Soumith [00:47:31]: We already talked about our own hardware. It's called MTIA.
Our own silicon, I think we've even showed the standard photograph of you holding the chip that doesn't work. Like as in the chip that you basically just get like-
Swyx [00:47:51]: As a test, right?
Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon and we'll probably talk more about it when the time is right, but-
Swyx [00:48:00]: Like what gaps do you have that the market doesn't offer?
Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and like sweet spots and all of that? Fundamentally, when you build a hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware-efficient it's going to be, the more power-efficient it's gonna be, the easier it's going to be to write the software, like the kernels, to just map that one or two workloads to that hardware, and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about every large company building silicon, and I think a bunch of the other large companies are building their own silicon as well, is that each large company has a sufficient enough set of verticalized workloads that can be specialized, that have a pattern to them, that say a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot. Obviously something like this is only useful if you hit a certain scale and your forecasted prediction of those kinds of workloads staying specializable and exploitable in the same way is true. So yeah, that's why we're building our own chips.
Swyx [00:50:08]: Awesome.
Open Source AI
Alessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics, and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been doing, and maybe directions for the whole open source AI space?
Soumith [00:50:32]: Yeah, in general, I think first, I think it's worth talking about this in terms of open and not just open source, because with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume it's just, I'm talking about open. And then there's the whole notion of licensing and all that, commercial, non-commercial, commercial with clauses and all that. I think at a fundamental level, the most beneficial value of open source is that you make the distribution very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license.
But the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for it. So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible for, say, a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and those families of models and several other fairly transformative projects. FAISS is one, Segment Anything, Detectron, Detectron 2. Dense Pose. I mean, it's-
Swyx [00:52:52]: Seamless. Yeah, seamless.
Soumith [00:52:53]: Like, the list is so long that we're not gonna cover it all. So I think Meta comes into that category where we spend a lot of CapEx and OpEx, and we have a high talent density of great AI people, and we open source our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start an open AI lab? What exactly is the benefit from a commercial perspective? And the thesis then was very simple. It was: AI is currently rate limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. AI was the limiting factor, and we just wanted AI to advance more, and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab and we said, if this helps accelerate the progress of AI, that's strictly great for us. Very easy, rational, right? Still the same to a large extent with the Llama stuff. And it's the same values, but the argument is a bit more nuanced. And then there's a second kind of open source, which is, oh, we built this project nights and weekends and we're very smart people and we open sourced it and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source as both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, a band of volunteers can just coordinate online and do something and then make that happen. And that's great.
Open Source LLMs
I wanna cover a little bit about open source LLMs maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something, where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted.
And then the LLM revolution kind of had the opposite effect. OpenAI stopped open sourcing their stuff, and DeepMind kind of didn't either, like all the other cloud providers and all these other providers, they didn't open source their stuff. And it was not good, in the sense that, first, science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem, which was the accessibility part. Like, okay, I again always go back to, I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human-aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like all the friends I hang out with talk about some random thing like Dyson Spheres or whatever, that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble, and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think open source, the distribution of open source, powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, it's going great in the fact that LoRA, I think, came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic open source side of things. So do any of the closed source labs, did any of them already have LoRA or DPO internally? Maybe, but that does not advance humanity in any way. It advances some company's probability of doing the winner-takes-all that I talked about earlier in the podcast.
Open Source and Trust
I don't know, it just feels fundamentally good. Like when people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed source models are safer or open source models are safer very much related to what kind of culture they grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, obviously it's corrupt or whatever, then I think the open source argument is what they take. I think there's a deep connection to people's innate biases from their childhood and their trust in society and governmental aspects that push them towards one opinion or the other. And I'm definitely in the camp of open source is definitely going to actually have better outcomes for society. Closed source to me just means centralization of power, which, you know, is really hard to trust. So I think it's going well.
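A minimal sketch of the two open-source techniques name-checked above: creating synthetic variations of retrieved documents and then fine-tuning them into an open-weights model with LoRA adapters. This is illustrative only, not anything described in the episode: it assumes the Hugging Face transformers, peft, and datasets packages, the model name is just an example, paraphrase() is a hypothetical helper standing in for any LLM call that rewords a prompt while keeping its meaning, and the training loop itself is omitted.

# Sketch: synthetic variations of retrieved documents + LoRA adapters (Python).
# Assumes: pip install transformers peft datasets; paraphrase() is hypothetical.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

def build_synthetic_pairs(docs, paraphrase, n_variations=4):
    # Vary the surface form of each retrieved document so the model learns
    # the content rather than overfitting to a single phrasing.
    pairs = []
    for doc in docs:
        base_prompt = f"Summarize the key facts in: {doc['title']}"
        for _ in range(n_variations):
            pairs.append({"prompt": paraphrase(base_prompt), "answer": doc["text"]})
    return Dataset.from_list(pairs)

base = "meta-llama/Llama-2-7b-hf"  # example only; any open-weights causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train small low-rank adapter matrices instead of all of the weights.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of parameters
# Tokenize build_synthetic_pairs(...) and train with transformers.Trainer as usual.

The point is only the shape of the pipeline: the diversity comes from the paraphrasing step, and the LoRA adapters keep the fine-tune cheap enough to run outside a large lab.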

LifePointe Elk Grove
Money Wise - A Lepton, Bronze Trumpet and Dollar Bill - Week 4

LifePointe Elk Grove

Play Episode Listen Later Mar 3, 2024 38:46


Walk to Work - A Mobile Hearthstone Podcast
W2W 1238 - My Top 10 BGs Spells!

Walk to Work - A Mobile Hearthstone Podcast

Play Episode Listen Later Dec 6, 2023 31:28


I go over my top 10 spells coming to Battlegrounds today, before playing Lepton's Miracle Priest. You can find the deck import code below the following contact links.  Join our Discord community here or at discord.me/blisterguy. You can follow me @blisterguy or the podcast @walktoworkHS on twitter. Subscribe to my Youtube channel. You can support this podcast and my other Hearthstone work at Patreon here. # 2x (1) Animate Dead # 2x (1) Crimson Clergy # 2x (1) Fan Club # 2x (1) Flash Heal # 2x (1) Funnel Cake # 2x (1) Shard of the Naaru # 2x (1) The Light! It Burns! # 1x (2) Astalor Bloodsworn # 2x (2) Creation Protocol # 2x (2) Power Chord: Synchronize # 2x (2) Thrive in the Shadows # 2x (3) Holy Nova # 2x (3) Injured Hauler # 1x (3) Love Everlasting # 2x (6) Thirsty Drifter # 1x (7) Aman'Thul # 1x (8) Reno, Lone Ranger #  AAECAfq0BgTipAXPxgXP9gWvqAYNougDrYoEhJ8Ey6AE+dsEpJEFu8cFoukF7fcF+/gF6ZgGxpwGuJ4GAAA=

Castle Hill Cricket Chat. A Huddersfield Cricket League Podcast
Episode 49 - The Big Preview w/ Jacob Mulhall (Golcar CC)

Castle Hill Cricket Chat. A Huddersfield Cricket League Podcast

Play Episode Listen Later Apr 15, 2023 67:25


It was a false start as far as cricket was concerned, but it didn't stop the regular Castle Hill Podcast men from dissecting the latest news in their pre-season special. Recorded entirely on location for the first time (Thank you Golcar Cricket Club. Apologies for the background noise!) Steve, Andrew and Jamie welcomed previous guest and all-round good egg Jacob Mulhall at the home of his new club. Familiar surroundings, since he had played for Golcar back in 2018. Jacob, freshly arrived back from his second winter stint in New South Wales, updated us on his progress over there and how he's looking forward to getting back to reviving a formidable opening batting partnership with ‘Cobber’. With the recent downsizing at his former club Lepton Highlanders, we took the opportunity to get his thoughts on the events at what was his ‘home’ club. We've also brought in a new feature for guests, which is our one minute on-the-spot quiz, and we sent down some awkward Lepton questions to see how Jacob's forward defence was looking. Listen on to hear more transfer tittle-tattle, a return of ‘The Opener’ and the outrageous launch of Finch's Forecast; the Castle Hill Cricket Chat prediction game where two of the four of us did absolutely zero preparation for it. You can submit your own predictions via the form in the link here: https://docs.google.com/forms/d/e/1FAIpQLScaO6cTXR2GvqJpR7wTRJd6aP-vvubebMNoAF1lL3Q1GDuDTQ/viewform

Abstract Essay
Abstract Essay Volume 136 Lepton by Daniel Lucas

Abstract Essay

Play Episode Listen Later Mar 3, 2022 4:09


Abstract Essay Volume 136 Lepton by Daniel Lucas. Available on Amazon and leading online bookstores worldwide. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

The Never Rad Miscellany
Iron Tyrant Part 4

The Never Rad Miscellany

Play Episode Listen Later Aug 16, 2021 19:14


SFX:    Dramatic Sting. LEPTON:    Welcome back; I'm your host Lepton Brown and this is the live season finale of Iron Tyrant. We’re going to bring all of this week's doomtestants back to the judgment hexagon to answer to me, Comptroller Beens, and special guest judges the artist known only as Britamatone, the psychic entity Mauve, […]

The Never Rad Miscellany
Iron Tyrant Part 3

The Never Rad Miscellany

Play Episode Listen Later Aug 2, 2021 15:11


MFX:    Return Cue LEPTON:    Welcome back to Iron Tyrant. When we left off, Professor Quantus Verblanskowicz had put thousands of Fleevians to work mining ore for a giant smashing machine. The trouble is that the planet Fleeve is relatively poor in ores and minerals. QUANTUS:    SMASH! SMASH! HATE! LEPTON:    Quantus will likely have to import […]

The Never Rad Miscellany
Iron Tyrant Part 2

The Never Rad Miscellany

Play Episode Listen Later Jul 19, 2021 5:33


LEPTON:    Welcome back to Iron Tyrant with me, your host Lepton Brown. Our doomtestants have been whipping up a little bespoke Apocalypse for the cultists of the planet Fleeve.      Doomtestant Gandra has been planning… something; she won't tell us, and doomtestant Malmo Zarathustra hopes to wow the judges with a fireworks spectacular.  Doomtestant Quantas's […]

Castle Hill Cricket Chat. A Huddersfield Cricket League Podcast
Episode 7 - Guest: Jacob Mulhall & Sykes Cup Reviews

Castle Hill Cricket Chat. A Huddersfield Cricket League Podcast

Play Episode Listen Later May 13, 2021 102:12


Despite having played first team cricket for seven years, the Lepton Highlanders captain is still, remarkably, only 21. We enjoyed the company of Jacob on episode 7 of the Castle Hill Cricket Chat podcast. Jacob is part of a sprawling, extended cricketing family so he talked about being part of the fabric of a cricket club. He was the last winning skipper of a Huddersfield Joe Lumb side when they won it in 2017. He also made a move to play Premiership cricket the same season when he changed clubs for Golcar. In this wide-ranging conversation he shared his experiences during those formative years. We also discussed junior cricket and summarised the exciting action in the first round of this season's Sykes Cup as Jacob's team Lepton came within a single run of toppling one of the leading Premiership sides. Have a listen below. If you'd like to be a guest on a future episode to talk about your club, or a big moment in your career, just send us a message via the podcast's contact form or via a message on Twitter.

Eigenbros
Eigenbros ep 109 - Lepton Flavor Universality Violated

Eigenbros

Play Episode Listen Later Mar 26, 2021 61:29


Terence & Juan discuss the recent news from CERN (03/24/2021) which could potentially violate Lepton Universality. This could imply problems with the current version of the Standard Model of Particle Physics, and more importantly, a chance to discover New Physics.

Abstract Essay
Lepton

Abstract Essay

Play Episode Listen Later Mar 23, 2021 7:43


What are the different types of leptons and what are their functions? --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

RTS.FM radio
Lepton | RTS.FM Budapest x Bedroom Christmas | 25.12.2020

RTS.FM radio

Play Episode Listen Later Jan 6, 2021 64:31


Lepton | RTS.FM Budapest x Bedroom Christmas | 25.12.2020 Recorded and streamed live at DJ Studio, Budapest on 25th December 2020. YouTube: https://youtu.be/fVpJuhfwyAo Bedroom: www.facebook.com/bedroomcrew.budapest Event: https://www.facebook.com/events/489643858684239/ Venue: https://www.facebook.com/djstudio.hu Lepton (HU) Bedroom / Budapest Fine Selection / Budapest Soundcloud: https://soundcloud.com/lepton_bedroom Facebook: https://www.facebook.com/leptonfrombedroom RTS.FM Budapest: YouTube: bit.ly/rtsfmbudapest SoundCloud: soundcloud.com/rtsfm/sets/rts-fm-budapest Mixcloud: www.mixcloud.com/rtsfmbudapest RTS.FM Budapest archives: YouTube: bit.ly/rtsfmbudapest SoundCloud: soundcloud.com/rtsfm/sets/rts-fm-budapest Mixcloud: www.mixcloud.com/rtsfmbudapest

RTS.FM radio
Lepton, Monoclick, Electric Boutique | RTS.FM Budapest x Bedroom Christmas | 25.12.2020

RTS.FM radio

Play Episode Listen Later Jan 6, 2021 67:17


Lepton, Monoclick, Electric Boutique | RTS.FM Budapest x Bedroom Christmas | 25.12.2020 Recorded and streamed live at DJ Studio, Budapest on 25th December 2020. YouTube: https://youtu.be/SB8xiBynGBs Bedroom: www.facebook.com/bedroomcrew.budapest Event: https://www.facebook.com/events/489643858684239/ Venue: https://www.facebook.com/djstudio.hu Bedroom Crew Bedroom / Budapest Fine Selection / Budapest Soundcloud: https://soundcloud.com/bedroomcrewbudapest Facebook: www.facebook.com/bedroomcrew.budapest RTS.FM Budapest: YouTube: bit.ly/rtsfmbudapest SoundCloud: soundcloud.com/rtsfm/sets/rts-fm-budapest Mixcloud: www.mixcloud.com/rtsfmbudapest RTS.FM Budapest archives: YouTube: bit.ly/rtsfmbudapest SoundCloud: soundcloud.com/rtsfm/sets/rts-fm-budapest Mixcloud: www.mixcloud.com/rtsfmbudapest

Microwave Journal Podcasts
Learn About Innovative Metamaterial-Based SATCOM Solutions Approach From Kymeta

Microwave Journal Podcasts

Play Episode Listen Later Aug 19, 2020 24:17


With Kymeta's recent acquisition of Lepton to expand their offerings, Microwave Journal editor, Pat Hindle, talked with Doug Hutcheson, Executive Chairman of Kymeta, about their acquisition, technology progress and future SATCOM market trends.

The Tailored Life Podcast
Ep. 381 - The Hungry Brain with Stephan Guyenet

The Tailored Life Podcast

Play Episode Listen Later Feb 13, 2020 66:04


Today's guest is Stephan Guyenet, PhD, author of a very popular book - The Hungry Brain: Outsmarting the Instincts That Make Us Overeat. We dive into A LOT of topics, but all centered around why people overeat, gain body fat, and sometimes cannot control this (or so they think). We also dive into the brain and hormonal system's influence on this idea of gaining excess body fat and overeating. Check out his book The Hungry Brain and his latest project, Red Pen Reviews. This show is brought to you by our proud sponsors, Top Notch Nutrition (enter code BOOMBOOM to save 10% off) and Creapure, the purest creatine on the planet. Apply for our World Renowned Coaching Program, RIGHT HERE. Remember to join our private FB community, RIGHT HERE. ---- Timestamps: 4:05 - Who is Stephan Guyenet  8:40 - Explaining how the brain and metabolism tie in together 16:25 - Timeline for regulatory response 19:55 - Leptin response: solely tied to fat or does it have any tie to calories? 24:15 - Boosting your metabolism  27:55 - Hormonal repercussions of dieting 33:00 - Pinpointing and removing highly palatable foods  45:55 - Outsmarting the system to reestablish set points 50:20 - Argument of no matter what, if calories are met, you will lose weight. 57:30 - Dr. Stephan Guyenet's new project “Red Pen Reviews” 1:03:50 - Where to find Stephan Guyenet and all of his content ---- ---- Apply For Coaching: bit.ly/Coaching-App Get Your Free Copy of The Nutrition Hierarchy, HERE Learn How We Coach: Read This Case Study Article Top 4 Episodes: - Nutritional Periodization - Nutrition FAQ - Training FAQ - My Story  ---- You can get access to ALL of our content in one place, now: https://tailoredcoachingmethod.com/links/ Join The Boom Boom Elite (BBP's Membership Site) to receive exclusive content and interviews, monthly training programs, bonus eBooks, the private coaching forum, and more by visiting https://tailoredcoachingmethod.com/elite-membership/ ASK BOOM-BOOM YOUR QUESTION HERE! Check out all of our e-books by visiting https://tailoredcoachingmethod.com/products/ Boom Boom Performance Coaching Info: https://tailoredcoachingmethod.com/online-coaching/ ---- Social Links: Blog – www.TailoredCoachingMethod.com Facebook - https://www.facebook.com/boomboomperformance/
 Instagram - https://www.instagram.com/cody.boomboom/ 
 YouTube - https://www.youtube.com/user/BoomBoomPerformance Email – info@tailoredcoachingmethod.com  As Featured on: Huffington Post, Bodybuilding.com, The PTDC, Dr. John Rusin, Muscle For Life, HLHL, iN3, OPEX Fitness and More…

Bass Agenda
BassAgenda Mix For Lepton (Dublin Digital Radio) 3rd Aug 2019

Bass Agenda

Play Episode Listen Later Aug 6, 2019 60:06


A dark and heavy ride through old stuff, new stuff, forthcoming stuff, and two premieres from Transhumanism 3* (releasing in September):
Heuristic Audio - Soul Frus
John Selway - Stars In The Gutter
Serge Geyzel – Black Mirrors
RXmode* - Desperate
Umwelt - Faceless Power
Deemphasis - Where R U
Bass Junkie - Surrender Or Be Destroyed!!!
Mike Ash - Experiment (Bonus Beats)
Nexus 23 - Transform (Beatprozessor's Dirty Bass Transform)
Nikki Nair - EXP2
Sync 24 - Poll Wars
Hatch - Hatch-Hop
Anthony Rother - Bilocation
Simon Leeks - Heavy Biotech
w1b0* - Evans Gambit

Python Barcelona Podcast
PyBCN #4 - Herramientas del día a día

Python Barcelona Podcast

Play Episode Listen Later May 29, 2019 45:19


In this episode we talk about the tools we know and use in our day-to-day work, both inside and outside of Python. Some of the ones we discuss: DB clients: DBeaver, TeamSQL, SQL Workbench/J, IntelliJ DataGrip. Documentation viewers: Zeal, Dash, Chrome extensions. VCS: Magit, Kraken, Sourcetree, Hub, PyCharm's diff view. Docker: minikube, k3s, kind (kubernetes in docker). Shell search tools: fzf (a "fuzzy" autocompletion extension for the shell, extensible so you can write your own completers), ag (silver searcher), ripgrep. Shell tools: zsh, fish, ohmyzsh. Gist and note managers: Lepton, Cacher, deft mode in emacs, nvALT, bear (a paid editor for notes on mobile), orgmode, babel-org, orgzly. Ticket managers: JIRA, Trello. Alternatives: Alternativeto.com

Consultor y Desarrollador Web
Episodio 71 – FridayTip – Snippets con Gist (Lepton, Cacher, Gisto, CodeHub)

Consultor y Desarrollador Web

Play Episode Listen Later Mar 21, 2019 17:19


Episode number 71 - FridayTip of the week. In today's episode I'm going to talk about the Gist tool, what it is and what software we can use with it. Find the full text of the episode and the show notes at https://ardid.com.ar/podcast/episodio-71-friday-gist-github

SOUL SCHOOL with Audrey
EP 15: Turning Pain into Purpose with Jenn Hepton

SOUL SCHOOL with Audrey

Play Episode Listen Later Feb 6, 2019 57:37


EP 15: Turning Pain into Purpose with Jenn Hepton Today's soulful conversation is with Jenn Hepton, a Certified Grief Support Coach, Life + Wellness Coach, Speaker, Author, and Educator. She coaches womxn who want to make sense of their lives after loss and who are tired of hiding their voices and feeling powerless in their grief. She works with them to navigate through the darkness and find a place of expansion and purpose. I don't believe there are accidental meetings. We have soul contracts with each other for lessons that need to be learned and gifts to share with each other. The day I had this conversation with grief coach Jenn Hepton was also the day my father passed away. I didn't know my dad would pass away later that evening. I truly believe the Universe/God/Source/angels sent me Jenn on that specific day as a gift, in preparation for my loss, to hear her sage advice on how to deal with grief, patch up a broken heart and learn how to heal with grace. This is a heart-wrenching episode. Some people won't want to hear about loss and will skip this. I get it. We live in a world that doesn't deal well with pain and loss, maybe because we don't want to see or feel suffering, or we just don't know HOW to deal with grief. But as I learned from Jenn, when we acknowledge pain and hold space to feel our feelings, even with the radical acceptance that it is okay not to be okay, we can then heal ourselves and support those who need healing too. Experiencing 7 pregnancy losses, with the loss of her twins at 22 weeks and her daughter Loey at 39 weeks, Jenn's massive grief led her to a place in her soul that needed to heal, a place that she didn't know existed, and a deep craving to heal and to live a meaningful life. She shares her PTG (post-traumatic growth) story of how she turned her pain into serving others. Jenn is truly a gift. She is my soul sister who has helped me deal with the loss of my father by holding a nurturing and loving space so I can begin putting together the broken pieces of my heart. Soul Seekers, I hope you find some solace, wisdom and even some peace in our conversation today with this warrior healer, Jenn Hepton, as she shares how she turned her pain into purpose on Soul School. Find Jenn Hepton at: https://www.lossintransition.com

Der Übercast
#UC087: Neues, Altes und iOS 11iges

Der Übercast

Play Episode Listen Later Jul 28, 2017 67:47


Alongside a whole load of leftover items and news to be discussed, the digitized version of Sven F. keeps flitting through the cabin and stirring up trouble. Andreas has the iOS 11 beta installed and reports on it. Dear passenger, if you like what you hear, or if it puts worry lines on your noble forehead, we have something for you: iTunes reviews. Follow-up: Hacker swipes $31M of Ether. Feedback on the razor topic from passenger Frank M.: That open-comb (OC) safety razors are more aggressive than closed-comb (CC) ones is a rumor that stubbornly persists. How aggressive a razor is fundamentally depends on the gap between the blade and the bottom plate. The bigger the gap, the more aggressive, as you can tell with all adjustable razors. The gentlest razor I have ever tried was the Phoenix Double Open Comb, on which even the head and base plate are toothed. A very nice and above all inexpensive razor is the Qshave, a clone of the Merkur Futur. Alum blocks are popular, but: "The BfR has not yet investigated the risk of alum blocks. Given the high aluminium concentrations and the fact that aluminium is absorbed through skin wounds to an even greater degree than through healthy skin, the use of alum sticks should be avoided." (SPIEGEL ONLINE) Honing a straight razor is not that big a deal and is only rarely necessary. If you only want to strop rather than hone yourself, you can send it once a year to one of the recognized honers from the gut-rasiert or forum.nassrasur.com forums. There are also a few pros in the Facebook group "Rasiermesser&Hobel Freunde der Nassrasur". During my safety-razor years I tried a straight razor now and then and gave up in frustration; believe me, I really was a pure safety-razor shaver. These days I shave daily with a straight razor or a shavette. I kept using the safety razor for the weekly head shave for a while, but even that now works wonderfully as a pure oil shave with a Vanta. I can only recommend trying it and going one step beyond the safety razor. It is the most fun and really gentle and thorough. F-Secure XFENCE bug with non-ASCII (e.g. Unicode) paths: XFENCE overwrites its own database, and suddenly you end up with several rules on one line and nothing but annoyance. So if you have problems, check on the command line: cd "/Users/Shared/F-Secure XFENCE" then grep --color '.allow|.deny' *.rc CheapCharts can now also track apps. Vox: The end of the internet startup. Acorn 6 - The Image Editor for Humans. Git for Sketch. Figma. Slidium. Apple Machine Learning Journal. Feedly finally has mute filters. YouTube/Kickstarter: Amabrush - World's First Automatic Toothbrush. Kickstarter: Hex Gambit: Fast & fluid turn-based strategy by One Man Left. Wrap-around sneakers on Galileo. Indiegogo MagC – "the only Magnetic Converter giving MagSafe, USB-C and Thunderbolt 3". Systematic 198: Automation for the People with Sal Soghoian. Cryptomator iOS. ExpanDrive 6. Feedback report: DevTools (Chrome 59) and full-size screenshots. Filebot addendum. iOS 11: App Store, Games/Apps, Control Center, Files.app, Do Not Disturb while driving, Maps.app indoor maps for malls and airports, HEIF and HEVC formats (old Macs = old formats; what happens when metadata is edited?). It's still not clear to me how Apple will manage HEIF and HEVC in iCloud Photo Library.
I have the public beta of iOS 11 installed on an iPad, and the photos taken with it sync properly to iCloud, but on my other devices, they show up as JPEG files, not containers. It's possible that the central iCloud sync will retain HEIF files and sync JPEG to the end points, but then what happens when you edit a file on a Mac running an older version? That's what we'll have to wait to find out. Macworld: Will new HEVC and HEIF formats work in older versions of macOS and iOS? Apple TV 4 = no HEVC hardware decoding on tvOS 11, only in software (not very efficient; at some point there will be new devices with an A9 chip, which can then also do it in hardware). The problem, of course, is always with backwards-compatibility. WebP was aggressively promoted by Google the same way Microsoft used to promote its quirky Windows Media formats back in the early 2000s, but the WebP and non-WebP camps are still largely separate. This is unfortunate, because WebP in my opinion really isn't very good, and a backwards-compatible compressor on top of JPEG, such as Dropbox's Lepton [2], achieves similar results. Any HEVC-based image compressor is necessarily incompatible with JPEG; so broad buy-in would be required from makers of software and hardware products, from operating systems, file managers, image viewers, image editors, cameras, and the like, to really enjoy its improved compression rates, vs. being just another incompatible format that only functions inside controlled ecosystems. Hacker News: HEIF – High Efficiency Image File Format. One-way translation via Siri. "Dark mode" via smart invert (images, media, some apps that use a dark color style => doesn't work well everywhere). What's New in iOS 11. Our picks: Patrick: Blackroll; Andreas: Gboard. In a donating mood? We have Flattr and PayPal ready and would be delighted.
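Since Dropbox's Lepton comes up in the show notes above as a backwards-compatible JPEG recompressor, here is a rough sketch of what the round trip looks like. It is an assumption-laden illustration, not something from the episode: it assumes the lepton binary from github.com/dropbox/lepton is on the PATH and, as its README describes, takes input and output paths as positional arguments; the file names are placeholders.

# Sketch: round-trip a JPEG through Lepton's .lep format and verify it is lossless.
# Assumption: the `lepton` binary accepts `lepton <input> <output>` as in its README.
import filecmp
import subprocess

def roundtrip(jpeg_path: str) -> bool:
    lep_path = jpeg_path + ".lep"
    restored_path = jpeg_path + ".restored.jpg"
    subprocess.run(["lepton", jpeg_path, lep_path], check=True)       # compress
    subprocess.run(["lepton", lep_path, restored_path], check=True)   # decompress
    return filecmp.cmp(jpeg_path, restored_path, shallow=False)       # byte-identical?

print(roundtrip("photo.jpg"))  # expect True; the .lep file should be roughly 20% smaller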

RWpod - подкаст про мир Ruby и Web технологии
06 выпуск 05 сезона. The Lego Way of Structuring Rails Code, Sidekiq-merger, Relaxo, Kap, Dante II, Notti.js, Lepton и прочее

RWpod - подкаст про мир Ruby и Web технологии

Play Episode Listen Later Feb 12, 2017 29:00


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: The Lego Way of Structuring Rails Code, Simple tips to make scaling your database easier as you grow and Hash Tables Explained; How to DRY out your RSpec Tests using Shared Examples and Autocomplete Using Redis; Sidekiq-merger - merge Sidekiq jobs occurring before the execution times; Relaxo - a transactional database built on top of git whose aim is to provide a robust interface for document storage and sorted indexes; and Decoding and Interacting with Barcodes. JavaScript: How to Achieve Reusability with React Components and Why Webpack 2's Tree Shaking is not as effective as you think; Top mentioned books on stackoverflow.com, Distributed Object Protocol and Mnemonist - a curated collection of data structures for the JavaScript language; Kap - an open-source screen recorder built with web technology; Lepton - a lean GitHub Gist Desktop Client based on Electron; Dante II - just another Medium clone built on top of DraftJs; and Notti.js - dead simple user notifications.

More Than Just Code podcast - iOS and Swift development, news and advice
Episode 102: A Tree Branch With a Chip Attached

More Than Just Code podcast - iOS and Swift development, news and advice

Play Episode Listen Later Jul 30, 2016 105:37


This week we find out what Jaime thinks about burritos and his first computer. We discussed the rumored MacBook Air with USB-C port. We discuss the iPad Paradox in light of Apple's better than expected Q3 results and compare the iPad with the Surface. We dig deeper into Apple's third-quarter results and Apple's sale of the one billionth iPhone sold. We follow up on NIST declaration of SMS two-factor authentication insecurity. Picks: ItsyCal, Flask-ASK, Prisma, MiFlight Circuit Tree 3D by Steve Duggan Episode 102 Show Notes: Chipotles Canadian Tire money MacBook Air might not be scrapped after all Fantastical 2 Buddybuild The iPad Paradox iPad year over year growth. Sixcolors https://twitter.com/bleedsixcolors/status/758047510422179840 Surface Pro 4 Lenovo PC on stick Apple beats third-quarter expectations with $7.8-billion profit Larry Ellison Continuum Apple Has Sold 1 Billion Phones The race to a billion—2012 Update New Blackberry NIST declares the age of SMS-based 2-factor authentication over Two-Factor Authentication • Authy 2life - Get the Most Out Of Your Relationship pCalc Lepton image compression: saving 22% losslessly from images at 15MB/s indiedevstock NSScanner Rocksmith iRig HD AmpliTube AudioBus The New York Code + Design Academy Episode 102 Picks: ItsyCal Flask-Ask Flask-Ask: A New Python Framework for Rapid Alexa Skills Kit Development Flask-Ask - Using Alexa Skills with John Wheeler Secure tunnels to localhost Prisma MiFlight

The Scientific Odyssey
Episode 2.25: Putting the Puzzle Together

The Scientific Odyssey

Play Episode Listen Later Oct 2, 2015 45:47


A discussion of the creation of the Standard Model including the work of Sheldon Glashow, Abdus Salam and Steven Weinberg.

What's in the news Robin
The Lepton RDA ~ KVLT Mods ~ Norway ~ Hard Cheese

What's in the news Robin

Play Episode Listen Later Aug 13, 2015 68:22


8-13-15 VLOG Hey everyone! Welcome to the VLOG for August 13th 2015. As per usual we have an action-packed VLOG this week. There is some vape news out of Norway regarding bans that I found pretty interesting. We do first impressions of the KVLT mod and the LEPTON RDA and we talk about a VERY strange juice called MYLK from the Brewell people. Of course there is plenty of beer and shoutouts included as well. So tuck in, grab your best vape and enjoy the program. Timestamps, Soundcloud link and other crucial links are all listed below. Also remember that I will be gone at ECC for the next 4 days, so comments getting replied to will take an extra long time, if at all. -----TIMESTAMPS----- Top of the program is some announcements / Updates / Norway Advocacy is at 13:27 Rob Swire talk is at 15:34 Beer is at 20:54 Shoutouts are at 28:24 First impressions are at 36:39 Reviews for things that never got reviews ( new segment ) is at 56:24 Crucial links are below A raffle for Michael Faulkner will be held here soon https://www.facebook.com/groups/607296666011785/ -----The Advocacy----- Together Win or Lose https://www.youtube.com/channel/UCd0wAvPLnV2H4_e9tR8wzPA Norway Vaping http://www.planetofthevapes.co.uk/news/vaping-news/2015-08-05_no-way-norway.html -----The Beer----- Sanctus Dominicus http://www.sanctusdominicus.org/ -----The Vapor----- KVLT mods http://kvltmods.bigcartel.com/product/soulreaper-dual-18650-unregulated-w-mosfet Emperor Vap'east Lepton RDA https://www.facebook.com/emperorvapeast/timeline Brewell Vapory MYLK Juice http://justvapeinc.myshopify.com/products/brewell-vapory-mylk-strawberry?variant=4206985477 Brewell Facebook ( seriously, their other juices are great ) https://www.facebook.com/brewellvapory -----The Other----- Rob Swire Story http://vaperanks.com/musician-attributes-hearing-loss-to-electronic-cigarettes/ True Cost Of Smoking Infographic http://www.vapourlites.com/blog/the-true-cost-of-smoking.html Metal Gear Vaper giveaway https://www.youtube.com/watch?v=i-Yk3VARtG4 -----Weekly Review Series----- Assassin mech mod https://www.youtube.com/watch?v=Z6rPtoaW7PM 454 V2 RDA https://www.youtube.com/watch?v=xIfp2WRLBlU General DNA40 Pipe https://www.youtube.com/watch?v=SLQqdbzEUMM -----My Social Media----- Instagram http://instagram.com/grimmgreen Twitter https://twitter.com/grimmgreen Facebook https://www.facebook.com/GrimmGreen Namberjuice http://namberjuice.com/ Also please remember that unless you make it so I can reply to you, I will be unable to reply to your comments.

Big Woods Bible Church
2-23-2014 It's Not Yours Lessons from the Lepton

Big Woods Bible Church

Play Episode Listen Later Jul 4, 2015 42:15


2-23-2014 It's Not Yours Lessons from the Lepton

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 04/05
Search for strongly interacting supersymmetric particles decaying to final states with an isolated lepton with the ATLAS detector at the LHC

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 04/05

Play Episode Listen Later Feb 26, 2014


Two analyses searching for squarks and gluinos which decay into final states with multiple jets, an isolated electron or muon and a large missing transverse energy are presented. Both rely on data taken by the ATLAS detector in pp collisions at a center-of-mass energy of 8 TeV at the LHC during 2012. The first analysis uses a subset of 5.8 fb-1 of this dataset, the other analysis uses the full statistics of 20.3 fb-1. Both analyses share the same methods regarding the triggers and the background estimation techniques. The two dominant backgrounds are ttbar and W+jets production. The ttbar and the W+jets backgrounds are estimated in a semi-data-driven method. The minor QCD multi-jet background is estimated in an entirely data-driven method. The final background estimates in the analyses are derived in a profile-log-likelihood fit. Neither of the analyses sees an excess beyond Standard Model expectations. The analysis of the partial dataset derives limits in a MSUGRA/CMSSM model with parameters A_0=0, tan(beta) = 10 and mu > 0 and excludes squarks and gluinos with masses below 1.2 TeV for equal squark and gluino masses. The analysis of the full dataset derives limits in simplified models and in a MSUGRA/CMSSM model with parameters A_0=-2 m_0, tan(beta) = 30 and mu > 0. Gluinos (squarks) with masses below 1.2 TeV (750 GeV) can be excluded for vanishing LSP masses in simplified models. Gluino masses below 1.2 TeV can be excluded for every m_0 value in the MSUGRA/CMSSM model.

Physik - Open Access LMU - Teil 02/02
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in pp collisions at √s=8TeV with the ATLAS detector

Physik - Open Access LMU - Teil 02/02

Play Episode Listen Later Jan 1, 2014


Measurements of fiducial and differential cross sections of Higgs boson production in the H→ZZ⁎→4ℓ decay channel are presented. The cross sections are determined within a fiducial phase space and corrected for detection efficiency and resolution effects. They are based on 20.3 fb−1 of pp collision data, produced at √s=8TeV centre-of-mass energy at the LHC and recorded by the ATLAS detector. The differential measurements are performed in bins of transverse momentum and rapidity of the four-lepton system, the invariant mass of the subleading lepton pair and the decay angle of the leading lepton pair with respect to the beam line in the four-lepton rest frame, as well as the number of jets and the transverse momentum of the leading jet. The measured cross sections are compared to selected theoretical calculations of the Standard Model expectations. No significant deviation from any of the tested predictions is found.

Physik - Open Access LMU - Teil 02/02
Search for supersymmetry in events with large missing transverse momentum, jets, and at least one tau lepton in 20 fb−1 of √s = 8 TeV proton-proton collision data with the ATLAS detector

Physik - Open Access LMU - Teil 02/02

Play Episode Listen Later Jan 1, 2014


A search for supersymmetry (SUSY) in events with large missing transverse momentum, jets, at least one hadronically decaying tau lepton and zero or one additional light leptons (electron/muon), has been performed using 20.3 fb−1 of proton-proton collision data at √s = 8 TeV recorded with the ATLAS detector at the Large Hadron Collider. No excess above the Standard Model background expectation is observed in the various signal regions and 95% confidence level upper limits on the visible cross section for new phenomena are set. The results of the analysis are interpreted in several SUSY scenarios, significantly extending previous limits obtained in the same final states. In the framework of minimal gauge-mediated SUSY breaking models, values of the SUSY breaking scale Λ below 63 TeV are excluded, independently of tan β. Exclusion limits are also derived for an mSUGRA/CMSSM model, in both the R-parity-conserving and R-parity-violating case. A further interpretation is presented in a framework of natural gauge mediation, in which the gluino is assumed to be the only light coloured sparticle and gluino masses below 1090 GeV are excluded.

Physik - Open Access LMU - Teil 02/02
Search for long-lived neutral particles decaying into lepton jets in proton-proton collisions at √s=8 TeV with the ATLAS detector

Physik - Open Access LMU - Teil 02/02

Play Episode Listen Later Jan 1, 2014


Several models of physics beyond the Standard Model predict neutral particles that decay into final states consisting of collimated jets of light leptons and hadrons (so-called “lepton jets”). These particles can also be long-lived with decay length comparable to, or even larger than, the LHC detectors’ linear dimensions. This paper presents the results of a search for lepton jets in proton-proton collisions at the centre-of-mass energy of √s = 8 TeV in a sample of 20.3 fb⁻¹ collected during 2012 with the ATLAS detector at the LHC. Limits on models predicting Higgs boson decays to neutral long-lived lepton jets are derived as a function of the particle’s proper decay length

Physik - Open Access LMU - Teil 02/02
Search for new particles in events with one lepton and missing transverse momentum in pp collisions at √s=8 TeV with the ATLAS detector

Physik - Open Access LMU - Teil 02/02

Play Episode Listen Later Jan 1, 2014


This paper presents a search for new particles in events with one lepton (electron or muon) and missing transverse momentum using 20.3fb⁻¹ of proton-proton collision data at √s = 8TeV recorded by the ATLAS experiment at the Large Hadron Collider. No significant excess beyond Standard Model expectations is observed. A W′ with Sequential Standard Model couplings is excluded at the 95% confidence level for masses up to 3.24TeV. Excited chiral bosons (W∗) with equivalent coupling strengths are excluded for masses up to 3.21TeV. In the framework of an effective field theory limits are also set on the dark matter-nucleon scattering cross-section as well as the mass scale M∗ of the unknown mediating interaction for dark matter pair production in association with a leptonically decaying W

Physik - Open Access LMU - Teil 02/02
Search for top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in √s = 8 TeV pp collisions with the ATLAS detector

Physik - Open Access LMU - Teil 02/02

Play Episode Listen Later Jan 1, 2014


The results of a search for top squark (stop) pair production in final states with one isolated lepton, jets, and missing transverse momentum are reported. The analysis is performed with proton-proton collision data at √s = 8 TeV collected with the ATLAS detector at the LHC in 2012 corresponding to an integrated luminosity of 20 fb−1. The lightest supersymmetric particle (LSP) is taken to be the lightest neutralino which only interacts weakly and is assumed to be stable. The stop decay modes considered are those to a top quark and the LSP as well as to a bottom quark and the lightest chargino, where the chargino decays to the LSP by emitting a W boson. A wide range of scenarios with different mass splittings between the stop, the lightest neutralino and the lightest chargino are considered, including cases where the W bosons or the top quarks are off-shell. Decay modes involving the heavier charginos and neutralinos are addressed using a set of phenomenological models of supersymmetry. No significant excess over the Standard Model prediction is observed. A stop with a mass between 210 and 640 GeV decaying directly to a top quark and a massless LSP is excluded at 95% confidence level, and in models where the mass of the lightest chargino is twice that of the LSP, stops are excluded at 95% confidence level up to a mass of 500 GeV for an LSP mass in the range of 100 to 150 GeV. Stringent exclusion limits are also derived for all other stop decay modes considered, and model-independent upper limits are set on the visible cross-section for processes beyond the Standard Model.

Naked Science Scrapbook
What is the Higgs boson? - Science Scrapbook 13.10.07

Naked Science Scrapbook

Play Episode Listen Later Oct 6, 2013 5:09


Scientists at the Large Hadron Collider have found evidence of a new fundamental particle that could be the much sought-after Higgs boson. In this Naked Science Scrapbook we find out what a Higgs boson actually is, and why finding it could help us understand the structure of the universe a little bit better...

UC Davis Particle Physics Seminars
Higgs Decaying to Lepton Jets

UC Davis Particle Physics Seminars

Play Episode Listen Later Nov 23, 2010 71:11


Tomer Volansky presents a novel scenario in which the Higgs decays predominantly into a light hidden sector either directly or through light SUSY states.

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 02/05
Measurements of the Neutral Current ep Cross Sections Using Longitudinally Polarised Lepton Beams at HERAII

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 02/05

Play Episode Listen Later Mar 15, 2007


This thesis presents inclusive $e^\pm p$ single and double differential cross sections for neutral current deep inelastic scattering measured as functions of the four-momentum transfer squared $Q^2$ and the Bjorken variable $x$ in interactions of longitudinally polarised leptons with unpolarised protons using the H1 detector at HERA~II. An overview of the phenomenology of deep inelastic scattering is given and the experimental apparatus as well as the measurement and analysis procedures are described. The analysis is based on $e^+p$ data taken in 2003-04 and $e^-p$ data taken in 2005 at a centre-of-mass energy of $\sqrt{s} = 318$~GeV, with integrated luminosities of 47.6~pb$^{-1}$ and 98.4~pb$^{-1}$ for the $e^+ p$ and $e^- p$ samples, respectively. The cross sections are measured in the range of $200 < Q^2 < 20,000$~GeV$^2$ and $0.0032

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 02/05
Measurement of the Top Quark Mass at D0 Run II with the Matrix Element Method in the Lepton+Jets Final State

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 02/05

Play Episode Listen Later Oct 21, 2005


The mass of the top quark is a fundamental parameter of the Standard Model. Its precise knowledge yields valuable insights into unresolved phenomena in and beyond the Standard Model. A measurement of the top quark mass with the matrix element method in the lepton+jets final state in D0 Run II is presented. Events are selected requiring an isolated energetic charged lepton (electron or muon), significant missing transverse energy, and exactly four calorimeter jets. For each event, the probabilities to originate from the signal and background processes are calculated based on the measured kinematics, the object resolutions and the respective matrix elements. The jet energy scale is known to be the dominant source of systematic uncertainty. The reference scale for the mass measurement is derived from Monte Carlo events. The matrix element likelihood is defined as a function of both, mtop and jet energy scale JES, where the latter represents a scale factor with respect to the reference scale. The top mass is obtained from a two-dimensional correlated fit, and the likelihood yields both the statistical and jet energy scale uncertainty. Using a dataset of 320 pb-1 of D0 Run II data, the mass of the top quark is measured to be mtop (ljets) = 169.5 +/- 4.4(stat.+JES) +1.7-1.6(syst.) GeV mtop (ejets) = 168.8 +/- 6.0(stat.+JES) +1.9-1.9(syst.) GeV mtop (mujets)= 172.3 +/- 9.6(stat.+JES) +3.4-3.3(syst.) GeV The jet energy scale measurement in the lepton+jets sample yields JES=1.034 +/- 0.034, suggesting good consistency of the data with the simulation. The measurement forecasts significant improvements to the total top mass uncertainty during Run II before the startup of the LHC, as the data sample will grow by a factor of ten and D0's tracking capabilities will be employed in jet energy reconstruction and flavor identification.

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 01/05
Bestimmung der Masse und Breite des W-Bosons im semileptonischen Zerfallskanal mit dem OPAL Detektor bei LEP

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 01/05

Play Episode Listen Later Apr 20, 2004


This thesis is a further development of the convolution method used at the OPAL experiment to determine the mass of the charged gauge boson of the weak interaction. The method was extended to a simultaneous determination of the mass Mw and the decay width Gw of this gauge boson, known as the W boson. Data recorded with the OPAL experiment in the years 1997 to 2000 were analysed for this purpose. Of the possible decays of the produced W boson pairs, only semileptonic ones are considered, in which one W boson decays hadronically into a quark-antiquark pair and the other into a charged lepton and a neutrino. The convolution method takes into account the uncertainties of the individual events arising from the detector resolution. For each event a function P(m) is determined which gives the probability that the produced W bosons have a mean mass m. This so-called event probability density is convolved with a physics function PF(m; Mw, Gw) that depends on the parameters mass Mw and decay width Gw of the W boson. It describes the production probability of the W boson pairs, taking into account photon radiation in the initial state. From this convolution one obtains an event likelihood function depending on Mw and Gw, which is a measure of the probability that the event originates from a W boson with the parameters Mw and Gw. From all selected semileptonic W boson events a total likelihood function L(Mw, Gw) is computed. By maximizing this function with respect to Mw and Gw, a simultaneous determination of the mass and width of the W boson is possible at OPAL for the first time. With an integrated total luminosity of 683.84 pb^-1, recorded by the OPAL experiment in the years 1997 to 2000 at centre-of-mass energies of 183 to 208 GeV, the semileptonic decays of W boson pairs yield values for the mass Mw and width Gw of the W boson of: Mw = 80.424 +- 0.077 GeV/c^2, Gw = 2.126 +- 0.130 GeV/c^2. The measured parameters are in good agreement with the predictions of the Standard Model of particle physics.

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 01/05
A measurement of the relative decay rate of the charm quark into leptons

Fakultät für Physik - Digitale Hochschulschriften der LMU - Teil 01/05

Play Episode Listen Later Feb 10, 2003


The inclusive rate of leptons produced from charmed particle decays (from Z -> charm anticharm) has been measured using a data sample of four million hadronic Z decays recorded by the ALEPH experiment at LEP from 1991 to 1995. The analysis uses a double-tag method and is self-calibrating, i.e. it makes minimal use of the Monte Carlo simulated information. Each event is divided into two hemispheres; hemispheres containing charm quarks are identified with a neural network analysis method. Lepton candidates are then searched for in the opposite hemisphere. Based on fully reconstructed D candidates, a value of BR(c -> l + X) = 9.09 +/- 0.61 % (l = e or mu) is obtained. The uncertainty includes both the statistical and systematic components.