POPULARITY
163 Ciencia Soviética, Pragmatismo, y las Pitagóricas de la Virtud De la mano del filósofo matemático Martin Gardner, del libro titulado “La ciencia. Lo bueno, lo malo y lo feo”, traigo una reflexión sobre los Física Cuántica Soviética. Luego, en donde hemos pagado los derechos de autor, les proponemos una reflexión musical sobre el Pragmatismo. Por último, de la obra “Una historia poco convencional de la filosofía occidental”, escrito por la filósofa eco-feminista Karen Warren, les propongo una conversación entre Aristóteles y las pitagóricas Periktione y Theano. Cerramos con una canción del dominio público, debido a que nos gastamos todo el presupuesto en la reflexión musical, de parte de Clara Smith, compuesta alrededor de 1933, titulada “Blues del Palacio de Justicia”. Bibliografía: Gardner, Martin (1988) La ciencia. Lo bueno, lo malo y lo falso, Alianza. Warren, Karen (2009) An Unconventional History of Western Philosophy: Conversations Between Men and Women Philosophers [Una historia poco convencional de la filosofía occidental: conversaciones entre filósofos y filosóficas], Rowman & Littlefield. Música: Cortina de Fondo: Epistemólogo Ebrio (2024) Daydream Blues, Suno. Jingle de Introducción: Epistemólogo Ebrio (2024) Podcast del Epistemólogo Ebrio, Suno. Jingle de Reflexión: Epistemólogo Ebrio (2024) Reflexión Musical, Suno. Tema del Dominio Público: Czech National Symphony Orchestra (2024) Edvard Grieg. En el salón del rey de la montaña, Musopen Kickstarter Project. Tema de Cierre: Epistemólogo Ebrio (2024) Blues del Epistemólogo Ebrio, Suno. Jazz del Pragmatismo: Queremos salir de las sombras y la oscuridad, Necesitamos que los instrumentos nos llenen de aire. Nuestro espíritu tiene que salir de su desaire, Tenemos que volver a experimentar la realidad. Es hora de bailar, de cantar De correr, de barrer Es hora de saltar y de contar De querer, de aprender Es el momento de actuar. Es el momento de averiguar. Es el momento de triunfar Puedes escucharlo desde la aplicación SPOTIFY: https://open.spotify.com/embed/show/1uobRUSrFJp52FZdcsCOQe?si=68RLeyXWQaW3FLQw8VNwGQ Puedes escucharlo directamente desde IVOOX: https://ar.ivoox.com/es/podcast-educacion-para-jovenes-epistemologia-audio_sq_f1638689_1.html Puedes escucharlo directamente desde YOUTUBE: https://www.youtube.com/channel/UCDaC646HXI5jCnkji4jBtMQ/featured?view_as=subscriber Puedes escucharlo directamente desde GOOGLE PODCAST: https://podcasts.google.com/?feed=aHR0cHM6Ly93d3cuaXZvb3guY29tL2VkdWNhY2lvbi1wYXJhLWpvdmVuZXMtZXBpc3RlbW9sb2dpYS1hdWRpb19mZ19mMTYzODY4OV9maWx0cm9fMS54bWw&ep=14 Puedes escucharlo directamente desde APPLEPODCAST: https://podcasts.apple.com/au/podcast/educaci%C3%B3n-para-j%C3%B3venes-epistemolog%C3%ADa-por-audio/id1448671719 Puedes escucharlo directamente desde CASTBOX: https://castbox.fm/channel/Epistem%C3%B3logo-Ebrio-id1929217?country=us Tenemos Facebook: https://www.facebook.com/epistemologoebrio Tenemos Instagram: https://www.instagram.com/epistemologoebrio/ Tenemos Mastodon: https://mast.lat/@paravano69 Tenemos Twitter: https://twitter.com/paravano69 ¡Siempre puedes compartirlo o a tu peor enemigo o a tu mejor amigo! SALUD Y BUENAS CIENCIAS #epistemología #filosofía #ciencia #podcast #epistemólogoebrio #reflexión #historiadelaciencia #pseudociencias
Speaker CFPs and Sponsor Guides are now available for AIE World's Fair — join us on June 25-27 for the biggest AI Engineer conference of 2024!Soumith Chintala needs no introduction in the ML world — his insights are incredibly accessible across Twitter, LinkedIn, podcasts, and conference talks (in this pod we'll assume you'll have caught up on the History of PyTorch pod from last year and cover different topics). He's well known as the creator of PyTorch, but he's more broadly the Engineering Lead on AI Infra, PyTorch, and Generative AI at Meta.Soumith was one of the earliest supporters of Latent Space (and more recently AI News), and we were overjoyed to catch up with him on his latest SF visit for a braindump of the latest AI topics, reactions to some of our past guests, and why Open Source AI is personally so important to him.Life in the GPU-Rich LaneBack in January, Zuck went on Instagram to announce their GPU wealth: by the end of 2024, Meta will have 350k H100s. By adding all their GPU clusters, you'd get to 600k H100-equivalents of compute. At FP16 precision, that's ~1,200,000 PFLOPS. If we used George Hotz's (previous guest!) "Person of Compute" measure, Meta now has 60k humans of compute in their clusters. Occasionally we get glimpses into the GPU-rich life; on a recent ThursdAI chat, swyx prompted PaLM tech lead Yi Tay to write down what he missed most from Google, and he commented that UL2 20B was trained by accidentally leaving the training job running for a month, because hardware failures are so rare in Google.Meta AI's Epic LLM RunBefore Llama broke the internet, Meta released an open source LLM in May 2022, OPT-175B, which was notable for how “open” it was - right down to the logbook! They used only 16 NVIDIA V100 GPUs and Soumith agrees that, with hindsight, it was likely under-trained for its parameter size.In Feb 2023 (pre Latent Space pod), Llama was released, with a 7B version trained on 1T tokens alongside 65B and 33B versions trained on 1.4T tokens. The Llama authors included Guillaume Lample and Timothée Lacroix, who went on to start Mistral.July 2023 was Llama2 time (which we covered!): 3 model sizes, 7B, 13B, and 70B, all trained on 2T tokens. The three models accounted for a grand total of 3,311,616 GPU hours for all pre-training work. CodeLlama followed shortly after, a fine-tune of Llama2 specifically focused on code generation use cases. The family had models in the 7B, 13B, 34B, and 70B size, all trained with 500B extra tokens of code and code-related data, except for 70B which is trained on 1T.All of this on top of other open sourced models like Segment Anything (one of our early hits!), Detectron, Detectron 2, DensePose, and Seamless, and in one year, Meta transformed from a company people made fun of for its “metaverse” investments to one of the key players in the AI landscape and its stock has almost tripled since (about $830B in market value created in the past year).Why Open Source AIThe obvious question is why Meta would spend hundreds of millions on its AI efforts and then release them for free. Zuck has addressed this in public statements:But for Soumith, the motivation is even more personal:“I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India… And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for like zero dollars. And I think that was a strong reason why I ended up where I am. So like that, like the open source side of things, I always push regardless of like what I get paid for, like I think I would do that as a passion project on the side……I think at a fundamental level, the most beneficial value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me……Like, okay, I again always go back to like I'm a student in India with no money. What is my accessibility to any of these closed source models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control issue: I strongly believe if you want human aligned AI, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble.We like the way Soumith put it last year: Closed AI “rate-limits against people's imaginations and needs”!What It Takes For Open Source AI to WinHowever Soumith doesn't think Open Source will simply win by popular demand. There is a tremendous coordination problem with the decentralized nature of the open source AI development right now: nobody is collecting the valuable human feedback in the way that OpenAI or Midjourney are doing.“Open source in general always has a coordination problem. If there's a vertically integrated provider with more resources, they will just be better coordinated than open source. And so now open source has to figure out how to have coordinated benefits. And the reason you want coordinated benefits is because these models are getting better based on human feedback. And if you see with open source models, like if you go to the /r/localllama subreddit, like there's so many variations of models that are being produced from, say, Nous research. I mean, like there's like so many variations built by so many people. And one common theme is they're all using these fine-tuning or human preferences datasets that are very limited and they're not sufficiently diverse. And you look at the other side, say front-ends like Oobabooga or like Hugging Chat or Ollama, they don't really have feedback buttons. All the people using all these front-ends, they probably want to give feedback, but there's no way for them to give feedback… So we're just losing all of this feedback. Maybe open source models are being as used as GPT is at this point in like all kinds of, in a very fragmented way, like in aggregate all the open source models together are probably being used as much as GPT is, maybe close to that. But the amount of feedback that is driving back into the open source ecosystem is like negligible, maybe less than 1% of like the usage. So I think like some, like the blueprint here I think is you'd want someone to create a sinkhole for the feedback… I think if we do that, if that actually happens, I think that probably has a real chance of the open source models having a runaway effect against OpenAI, I think like there's a clear chance we can take at truly winning open source.”If you're working on solving open source coordination, please get in touch!Show Notes* Soumith Chintala Twitter* History of PyTorch episode on Gradient Podcast* The Llama Ecosystem* Apple's MLX* Neural ODEs (Ordinary Differential Equations)* AlphaGo* LMSys arena* Dan Pink's "Drive"* Robotics projects:* Dobb-E* OK Robot* Yann LeCun* Yangqing Jia of Lepton AI* Ed Catmull* George Hotz on Latent Space* Chris Lattner on Latent Space* Guillaume Lample* Yannic Kilcher of OpenAssistant* LMSys* Alex Atallah of OpenRouter* Carlo Sferrazza's 3D tactile research* Alex Wiltschko of Osmo* Tangent by Alex Wiltschko* Lerrel Pinto - RoboticsTimestamps* [00:00:00] Introductions* [00:00:51] Extrinsic vs Intrinsic Success* [00:02:40] Importance of Open Source and Its Impact* [00:03:46] PyTorch vs TinyGrad* [00:08:33] Why PyTorch is the Switzerland of frameworks* [00:10:27] Modular's Mojo + PyTorch?* [00:13:32] PyTorch vs Apple's MLX* [00:16:27] FAIR / PyTorch Alumni* [00:18:50] How can AI inference providers differentiate?* [00:21:41] How to build good benchmarks and learnings from AnyScale's* [00:25:28] Most interesting unexplored ideas* [00:28:18] What people get wrong about synthetic data* [00:35:57] Meta AI's evolution* [00:38:42] How do you allocate 600,000 GPUs?* [00:42:05] Even the GPU Rich are GPU Poor* [00:47:31] Meta's MTIA silicon* [00:50:09] Why we need open source* [00:59:00] Open source's coordination problem for feedback gathering* [01:08:59] Beyond text generation* [01:15:37] Osmo and the Future of Smell Recognition TechnologyTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we have in the studio Soumith Chintala, welcome.Soumith [00:00:17]: Thanks for having me.Swyx [00:00:18]: On one of your rare visits from New York where you live. You got your start in computer vision at NYU with Yann LeCun. That was a very fortuitous start. I was actually listening to your interview on the Gradient podcast. So if people want to know more about the history of Soumith, history of PyTorch, they can go to that podcast. We won't spend that much time there, but I just was marveling at your luck, or I don't know if it's your luck or your drive to find AI early and then find the right quality mentor because I guess Yan really sort of introduced you to that world.Soumith [00:00:51]: Yeah, I think you're talking about extrinsic success, right? A lot of people just have drive to do things that they think is fun, and a lot of those things might or might not be extrinsically perceived as good and successful. I think I just happened to like something that is now one of the coolest things in the world or whatever. But if I happen, the first thing I tried to become was a 3D VFX artist, and I was really interested in doing that, but I turned out to be very bad at it. So I ended up not doing that further. But even if I was good at that, whatever, and I ended up going down that path, I probably would have been equally happy. It's just like maybe like the perception of, oh, is this person successful or not might be different. I think like after a baseline, like your happiness is probably more correlated with your intrinsic stuff.Swyx [00:01:44]: Yes. I think Dan Pink has this book on drive that I often refer to about the power of intrinsic motivation versus extrinsic and how long extrinsic lasts. It's not very long at all. But anyway, now you are an investor in Runway, so in a way you're working on VFX. Yes.Soumith [00:02:01]: I mean, in a very convoluted way.Swyx [00:02:03]: It reminds me of Ed Catmull. I don't know if you guys know, but he actually tried to become an animator in his early years and failed or didn't get accepted by Disney and then went and created Pixar and then got bought by Disney and created Toy Story. So you joined Facebook in 2014 and eventually became a creator and maintainer of PyTorch. And there's this long story there you can refer to on the gradient. I think maybe people don't know that you also involved in more sort of hardware and cluster decision affair. And we can dive into more details there because we're all about hardware this month. Yeah. And then finally, I don't know what else, like what else should people know about you on a personal side or professional side?Soumith [00:02:40]: I think open source is definitely a big passion of mine and probably forms a little bit of my identity at this point. I'm irrationally interested in open source. I think open source has that fundamental way to distribute opportunity in a way that is very powerful. Like, I grew up in India. I didn't have internet for a while. In college, actually, I didn't have internet except for GPRS or whatever. And knowledge was very centralized, but I saw that evolution of knowledge slowly getting decentralized. And that ended up helping me learn quicker and faster for zero dollars. And I think that was a strong reason why I ended up where I am. So the open source side of things, I always push regardless of what I get paid for, like I think I would do that as a passion project on the side.Swyx [00:03:35]: Yeah, that's wonderful. Well, we'll talk about the challenges as well that open source has, open models versus closed models. Maybe you want to touch a little bit on PyTorch before we move on to the sort of Meta AI in general.PyTorch vs Tinygrad tradeoffsAlessio [00:03:46]: Yeah, we kind of touched on PyTorch in a lot of episodes. So we had George Hotz from TinyGrad. He called PyTorch a CISC and TinyGrad a RISC. I would love to get your thoughts on PyTorch design direction as far as, I know you talk a lot about kind of having a happy path to start with and then making complexity hidden away but then available to the end user. One of the things that George mentioned is I think you have like 250 primitive operators in PyTorch, I think TinyGrad is four. So how do you think about some of the learnings that maybe he's going to run into that you already had in the past seven, eight years almost of running PyTorch?Soumith [00:04:24]: Yeah, I think there's different models here, but I think it's two different models that people generally start with. Either they go like, I have a grand vision and I'm going to build a giant system that achieves this grand vision and maybe one is super feature complete or whatever. Or other people say they will get incrementally ambitious, right? And they say, oh, we'll start with something simple and then we'll slowly layer out complexity in a way that optimally applies Huffman coding or whatever. Like where the density of users are and what they're using, I would want to keep it in the easy, happy path and where the more niche advanced use cases, I'll still want people to try them, but they need to take additional frictional steps. George, I think just like we started with PyTorch, George started with the incrementally ambitious thing. I remember TinyGrad used to be, like we would be limited to a thousand lines of code and I think now it's at 5,000. So I think there is no real magic to which why PyTorch has the kind of complexity. I think it's probably partly necessitated and partly because we built with the technology available under us at that time, PyTorch is like 190,000 lines of code or something at this point. I think if you had to rewrite it, we would probably think about ways to rewrite it in a vastly simplified way for sure. But a lot of that complexity comes from the fact that in a very simple, explainable way, you have memory hierarchies. You have CPU has three levels of caches and then you have DRAM and SSD and then you have network. Similarly, GPU has several levels of memory and then you have different levels of network hierarchies, NVLink plus InfiniBand or Rocky or something like that, right? And the way the flops are available on your hardware, they are available in a certain way and your computation is in a certain way and you have to retrofit your computation onto both the memory hierarchy and like the flops available. When you're doing this, it is actually a fairly hard mathematical problem to do this setup, like you find the optimal thing. And finding the optimal thing is, what is optimal depends on the input variables themselves. So like, okay, what is the shape of your input tensors and what is the operation you're trying to do and various things like that. Finding that optimal configuration and writing it down in code is not the same for every input configuration you have. Like for example, just as the shape of the tensors change, let's say you have three input tensors into a Sparstar product or something like that. The shape of each of these input tensors will vastly change how you do this optimally placing this operation onto the hardware in a way that will get you maximal throughput. So a lot of our complexity comes from writing out hundreds of configurations for each single PyTorch operator and templatizing these things and symbolically generating the final CUDA code or CPU code. There's no way to avoid it because mathematically we haven't found symbolic ways to do this that also keep compile time near zero. You can write a very simple framework, but then you also should be willing to eat the long compile time. So if searching for that optimal performance at runtime, but that's the trade off. There's no, like, I don't think unless we have great breakthroughs George's vision is achievable, he should be thinking about a narrower problem such as I'm only going to make this for work for self-driving car connets or I'm only going to make this work for LLM transformers of the llama style. Like if you start narrowing the problem down, you can make a vastly simpler framework. But if you don't, if you need the generality to power all of the AI research that is happening and keep zero compile time and in all these other factors, I think it's not easy to avoid the complexity.Pytorch vs MojoAlessio [00:08:33]: That's interesting. And we kind of touched on this with Chris Lattner when he was on the podcast. If you think about frameworks, they have the model target. They have the hardware target. They have different things to think about. He mentioned when he was at Google, TensorFlow trying to be optimized to make TPUs go brr, you know, and go as fast. I think George is trying to make especially AMD stack be better than ROCm. How come PyTorch has been such as Switzerland versus just making Meta hardware go brr?Soumith [00:09:00]: First, Meta is not in the business of selling hardware. Meta is not in the business of cloud compute. The way Meta thinks about funding PyTorch is we're funding it because it's net good for Meta to fund PyTorch because PyTorch has become a standard and a big open source project. And generally it gives us a timeline edge. It gives us leverage and all that within our own work. So why is PyTorch more of a Switzerland rather than being opinionated? I think the way we think about it is not in terms of Switzerland or not. We actually the way we articulate it to all hardware vendors and software vendors and all who come to us being we want to build a backend in core for PyTorch and ship it by default is we just only look at our user side of things. Like if users are using a particular piece of hardware, then we want to support it. We very much don't want to king make the hardware side of things. So as the MacBooks have GPUs and as that stuff started getting increasingly interesting, we pushed Apple to push some engineers and work on the NPS support and we spend significant time from Meta funded engineers on that as well because a lot of people are using the Apple GPUs and there's demand. So we kind of mostly look at it from the demand side. We never look at it from like oh which hardware should we start taking opinions on.Swyx [00:10:27]: Is there a future in which, because Mojo or Modular Mojo is kind of a superset of Python, is there a future in which PyTorch might use Mojo features optionally?Soumith [00:10:36]: I think it depends on how well integrated it is into the Python ecosystem. So if Mojo is like a pip install and it's readily available and users feel like they can use Mojo so smoothly within their workflows in a way that just is low friction, we would definitely look into that. Like in the same way PyTorch now depends on Triton, OpenAI Triton, and we never had a conversation that was like huh, that's like a dependency. Should we just build a Triton of our own or should we use Triton? It almost doesn't, like those conversations don't really come up for us. The conversations are more well does Triton have 10,000 dependencies and is it hard to install? We almost don't look at these things from a strategic leverage point of view. We look at these things from a user experience point of view, like is it easy to install? Is it smoothly integrated and does it give enough benefits for us to start depending on it? If so, yeah, we should consider it. That's how we think about it.Swyx [00:11:37]: You're inclusive by default as long as it meets the minimum bar of, yeah, but like maybe I phrased it wrongly. Maybe it's more like what problems would you look to solve that you have right now?Soumith [00:11:48]: I think it depends on what problems Mojo will be useful at.Swyx [00:11:52]: Mainly a performance pitch, some amount of cross compiling pitch.Soumith [00:11:56]: Yeah, I think the performance pitch for Mojo was like, we're going to be performant even if you have a lot of custom stuff, you're going to write arbitrary custom things and we will be performant. And that value proposition is not clear to us from the PyTorch side to consider it for PyTorch. So PyTorch, it's actually not 250 operators, it's like a thousand operators. PyTorch exposes about a thousand operators and people kind of write their ideas in the thousand operators of PyTorch. Mojo is like, well, maybe it's okay to completely sidestep those thousand operators of PyTorch and just write it in a more natural form. Just write raw Python, write for loops or whatever, right? So from the consideration of how do we intersect PyTorch with Mojo, I can see one use case where you have custom stuff for some parts of your program, but mostly it's PyTorch. And so we can probably figure out how to make it easier for say Torch.compile to smoothly also consume Mojo subgraphs and like, you know, the interoperability being actually usable, that I think is valuable. But Mojo as a fundamental front end would be replacing PyTorch, not augmenting PyTorch. So in that sense, I don't see a synergy in more deeply integrating Mojo.Pytorch vs MLXSwyx [00:13:21]: So call out to Mojo whenever they have written something in Mojo and there's some performance related thing going on. And then since you mentioned Apple, what should people think of PyTorch versus MLX?Soumith [00:13:32]: I mean, MLX is early and I know the folks well, Ani used to work at FAIR and I used to chat with him all the time. He used to be based out of New York as well. The way I think about MLX is that MLX is specialized for Apple right now. It has a happy path because it's defined its product in a narrow way. At some point MLX either says we will only be supporting Apple and we will just focus on enabling, you know, there's a framework if you use your MacBook, but once you like go server side or whatever, that's not my problem and I don't care. For MLS, it enters like the server side set of things as well. Like one of these two things will happen, right? If the first thing will happen, like MLX's overall addressable market will be small, but it probably do well within that addressable market. If it enters the second phase, they're going to run into all the same complexities that we have to deal with. They will not have any magic wand and they will have more complex work to do. They probably wouldn't be able to move as fast.Swyx [00:14:44]: Like having to deal with distributed compute?Soumith [00:14:48]: Distributed, NVIDIA and AMD GPUs, like just like having a generalization of the concept of a backend, how they treat compilation with plus overheads. Right now they're deeply assumed like the whole NPS graph thing. So they need to think about all these additional things if they end up expanding onto the server side and they'll probably build something like PyTorch as well, right? Like eventually that's where it will land. And I think there they will kind of fail on the lack of differentiation. Like it wouldn't be obvious to people why they would want to use it.Swyx [00:15:24]: I mean, there are some cloud companies offering M1 and M2 chips on servers. I feel like it might be interesting for Apple to pursue that market, but it's not their core strength.Soumith [00:15:33]: Yeah. If Apple can figure out their interconnect story, maybe, like then it can become a thing.Swyx [00:15:40]: Honestly, that's more interesting than the cars. Yes.Soumith [00:15:43]: I think the moat that NVIDIA has right now, I feel is that they have the interconnect that no one else has, like AMD GPUs are pretty good. I'm sure there's various silicon that is not bad at all, but the interconnect, like NVLink is uniquely awesome. I'm sure the other hardware providers are working on it, but-Swyx [00:16:04]: I feel like when you say it's uniquely awesome, you have some appreciation of it that the rest of us don't. I mean, the rest of us just like, you know, we hear marketing lines, but what do you mean when you say NVIDIA is very good at networking? Obviously they made the acquisition maybe like 15 years ago.Soumith [00:16:15]: Just the bandwidth it offers and the latency it offers. I mean, TPUs also have a good interconnect, but you can't buy them. So you have to go to Google to use it.PyTorch MafiaAlessio [00:16:27]: Who are some of the other FAIR PyTorch alumni that are building cool companies? I know you have Fireworks AI, Lightning AI, Lepton, and Yangqing, you knew since college when he was building Coffee?Soumith [00:16:40]: Yeah, so Yangqing and I used to be framework rivals, PyTorch, I mean, we were all a very small close-knit community back then. Caffe, Torch, Theano, Chainer, Keras, various frameworks. I mean, it used to be more like 20 frameworks. I can't remember all the names. CCV by Liu Liu, who is also based out of SF. And I would actually like, you know, one of the ways it was interesting is you went into the framework guts and saw if someone wrote their own convolution kernel or they were just copying someone else's. There were four or five convolution kernels that were unique and interesting. There was one from this guy out of Russia, I forgot the name, but I remembered who was awesome enough to have written their own kernel. And at some point there, I built out these benchmarks called ConNet benchmarks. They're just benchmarking all the convolution kernels that are available at that time. It hilariously became big enough that at that time AI was getting important, but not important enough that industrial strength players came in to do these kinds of benchmarking and standardization. Like we have MLPerf today. So a lot of the startups were using ConNet benchmarks in their pitch decks as like, oh, you know, on ConNet benchmarks, this is how we fare, so you should fund us. I remember Nirvana actually was at the top of the pack because Scott Gray wrote amazingly fast convolution kernels at that time. Very interesting, but separate times. But to answer your question, Alessio, I think mainly Lepton, Fireworks are the two most obvious ones, but I'm sure the fingerprints are a lot wider. They're just people who worked within the PyTorch Cafe2 cohort of things and now end up at various other places.Swyx [00:18:50]: I think as a, both as an investor and a people looking to build on top of their services, it's a uncomfortable slash like, I don't know what I don't know pitch. Because I've met Yang Tsing and I've met Lin Chao. Yeah, I've met these folks and they're like, you know, we are deep in the PyTorch ecosystem and we serve billions of inferences a day or whatever at Facebook and now we can do it for you. And I'm like, okay, that's great. Like, what should I be wary of or cautious of when these things happen? Because I'm like, obviously this experience is extremely powerful and valuable. I just don't know what I don't know. Like, what should people know about like these sort of new inference as a service companies?Soumith [00:19:32]: I think at that point you would be investing in them for their expertise of one kind. So if they've been at a large company, but they've been doing amazing work, you would be thinking about it as what these people bring to the table is that they're really good at like GPU programming or understanding the complexity of serving models once it hits a certain scale. You know, various expertise like from the infra and AI and GPUs point of view. What you would obviously want to figure out is whether their understanding of the external markets is clear, whether they know and understand how to think about running a business, understanding how to be disciplined about making money or, you know, various things like that.Swyx [00:20:23]: Maybe I'll put it like, actually I will de-emphasize the investing bit and just more as a potential customer. Oh, okay. Like, it's more okay, you know, you have PyTorch gods, of course. Like, what else should I know?Soumith [00:20:37]: I mean, I would not care about who's building something. If I'm trying to be a customer, I would care about whether...Swyx [00:20:44]: Benchmarks.Soumith [00:20:44]: Yeah, I use it and it's usability and reliability and speed, right?Swyx [00:20:51]: Quality as well.Soumith [00:20:51]: Yeah, if someone from some random unknown place came to me and say, user stuff is great. Like, and I have the bandwidth, I probably will give it a shot. And if it turns out to be great, like I'll just use it.Benchmark dramaSwyx [00:21:07]: Okay, great. And then maybe one more thing about benchmarks, since we already brought it up and you brought up Confident Benchmarks. There was some recent drama around AnyScale. AnyScale released their own benchmarks and obviously they look great on their own benchmarks, but maybe didn't give the other... I feel there are two lines of criticism. One, which is they didn't test some apples for apples on the kind of endpoints that the other providers, that they are competitors with, on their benchmarks and that is due diligence baseline. And then the second would be more just optimizing for the right thing. You had some commentary on it. I'll just kind of let you riff.Soumith [00:21:41]: Yeah, I mean, in summary, basically my criticism of that was AnyScale built these benchmarks for end users to just understand what they should pick, right? And that's a very good thing to do. I think what they didn't do a good job of is give that end user a full understanding of what they should pick. Like they just gave them a very narrow slice of understanding. I think they just gave them latency numbers and that's not sufficient, right? You need to understand your total cost of ownership at some reasonable scale. Not oh, one API call is one cent, but a thousand API calls are 10 cents. Like people can misprice to cheat on those benchmarks. So you want to understand, okay, like how much is it going to cost me if I actually subscribe to you and do like a million API calls a month or something? And then you want to understand the latency and reliability, not just from one call you made, but an aggregate of calls you've made over several various times of the day and times of the week. And the nature of the workloads, is it just some generic single paragraph that you're sending that is cashable? Or is it like testing of real world workload? I think that kind of rigor, like in presenting that benchmark wasn't there. It was a much more narrow sliver of what should have been a good benchmark. That was my main criticism. And I'm pretty sure if before they released it, they showed it to their other stakeholders who would be caring about this benchmark because they are present in it, they would have easily just pointed out these gaps. And I think they didn't do that and they just released it. So I think those were the two main criticisms. I think they were fair and Robert took it well.Swyx [00:23:40]: And he took it very well. And we'll have him on at some point and we'll discuss it. But I think it's important for, I think the market being maturing enough that people start caring and competing on these kinds of things means that we need to establish what best practice is because otherwise everyone's going to play dirty.Soumith [00:23:55]: Yeah, absolutely. My view of the LLM inference market in general is that it's the laundromat model. Like the margins are going to drive down towards the bare minimum. It's going to be all kinds of arbitrage between how much you can get the hardware for and then how much you sell the API and how much latency your customers are willing to let go. You need to figure out how to squeeze your margins. Like what is your unique thing here? Like I think Together and Fireworks and all these people are trying to build some faster CUDA kernels and faster, you know, hardware kernels in general. But those modes only last for a month or two. These ideas quickly propagate.Swyx [00:24:38]: Even if they're not published?Soumith [00:24:39]: Even if they're not published, the idea space is small. So even if they're not published, the discovery rate is going to be pretty high. It's not like we're talking about a combinatorial thing that is really large. You're talking about Llama style LLM models. And we're going to beat those to death on a few different hardware SKUs, right? Like it's not even we have a huge diversity of hardware you're going to aim to run it on. Now when you have such a narrow problem and you have a lot of people working on it, the rate at which these ideas are going to get figured out is going to be pretty rapid.Swyx [00:25:15]: Is it a standard bag of tricks? Like the standard one that I know of is, you know, fusing operators and-Soumith [00:25:22]: Yeah, it's the standard bag of tricks on figuring out how to improve your memory bandwidth and all that, yeah.Alessio [00:25:28]: Any ideas instead of things that are not being beaten to death that people should be paying more attention to?Novel PyTorch ApplicationsSwyx [00:25:34]: One thing I was like, you know, you have a thousand operators, right? Like what's the most interesting usage of PyTorch that you're seeing maybe outside of this little bubble?Soumith [00:25:41]: So PyTorch, it's very interesting and scary at the same time, but basically it's used in a lot of exotic ways, like from the ML angle, what kind of models are being built? And you get all the way from state-based models and all of these things to stuff nth order differentiable models, like neural ODEs and stuff like that. I think there's one set of interestingness factor from the ML side of things. And then there's the other set of interesting factor from the applications point of view. It's used in Mars Rover simulations, to drug discovery, to Tesla cars. And there's a huge diversity of applications in which it is used. So in terms of the most interesting application side of things, I think I'm scared at how many interesting things that are also very critical and really important it is used in. I think the scariest was when I went to visit CERN at some point and they said they were using PyTorch and they were using GANs at the same time for particle physics research. And I was scared more about the fact that they were using GANs than they were using PyTorch, because at that time I was a researcher focusing on GANs. But the diversity is probably the most interesting. How many different things it is being used in. I think that's the most interesting to me from the applications perspective. From the models perspective, I think I've seen a lot of them. Like the really interesting ones to me are where we're starting to combine search and symbolic stuff with differentiable models, like the whole AlphaGo style models is one example. And then I think we're attempting to do it for LLMs as well, with various reward models and search. I mean, I don't think PyTorch is being used in this, but the whole alpha geometry thing was interesting because again, it's an example of combining the symbolic models with the gradient based ones. But there are stuff like alpha geometry that PyTorch is used at, especially when you intersect biology and chemistry with ML. In those areas, you want stronger guarantees on the output. So yeah, maybe from the ML side, those things to me are very interesting right now.Swyx [00:28:03]: Yeah. People are very excited about the alpha geometry thing. And it's kind of like, for me, it's theoretical. It's great. You can solve some Olympia questions. I'm not sure how to make that bridge over into the real world applications, but I'm sure people smarter than me will figure it out.Synthetic Data vs Symbolic ModelsSoumith [00:28:18]: Let me give you an example of it. You know how the whole thing about synthetic data will be the next rage in LLMs is a thing?Swyx [00:28:27]: Already is a rage.Soumith [00:28:28]: Which I think is fairly misplaced in how people perceive it. People think synthetic data is some kind of magic wand that you wave and it's going to be amazing. Synthetic data is useful in neural networks right now because we as humans have figured out a bunch of symbolic models of the world or made up certain symbolic models because of human innate biases. So we've figured out how to ground particle physics in a 30 parameter model. And it's just very hard to compute as in it takes a lot of flops to compute, but it only has 30 parameters or so. I mean, I'm not a physics expert, but it's a very low rank model. We built mathematics as a field that basically is very low rank. Language, a deep understanding of language, like the whole syntactic parse trees and just understanding how language can be broken down and into a formal symbolism is something that we figured out. So we basically as humans have accumulated all this knowledge on these subjects, either synthetic, we created those subjects in our heads, or we grounded some real world phenomenon into a set of symbols. But we haven't figured out how to teach neural networks symbolic world models directly. The only way we have to teach them is generating a bunch of inputs and outputs and gradient dissenting over them. So in areas where we have the symbolic models and we need to teach all the knowledge we have that is better encoded in the symbolic models, what we're doing is we're generating a bunch of synthetic data, a bunch of input output pairs, and then giving that to the neural network and asking it to learn the same thing that we already have a better low rank model of in gradient descent in a much more over-parameterized way. Outside of this, like where we don't have good symbolic models, like synthetic data obviously doesn't make any sense. So synthetic data is not a magic wand where it'll work in all cases in every case or whatever. It's just where we as humans already have good symbolic models off. We need to impart that knowledge to neural networks and we figured out the synthetic data is a vehicle to impart this knowledge to. So, but people, because maybe they don't know enough about synthetic data as a notion, but they hear, you know, the next wave of data revolution is synthetic data. They think it's some kind of magic where we just create a bunch of random data somehow. They don't think about how, and then they think that's just a revolution. And I think that's maybe a gap in understanding most people have in this hype cycle.Swyx [00:31:23]: Yeah, well, it's a relatively new concept, so. Oh, there's two more that I'll put in front of you and then you can see what you respond. One is, you know, I have this joke that it's, you know, it's only synthetic data if it's from the Mistral region of France, otherwise it's just a sparkling distillation, which is what news research is doing. Like they're distilling GPT-4 by creating synthetic data from GPT-4, creating mock textbooks inspired by Phi 2 and then fine tuning open source models like Llama. And so I don't know, I mean, I think that's, should we call that synthetic data? Should we call it something else? I don't know.Soumith [00:31:57]: Yeah, I mean, the outputs of LLMs, are they synthetic data? They probably are, but I think it depends on the goal you have. If your goal is you're creating synthetic data with the goal of trying to distill GPT-4's superiority into another model, I guess you can call it synthetic data, but it also feels like disingenuous because your goal is I need to copy the behavior of GPT-4 and-Swyx [00:32:25]: It's also not just behavior, but data set. So I've often thought of this as data set washing. Like you need one model at the top of the chain, you know, unnamed French company that has that, you know, makes a model that has all the data in it that we don't know where it's from, but it's open source, hey, and then we distill from that and it's great. To be fair, they also use larger models as judges for preference ranking, right? So that is, I think, a very, very accepted use of synthetic.Soumith [00:32:53]: Correct. I think it's a very interesting time where we don't really have good social models of what is acceptable depending on how many bits of information you use from someone else, right? It's like, okay, you use one bit. Is that okay? Yeah, let's accept it to be okay. Okay, what about if you use 20 bits? Is that okay? I don't know. What if you use 200 bits? I don't think we as society have ever been in this conundrum where we have to be like, where is the boundary of copyright or where is the boundary of socially accepted understanding of copying someone else? We haven't been tested this mathematically before,Swyx [00:33:38]: in my opinion. Whether it's transformative use. Yes. So yeah, I think this New York Times opening eye case is gonna go to the Supreme Court and we'll have to decide it because I think we never had to deal with it before. And then finally, for synthetic data, the thing that I'm personally exploring is solving this great stark paradigm difference between rag and fine tuning, where you can kind of create synthetic data off of your retrieved documents and then fine tune on that. That's kind of synthetic. All you need is variation or diversity of samples for you to fine tune on. And then you can fine tune new knowledge into your model. I don't know if you've seen that as a direction for synthetic data.Soumith [00:34:13]: I think you're basically trying to, what you're doing is you're saying, well, language, I know how to parametrize language to an extent. And I need to teach my model variations of this input data so that it's resilient or invariant to language uses of that data.Swyx [00:34:32]: Yeah, it doesn't overfit on the wrong source documents.Soumith [00:34:33]: So I think that's 100% synthetic. You understand, the key is you create variations of your documents and you know how to do that because you have a symbolic model or like some implicit symbolic model of language.Swyx [00:34:48]: Okay.Alessio [00:34:49]: Do you think the issue with symbolic models is just the architecture of the language models that we're building? I think maybe the thing that people grasp is the inability of transformers to deal with numbers because of the tokenizer. Is it a fundamental issue there too? And do you see alternative architectures that will be better with symbolic understanding?Soumith [00:35:09]: I am not sure if it's a fundamental issue or not. I think we just don't understand transformers enough. I don't even mean transformers as an architecture. I mean the use of transformers today, like combining the tokenizer and transformers and the dynamics of training, when you show math heavy questions versus not. I don't have a good calibration of whether I know the answer or not. I, you know, there's common criticisms that are, you know, transformers will just fail at X. But then when you scale them up to sufficient scale, they actually don't fail at that X. I think there's this entire subfield where they're trying to figure out these answers called like the science of deep learning or something. So we'll get to know more. I don't know the answer.Meta AI and Llama 2/3Swyx [00:35:57]: Got it. Let's touch a little bit on just Meta AI and you know, stuff that's going on there. Maybe, I don't know how deeply you're personally involved in it, but you're our first guest with Meta AI, which is really fantastic. And Llama 1 was, you know, you are such a believer in open source. Llama 1 was more or less the real breakthrough in open source AI. The most interesting thing for us covering on this, in this podcast was the death of Chinchilla, as people say. Any interesting insights there around the scaling models for open source models or smaller models or whatever that design decision was when you guys were doing it?Soumith [00:36:31]: So Llama 1 was Guillaume Lample and team. There was OPT before, which I think I'm also very proud of because we bridged the gap in understanding of how complex it is to train these models to the world. Like until then, no one really in gory detail published.Swyx [00:36:50]: The logs.Soumith [00:36:51]: Yeah. Like, why is it complex? And everyone says, oh, it's complex. But no one really talked about why it's complex. I think OPT was cool.Swyx [00:37:02]: I met Susan and she's very, very outspoken. Yeah.Soumith [00:37:05]: We probably, I think, didn't train it for long enough, right? That's kind of obvious in retrospect.Swyx [00:37:12]: For a 175B. Yeah. You trained it according to Chinchilla at the time or?Soumith [00:37:17]: I can't remember the details, but I think it's a commonly held belief at this point that if we trained OPT longer, it would actually end up being better. Llama 1, I think, was Guillaume Lample and team Guillaume is fantastic and went on to build Mistral. I wasn't too involved in that side of things. So I don't know what you're asking me, which is how did they think about scaling loss and all of that? Llama 2, I was more closely involved in. I helped them a reasonable amount with their infrastructure needs and stuff. And Llama 2, I think, was more like, let's get to the evolution. At that point, we kind of understood what we were missing from the industry's understanding of LLMs. And we needed more data and we needed more to train the models for longer. And we made, I think, a few tweaks to the architecture and we scaled up more. And that was Llama 2. I think Llama 2, you can think of it as after Guillaume left, the team kind of rebuilt their muscle around Llama 2. And Hugo, I think, who's the first author is fantastic. And I think he did play a reasonable big role in Llama 1 as well.Soumith [00:38:35]: And he overlaps between Llama 1 and 2. So in Llama 3, obviously, hopefully, it'll be awesome.Alessio [00:38:42]: Just one question on Llama 2, and then we'll try and fish Llama 3 spoilers out of you. In the Llama 2 paper, the loss curves of the 34 and 70B parameter, they still seem kind of steep. Like they could go lower. How, from an infrastructure level, how do you allocate resources? Could they have just gone longer or were you just, hey, this is all the GPUs that we can burn and let's just move on to Llama 3 and then make that one better?Soumith [00:39:07]: Instead of answering specifically about that Llama 2 situation or whatever, I'll tell you how we think about things. Generally, we're, I mean, Mark really is some numbers, right?Swyx [00:39:20]: So let's cite those things again. All I remember is like 600K GPUs.Soumith [00:39:24]: That is by the end of this year and 600K H100 equivalents. With 250K H100s, including all of our other GPU or accelerator stuff, it would be 600-and-something-K aggregate capacity.Swyx [00:39:38]: That's a lot of GPUs.Soumith [00:39:39]: We'll talk about that separately. But the way we think about it is we have a train of models, right? Llama 1, 2, 3, 4. And we have a bunch of GPUs. I don't think we're short of GPUs. Like-Swyx [00:39:54]: Yeah, no, I wouldn't say so. Yeah, so it's all a matter of time.Soumith [00:39:56]: I think time is the biggest bottleneck. It's like, when do you stop training the previous one and when do you start training the next one? And how do you make those decisions? The data, do you have net new data, better clean data for the next one in a way that it's not worth really focusing on the previous one? It's just a standard iterative product. You're like, when is the iPhone 1? When do you start working on iPhone 2? Where is the iPhone? And so on, right? So mostly the considerations are time and generation, rather than GPUs, in my opinion.Alessio [00:40:31]: So one of the things with the scaling loss, like Chinchilla is optimal to balance training and inference costs. I think at Meta's scale, you would rather pay a lot more maybe at training and then save on inference. How do you think about that from infrastructure perspective? I think in your tweet, you say you can try and guess on like how we're using these GPUs. Can you just give people a bit of understanding? It's like, because I've already seen a lot of VCs say, Llama 3 has been trained on 600,000 GPUs and that's obviously not true, I'm sure. How do you allocate between the research, FAIR and the Llama training, the inference on Instagram suggestions that get me to scroll, like AI-generated stickers on WhatsApp and all of that?Soumith [00:41:11]: Yeah, we haven't talked about any of this publicly, but as a broad stroke, it's like how we would allocate resources of any other kinds at any company. You run a VC portfolio, how do you allocate your investments between different companies or whatever? You kind of make various trade-offs and you kind of decide, should I invest in this project or this other project, or how much should I invest in this project? It's very much a zero sum of trade-offs. And it also comes into play, how are your clusters configured, like overall, what you can fit of what size and what cluster and so on. So broadly, there's no magic sauce here. I mean, I think the details would add more spice, but also wouldn't add more understanding. It's just gonna be like, oh, okay, I mean, this looks like they just think about this as I would normally do.Alessio [00:42:05]: So even the GPU rich run through the same struggles of having to decide where to allocate things.Soumith [00:42:11]: Yeah, I mean, at some point I forgot who said it, but you kind of fit your models to the amount of compute you have. If you don't have enough compute, you figure out how to make do with smaller models. But no one as of today, I think would feel like they have enough compute. I don't think I've heard any company within the AI space be like, oh yeah, like we feel like we have sufficient compute and we couldn't have done better. So that conversation, I don't think I've heard from any of my friends at other companies.EleutherSwyx [00:42:47]: Stella from Eleuther sometimes says that because she has a lot of donated compute. She's trying to put it to interesting uses, but for some reason she's decided to stop making large models.Soumith [00:42:57]: I mean, that's a cool, high conviction opinion that might pay out.Swyx [00:43:01]: Why?Soumith [00:43:02]: I mean, she's taking a path that most people don't care to take about in this climate and she probably will have very differentiated ideas. I mean, think about the correlation of ideas in AI right now. It's so bad, right? So everyone's fighting for the same pie. In some weird sense, that's partly why I don't really directly work on LLMs. I used to do image models and stuff and I actually stopped doing GANs because GANs were getting so hot that I didn't have any calibration of whether my work would be useful or not because, oh yeah, someone else did the same thing you did. It's like, there's so much to do, I don't understand why I need to fight for the same pie. So I think Stella's decision is very smart.Making BetsAlessio [00:43:53]: And how do you reconcile that with how we started the discussion about intrinsic versus extrinsic kind of like accomplishment or success? How should people think about that especially when they're doing a PhD or early in their career? I think in Europe, I walked through a lot of the posters and whatnot, there seems to be mode collapse in a way in the research, a lot of people working on the same things. Is it worth for a PhD to not take a bet on something that is maybe not as interesting just because of funding and visibility and whatnot? Or yeah, what suggestions would you give?Soumith [00:44:28]: I think there's a baseline level of compatibility you need to have with the field. Basically, you need to figure out if you will get paid enough to eat, right? Like whatever reasonable normal lifestyle you want to have as a baseline. So you at least have to pick a problem within the neighborhood of fundable. Like you wouldn't wanna be doing something so obscure that people are like, I don't know, like you can work on it.Swyx [00:44:59]: Would a limit on fundability, I'm just observing something like three months of compute, right? That's the top line, that's the like max that you can spend on any one project.Soumith [00:45:09]: But like, I think that's very ill specified, like how much compute, right? I think that the notion of fundability is broader. It's more like, hey, are these family of models within the acceptable set of, you're not crazy or something, right? Even something like neural or DS, which is a very boundary pushing thing or states-based models or whatever. Like all of these things I think are still in fundable territory. When you're talking about, I'm gonna do one of the neuromorphic models and then apply image classification to them or something, then it becomes a bit questionable. Again, it depends on your motivation. Maybe if you're a neuroscientist, it actually is feasible. But if you're an AI engineer, like the audience of these podcasts, then it's more questionable. The way I think about it is, you need to figure out how you can be in the baseline level of fundability just so that you can just live. And then after that, really focus on intrinsic motivation and depends on your strengths, like how you can play to your strengths and your interests at the same time. Like I try to look at a bunch of ideas that are interesting to me, but also try to play to my strengths. I'm not gonna go work on theoretical ML. I'm interested in it, but when I want to work on something like that, I try to partner with someone who is actually a good theoretical ML person and see if I actually have any value to provide. And if they think I do, then I come in. So I think you'd want to find that intersection of ideas you like, and that also play to your strengths. And I'd go from there. Everything else, like actually finding extrinsic success and all of that, I think is the way I think about it is like somewhat immaterial. When you're talking about building ecosystems and stuff, slightly different considerations come into play, but that's a different conversation.Swyx [00:47:06]: We're gonna pivot a little bit to just talking about open source AI. But one more thing I wanted to establish for Meta is this 600K number, just kind of rounding out the discussion, that's for all Meta. So including your own inference needs, right? It's not just about training.Soumith [00:47:19]: It's gonna be the number in our data centers for all of Meta, yeah.Swyx [00:47:23]: Yeah, so there's a decent amount of workload serving Facebook and Instagram and whatever. And then is there interest in like your own hardware?MTIASoumith [00:47:31]: We already talked about our own hardware. It's called MTIA. Our own silicon, I think we've even showed the standard photograph of you holding the chip that doesn't work. Like as in the chip that you basically just get like-Swyx [00:47:51]: As a test, right?Soumith [00:47:52]: Yeah, a test chip or whatever. So we are working on our silicon and we'll probably talk more about it when the time is right, but-Swyx [00:48:00]: Like what gaps do you have that the market doesn't offer?Soumith [00:48:04]: Okay, I mean, this is easy to answer. So basically, remember how I told you about there's this memory hierarchy and like sweet spots and all of that? Fundamentally, when you build a hardware, you make it general enough that a wide set of customers and a wide set of workloads can use it effectively while trying to get the maximum level of performance they can. The more specialized you make the chip, the more hardware efficient it's going to be, the more power efficient it's gonna be, the more easier it's going to be to find the software, like the kernel's right to just map that one or two workloads to that hardware and so on. So it's pretty well understood across the industry that if you have a sufficiently large volume, enough workload, you can specialize it and get some efficiency gains, like power gains and so on. So the way you can think about everyone building, every large company building silicon, I think a bunch of the other large companies are building their own silicon as well, is they, each large company has a sufficient enough set of verticalized workloads that can be specialized that have a pattern to them that say a more generic accelerator like an NVIDIA or an AMD GPU does not exploit. So there is some level of power efficiency that you're leaving on the table by not exploiting that. And you have sufficient scale and you have sufficient forecasted stability that those workloads will exist in the same form, that it's worth spending the time to build out a chip to exploit that sweet spot. Like obviously something like this is only useful if you hit a certain scale and that your forecasted prediction of those kind of workloads being in the same kind of specializable exploitable way is true. So yeah, that's why we're building our own chips.Swyx [00:50:08]: Awesome.Open Source AIAlessio [00:50:09]: Yeah, I know we've been talking a lot on a lot of different topics and going back to open source, you had a very good tweet. You said that a single company's closed source effort rate limits against people's imaginations and needs. How do you think about all the impact that some of the Meta AI work in open source has been doing and maybe directions of the whole open source AI space?Soumith [00:50:32]: Yeah, in general, I think first, I think it's worth talking about this in terms of open and not just open source, because like with the whole notion of model weights, no one even knows what source means for these things. But just for the discussion, when I say open source, you can assume it's just I'm talking about open. And then there's the whole notion of licensing and all that, commercial, non-commercial, commercial with clauses and all that. I think at a fundamental level, the most benefited value of open source is that you make the distribution to be very wide. It's just available with no friction and people can do transformative things in a way that's very accessible. Maybe it's open source, but it has a commercial license and I'm a student in India. I don't care about the license. I just don't even understand the license. But like the fact that I can use it and do something with it is very transformative to me. Like I got this thing in a very accessible way. And then it's various degrees, right? And then if it's open source, but it's actually a commercial license, then a lot of companies are gonna benefit from gaining value that they didn't previously have, that they maybe had to pay a closed source company for it. So open source is just a very interesting tool that you can use in various ways. So there's, again, two kinds of open source. One is some large company doing a lot of work and then open sourcing it. And that kind of effort is not really feasible by say a band of volunteers doing it the same way. So there's both a capital and operational expenditure that the large company just decided to ignore and give it away to the world for some benefits of some kind. They're not as tangible as direct revenue. So in that part, Meta has been doing incredibly good things. They fund a huge amount of the PyTorch development. They've open sourced Llama and those family of models and several other fairly transformative projects. FICE is one, Segment Anything, Detectron, Detectron 2. Dense Pose. I mean, it's-Swyx [00:52:52]: Seamless. Yeah, seamless.Soumith [00:52:53]: Like it's just the list is so long that we're not gonna cover. So I think Meta comes into that category where we spend a lot of CapEx and OpEx and we have a high talent density of great AI people and we open our stuff. And the thesis for that, I remember when FAIR was started, the common thing was like, wait, why would Meta wanna start a open AI lab? Like what exactly is a benefit from a commercial perspective? And for then the thesis was very simple. It was AI is currently rate limiting Meta's ability to do things. Our ability to build various product integrations, moderation, various other factors. Like AI was the limiting factor and we just wanted AI to advance more and we didn't care if the IP of the AI was uniquely in our possession or not. However the field advances, that accelerates Meta's ability to build a better product. So we just built an open AI lab and we said, if this helps accelerate the progress of AI, that's strictly great for us. But very easy, rational, right? Still the same to a large extent with the Llama stuff. And it's the same values, but the argument, it's a bit more nuanced. And then there's a second kind of open source, which is, oh, we built this project, nights and weekends and we're very smart people and we open sourced it and then we built a community around it. This is the Linux kernel and various software projects like that. So I think about open source, like both of these things being beneficial and both of these things being different. They're different and beneficial in their own ways. The second one is really useful when there's an active arbitrage to be done. If someone's not really looking at a particular space because it's not commercially viable or whatever, like a band of volunteers can just coordinate online and do something and then make that happen. And that's great.Open Source LLMsI wanna cover a little bit about open source LLMs maybe. So open source LLMs have been very interesting because I think we were trending towards an increase in open source in AI from 2010 all the way to 2017 or something. Like where more and more pressure within the community was to open source their stuff so that their methods and stuff get adopted. And then the LLMs revolution kind of took the opposite effect OpenAI stopped open sourcing their stuff and DeepMind kind of didn't, like all the other cloud and all these other providers, they didn't open source their stuff. And it was not good in the sense that first science done in isolation probably will just form its own bubble where people believe their own b******t or whatever. So there's that problem. And then there was the other problem which was the accessibility part. Like, okay, I again always go back to I'm a student in India with no money. What is my accessibility to any of these closers models? At some scale I have to pay money. That makes it a non-starter and stuff. And there's also the control thing. I strongly believe if you want human aligned stuff, you want all humans to give feedback. And you want all humans to have access to that technology in the first place. And I actually have seen, living in New York, whenever I come to Silicon Valley, I see a different cultural bubble. Like all the friends I hang out with talk about some random thing like Dyson Spheres or whatever, that's a thing. And most of the world doesn't know or care about any of this stuff. It's definitely a bubble and bubbles can form very easily. And when you make a lot of decisions because you're in a bubble, they're probably not globally optimal decisions. So I think open source, the distribution of open source powers a certain kind of non-falsifiability that I think is very important. I think on the open source models, like it's going great in the fact that LoRa I think came out of the necessity of open source models needing to be fine-tunable in some way. Yeah, and I think DPO also came out of the academic open source side of things. So do any of the closed source labs, did any of them already have LoRa or DPO internally? Maybe, but that does not advance humanity in any way. It advances some companies probability of doing the winner takes all that I talked about earlier in the podcast.Open Source and TrustI don't know, it just feels fundamentally good. Like when people try to, you know, people are like, well, what are the ways in which it is not okay? I find most of these arguments, and this might be a little controversial, but I find a lot of arguments based on whether closed source models are safer or open source models are safer very much related to what kind of culture they grew up in, what kind of society they grew up in. If they grew up in a society that they trusted, then I think they take the closed source argument. And if they grew up in a society that they couldn't trust, where the norm was that you didn't trust your government, obviously it's corrupt or whatever, then I think the open source argument is what they take. I think there's a deep connection to like people's innate biases from their childhood and their trust in society and governmental aspects that push them towards one opinion or the other. And I'm definitely in the camp of open source is definitely going to actually have better outcomes for society. Closed source to me just means that centralization of power, which, you know, is really hard to trust. So I think it's going well
Thanks to the over 17,000 people who have joined the first AI Engineer Summit! A full recap is coming. Last call to fill out the State of AI Engineering survey! See our Community page for upcoming meetups in SF, Paris and NYC.This episode had good interest on Twitter.Fast.ai's “Practical Deep Learning” courses been watched by over >6,000,000 people, and the fastai library has over 25,000 stars on Github. Jeremy Howard, one of the creators of Fast, is now one of the most prominent and respected voices in the machine learning industry; but that wasn't always the case. Being non-consensus and right In 2018, Jeremy and Sebastian Ruder published a paper on ULMFiT (Universal Language Model Fine-tuning), a 3-step transfer learning technique for NLP tasks: The paper demonstrated that pre-trained language models could be fine-tuned on a specific task with a relatively small amount of data to achieve state-of-the-art results. They trained a 24M parameters model on WikiText-103 which was beat most benchmarks.While the paper had great results, the methods behind weren't taken seriously by the community: “Everybody hated fine tuning. Everybody hated transfer learning. I literally did tours trying to get people to start doing transfer learning and nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning […] which I was convinced was not the right direction, but who's going to listen to me, cause as you said, I don't have a PhD, not at a university… I don't have a big set of computers to fine tune huge transformer models.”Five years later, fine-tuning is at the center of most major discussion topics in AI (we covered some like fine tuning vs RAG and small models fine tuning), and we might have gotten here earlier if Jeremy had OpenAI-level access to compute and distribution. At heart, Jeremy has always been “GPU poor”:“I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use.”This story is a good reminder of how some of the best ideas are hiding in plain sight; we recently covered RWKV and will continue to highlight the most interesting research that isn't being done in the large labs. Replacing fine-tuning with continued pre-trainingEven though fine-tuning is now mainstream, we still have a lot to learn. The issue of “catastrophic forgetting” and potential solutions have been brought up in many papers: at the fine-tuning stage, the model can forget tasks it previously knew how to solve in favor of new ones. The other issue is apparent memorization of the dataset even after a single epoch, which Jeremy covered Can LLMs learn from a single example? but we still don't have the answer to. Despite being the creator of ULMFiT, Jeremy still professes that there are a lot of open questions on finetuning:“So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do.”He now advocates for "continued pre-training" - maintaining a diversity of data throughout the training process rather than separate pre-training and fine-tuning stages. Mixing instructional data, exercises, code, and other modalities while gradually curating higher quality data can avoid catastrophic forgetting and lead to more robust capabilities (something we covered in Datasets 101).“Even though I originally created three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it… the right way to do this is to fine-tune language models, is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data….So yeah, that's now my view, is I think ULMFiT is the wrong approach. And that's why we're seeing a lot of these so-called alignment tax… I think it's actually because people are training them wrong.An example of this phenomena is CodeLlama, a LLaMA2 model finetuned on 500B tokens of code: while the model is much better at code, it's worse on generic tasks that LLaMA2 knew how to solve well before the fine-tuning. In the episode we also dive into all the places where open source model development and research is happening (academia vs Discords - tracked on our Communities list and on our survey), and how Jeremy recommends getting the most out of these diffuse, pseudonymous communities (similar to the Eleuther AI Mafia).Show Notes* Jeremy's Background* FastMail* Optimal Decisions* Kaggle* Enlitic* fast.ai* Rachel Thomas* Practical Deep Learning* fastai for PyTorch* nbdev* fastec2 (the underrated library we describe)* Can LLMs learn from a single example?* the Kaggle LLM Science Exam competition, which “challenges participants to answer difficult science-based questions written by a Large Language Model”.* Sebastian Ruder* Alec Radford* Sylvain Gugger* Stephen Merity* Chris Lattner* Modular.ai / Mojo* Jono Whittaker* Zeiler and Fergus paper* ULM Fit* DAWNBench* Phi-1* Code Llama* AlexNetTimestamps* [00:00:00] Intros and Jeremy's background* [00:05:28] Creating ULM Fit - a breakthrough in NLP using transfer learning* [00:06:32] The rise of GPT and the appeal of few-shot learning over fine-tuning* [00:10:00] Starting Fast.ai to distribute AI capabilities beyond elite academics* [00:14:30] How modern LMs like ChatGPT still follow the ULM Fit 3-step approach* [00:17:23] Meeting with Chris Lattner on Swift for TensorFlow at Google* [00:20:00] Continued pre-training as a fine-tuning alternative* [00:22:16] Fast.ai and looking for impact vs profit maximization* [00:26:39] Using Fast.ai to create an "army" of AI experts to improve their domains* [00:29:32] Fast.ai's 3 focus areas - research, software, and courses* [00:38:42] Fine-tuning memorization and training curve "clunks" before each epoch* [00:46:47] Poor training and fine-tuning practices may be causing alignment failures* [00:48:38] Academia vs Discords* [00:53:41] Jeremy's high hopes for Chris Lattner's Mojo and its potential* [01:05:00] Adding capabilities like SQL generation through quick fine-tuning* [01:10:12] Rethinking Fast.ai courses for the AI-assisted coding era* [01:14:53] Rapid model development has created major technical debt* [01:17:08] Lightning RoundAI Summary (beta)This is the first episode we're trying this. Here's an overview of the main topics before you dive in the transcript. * Jeremy's background and philosophies on AI* Studied philosophy and cognitive science in college* Focused on ethics and thinking about AI even 30 years ago* Believes AI should be accessible to more people, not just elite academics/programmers* Created fast.ai to make deep learning more accessible* Development of transfer learning and ULMFit* Idea of transfer learning critical for making deep learning accessible* ULMFit pioneered transfer learning for NLP* Proposed training general language models on large corpora then fine-tuning - this became standard practice* Faced skepticism that this approach would work from NLP community* Showed state-of-the-art results on text classification soon after trying it* Current open questions around fine-tuning LLMs* Models appear to memorize training data extremely quickly (after 1 epoch)* This may hurt training dynamics and cause catastrophic forgetting* Unclear how best to fine-tune models to incorporate new information/capabilities* Need more research on model training dynamics and ideal data mixing* Exciting new developments* Mojo and new programming languages like Swift could enable faster model innovation* Still lots of room for improvements in computer vision-like innovations in transformers* Small models with fine-tuning may be surprisingly capable for many real-world tasks* Prompting strategies enable models like GPT-3 to achieve new skills like playing chess at superhuman levels* LLMs are like computer vision in 2013 - on the cusp of huge new breakthroughs in capabilities* Access to AI research* Many key convos happen in private Discord channels and forums* Becoming part of these communities can provide great learning opportunities* Being willing to do real work, not just talk about ideas, is key to gaining access* The future of practical AI* Coding becoming more accessible to non-programmers through AI assistance* Pre-requisite programming experience for learning AI may no longer be needed* Huge open questions remain about how to best train, fine-tune, and prompt LLMsTranscriptAlessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:21]Swyx: Hey, and today we have in the remote studio, Jeremy Howard all the way from Australia. Good morning. [00:00:27]Jeremy: The remote studio, also known as my house. Good morning. Nice to see you. [00:00:32]Swyx: Nice to see you too. I'm actually very used to seeing you in your mask as a message to people, but today we're mostly audio. But thank you for doing the very important public service of COVID awareness. It was a pleasure. [00:00:46]Jeremy: It was all very annoying and frustrating and tedious, but somebody had to do it. [00:00:52]Swyx: Somebody had to do it, especially somebody with your profile. I think it really drives home the message. So we tend to introduce people for them and then ask people to fill in the blanks on the personal side. Something I did not know about you was that you graduated with a BA in philosophy from the University of Melbourne. I assumed you had a PhD. [00:01:14]Jeremy: No, I mean, I barely got through my BA because I was working 80 to 100 hour weeks at McKinsey and Company from 19 years old onwards. So I actually didn't attend any lectures in second and third year university. [00:01:35]Swyx: Well, I guess you didn't need it or you're very sort of self-driven and self-motivated. [00:01:39]Jeremy: I took two weeks off before each exam period when I was working at McKinsey. And then, I mean, I can't believe I got away with this in hindsight, I would go to all my professors and say, oh, I was meant to be in your class this semester and I didn't quite turn up. Were there any assignments I was meant to have done, whatever. I can't believe all of them let me basically have it. They basically always would say like, okay, well, if you can have this written by tomorrow, I'll accept it. So yeah, stressful way to get through university, but. [00:02:12]Swyx: Well, it shows that, I guess, you min-maxed the opportunities. That definitely was a precursor. [00:02:18]Jeremy: I mean, funnily, like in as much as I, you know, in philosophy, the things I found interesting and focused on in the little bit of time I did spend on it was ethics and cognitive science. And it's kind of really amazing that it's now come back around and those are actually genuinely useful things to know about, which I never thought would happen. [00:02:38]Swyx: A lot of, yeah, a lot of relevant conversations there. So you were a consultant for a while and then in the magical month of June 1989, you founded both Optimal Decisions and Fastmeal, which I also briefly used. So thank you for that. [00:02:53]Jeremy: Oh, good for you. Yeah. Cause I had read the statistics, which is that like 90% or something of small businesses fail. So I thought if I start two businesses, I have a higher chance. In hindsight, I was thinking of it as some kind of stochastic thing I didn't have control over, but it's a bit odd, but anyway. [00:03:10]Swyx: And then you were president and chief scientist at Kaggle, which obviously is the sort of composition platform of machine learning. And then Enlitic, where you were working on using deep learning to improve medical diagnostics and clinical decisions. Yeah. [00:03:28]Jeremy: I was actually the first company to use deep learning in medicine, so I kind of founded the field. [00:03:33]Swyx: And even now that's still like a pretty early phase. And I actually heard you on your new podcast with Tanish, where you went very, very deep into the stuff, the kind of work that he's doing, such a young prodigy at his age. [00:03:47]Jeremy: Maybe he's too old to be called a prodigy now, ex-prodigy. No, no. [00:03:51]Swyx: I think he still counts. And anyway, just to round out the bio, you have a lot more other credentials, obviously, but most recently you started Fast.ai, which is still, I guess, your primary identity with Rachel Thomas. So welcome. [00:04:05]Jeremy: Yep. [00:04:06]Swyx: Thanks to my wife. Thank you. Yeah. Doing a lot of public service there with getting people involved in AI, and I can't imagine a better way to describe it than fast, fast.ai. You teach people from nothing to stable diffusion in seven weeks or something, and that's amazing. Yeah, yeah. [00:04:22]Jeremy: I mean, it's funny, you know, when we started that, what was that, like 2016 or something, the idea that deep learning was something that you could make more accessible was generally considered stupid. Everybody knew that deep learning was a thing that you got a math or a computer science PhD, you know, there was one of five labs that could give you the appropriate skills and that you would join, yeah, basically from one of those labs, you might be able to write some papers. So yeah, the idea that normal people could use that technology to do good work was considered kind of ridiculous when we started it. And we weren't sure if it was possible either, but we kind of felt like we had to give it a go because the alternative was we were pretty sure that deep learning was on its way to becoming, you know, the most or one of the most, you know, important technologies in human history. And if the only people that could use it were a handful of computer science PhDs, that seemed like A, a big waste and B, kind of dangerous. [00:05:28]Swyx: Yeah. [00:05:29]Alessio: And, you know, well, I just wanted to know one thing on your bio that at Kaggle, you were also the top rank participant in both 2010 and 2011. So sometimes you see a lot of founders running companies that are not really in touch with the problem, but you were clearly building something that you knew a lot about, which is awesome. Talking about deep learning, you created, published a paper on ULM fit, which was kind of the predecessor to multitask learning and a lot of the groundwork that then went to into Transformers. I've read back on the paper and you turned this model, AWD LSTM, which I did the math and it was like 24 to 33 million parameters, depending on what training data set you use today. That's kind of like not even small, it's like super small. What were some of the kind of like contrarian takes that you had at the time and maybe set the stage a little bit for the rest of the audience on what was kind of like the state of the art, so to speak, at the time and what people were working towards? [00:06:32]Jeremy: Yeah, the whole thing was a contrarian take, you know. So okay, so we started Fast.ai, my wife and I, and we thought, yeah, so we're trying to think, okay, how do we make it more accessible? So when we started thinking about it, it was probably 2015 and then 2016, we started doing something about it. Why is it inaccessible? Okay, well, A, no one knows how to do it other than a few number of people. And then when we asked those few number of people, well, how do you actually get good results? They would say like, oh, it's like, you know, a box of tricks that aren't published. So you have to join one of the labs and learn the tricks. So a bunch of unpublished tricks, not much software around, but thankfully there was Theano and rappers and particularly Lasagna, the rapper, but yeah, not much software around, not much in the way of data sets, you know, very hard to get started in terms of the compute. Like how do you get that set up? So yeah, no, everything was kind of inaccessible. And you know, as we started looking into it, we had a key insight, which was like, you know what, most of the compute and data for image recognition, for example, we don't need to do it. You know, there's this thing which nobody knows about, nobody talks about called transfer learning, where you take somebody else's model, where they already figured out like how to detect edges and gradients and corners and text and whatever else, and then you can fine tune it to do the thing you want to do. And we thought that's the key. That's the key to becoming more accessible in terms of compute and data requirements. So when we started Fast.ai, we focused from day one on transfer learning. Lesson one, in fact, was transfer learning, literally lesson one, something not normally even mentioned in, I mean, there wasn't much in the way of courses, you know, the courses out there were PhD programs that had happened to have recorded their lessons and they would rarely mention it at all. We wanted to show how to do four things that seemed really useful. You know, work with vision, work with tables of data, work with kind of recommendation systems and collaborative filtering and work with text, because we felt like those four kind of modalities covered a lot of the stuff that, you know, are useful in real life. And no one was doing anything much useful with text. Everybody was talking about word2vec, you know, like king plus queen minus woman and blah, blah, blah. It was like cool experiments, but nobody's doing anything like useful with it. NLP was all like lemmatization and stop words and topic models and bigrams and SPMs. And it was really academic and not practical. But I mean, to be honest, I've been thinking about this crazy idea for nearly 30 years since I had done cognitive science at university, where we talked a lot about the CELS Chinese room experiment. This idea of like, what if there was somebody that could kind of like, knew all of the symbolic manipulations required to answer questions in Chinese, but they didn't speak Chinese and they were kind of inside a room with no other way to talk to the outside world other than taking in slips of paper with Chinese written on them and then they do all their rules and then they pass back a piece of paper with Chinese back. And this room with a person in is actually fantastically good at answering any question you give them written in Chinese. You know, do they understand Chinese? And is this, you know, something that's intelligently working with Chinese? Ever since that time, I'd say the most thought, to me, the most thoughtful and compelling philosophical response is yes. You know, intuitively it feels like no, because that's just because we can't imagine such a large kind of system. But you know, if it looks like a duck and acts like a duck, it's a duck, you know, or to all intents and purposes. And so I always kind of thought, you know, so this is basically a kind of analysis of the limits of text. And I kind of felt like, yeah, if something could ingest enough text and could use the patterns it saw to then generate text in response to text, it could appear to be intelligent, you know. And whether that means it is intelligent or not is a different discussion and not one I find very interesting. Yeah. And then when I came across neural nets when I was about 20, you know, what I learned about the universal approximation theorem and stuff, and I started thinking like, oh, I wonder if like a neural net could ever get big enough and take in enough data to be a Chinese room experiment. You know, with that background and this kind of like interest in transfer learning, you know, I'd been thinking about this thing for kind of 30 years and I thought like, oh, I wonder if we're there yet, you know, because we have a lot of text. Like I can literally download Wikipedia, which is a lot of text. And I thought, you know, how would something learn to kind of answer questions or, you know, respond to text? And I thought, well, what if we used a language model? So language models are already a thing, you know, they were not a popular or well-known thing, but they were a thing. But language models exist to this idea that you could train a model to fill in the gaps. Or actually in those days it wasn't fill in the gaps, it was finish a string. And in fact, Andrej Karpathy did his fantastic RNN demonstration from this at a similar time where he showed like you can have it ingest Shakespeare and it will generate something that looks a bit like Shakespeare. I thought, okay, so if I do this at a much bigger scale, using all of Wikipedia, what would it need to be able to do to finish a sentence in Wikipedia effectively, to do it quite accurately quite often? I thought, geez, it would actually have to know a lot about the world, you know, it'd have to know that there is a world and that there are objects and that objects relate to each other through time and cause each other to react in ways and that causes proceed effects and that, you know, when there are animals and there are people and that people can be in certain positions during certain timeframes and then you could, you know, all that together, you can then finish a sentence like this was signed into law in 2016 by US President X and it would fill in the gap, you know. So that's why I tried to create what in those days was considered a big language model trained on the entirety on Wikipedia, which is that was, you know, a bit unheard of. And my interest was not in, you know, just having a language model. My interest was in like, what latent capabilities would such a system have that would allow it to finish those kind of sentences? Because I was pretty sure, based on our work with transfer learning and vision, that I could then suck out those latent capabilities by transfer learning, you know, by fine-tuning it on a task data set or whatever. So we generated this three-step system. So step one was train a language model on a big corpus. Step two was fine-tune a language model on a more curated corpus. And step three was further fine-tune that model on a task. And of course, that's what everybody still does today, right? That's what ChatGPT is. And so the first time I tried it within hours, I had a new state-of-the-art academic result on IMDB. And I was like, holy s**t, it does work. And so you asked, to what degree was this kind of like pushing against the established wisdom? You know, every way. Like the reason it took me so long to try it was because I asked all my friends in NLP if this could work. And everybody said, no, it definitely won't work. It wasn't like, oh, maybe. Everybody was like, it definitely won't work. NLP is much more complicated than vision. Language is a much more vastly complicated domain. You know, and you've got problems like the grounding problem. We know from like philosophy and theory of mind that it's actually impossible for it to work. So yeah, so don't waste your time. [00:15:10]Alessio: Jeremy, had people not tried because it was like too complicated to actually get the data and like set up the training? Or like, were people just lazy and kind of like, hey, this is just not going to work? [00:15:20]Jeremy: No, everybody wasn't lazy. So like, so the person I thought at that time who, you know, there were two people I thought at that time, actually, who were the strongest at language models were Stephen Merity and Alec Radford. And at the time I didn't know Alec, but I, after we had both, after I'd released ULM Fit and he had released GPT, I organized a chat for both of us with Kate Metz in the New York Times. And Kate Metz answered, sorry, and Alec answered this question for Kate. And Kate was like, so how did, you know, GPT come about? And he said, well, I was pretty sure that pre-training on a general large corpus wouldn't work. So I hadn't tried it. And then I read ULM Fit and turns out it did work. And so I did it, you know, bigger and it worked even better. And similar with, with Stephen, you know, I asked Stephen Merity, like, why don't we just find, you know, take your AWD-ASTLM and like train it on all of Wikipedia and fine tune it? And he's kind of like, well, I don't think that's going to really lie. Like two years before I did a very popular talk at KDD, the conference where everybody in NLP was in the audience. I recognized half the faces, you know, and I told them all this, I'm sure transfer learning is the key. I'm sure ImageNet, you know, is going to be an NLP thing as well. And, you know, everybody was interested and people asked me questions afterwards and, but not just, yeah, nobody followed up because everybody knew that it didn't work. I mean, even like, so we were scooped a little bit by Dai and Lee, Kwok Lee at Google. They had, they had, I already, I didn't even realize this, which is a bit embarrassing. They had already done a large language model and fine tuned it. But again, they didn't create a general purpose, large language model on a general purpose corpus. They only ever tested a domain specific corpus. And I haven't spoken to Kwok actually about that, but I assume that the reason was the same. It probably just didn't occur to them that the general approach could work. So maybe it was that kind of 30 years of mulling over the, the cell Chinese room experiment that had convinced me that it probably would work. I don't know. Yeah. [00:17:48]Alessio: Interesting. I just dug up Alec announcement tweet from 2018. He said, inspired by Cobe, Elmo, and Yola, I'm fit. We should have a single transformer language model can be fine tuned to a wide variety. It's interesting because, you know, today people think of AI as the leader, kind of kind of like the research lab pushing forward the field. What was that at the time? You know, like kind of like going back five years, people think of it as an overnight success, but obviously it took a while. [00:18:16]Swyx: Yeah. Yeah. [00:18:17]Jeremy: No, I mean, absolutely. And I'll say like, you know, it's interesting that it mentioned Elmo because in some ways that was kind of diametrically opposed to, to ULM fit. You know, there was these kind of like, so there was a lot of, there was a lot of activity at the same time as ULM fits released. So there was, um, so before it, as Brian McCann, I think at Salesforce had come out with this neat model that did a kind of multitask learning, but again, they didn't create a general fine tune language model first. There was Elmo, um, which I think was a lip, you know, actually quite a few months after the first ULM fit example, I think. Um, but yeah, there was a bit of this stuff going on. And the problem was everybody was doing, and particularly after GPT came out, then everybody wanted to focus on zero shot and few shot learning. You know, everybody hated fine tuning. Everybody hated transfer learning. And like, I literally did tours trying to get people to start doing transfer learning and people, you know, nobody was interested, particularly after GPT showed such good results with zero shot and few shot learning. And so I actually feel like we kind of went backwards for years and, and not to be honest, I mean, I'm a bit sad about this now, but I kind of got so disappointed and dissuaded by like, it felt like these bigger lab, much bigger labs, you know, like fast AI had only ever been just me and Rachel were getting all of this attention for an approach I thought was the wrong way to do it. You know, I was convinced was the wrong way to do it. And so, yeah, for years people were really focused on getting better at zero shot and few shots and it wasn't until, you know, this key idea of like, well, let's take the ULM fit approach, but for step two, rather than fine tuning on a kind of a domain corpus, let's fine tune on an instruction corpus. And then in step three, rather than fine tuning on a reasonably specific task classification, let's fine tune on a, on a RLHF task classification. And so that was really, that was really key, you know, so I was kind of like out of the NLP field for a few years there because yeah, it just felt like, I don't know, pushing uphill against this vast tide, which I was convinced was not the right direction, but who's going to listen to me, you know, cause I, as you said, I don't have a PhD, not at a university, or at least I wasn't then. I don't have a big set of computers to fine tune huge transformer models. So yeah, it was definitely difficult. It's always been hard. You know, it's always been hard. Like I've always been somebody who does not want to build stuff on lots of big computers because most people don't have lots of big computers and I hate creating stuff that most people can't use, you know, and also stuff that's created on lots of big computers has always been like much more media friendly. So like, it might seem like a recent thing, but actually throughout my 30 years in data science, the attention's always been on, you know, the big iron results. So when I first started, everybody was talking about data warehouses and it was all about Teradata and it'd be like, oh, this big bank has this huge room full of computers and they have like terabytes of data available, you know, at the press of a button. And yeah, that's always what people want to talk about, what people want to write about. And then of course, students coming out of their PhDs and stuff, that's where they want to go work because that's where they read about. And to me, it's a huge distraction, you know, because like I say, most people don't have unlimited compute and I want to help most people, not the small subset of the most well-off people. [00:22:16]Alessio: That's awesome. And it's great to hear, you do such a great job educating that a lot of times you're not telling your own story, you know? So I love this conversation. And the other thing before we jump into Fast.AI, actually, a lot of people that I know, they run across a new architecture and whatnot, they're like, I got to start a company and raise a bunch of money and do all of this stuff. And say, you were like, I want everybody to have access to this. Why was that the case for you? Was it because you already had a successful venture in like FastMail and you were more interested in that? What was the reasoning? [00:22:52]Jeremy: It's a really good question. So I guess the answer is yes, that's the reason why. So when I was a teenager, I thought it would be really cool to like have my own company. You know, I didn't know the word startup. I didn't know the word entrepreneur. I didn't know the word VC. And I didn't really know what any of those things were really until after we started Kaggle, to be honest. Even the way it started to what we now call startups. I just thought they were just small businesses. You know, they were just companies. So yeah, so those two companies were FastMail and Optimal Decisions. FastMail was the first kind of synchronized email provider for non-businesses. So something you can get your same email at home, on your laptop, at work, on your phone, whatever. And then Optimal Decisions invented a new approach to insurance pricing. Something called profit-optimized insurance pricing. So I saw both of those companies, you know, after 10 years. And at that point, I had achieved the thing that as a teenager I had wanted to do. You know, it took a lot longer than it should have because I spent way longer in management consulting than I should have because I got caught up in that stupid rat race. But, you know, eventually I got there and I remember my mom saying to me, you must be so proud. You know, because she remembered my dream. She's like, you've done it. And I kind of reflected and I was like, I'm not proud at all. You know, like people quite liked FastMail. You know, it's quite nice to have synchronized email. It probably would have happened anyway. Yeah, I'm certainly not proud that I've helped some insurance companies suck more money out of their customers. Yeah, no, I'm not proud. You know, it's actually, I haven't really helped the world very much. You know, maybe in the insurance case I've made it a little bit worse. I don't know. So, yeah, I was determined to not waste more years of my life doing things, working hard to do things which I could not be reasonably sure would have a lot of value. So, you know, I took some time off. I wasn't sure if I'd ever work again, actually. I didn't particularly want to, because it felt like, yeah, it felt like such a disappointment. And, but, you know, and I didn't need to. I had enough money. Like, I wasn't super rich, but I had enough money. I didn't need to work. And I certainly recognized that amongst the other people I knew who had enough money that they didn't need to work, they all worked ridiculously hard, you know, and constantly put themselves in extremely stressful situations. And I thought, I don't want to be one of those idiots who's tied to, you know, buying a bigger plane than the next guy or whatever. You know, Kaggle came along and I mainly kind of did that just because it was fun and interesting to hang out with interesting people. But, you know, with Fast.ai in particular, you know, Rachel and I had a very explicit, you know, long series of conversations over a long period of time about like, well, how can we be the most helpful to society as a whole, and particularly to those people who maybe need more help, you know? And so we definitely saw the world going in a potentially pretty dystopian direction if the world's most powerful technology was controlled by a small group of elites. So we thought, yeah, we should focus on trying to help that not happen. You know, sadly, it looks like it still is likely to happen. But I mean, I feel like we've helped make it a little bit less likely. So we've done our bit. [00:26:39]Swyx: You've shown that it's possible. And I think your constant advocacy, your courses, your research that you publish, you know, just the other day you published a finding on, you know, learning that I think is still something that people are still talking about quite a lot. I think that that is the origin story of a lot of people who are going to be, you know, little Jeremy Howards, furthering your mission with, you know, you don't have to do everything by yourself is what I'm saying. No, definitely. Definitely. [00:27:10]Jeremy: You know, that was a big takeaway from like, analytic was analytic. It definitely felt like we had to do everything ourselves. And I kind of, I wanted to solve medicine. I'll say, yeah, okay, solving medicine is actually quite difficult. And I can't do it on my own. And there's a lot of other things I'd like to solve, and I can't do those either. So that was definitely the other piece was like, yeah, you know, can we create an army of passionate domain experts who can change their little part of the world? And that's definitely happened. Like I find nowadays, at least half the time, probably quite a bit more that I get in contact with somebody who's done really interesting work in some domain. Most of the time I'd say, they say, yeah, I got my start with fast.ai. So it's definitely, I can see that. And I also know from talking to folks at places like Amazon and Adobe and stuff, which, you know, there's lots of alumni there. And they say, oh my God, I got here. And like half of the people are fast.ai alumni. So it's fantastic. [00:28:13]Swyx: Yeah. [00:28:14]Jeremy: Actually, Andre Kapathy grabbed me when I saw him at NeurIPS a few years ago. And he was like, I have to tell you, thanks for the fast.ai courses. When people come to Tesla and they need to know more about deep learning, we always send them to your course. And the OpenAI Scholars Program was doing the same thing. So it's kind of like, yeah, it's had a surprising impact, you know, that's just one of like three things we do is the course, you know. [00:28:40]Swyx: Yes. [00:28:40]Jeremy: And it's only ever been at most two people, either me and Rachel or me and Sylvia nowadays, it's just me. So yeah, I think it shows you don't necessarily need a huge amount of money and a huge team of people to make an impact. [00:28:56]Swyx: Yeah. So just to reintroduce fast.ai for people who may not have dived into it much, there is the courses that you do. There is the library that is very well loved. And I kind of think of it as a nicer layer on top of PyTorch that people should start with by default and use it as the basis for a lot of your courses. And then you have like NBDev, which I don't know, is that the third one? [00:29:27]Jeremy: Oh, so the three areas were research, software, and courses. [00:29:32]Swyx: Oh, sorry. [00:29:32]Jeremy: So then in software, you know, fast.ai is the main thing, but NBDev is not far behind. But then there's also things like FastCore, GHAPI, I mean, dozens of open source projects that I've created and some of them have been pretty popular and some of them are still a little bit hidden, actually. Some of them I should try to do a better job of telling people about. [00:30:01]Swyx: What are you thinking about? Yeah, what's on the course of my way? Oh, I don't know, just like little things. [00:30:04]Jeremy: Like, for example, for working with EC2 and AWS, I created a FastEC2 library, which I think is like way more convenient and nice to use than anything else out there. And it's literally got a whole autocomplete, dynamic autocomplete that works both on the command line and in notebooks that'll like auto-complete your instance names and everything like that. You know, just little things like that. I try to make like, when I work with some domain, I try to make it like, I want to make it as enjoyable as possible for me to do that. So I always try to kind of like, like with GHAPI, for example, I think that GitHub API is incredibly powerful, but I didn't find it good to work with because I didn't particularly like the libraries that are out there. So like GHAPI, like FastEC2, it like autocompletes both at the command line or in a notebook or whatever, like literally the entire GitHub API. The entire thing is like, I think it's like less than 100K of code because it actually, as far as I know, the only one that grabs it directly from the official open API spec that GitHub produces. And like if you're in GitHub and you just type an API, you know, autocomplete API method and hit enter, it prints out the docs with brief docs and then gives you a link to the actual documentation page. You know, GitHub Actions, I can write now in Python, which is just so much easier than writing them in TypeScript and stuff. So, you know, just little things like that. [00:31:40]Swyx: I think that's an approach which more developers took to publish some of their work along the way. You described the third arm of FastAI as research. It's not something I see often. Obviously, you do do some research. And how do you run your research? What are your research interests? [00:31:59]Jeremy: Yeah, so research is what I spend the vast majority of my time on. And the artifacts that come out of that are largely software and courses. You know, so to me, the main artifact shouldn't be papers because papers are things read by a small exclusive group of people. You know, to me, the main artifacts should be like something teaching people, here's how to use this insight and here's software you can use that builds it in. So I think I've only ever done three first-person papers in my life, you know, and none of those are ones I wanted to do. You know, they were all ones that, like, so one was ULM Fit, where Sebastian Ruder reached out to me after seeing the course and said, like, you have to publish this as a paper, you know. And he said, I'll write it. He said, I want to write it because if I do, I can put it on my PhD and that would be great. And it's like, okay, well, I want to help you with your PhD. And that sounds great. So like, you know, one was the masks paper, which just had to exist and nobody else was writing it. And then the third was the Fast.ai library paper, which again, somebody reached out and said, please, please write this. We will waive the fee for the journal and everything and actually help you get it through publishing and stuff. So yeah, so I don't, other than that, I've never written a first author paper. So the research is like, well, so for example, you know, Dawn Bench was a competition, which Stanford ran a few years ago. It was kind of the first big competition of like, who can train neural nets the fastest rather than the most accurate. And specifically it was who can train ImageNet the fastest. And again, this was like one of these things where it was created by necessity. So Google had just released their TPUs. And so I heard from my friends at Google that they had put together this big team to smash Dawn Bench so that they could prove to people that they had to use Google Cloud and use their TPUs and show how good their TPUs were. And we kind of thought, oh s**t, this would be a disaster if they do that, because then everybody's going to be like, oh, deep learning is not accessible. [00:34:20]Swyx: You know, to actually be good at it, [00:34:21]Jeremy: you have to be Google and you have to use special silicon. And so, you know, we only found out about this 10 days before the competition finished. But, you know, we basically got together an emergency bunch of our students and Rachel and I and sat for the next 10 days and just tried to crunch through and try to use all of our best ideas that had come from our research. And so particularly progressive resizing, just basically train mainly on small things, train on non-square things, you know, stuff like that. And so, yeah, we ended up winning, thank God. And so, you know, we turned it around from being like, like, oh s**t, you know, this is going to show that you have to be Google and have TPUs to being like, oh my God, even the little guy can do deep learning. So that's an example of the kind of like research artifacts we do. And yeah, so all of my research is always, how do we do more with less, you know? So how do we get better results with less data, with less compute, with less complexity, with less education, you know, stuff like that. So ULM fits obviously a good example of that. [00:35:37]Swyx: And most recently you published, can LLMs learn from a single example? Maybe could you tell the story a little bit behind that? And maybe that goes a little bit too far into the learning of very low resource, the literature. [00:35:52]Jeremy: Yeah, yeah. So me and my friend, Jono Whittaker, basically had been playing around with this fun Kaggle competition, which is actually still running as we speak, which is, can you create a model which can answer multiple choice questions about anything that's in Wikipedia? And the thing that makes it interesting is that your model has to run on Kaggle within nine hours. And Kaggle's very, very limited. So you've only got 14 gig RAM, only two CPUs, and a small, very old GPU. So this is cool, you know, if you can do well at this, then this is a good example of like, oh, you can do more with less. So yeah, Jono and I were playing around with fine tuning, of course, transfer learning, pre-trained language models. And we saw this, like, so we always, you know, plot our losses as we go. So here's another thing we created. Actually, Sylvain Guuger, when he worked with us, created called fast progress, which is kind of like TQEDM, but we think a lot better. So we look at our fast progress curves, and they kind of go down, down, down, down, down, down, down, a little bit, little bit, little bit. And then suddenly go clunk, and they drop. And then down, down, down, down, down a little bit, and then suddenly clunk, they drop. We're like, what the hell? These clunks are occurring at the end of each epoch. So normally in deep learning, this would be, this is, you know, I've seen this before. It's always been a bug. It's always turned out that like, oh, we accidentally forgot to turn on eval mode during the validation set. So I was actually learning then, or, oh, we accidentally were calculating moving average statistics throughout the epoch. So, you know, so it's recently moving average or whatever. And so we were using Hugging Face Trainer. So, you know, I did not give my friends at Hugging Face the benefit of the doubt. I thought, oh, they've fucked up Hugging Face Trainer, you know, idiots. Well, you'll use the Fast AI Trainer instead. So we switched over to Learner. We still saw the clunks and, you know, that's, yeah, it shouldn't really happen because semantically speaking in the epoch, isn't like, it's not a thing, you know, like nothing happens. Well, nothing's meant to happen when you go from ending one epoch to starting the next one. So there shouldn't be a clunk, you know. So I kind of asked around on the open source discords. That's like, what's going on here? And everybody was just like, oh, that's just what, that's just what these training curves look like. Those all look like that. Don't worry about it. And I was like, oh, are you all using Trainer? Yes. Oh, well, there must be some bug with Trainer. And I was like, well, we also saw it in Learner [00:38:42]Swyx: and somebody else is like, [00:38:42]Jeremy: no, we've got our own Trainer. We get it as well. They're just like, don't worry about it. It's just something we see. It's just normal. [00:38:48]Swyx: I can't do that. [00:38:49]Jeremy: I can't just be like, here's something that's like in the previous 30 years of neural networks, nobody ever saw it. And now suddenly we see it. [00:38:57]Swyx: So don't worry about it. [00:38:59]Jeremy: I just, I have to know why. [00:39:01]Swyx: Can I clarify? This is, was everyone that you're talking to, were they all seeing it for the same dataset or in different datasets? [00:39:08]Jeremy: Different datasets, different Trainers. They're just like, no, this is just, this is just what it looks like when you fine tune language models. Don't worry about it. You know, I hadn't seen it before, but I'd been kind of like, as I say, I, you know, I kept working on them for a couple of years after ULM fit. And then I kind of moved on to other things, partly out of frustration. So I hadn't been fine tuning, you know, I mean, Lama's only been out for a few months, right? But I wasn't one of those people who jumped straight into it, you know? So I was relatively new to the kind of Lama fine tuning world, where else these guys had been, you know, doing it since day one. [00:39:49]Swyx: It was only a few months ago, [00:39:51]Jeremy: but it's still quite a bit of time. So, so yeah, they're just like, no, this is all what we see. [00:39:56]Swyx: Don't worry about it. [00:39:56]Jeremy: So yeah, I, I've got a very kind of like, I don't know, I've just got this brain where I have to know why things are. And so I kind of, I ask people like, well, why, why do you think it's happening? And they'd be like, oh, it would pretty obviously, cause it's like memorize the data set. It's just like, that can't be right. It's only seen it once. Like, look at this, the loss has dropped by 0.3, 0.3, which is like, basically it knows the answer. And like, no, no, it's just, it is, it's just memorize the data set. So yeah. So look, Jono and I did not discover this and Jono and I did not come up with a hypothesis. You know, I guess we were just the ones, I guess, who had been around for long enough to recognize that like, this, this isn't how it's meant to work. And so we, we, you know, and so we went back and like, okay, let's just run some experiments, you know, cause nobody seems to have actually published anything about this. [00:40:51]Well, not quite true.Some people had published things, but nobody ever actually stepped back and said like, what the hell, you know, how can this be possible? Is it possible? Is this what's happening? And so, yeah, we created a bunch of experiments where we basically predicted ahead of time. It's like, okay, if this hypothesis is correct, that it's memorized in the training set, then we ought to see blah, under conditions, blah, but not under these conditions. And so we ran a bunch of experiments and all of them supported the hypothesis that it was memorizing the data set in a single thing at once. And it's a pretty big data set, you know, which in hindsight, it's not totally surprising because the theory, remember, of the ULMFiT theory was like, well, it's kind of creating all these latent capabilities to make it easier for it to predict the next token. So if it's got all this kind of latent capability, it ought to also be really good at compressing new tokens because it can immediately recognize it as like, oh, that's just a version of this. So it's not so crazy, you know, but it is, it requires us to rethink everything because like, and nobody knows like, okay, so how do we fine tune these things? Because like, it doesn't even matter. Like maybe it's fine. Like maybe it's fine that it's memorized the data set after one go and you do a second go and okay, the validation loss is terrible because it's now really overconfident. [00:42:20]Swyx: That's fine. [00:42:22]Jeremy: Don't, you know, don't, I keep telling people, don't track validation loss, track validation accuracy because at least that will still be useful. Just another thing that's got lost since ULMFiT, nobody tracks accuracy of language models anymore. But you know, it'll still keep learning and it does, it does keep improving. But is it worse? You know, like, is it like, now that it's kind of memorized it, it's probably getting a less strong signal, you know, I don't know. So I still don't know how to fine tune language models properly and I haven't found anybody who feels like they do, like nobody really knows whether this memorization thing is, it's probably a feature in some ways. It's probably some things that you can do usefully with it. It's probably, yeah, I have a feeling it's messing up training dynamics as well. [00:43:13]Swyx: And does it come at the cost of catastrophic forgetting as well, right? Like, which is the other side of the coin. [00:43:18]Jeremy: It does to some extent, like we know it does, like look at Code Llama, for example. So Code Llama was a, I think it was like a 500 billion token fine tuning of Llama 2 using code. And also pros about code that Meta did. And honestly, they kind of blew it because Code Llama is good at coding, but it's bad at everything else, you know, and it used to be good. Yeah, I was pretty sure it was like, before they released it, me and lots of people in the open source discords were like, oh my God, you know, we know this is coming, Jan Lukinsk saying it's coming. I hope they kept at least like 50% non-code data because otherwise it's going to forget everything else. And they didn't, only like 0.3% of their epochs were non-code data. So it did, it forgot everything else. So now it's good at code and it's bad at everything else. So we definitely have catastrophic forgetting. It's fixable, just somebody has to do, you know, somebody has to spend their time training a model on a good mix of data. Like, so, okay, so here's the thing. Even though I originally created three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it. [00:44:36]Jeremy: And that's because people are using it in a way different to why I created it. You know, I created it thinking the task-specific models would be more specific. You know, it's like, oh, this is like a sentiment classifier as an example of a task, you know, but the tasks now are like a, you know, RLHF, which is basically like answer questions that make people feel happy about your answer. So that's a much more general task and it's a really cool approach. And so we see, for example, RLHF also breaks models like, you know, like GPT-4, RLHDEFT, we know from kind of the work that Microsoft did, you know, the pre, the earlier, less aligned version was better. And these are all kind of examples of catastrophic forgetting. And so to me, the right way to do this is to fine-tune language models, is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training. And pre-training is something where from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about, instructions, exercises, code, general purpose document completion, whatever. And then as you train, you gradually curate that, you know, you gradually make that higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data. You always keep all of the data types there in reasonably high quantities. You know, maybe the quality filter, you stop training on low quality data, because that's probably fine to forget how to write badly, maybe. So yeah, that's now my view, is I think ULM fit is the wrong approach. And that's why we're seeing a lot of these, you know, so-called alignment tacks and this view of like, oh, a model can't both code and do other things. And, you know, I think it's actually because people are training them wrong. [00:46:47]Swyx: Yeah, well, I think you have a clear [00:46:51]Alessio: anti-laziness approach. I think other people are not as good hearted, you know, they're like, [00:46:57]Swyx: hey, they told me this thing works. [00:46:59]Alessio: And if I release a model this way, people will appreciate it, I'll get promoted and I'll kind of make more money. [00:47:06]Jeremy: Yeah, and it's not just money. It's like, this is how citations work most badly, you know, so if you want to get cited, you need to write a paper that people in your field recognize as an advancement on things that we know are good. And so we've seen this happen again and again. So like I say, like zero shot and few shot learning, everybody was writing about that. Or, you know, with image generation, everybody just was writing about GANs, you know, and I was trying to say like, no, GANs are not the right approach. You know, and I showed again through research that we demonstrated in our videos that you can do better than GANs, much faster and with much less data. And nobody cared because again, like if you want to get published, you write a GAN paper that slightly improves this part of GANs and this tiny field, you'll get published, you know. So it's, yeah, it's not set up for real innovation. It's, you know, again, it's really helpful for me, you know, I have my own research lab with nobody telling me what to do and I don't even publish. So it doesn't matter if I get citations. And so I just write what I think actually matters. I wish there was, and, you know, and actually places like OpenAI, you know, the researchers there can do that as well. It's a shame, you know, I wish there was more academic, open venues in which people can focus on like genuine innovation. [00:48:38]Swyx: Twitter, which is unironically has become a little bit of that forum. I wanted to follow up on one thing that you mentioned, which is that you checked around the open source discords. I don't know if it's too, I don't know if it's a pusher to ask like what discords are lively or useful right now. I think that something I definitely felt like I missed out on was the early days of Luther AI, which is a very hard bit. And, you know, like what is the new Luther? And you actually shouted out the alignment lab AI discord in your blog post. And that was the first time I even knew, like I saw them on Twitter, never knew they had a discord, never knew that there was actually substantive discussions going on in there and that you were an active member of it. Okay, yeah. [00:49:23]Jeremy: And then even then, if you do know about that and you go there, it'll look like it's totally dead. And that's because unfortunately, nearly all the discords, nearly all of the conversation happens in private channels. You know, and that's, I guess. [00:49:35]Swyx: How does someone get into that world? Because it's obviously very, very instructive, right? [00:49:42]Jeremy: You could just come to the first AI discord, which I'll be honest with you, it's less bustling than some of the others, but it's not terrible. And so like, at least, to be fair, one of Emma's bustling channels is private. [00:49:57]Swyx: I guess. [00:49:59]Jeremy: So I'm just thinking. [00:50:01]Swyx: It's just the nature of quality discussion, right? Yeah, I guess when I think about it, [00:50:05]Jeremy: I didn't have any private discussions on our discord for years, but there was a lot of people who came in with like, oh, I just had this amazing idea for AGI. If you just thought about like, if you imagine that AI is a brain, then we, you know, this just, I don't want to talk about it. You know, I don't want to like, you don't want to be dismissive or whatever. And it's like, oh, well, that's an interesting comment, but maybe you should like, try training some models first to see if that aligns with your intuition. Like, oh, but how could I possibly learn? It's like, well, we have a course, just actually spend time learning. Like, you know, anyway. And there's like, okay, I know the people who always have good answers there. And so I created a private channel and put them all in it. And I got to admit, that's where I post more often because there's much less, you know, flight of fancy views about how we could solve AGI, blah, blah, blah. So there is a bit of that. But having said that, like, I think the bar is pretty low. Like if you join a Discord and you can hit the like participants or community or whatever button, you can see who's in it. And then you'll see at the top, who the admins or moderators or people in the dev role are. And just DM one of them and say like, oh, here's my GitHub. Well, here's some blog posts I wrote. You know, I'm interested in talking about this, you know, can I join the private channels? And I've never heard of anybody saying no. I will say, you know, Alutha's all pretty open. So you can do the Alutha Discord still. You know, one problem with the Alutha Discord is it's been going on for so long that it's like, it's very inside baseball. It's quite hard to get started. Yeah. Carpa AI looks, I think it's all open. That's just less stability. That's more accessible. [00:52:03]Swyx: Yeah. [00:52:04]Jeremy: There's also just recently, now it's research that does like the Hermes models and data set just opened. They've got some private channels, but it's pretty open, I think. You mentioned Alignment Lab, that one it's all the interesting stuff is on private channels. So just ask. If you know me, ask me, cause I've got admin on that one. There's also, yeah, OS Skunkworks, OS Skunkworks AI is a good Discord, which I think it's open. So yeah, they're all pretty good. [00:52:40]Swyx: I don't want you to leak any, you know, Discords that don't want any publicity, but this is all helpful. [00:52:46]Jeremy: We all want people, like we all want people. [00:52:49]Swyx: We just want people who like, [00:52:51]Jeremy: want to build stuff, rather than people who, and like, it's fine to not know anything as well, but if you don't know anything, but you want to tell everybody else what to do and how to do it, that's annoying. If you don't know anything and want to be told like, here's a really small kind of task that as somebody who doesn't know anything is going to take you a really long time to do, but it would still be helpful. Then, and then you go and do it. That would be great. The truth is, yeah, [00:53:19]Swyx: like, I don't know, [00:53:20]Jeremy: maybe 5% of people who come in with great enthusiasm and saying that they want to learn and they'll do anything. [00:53:25]Swyx: And then somebody says like, [00:53:25]Jeremy: okay, here's some work you can do. Almost nobody does that work. So if you're somebody who actually does the work and follows up, you will massively stand out. That's an extreme rarity. And everybody will then want to help you do more work. [00:53:41]Swyx: So yeah. [00:53:41]Jeremy: So just, yeah, just do work and people will want to support you. [00:53:47]Alessio: Our Discord used to be referral only for a long time. We didn't have a public invite and then we opened it and they're kind of like channel gating. Yeah. A lot of people just want to do, I remember it used to be like, you know, a forum moderator. [00:54:00]Swyx: It's like people just want to do [00:54:01]Alessio: like drive-by posting, [00:54:03]Swyx: you know, and like, [00:54:03]Alessio: they don't want to help the community. They just want to get their question answered. [00:54:07]Jeremy: I mean, the funny thing is our forum community does not have any of that garbage. You know, there's something specific about the low latency thing where people like expect an instant answer. And yeah, we're all somehow in a forum thread where they know it's like there forever. People are a bit more thoughtful, but then the forums are less active than they used to be because Discord has got more popular, you know? So it's all a bit of a compromise, you know, running a healthy community is, yeah, it's always a bit of a challenge. All right, we got so many more things [00:54:47]Alessio: we want to dive in, but I don't want to keep you here for hours. [00:54:50]Swyx: This is not the Lex Friedman podcast [00:54:52]Alessio: we always like to say. One topic I would love to maybe chat a bit about is Mojo, modular, you know, CrystalLiner, not many of you on the podcast. So we want to spend a little time there. You recently did a hacker's guide to language models and you ran through everything from quantized model to like smaller models, larger models, and all of that. But obviously modular is taking its own approach. Yeah, what got you excited? I know you and Chris have been talking about this for like years and a lot of the ideas you had, so. [00:55:23]Jeremy: Yeah, yeah, yeah, yeah, no, absolutely. So I met Chris, I think it was at the first TensorFlow Dev Summit. And I don't think he had even like, I'm not sure if he'd even officially started his employment with Google at that point. So I don't know, you know, certainly nothing had been mentioned. So I, you know, I admired him from afar with LLVM and Swift and whatever. And so I saw him walk into the courtyard at Google. It's just like, oh s**t, man, that's Chris Latner. I wonder if he would lower his standards enough to talk to me. Well, worth a try. So I caught up my courage because like nobody was talking to him. He looked a bit lost and I wandered over and it's like, oh, you're Chris Latner, right? It's like, what are you doing here? What are you doing here? And I was like, yeah, yeah, yeah. It's like, oh, I'm Jeremy Howard. It's like, oh, do you do some of this AI stuff? And I was like, yeah, yeah, I like this AI stuff. Are you doing AI stuff? It's like, well, I'm thinking about starting to do some AI stuff. Yeah, I think it's going to be cool. And it's like, wow. So like, I spent the next half hour just basically brain dumping all the ways in which AI was stupid to him. And he listened patiently. And I thought he probably wasn't even remember or care or whatever. But yeah, then I kind of like, I guess I re-caught up with him a few months later. And it's like, I've been thinking about everything you said in that conversation. And he like narrated back his response to every part of it, projects he was planning to do. And it's just like, oh, this dude follows up. Holy s**t. And I was like, wow, okay. And he was like, yeah, so we're going to create this new thing called Swift for TensorFlow. And it's going to be like, it's going to be a compiler with auto differentiation built in. And blah, blah, blah. And I was like, why would that help? [00:57:10]Swyx: You know, why would you? [00:57:10]Jeremy: And he was like, okay, with a compiler during the forward pass, you don't have to worry about saving context, you know, because a lot will be optimized in the backward. But I was like, oh my God. Because I didn't really know much about compilers. You know, I spent enough to kind of like, understand the ideas, but it hadn't occurred to me that a compiler basically solves a lot of the problems we have as end users. I was like, wow, that's amazing. Okay, you do know, right, that nobody's going to use this unless it's like usable. It's like, yeah, I know, right. So I was thinking you should create like a fast AI for this. So, okay, but I don't even know Swift. And he was like, well, why don't you start learning it? And if you have any questions, ask me. It's just like, holy s**t. Like, not only has Chris Latner lowered his standards enough to talk to me, but he's offering me personal tutoring on the programming language that he made. So I was just like, I'm not g
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
10 Best Open-Source Deep Learning Tools to Know in 2023:TensorFlow, PyTorch, Keras, MXNet, Caffe, Theano, Torch, Chainer, DeepLearning4j, Caffe2Google says it'll scrape everything you post online for AI;Microsoft uses ChatGPT to instruct and interact with robots;Will.i.am hails AI technology as ‘new renaissance' in music;Benchmarking LLMs searching scientific evidence;MIT Unveils Revolutionary AI Tool: Enhancing Chart Interpretation and Accessibility with Adaptive, Detail-Rich Captions for Users of All AbilitiesMoonlander launches AI-based platform for immersive 3D game developmentMozilla adds AI Help that does the oppositePanic about overhyped AI risk could lead to the wrong kind of regulationIt only took five hours for an AI model to design a functional computerDaily AI Update News from Microsoft, Humane, Nvidia, and MoonlanderUS Senator Believes AI Should Be Aligned With Democratic ValuesThis podcast is generated using the Wondercraft AI platform, a tool that makes it super easy to start your own podcast, by enabling you to use hyper-realistic AI voices as your host. Like mine!Attention AI Unraveled podcast listeners!Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book "AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence," by Etienne Noumen, now available at Google, Apple and Amazon! This engaging read answers your burning questions and provides valuable insights into the captivating world of AI. Don't miss this opportunity to elevate your knowledge and stay ahead of the curve.Get your copy Apple, Google, or Amazon today!
Do you keep promising yourself to write but never quite get around to it? Do you delete almost as many words as you write? Do you write things that never get shared? Nobody is born knowing how to write. Like any skill, writing improves with deliberate practice and attention. With growing skill often comes heightened enjoyment. This book will help you develop writing skill so you can share your message. There is no single writing recipe that works for everybody, but successful writers rely on common ingredients. Play with the experiments in this book to find what works for you. There is a free workbook to take stock and find the next best experiment for you, available at the book web site, sitwriteshare.com. 13 Sit experiments will help you get your writing started, escape writers' block, defeat internal gremlins, build habits, and find inspiration. 26 Write experiments will help you imagine your message, create a rough draft, and then edit in phases until your polished version emerges. 16 Share experiments will help you get support, publish, and spread your message to those who need it. Sit Write Share: Practical Writing Strategies to Transform Your Experience Into Content that Matters (Theano Press, 2022) will help you build your own unique writing practice. Kathryn Britton's clients call her the brilliant midwife of words. She has helped hundreds of people become word crafters who complete writing projects, big and small. Her own publications include books and articles about computer science, coaching, and applied positive psychology. After earning a Master of Applied Positive Psychology degree at the University of Pennsylvania, she founded Theano Coaching LLC to coach writers and run writing workshops. Kathryn has witnessed the power of her writing experiments to help authors find joy, build confidence, and get writing done that changes the world. For more information and for a workbook to help you move through the 55 experiments, go here. Elizabeth Cronin, Psy.D., is a licensed clinical psychologist and mindfulness meditation teacher with offices in Brookline and Norwood, MA. You can follow her on Instagram or visit her website. Learn more about your ad choices. Visit megaphone.fm/adchoices
Do you keep promising yourself to write but never quite get around to it? Do you delete almost as many words as you write? Do you write things that never get shared? Nobody is born knowing how to write. Like any skill, writing improves with deliberate practice and attention. With growing skill often comes heightened enjoyment. This book will help you develop writing skill so you can share your message. There is no single writing recipe that works for everybody, but successful writers rely on common ingredients. Play with the experiments in this book to find what works for you. There is a free workbook to take stock and find the next best experiment for you, available at the book web site, sitwriteshare.com. 13 Sit experiments will help you get your writing started, escape writers' block, defeat internal gremlins, build habits, and find inspiration. 26 Write experiments will help you imagine your message, create a rough draft, and then edit in phases until your polished version emerges. 16 Share experiments will help you get support, publish, and spread your message to those who need it. Sit Write Share: Practical Writing Strategies to Transform Your Experience Into Content that Matters (Theano Press, 2022) will help you build your own unique writing practice. Kathryn Britton's clients call her the brilliant midwife of words. She has helped hundreds of people become word crafters who complete writing projects, big and small. Her own publications include books and articles about computer science, coaching, and applied positive psychology. After earning a Master of Applied Positive Psychology degree at the University of Pennsylvania, she founded Theano Coaching LLC to coach writers and run writing workshops. Kathryn has witnessed the power of her writing experiments to help authors find joy, build confidence, and get writing done that changes the world. For more information and for a workbook to help you move through the 55 experiments, go here. Elizabeth Cronin, Psy.D., is a licensed clinical psychologist and mindfulness meditation teacher with offices in Brookline and Norwood, MA. You can follow her on Instagram or visit her website. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Do you keep promising yourself to write but never quite get around to it? Do you delete almost as many words as you write? Do you write things that never get shared? Nobody is born knowing how to write. Like any skill, writing improves with deliberate practice and attention. With growing skill often comes heightened enjoyment. This book will help you develop writing skill so you can share your message. There is no single writing recipe that works for everybody, but successful writers rely on common ingredients. Play with the experiments in this book to find what works for you. There is a free workbook to take stock and find the next best experiment for you, available at the book web site, sitwriteshare.com. 13 Sit experiments will help you get your writing started, escape writers' block, defeat internal gremlins, build habits, and find inspiration. 26 Write experiments will help you imagine your message, create a rough draft, and then edit in phases until your polished version emerges. 16 Share experiments will help you get support, publish, and spread your message to those who need it. Sit Write Share: Practical Writing Strategies to Transform Your Experience Into Content that Matters (Theano Press, 2022) will help you build your own unique writing practice. Kathryn Britton's clients call her the brilliant midwife of words. She has helped hundreds of people become word crafters who complete writing projects, big and small. Her own publications include books and articles about computer science, coaching, and applied positive psychology. After earning a Master of Applied Positive Psychology degree at the University of Pennsylvania, she founded Theano Coaching LLC to coach writers and run writing workshops. Kathryn has witnessed the power of her writing experiments to help authors find joy, build confidence, and get writing done that changes the world. For more information and for a workbook to help you move through the 55 experiments, go here. Elizabeth Cronin, Psy.D., is a licensed clinical psychologist and mindfulness meditation teacher with offices in Brookline and Norwood, MA. You can follow her on Instagram or visit her website. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/psychology
India's state-run Oil and Natural Gas Corp. (ONGC) has announced plans to invest Rs. 1 trillion ($12.1 billion) by 2030 to increase its renewable energy capacity and reduce direct emissions. The company aims to scale up its renewable energy capacity to 10 gigawatts (GW) by 2030, a significant increase from the current 189 megawatts. The world's biggest AI experts have captured in one sentence the risk of AI. Blackrock, the world's biggest money manager, has slashed the valuation of BYJU'S by 62 percent. Also in this brief, the C919, China's challenge to Airbus-Boeing takes flight. Notes: Some of the world's top AI experts, including Sam Altman, CEO of OpenAI, have signed a short statement to make it easy for everyone to understand that the risk of extinction of the human race due to artificial intelligence is very real. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads, published by the Center for AI Safety. Among the other signatories on the list, which runs into a couple of hundred experts from around the world, are Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto, A pioneer in the field of deep learning and neural networks; Yoshua Bengio, Professor of Computer Science, U. Montreal / Mila, a leading researcher in the field of deep learning and co-founder of the deep learning framework, "Theano"; Demis Hassabis, CEO, Google DeepMind, an influential figure in the development of artificial general intelligence (AGI) and deep reinforcement learning; and Dario Amodei, CEO, Anthropic. India's state-run Oil and Natural Gas Corp. (ONGC) has announced plans to invest Rs. 1 trillion ($12.1 billion) by 2030 to increase its renewable energy capacity and reduce direct emissions. The company aims to scale up its renewable energy capacity to 10 gigawatts (GW) by 2030, a significant increase from the current 189 megawatts. ONGC's chairman, Arun Kumar Singh, said that while fossil fuel demand in India may continue to grow until 2040, the company is committed to balancing its portfolio with green energy projects. Amazon India has launched a limited introduction of bill payments at select restaurants using Amazon Pay. The feature, currently available in certain areas of Bengaluru, allows users to make payments using various methods such as credit/debit cards, net banking, UPI, Amazon Pay Later, TechCrunch reports. China's domestically built passenger jet, the C919, completed its maiden commercial flight, on May 28, marking a significant milestone in the country's ambition to challenge the aircraft manufacturing duopoly of Boeing and Airbus, Quartz reports. The Commercial Aircraft Corporation of China (Comac) developed the C919, which flew from Shanghai to Beijing with approximately 130 passengers on board. Despite facing delays and design flaws, Comac has signed contracts for 1,035 C919 jets with several dozen customers. Blackrock, the world's biggest money manager, has slashed the valuation of BYJU'S by 62 percent to $8.4 billion in the March quarter, down from its previous valuation of $22 billion, The Hindu Business Line reports. This marks the second markdown by Blackrock, which holds a 0.9 percent stake in the company. The funding winter and worsening macroeconomic conditions have prompted private investors to reduce valuations of several start-up unicorns, with cuts ranging from 30-50 percent, according to the report.
Summary The focus of machine learning projects has long been the model that is built in the process. As AI powered applications grow in popularity and power, the model is just the beginning. In this episode Josh Tobin shares his experience from his time as a machine learning researcher up to his current work as a founder at Gantry, and the shift in focus from model development to machine learning systems. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Josh Tobin about the state of industry best practices for designing and building ML models Interview Introduction How did you get involved in machine learning? Can you start by describing what a "traditional" process for building a model looks like? What are the forces that shaped those "best practices"? What are some of the practices that are still necessary/useful and what is becoming outdated? What are the changes in the ecosystem (tooling, research, communal knowledge, etc.) that are forcing teams to reconsider how they think about modeling? What are the most critical practices/capabilities for teams who are building services powered by ML/AI? What systems do they need to support them in those efforts? Can you describe what you are building at Gantry and how it aids in the process of developing/deploying/maintaining models with "modern" workflows? What are the most challenging aspects of building a platform that supports ML teams in their workflows? What are the most interesting, innovative, or unexpected ways that you have seen teams approach model development/validation? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Gantry? When is Gantry the wrong choice? What are some of the resources that you find most helpful to stay apprised of how modeling and ML practices are evolving? Contact Info LinkedIn (https://www.linkedin.com/in/josh-tobin-4b3b10a9/) Website (http://josh-tobin.com/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers Links Gantry (https://gantry.io/) Full Stack Deep Learning (https://fullstackdeeplearning.com/) OpenAI (https://openai.com/) Kaggle (https://www.kaggle.com/) NeurIPS == Neural Information Processing Systems Conference (https://nips.cc/) Caffe (https://caffe.berkeleyvision.org/) Theano (https://github.com/Theano/Theano) Deep Learning (https://en.wikipedia.org/wiki/Deep_learning) Regression Model (https://www.analyticsvidhya.com/blog/2022/01/different-types-of-regression-models/) scikit-learn (https://scikit-learn.org/) Large Language Model (https://en.wikipedia.org/wiki/Large_language_model) Foundation Models (https://en.wikipedia.org/wiki/Foundation_models) Cohere (https://cohere.com/) Federated Learning (https://en.wikipedia.org/wiki/Federated_learning) Feature Store (https://www.featurestore.org/) dbt (https://www.getdbt.com/) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Google Trends data: ChatGPT is 8x more popular than Bing, 33x more popular than Bard, and only growing.Could an AI learn things or discover things humans have not been able to understand or not discovered yet?Synthetic data could be better than real data: Amazon Is The Latest Major Ad Platform Going All-In On Machine Learning Tech:7 popular tools and frameworks for developing AI applications:TensorFlow is an open-source platform developed by Google, which provides a comprehensive framework for building and deploying machine learning models across multiple platforms.PyTorch is another popular open-source machine learning framework, widely used for developing AI applications such as image recognition, natural language processing and reinforcement learning.Keras is an open-source neural network library that runs on top of TensorFlow or Theano. It is a user-friendly platform that allows developers to create and train deep learning models with just a few lines of code.Caffe is a deep learning framework developed by Berkeley AI Research (BAIR) and community contributors. It is designed for fast training of convolutional neural networks and is commonly used for image and speech recognition.CNTK is an open-source framework developed by Microsoft that provides a scalable and efficient platform for building deep learning models.Theano is a popular Python library for numerical computation, specifically designed for building and optimizing deep neural networks.Apache MXNet is a scalable and efficient open-source deep learning framework, which supports multiple programming languages, including Python, R and Scala. It is widely used for computer vision, NLP and speech recognition applications."Liquid" Neural Network Adapts on the Go:Drones equipped with liquid neural networks edged out other AI systems when navigating unknown territory....AI could be the secret weapon in preventing the next global pandemic:AI Unraveled Book: Attention AI Unraveled podcast listeners! Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book "AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence," now available on Amazon! This engaging read answers your burning questions and provides valuable insights into the captivating world of AI. Don't miss this opportunity to elevate your knowledge and stay ahead of the curve.Get your copy on Amazon today at https://amzn.to/40HXDEl
Todo el mundo hablando de #IA, de #ChatGTP, de lo que viene, de lo que podría ser. Pero lo cierto es que la revolución comenzó hace tiempo y existen herramientas muy potentes que te gusten o no, harán la diferencia en la industria TI. En este episodio exploramos conceptos y asistentes, utilidades que van a permitirte iniciar o destacar en tu rama, te invito a escuchar con atención. Algunas de las apps nombradas en este episodio: - GitHub Copilot: https://github.com/features/copilot - Replit : https://replit.com/ - Helper Programming: https://www.programming-helper.com/ - AutoRegex: https://www.autoregex.xyz/ - Amazon CodeWhisperer: https://aws.amazon.com/es/codewhisperer/ Ahora si querés desarrollar vos mismo estas herramientas o modelos de IA, existen plataformas como : Google ML Kit, Auto ML, TensorFlow, Theano, MxNet, PyTorch, Auto ML, OpenNN entre otros. Si conoces alguna otra aplicación que utilice la Inteligencia Artificial y consideres que se debe mencionar en estos comentarios, será un gusto poder compartirla con la comunidad. Te invito a sumarte a mis contactos en : https://www.linkedin.com/in/soleralejandro Gracias por estar allí como cada semana y si este podcast te impactó o te pareció útil, la mejor forma de colaborar es valorarlo o compartirlo con alguien mas, así puede llegar a mas personas. - https://www.facebook.com/codigotecno - https://www.instagram.com/codigotecno Sumate a la comunidad en Youtube: https://bit.ly/2JLaKRj En Telegram estamos empezando a armar el canal donde compartimos material que puede aportar a tu formación, recursos, ofertas de empleo y cosillas interesantes. Te esperamos en : https://t.me/codigotecno Y si querés participar en la comunidad : https://t.me/elgrupodecodigotecno Envíame un email : codigotecno (arroba) hotmail.com Seguinos en las redes de podcast mas populares: * En Spotify : https://spoti.fi/31Dp4Sq * En Ivoox : https://bit.ly/2JoLotl * En Itunes: https://apple.co/2WNKWHV * En Anchor.fm: https://bit.ly/3OiVCsN Te espero, animate . ! Muy buen código para todos y hasta la próxima. !
Hoy dedicamos el programa al 8 de marzo, el día de la mujer y, en concreto, a las primeras mujeres matemáticas: Theano de Crotarco e Hipatia de Alejandría. Hablamos de sus historias e ideas sobre el geocentrismo de Ptolomeo, o el heliocentrismo de Aristarco de Samos. También damos unos datos negativos en cuanto a la mujer en el mundo de la ciencia exacta, con BayesAna. Puedes participar en el programa con un audio de Whatsapp al 687229373, en el twitter @raizde5RNE o en el mail raizde5@rtve.es Seguimos la siguiente semana, por inducción, n+1...
Welcome to The Gorgeous Grey Podcast, I am so excited you are here! My name is Nicole Scott, your host, Registered Holistic Nutritionist, rREST Mindset Coach, Author of The Gorgeous Grey Movement and Co Creator of The Menopause Reset Program. In this episode, Sexologist Theano Evagelou joins me to discuss the ins and outs of sex during menopause. Heralded as a 'modern day love doctor', Theano Evagelou is a Certified Sexologist, Authentic Tantra Practitioner, and Women's Health Specialist who champions the way for visionary women to live in their unique genius in every area of their life by creating the most incredible connections to their bodies, their sexuality and the source of their divine intelligence. Recognized as a Woman Of Inspiration and 2022 award recipient in the category of Transformational Leader, Theano is dedicated to helping women defy the norms of society that don't fit their dream life - breaking the chains that bind them to the belief that success and love can't exist simultaneously and bridging the gap between sexuality and leadership. Through this subtle but potent alchemy, women create what has not yet been created by living confidently in their feminine essence with their body as a source of power. Her insights on sexuality, love, and desire have been featured in Forbes, Cosmopolitan, Authority Magazine, RedX, and Disrupt. When she's not helping others love themselves unconditionally and connect to themselves, their bodies and their intuition using love as the vehicle, you can find her in a book or course continuously upleveling her knowledge and personal growth (in sexuality work, women's health, real estate investing, and/or sacred relationship, and parenting), in practices and rituals that keep her powerfully connected to her body, nature, and a life of adventure and travel as she builds her future with the two loves of her life, her partner and teenage daughter. ~What We Discuss~ What is a Sexologist and what do they do? It's not just about sex What happens to women during Menopause that impacts sex Living with intention, gratitude and focus Tools you can use to help your libido The importance of lowering stress and improving sleep Why putting yourself first is so important ~Connect With Me~ Thank you for listening to today's episode! If you enjoyed today's episode, feel free to rate and review The Gorgeous Grey Podcast and share this episode with a friend. While you're here, don't forget to subscribe to The Gorgeous Grey Podcast to be notified when the next episode is released! Instagram: @gorgeousgreymovement Facebook: https://www.facebook.com/groups/250601575656819 Youtube: https://www.youtube.com/channel/UC9KTkJrHix1pN5gAzHV2kpQ Tik Tok: @gorgeousgreymovement Website: www.nicolescott.ca ~Connect With Theano~ Website: thetheano.com Instagram: https://www.instagram.com/the.theano/ Facebook: https://www.facebook.com/theanoevagelou ~Listener Freebie~ https://www.thetheano.com/embodied-menopause-meditation
Jedeneen hett je al mol wat vun „Philosophie“ heuert. Bi Mennige weer dat de Ünnerricht in de School, den se noch weniger mucht hebbt as all den annern Ünnerricht. Overs, wat is dat eegentli, Philosophie? Heeten deiht dat je, wat man de Weisheit leev hett. „Philos“ is de Leevde un „Sophos“ de Weisheit. Un de Philosophen hebbt al jümmers doröver nodacht, wat dat allns schall. Also würkli allns! Worüm gifft dat de Welt, worüm sünd wi hier, wo kümmt wi her, wo goht wi hen un wat för Schoh schüllt wi dorto antrecken? Tscha. Gor ni so eenfach. Nu gifft dat den ol'n Snack ut de ole Griechiche Philosophie, de heet: „Ick weet, dat ick nix weet.“ Sokrates ut Athen schall dat fröher mol seggt hebben. So hett ook de Philosoph Cicero dat schreeven, de je overs bummeli 300 Johr no Sokrates levt hett. Sokrates sülms hett gor nix opschreeven. De weer mehr för't Snacken. He keem noher jümmers blots in de Schriften vun anner Philosophen vör, Platon to'n Bispeel. De weer 'n Schööler vun Sokrates. Nu hett Platon düt över Sokrates schreeven un annere hebbt dat över em schreeven un bums, weer de Striedt dor. Blangbi: In de Philosophie hett dat noch nie 'n Fruunquote geeven. Dat weern meist allns Mannslüüd. Blots üm de 15 Perzent weern Fruunslüüd. Theano to'n Bispeel. Kinnt Ju ni? Overs ehrn Mann kinnt Ju! Ut'n Mathe-Ünnerricht: Pythagoras! Na jo, dat is so, as dat is. Overs nochol trüch to dat Nix weeten. Sowiet as sick dat rekontrueern lött, hett Sokates nie seggt, wat he nix weet. He hett anschiend mol seggt, dat he 'n ganzen Barg ni wusst hett. Over nix hett he even ni wusst, sünnern 'n lütt beten mehr. Nu kunn man sick frogen, wann man genog weet oder sogor to veel. Ook dat sünd philosophische Frogen. Un dat is je dat Scheune an de Philosophie. Man kann dat jümmers un överall moken. Dat kost nix, un in de Tied, in de man an Sinneern is, kann man ook keen dumm‘ Tüch anstelln. Worüm de Minsch in't Allgemeine overs doch veel dumm‘ Tüch mokt anstatt mehr notodinken, dat weet ick ook ni... In düssen Sinn
Sign up for our brand new 14-day Credit Hero Challenge: http://creditherochallenge.com/Could you imagine going from having to sell your car and get a loan for your business…To close the deal with the bank worker…Falling in love with each other…Starting a business together…And 18 months later you're both millionaires from it?That's the remarkable story that Dylan & Theano Shively are here to tell us about today. Dylan went from six years in the army to a business managing partner at Verizon, while Theano worked at her banking job for 12 years.Today they are here to tell their story. They will be sharing what they have learned about credit repair, starting a business, scaling and mindset hacks that they wish they knew at the start of their journey.Now, the married couple have joined the Credit Repair Cloud Millionaire's Club alongside countless other 7-figure business owners.Make sure to check it out!Key TakeawaysDavid & Theano's story (00:00)The couple's background (01:18)How their early life looked (05:18)How expensive was it to launch their business (15:46)The power of word of mouth (21:28)Best social platforms for growth (30:41)Best follow up strategy for leads (32:22)How to delegate (37:39)Top tips for starting out (42:40)The importance of culture (47:01)What motivates you? (56:18)Episode wrap-up (1:02:09)Additional Resources:- Get a free trial to Credit Repair Cloud- Get my free credit repair trainingMake sure to subscribe so you stay up to date with our latest episodes!
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Bummed about missing Rhinebeck? So are we! Lisa Souza dyeworks specializes in dyeing mill spun yarns and fabulously decadent luxury handspun yarns, for discerning collectors. Soothe your Rhinebeck FOMO with some delicious decadence at Lisaknit.com Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! The Sandpiper Gift & Yarn Boutique and Hoof-to-Hanger Fiber Mill in Bridgman, MI. The Sandpiper offers artisan - designed gifts (local, regional, national artists and Fair Trade products). The boutique also offers their cottage industry fiber mill products - yarn, roving, corespun, batting, knit/woven garments, accessories, home goods and an exclusive Yarn and Fiber Club. Explore www.thesandpiper.biz! On the Needles:(0:41) Gigi is working on autopilot socks for Jasmin: Online SuperSocke 2317 from Black Squirrel in Berkeley. Foot is done, need to start the toe Jasmin finished the knitting for her #RhinebeckSweater- Bare Branches by Alana Dakos in Little Skein in the Big Wool, Targhee Sweater “Cider Donuts”. Gigi is working on another Musselberg hat by Ysolda out of Oink Pigments in the Halloween colorway This is the Weigh. Jasmin finished her silk embroidery project with Little Skein silk embroidery floss from “The Embroidered Garden” by Kazuko Aoki (Join Anne's mailing list!) Pellon 541 stabilizer Gigi frogged the second sleeve of the Rocky Coast Cardigan. Jasmin is finished the knitting on the the Pyramis test knit for Ainur Berkambayeva Jasmin swatched for and started her Rainbowgan by Saffiyah/TheDrunkKnitter In Stitches:(18:27) Gigi snuggled under the Halloween quilt. She wore the flannel shirt, and started wearing hand knit socks, the Pointed Firs shawl, and the Carli cardigan from Cocoknits Genevieve wore her Anna cardigan, Gryffindor hat and scarf set, and hand sewn masks, Also: Pantasic hoodie, Waters Edge cardigan, Payne Pullover, Hearthstone cardigan, Coronation cardigan Rex wore his fox hat, hand sewn masks and the Lion heart Hoodie Jasmin wore her Barberry cardigan and her Hamilknit hat, and her fuzzy beanie with the orange pom pom, her Theano, and her Bare Branches Events:(24:24) RHINEBECK! October 16-17 2021, and Indie Untangled ! Dutch Alehouse, and the Hudson Valley Dessert Company Stitches West 2022, Sacramento CA (March 3-6, 2022) Digital COVID19 Vaccine Record website Mother Knows Best:(46:44) Do the things you want to do when you get the chance to do them. When Knitting Attacks:(51:07) We talk about accessibility Knits in Space:(58:48) All things HALLOWEENY!!! And Sew On: (1:02:17) Trouser Drafting class at Cañada College : Gigi drafted a jeans pattern and sewed the muslin. Assignment was to draw yoke , fly, fly shield, pocket, pocket lining and jeans front
Some personal challenges forced attention to loved ones, and we are now on our way back! Arrays of Living's Series Connect to Thrive Morning Reflections addressed how so many of us build up an externally facing façade to have onlookers see a perfect life. Yet many of those are deeply miserable behind that strategically orchestrated camouflage. Listen in as Jenn and Theano address the many questions that arise. Why are we choosing to live that way? What are the impacts of living that way? Isn't this something we all do? When does it become too much?
Welcome to the third episode of Speaking to the Dead. Where Doug Rooney and Will Stafford read historic texts and put them in conversation with the modern-day. The only rule: the author must be dead! In this episode, we are joined by Theano, an early disciple of Pythagoras, to hear what the Greeks have got wrong about her cult.
This episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices We'd like to tell you about one of our favorite makers: Little Skein Anne.Anne is a knitter, designer, artist and storyteller. Little Skein is her shop. She makes beautiful handmade knitting kits, hand-dyes yarn, illustrates project bags, and is slowly designing a handmade wardrobe in her San Francisco home studio. She inspires YOUR creative journey by sharing her own. Her yarn and kits support you to make beautiful things, including a more sustainable, nourishing and equitable world. Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! The Sandpiper Gift & Yarn Boutique and Hoof-to-Hanger Fiber Mill in Bridgman, MI. The Sandpiper offers artisan - designed gifts (local, regional, national artists and Fair Trade products). The boutique also offers their cottage industry fiber mill products - yarn, roving, corespun, batting, knit/woven garments, accessories, home goods and an exclusive Yarn and Fiber Club. Explore www.thesandpiper.biz! One minute Health update On the Needles:(0:45) Genevieve worked on Jasmin's Theano test knit Jasmin FINISHED! her Theano test knit Gigi cast on a pair of autopilot socks for Jasmin Gigi Genevieve wound Subito Farms, Estabrook in "Marigold" Genevieve wound Tess Yarns,Kitten,Sliver Gigi is working on the Bramwell shawl out of Tess Yarns Kitten for Grandma Naheed Jasmin is making good progress on the Pyramis test knit for Ainur Berkambayeva Jasmin changed the collar on the Cabled Raglan pullover for Dr Gemma Genevieve worked on her hat Gigi cast on another Musselberg hat by Ysolda out of Oink Pigments Ladybug Love Jasmin is test knitting the Ursa Canis Dog sweater by Jacqueline Cieslak for Han in Little Skein Targhee Sweater "Juniper" In Stitches:(15:18) Genevieve wore her Gryffindor Hat, her Hearthstone, her Manos Sweater, her Water's Edge cardigan(our of Abstrct Fibers Perfect red), her Sweet Im-peach-ment Hat, her scarf she made herself, her Barberry Sweater, and her Anna Cardigan Gigi snuggled under the Halloween quilt, and the KAL King quilt Jasmin wore her Odds and ends cardigan and her Bandit cardigan Events:(19:20) Stash Dash is over! Knitgirllls are giving away prizes Jasmin has posted her Stash Dash spreadsheet here Bay Area Fiber Fair - JUNE 25 - SEPTEMBER 23, 2021 RHINEBECK! October 16-17 2021, and Indie Untangled ! Mother Knows Best:(24:17) When you get it done, is when you get it done! Purloined From Sheepspot When Knitting Attacks(28:33) Jasmin neglected to add the bust darts to her Pyramis. Rrrrrip Gigi is foiled by a model, the Roomba, and not using the KnitCompanion app. Straw into Gold(37:41) Finished plying the Lisa Souza Ultrafine Australian Merino Roving (Emerald city) , for the INCREDIBLES set of family sweaters Finished plying the white Sue Reuser cormo (“Farrah”) Genevieve wound Skeins of finished Sue Reuser cormo 3ply and 4ply (4 skeins wound,3 left to wind) Genevieve counted yardage for her mom's Liza Souza, Australian Superfine Merino, 4ply,in Emerald City(four skeins) (total yardage for this is 1,386 (not multiplied by 5 yet)) Genevieve practiced winding bobbins Knits in Space:(43:03) Visit with Dr Gemma Wollkanal. Frieda und Laura, Saga of about 300kg colored wool, Röhnschaf, And Sew On: (42:55) Trouser Drafting class at Cañada College started. Jasmin measured Mood mystery box! Gigi need s to consult Fabric for Fashion The Swatch Book, so she can figure out what they got Jamin installed a nose wire for Dr Gemma's mask
Professor Kozlowski crosses the Rubicon to discuss the rise of the Roman Empire, its widespread (and politically-motivated?) embrace of stoicism, and how that informs Roman attitudes on hero-worship and suspicion against Love. Today we're reading Theano's "Letter on Marriage and Fidelity", selections from Ovid's The Art of Love, Lucretius' argument against love from On the Nature of Things, and the first third of Cicero's De Amicitia. If you have questions or topic suggestions for Professor Kozlowski, e-mail him at profbkozlowski2@gmail.com To see what else Professor Kozlowski is up to, visit his webpage: https://professorkozlowski.wordpress.com/
This week's episode is sponsored by: If you're ready for a streaming service that offers new stories, new characters, and breathtaking sceneries every week, do what I did and get Acorn TV! Try Acorn TV free for 30 days, by going to Acorn dot TV and use my promo code knitmore. But you HAVE to enter the code in all lowercase letters. That's A-C-O-R-N dot T-V, code knitmore to get your first 30 days for free! Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(2:51) Genevieve worked on Jasmin's Ripple Crop Top Jasmin finished her Ripple Crop top by Jessie Maed Designs Gigi worked on the sleeve of the Rocky Coast cardigan, knitted the cuff Genevieve wound Artful Yarns,Fable,Color 96 Jasmin cast on her Rhinebeck sweater! Bare branches by Alana Dakos in Little Skein in the Big Wool, Targhee Sweater “Cider Donuts”. Jasmin's modifications so far include: Judy's magic cast on, knitting it in one piece, and German short rows. Gigi cast on the Bramwell shawl out of Tess Yarns Kitten for Grandma Nahid Jasmin is making progress on Sleeve #1 of her Theano test knit Jasmin re-swatched for the Pyramis test knit for Ainur Berkambayeva, starting it today In Stitches:(13:47) Genevieve wore her Gryffindor Hat and her Hearthstone Gigi:Halloween quilt Jasmin wore her Ripple Crop Top Events:(16:02) Stash Dash is being hosted on DISCORD until August 31st Jasmin has posted her Stash Dash spreadsheet here Bay Area Fiber Fair - JUNE 25 - SEPTEMBER 23, 2021 RHINEBECK! October 16-17 2021, and Indie Untangled ! Mother Knows Best:(19:46) Make a list and get it out of your head Accidentally Purloined this from Dr Gemma and the Cogknitive Podcast Genevieve mentions Jasmin's sloth notepad. When Knitting Attacks(30:52) Gigi managed to make mistakes on the Bramwell, and Forgot to take a picture Straw into Gold(35:35) Finished plying the Lisa Souza Ultrafine Australian Merino Roving (Emerald city) Spun some more of the white Sue Reuser cormo (“Farrah”) Knits in Space:(37:29) Trip to Black Squirrel in Berkeley! Zachary's Pizza! And Sew On: (42:55) Checked on class. Book is from Suzy Furrer that I bought years ago. Jasmin sewed masks! Using the Japanese Sewing Books pattern, added nose wires. Jasmin mentions adding nose wire to the masks
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Go to HelloFresh dot com slash knitmore14 and use code knitmore14 for up to 14 free meals, plus free shipping! Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:44) Genevieve wound Oink Pigments,Mystic DK,in Mother Knows Best x2 Jasmin finished her Nydia test knit for Vanessa Smith Designs, in Oink Pigments Mystic DK “Mother Knows Best Gigi cast on a Musselberg hat by Ysolda with the Oink Pigments "The Mane Event" Genevieve wound Oink Pigments,Targhee Sock, "The Mane Event" (for Gigi) Gigi:wove in ends on remaining socks and Musselburg Jasmin is making progress on Sleeve #1 of her Theano test knit Genevieve wound Tess Yarns, Kitten, in the color Sliver for the Bramwell shawl Jasmin finished the left front and is nearly done with the right front on her Ripple Crop top by Jessie Maed Designs Gigi worked on the forearm decreases of the sleeve of the Rocky Coast cardigan, In Stitches:(10:57) Genevieve wore her Anna Cardigan Genevieve wore her pink beanie she made herself Genevieve wore the scarf she wove herself Gigi:Halloween quilt Events:(12:23) Stash Dash is being hosted on DISCORD Jasmin has posted her Stash Dash spreadsheet here 2020 Summer Olympics are over Gigi mentions the Dressed podcast Bay Area Fiber Fair - JUNE 25 - SEPTEMBER 23, 2021 RHINEBECK! October 16-17 2021 Mother Knows Best:(15:45) If you're on a roll, roll with it If you Give a Mouse a Cookie Straw into Gold(24:12) Plied half of the Lisa Souza Ultrafine Australian Merino Roving (Emerald city) Blair Auclair's Ranch (Radicle Herbs) Knits in Space:(29:12) True Crime and Knit podcast with Saffiyyah (Jasmin guest hosts on this episode) Apple TV Wainwright Walks Coast to Coast And Sew On: (34:42) Genevieve went to Jo Anne's and got mask fabric Fabric has been washed
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices We'd like to tell you about one of our favorite makers: Little Skein Anne.Anne is a knitter, designer, artist and storyteller. Little Skein is her shop. She makes beautiful handmade knitting kits, hand-dyes yarn, illustrates project bags, and is slowly designing a handmade wardrobe in her San Francisco home studio. She inspires YOUR creative journey by sharing her own. Her yarn and kits support you to make beautiful things, including a more sustainable, nourishing and equitable world. Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! The Sandpiper Gift & Yarn Boutique and Hoof-to-Hanger Fiber Mill in Bridgman, MI. The Sandpiper offers artisan - designed gifts (local, regional, national artists and Fair Trade products). The boutique also offers their cottage industry fiber mill products - yarn, roving, corespun, batting, knit/woven garments, accessories, home goods and an exclusive Yarn and Fiber Club. Explore www.thesandpiper.biz! On the Needles:(0:46) Genevieve wound Oink Pigments, Mystic DK, in Mother Knows Best Jasmin finished the body of her Nydia test knit for Vanessa Smith Designs, in Oink Pigments Mystic DK “Mother Knows Best” Gigi finished the Musselberg hat by Ysolda with the orange and pink Oink yarn Genevieve wound Liza Souza, Hardtwist Merino, Elsa Blue Jasmin split for the fronts and backs on her her Ripple Crop top by Jessie Maed Designs Gigi: cast on a Vanilla is the new Black sock Genevieve wound Louet Gems, Gems Merino, in Apple Blossom Jasmin has finished the body of her Theano test knit, and has started the first sleeve. Gigi worked on the sleeve of the Rocky coast cardigan Jasmin sewed the pockets to the body of Genevieve's Pantastic Hoodie, sewed the steek (here it is on Instagram), steeked it (and posted to IGTV), knitted the hood, and blocked it again. Taught some young friends to knit Jasmin mentioned Brittany needles Rules of Knitting in Public In Stitches:(20:24) Genevieve wore her Bandit Cardigan, and the pink beanie she made herself Gigi: Cal King quilt, Halloween quilt Events:(22:48) Stash Dash is being hosted on DISCORD Jasmin has posted her Stash Dash spreadsheet here 2020 Summer Olympics will began on Friday, July 23, 2021 Bay Area Fiber Fair - JUNE 25 - SEPTEMBER 23, 2021 RHINEBECK! October 16-17 2021 Mother Knows Best:(25:44) Best practices when photographing a kid When Knitting Attacks:(32:00) Tantalus, but with tape measures. PanathenicGames!35:58) #PanatheKnit #TeamSasquatch: A little spinning for the Incredibles sweater set Got the Lisa Souza Ultrafine Australian Merino Roving (Emerald city) ready to ply Blair Auclair's Ranch (Radicle Herbs) Different switch for the Device Knits in Space:(41:20) School shopping (Genevieve) Did photo Shoot (Genevieve) Gentleman knitting at the Olympics And Sew On: (50:45) Jasmin picked up fabric for masks for Rex for school. Jasmin bought a Burda pants pattern
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! The Sandpiper Gift & Yarn Boutique and Hoof-to-Hanger Fiber Mill in Bridgman, MI. The Sandpiper offers artisan - designed gifts (local, regional, national artists and Fair Trade products). The boutique also offers their cottage industry fiber mill products - yarn, roving, corespun, batting, knit/woven garments, accessories, home goods and an exclusive Yarn and Fiber Club. Explore www.thesandpiper.biz! On the Needles:(0:40) Genevieve wound Seismic Yarns,Seismic Butter Sock, in "OOak" (Jasmin suspects it's a prototype of "Escape") Gigi finished the Musselberg hat by Ysolda with the blue Oink yarn (January's Yarn of the Month) Genevieve wound Oink Pigments, Dapper, in "Give Peeps a Chance" Gigi finished a sock for Andrew, knitted the foot up to toe decreases. Jasmin has finished the body of her Theano test knit, and is almost at the ribbing. Gigi cast on a Musselburg hat out of pink and orange (June's Yarn of the Month) Oink yarn, and is getting close to the decreases. Jasmin sewed the pockets to the body of Genevieve's Pantastic Hoodie, and picked up and knit the zipper facings. Genevieve is working on her Lisa Souza BFL "Spruce" colored Beanie In Stitches:(0:32) Genevieve wore her Zebra Socks, her Anna Cardigan, her Gryffindor Hat , and her pink beanie she made herself. Gigi used her Cal King quilt and her Halloween quilt. Jasmin wore her Blossom pullover. Events:(12:39) Stash Dash is being hosted on DISCORD Jasmin has posted her Stash Dash spreadsheet here 2020 Summer Olympics will began on Friday, July 23, 2021 Bay Area Fiber Fair - JUNE 25 - SEPTEMBER 23, 2021 RHINEBECK! October 16-17 2021 Mother Knows Best:(14:58) Keep wearing your masks. When Knitting Attacks:(23:16) Jasmin's Akerworks bobbin breaks WHILE she's spinning on it. PanathenicGames!(26:36) #PanatheKnit #TeamSasquatch: A little spinning for the Incredibles sweater set Lisa Souza Ultrafine Australian Merino Roving, some white cormo Blair Auclair's Ranch (Radicle Herbs) Knits in Space:(30:02) OYSTER SHUCKING! And Sew On: (40:13) Jasmin is hand sewing a buttonhole on her friend's Favorite Sweater Ray's Sewing
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! The Sandpiper Gift & Yarn Boutique and Hoof-to-Hanger Fiber Mill in Bridgman, MI. The Sandpiper offers artisan - designed gifts (local, regional, national artists and Fair Trade products). The boutique also offers their cottage industry fiber mill products - yarn, roving, corespun, batting, knit/woven garments, accessories, home goods and an exclusive Yarn and Fiber Club. Explore www.thesandpiper.biz! On the Needles:(0:46) Genevieve wound Artful Yarns,Fable,color 96 Jasmin is mostly up the body of her Ripple Crop top by Jessie Maed Designs Gigi: kitchenered three socks Jasmin is nearly done with the body of her Theano test knit, and is almost at the ribbing. Gigi did the decreases for the crown of the Musselberg hat by Ysolda with the blue Oink yarn Genevieve wound Oink Pigments, Dapper, in "Don't Touch The Hat" for Genevieve's Louise Belcher Bunny Ears hat Jasmin is nearly done with the body of the Nydia test knit for Vanessa Smith Designs, in Oink Pigments, Mystic DK, “Mother Knows Best” Genevieve wound Schaefer yarns Elaine at knitting group Gigi: working on a sock for Andrew, knitted the foot up to toe decreases Genevieve taught her best friend to knit Genevieve is working on her Lisa Souza BFL "Spruce" colored Beanie In Stitches:(12:31) Genevieve wore her Hogwarts Socks, Colorblock Pullover, and her pink beanie. Gigi slept under her Cal King quilt and Halloween quilt Events:(14:24) Stash Dash is being hosted on DISCORD Jasmin has posted her Stash Dash spreadsheet here 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece is over Bay Area Fiber Fair - JUNE 25 - SEPTEMBER 23, 2021 Fiberworld July 21-25! RHINEBECK! October 16-17 2021 Mother Knows Best:(19:20) Take a break, take a nap, take a breath. Time spent resting isn't wasted. When Knitting Attacks:(29:40) Gigi was working on a sock for Andrew. Jasmin mentioned Carson Demers ErgoIKnit Gigi talks about her Egg crate Genevieve was attacked by yarn she was winding Tour de Fleece(36:52) #PanatheKnit #TeamSasquatch: A little spinning for the Incredibles sweater set Lisa Souza Ultrafine Australian Merino Roving Blair Auclair's Ranch (Radicle Herbs) Different switch for the Device Knits in Space:(40:31) Shark week True Crime and Knit podcast! And Sew On: (50:45) Gigi has not figured out how to sit and work at sewing machine with one leg sticking out
This week's episode is sponsored by: Escape to Britain and beyond without leaving your seat. Try Acorn TV free for 30 days, by going to Acorn dot TV and use my promo code knitmore. That's A-C-O-R-N dot T-V, code knitmore to get your first 30 days for free! Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. Go to HelloFresh dot com slash knitmore14 and use code knitmore14 for up to 14 free meals, including free shipping! When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:40) Genevieve taught her Bestie how to knit! She used Brittany Needles, and Liza Souza, Hardtwist, in Robins Egg/Elsa Blue. Gigi is working on another pair of Vanilla is the new Black socks in some deep stash Regia silk. Genevieve is working on her Lisa Souza BFL "Spruce" colored Beanie Jasmin has finished the body, sleeves, pockets, and has blocked Genevieve's Pantastic Hoodie in Oink Pigments Dapper “Give Peeps a Chance”. Gigi is almost finished with the Musselberg hat by Ysolda with the blue January Year of Yarn color from Oink Pigments. Genevieve wound Artful Yarns, Fable, color 96 from Jasmin's deeeeep stash for a Ripple Crop Top by Jessie Maed Designs. Genevieve wound Subito Farms, Estabrook in "Marigold". Jasmin has finished the increases on the body of her Theano test knit, and is almost at the ribbing. Gigi: wants to work on the sleeves of the Rocky coast cardigan Gigi is working on a sock for Andrew, and has finished the gusset. In Stitches:(18:58) Genevieve wore her Hearthstone, and her pink beanie she made herself Gigi: A line skirt , Cal King quilt, Halloween quilt Events:(21:18) Stash Dash is being hosted on DISCORD Jasmin has posted her Stash Dash spreadsheet here 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece in July (Sat, Jun 26, 2021 – Sun, Jul 18, 2021) Bay Area Fiber Fair - JUNE 25 - SEPTEMBER 23, 2021 Fiberworld July 21-25! RHINEBECK! October 16-17 2021 Mother Knows Best:(25:25) You feel good when you wear your favorite clothes or clothes that you feel in love with. (From Genevieve) Tour de Fleece(31:09) #PanatheKnit #TeamSasquatch: A little spinning for the Incredibles sweater set Lisa Souza Ultrafine Australian Merino Roving Blair Auclair's Ranch (Radicle Herbs) Different switch for the Device Jasmin wound bobbins of Lisa Souza, Superfine Australian Merino, “Squash Blossom”, “marionberry”, “emerald city” Jasmin wound bobbins of Merino Corriedale from Mary the Sheep Genevieve assisted with bobbin winding ( Detached and Attached) Gigi spun on her Turkish drop spindle Knits in Space:(34:21) Book: Once Upon a Tale by Mercedes Lackey And Sew On: (37:40) Gigi 4 more A line skirts, they are in various states of construction Plan B for buying invisible zippers Jasmin wants to sew a button and buttonhole onto her friend's sweater.
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. Go to HelloFresh dot com slash knitmore14 and use code knitmore14 for up to 14 free meals, including free shipping! When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:46) Genevieve wound Oink Pigments, Mystic DK, in Mother Knows Best for Jasmin Jasmin is past the armhole divide on the Nydia test knit for Vanessa Smith Designs, in Oink Pigments Mystic DK “Mother Knows Best” Gigi: working on Vanilla is the new Black and she cast on another pair in some deep stash Regia silk Jasmin has finished the body of Genevieve's Pantastic Hoodie in Oink Pigments Dapper “Give Peeps a Chance”, and finished the cap of the first sleeve. Genevieve wound Oink Pigments, Dapper, Tumbling Turquoise Gigi finished the Musselberg hat by Ysolda with the Emily Ocker cast on, in Lolabean Yarn Co thunder and lightning. Genevieve wound Subito Farms, Estabrook, in “Marigold” Jasmin is still plugging along down the body of her Theano test knit. Gigi: dug out the Rocky coast cardigan and is working on a sleeve Genevieve wound Oink Pigments, Dapper, in "Dijon Vu” Genevieve is working on her Lisa Souza BFL "Spruce" colored Beanie Jasmin did some visible mending on a fast fashion favorite sweater for a friend Katrinkles mending loom Gigi: cast on a sock for Andrew, finished the heel flap Gigi: not sure what she wants to make out of the Oink yarn, trying to knit as it comes in. Jasmin suggested Treppenviertel In Stitches:(12:27) Genevieve wore her pink Beanie Gigi: A line skirt , Cal King quilt, Halloween quilt Jasmin wore her Rebound cowl, and the Flying Home Tank Events:(16:23) Stash Dash is being hosted on DISCORD Jasmin has posted her Stash Dash spreadsheet here 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece in July (Sat, Jun 26, 2021 – Sun, Jul 18, 2021) Fiberworld July 21-25! RHINEBECK! October 16-17 2021 Mother Knows Best:(21:25) Knit with the seasons. Knit summer stuff in the summer, winter stuff in winter. Strapless bra When Knitting Attacks:(27:34) Gigi: Rocky Coast cardigan One ply of two ply yarn was torn, or chewed. Tour de Fleece(29:58) #PanatheKnit #TeamSasquatch: A little spinning for the Incredibles sweater set Lisa Souza Ultrafine Australian Merino Roving Blair Auclair's Ranch (Radicle Herbs) Different switch for the Device Knits in Space:(32:44) Grocery run, Joanne's , Bar visit Book: Once Upon a Tale by Mercedes Lackey And Sew On: (50:45) Gigi cut 3 more A line skirts, they are in various states of construction Light Sticker with measurements When sewing attacks : Folded fabric for lining off the skirts and promptly cut the front on the selvage and cut the backs on the fold
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices We'd like to tell you about one of our favorite makers: Little Skein Anne.Anne is a knitter, designer, artist and storyteller. Little Skein is her shop. She makes beautiful handmade knitting kits, hand-dyes yarn, illustrates project bags, and is slowly designing a handmade wardrobe in her San Francisco home studio. She inspires YOUR creative journey by sharing her own. Her yarn and kits support you to make beautiful things, including a more sustainable, nourishing and equitable world. Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:40) Genevieve has been balling up yarn for Jasmin for Genevieve's Pantastic hoodie, out of Oink Pigments Give Peeps a Chance, Jasmin cast on and has semi-finished the body of Genevieve's Pantastic Hoodie in Oink Pigments Dapper “Give Peeps a Chance”. Custom Croquis site. Here's the image for the image that Genevieve filled in . Pattern from Ann Budd's Handy Book of Sweaters. Gigi: finished a pair of Vanilla is the new Black socks out of Oink yarn, and cast on another pair in some deep stash Regia silk. Jasmin has finished- finished with the Flying Home (the tucks tank) test knit for Ainur Berkambayeva in raw silk from TessYarns. On size 0s. Genevieve wound Oink yarn for Gigi in Peridont Genevieve wound Ladybug love from Oink Pigments for her Gramzie Gigi finished the Musselberg hat by Ysolda, in Lolabean Yarn Co thunder and lightning. Jasmin is still plugging along down the body of her Theano test knit. We mention Dr Gemma Genevieve is working on her Lisa Souza BFL "Spruce" colored Beanie Jasmin swatched and cast on the Nydia test knit for Vanessa Smith Designs, in Oink Pigments Mystic DK “Mother Knows Best” Gigi: dug out the Rocky Coast cardigan and picked up underarm stitches Genevieve worked on her hat at knitting group, and helped Gigi sort needles Jasmin found a solution for her Comfort Fade cardigan, to accommodate all the colors (the Lesley Tee shaping) Gigi: cast on a sock for Andrew and promised him a liqueur pie In Stitches:(23:33) Genevieve wore her pink beanie, Hearthstone, and the Zebra socks Jasmin wore her Dissent cardigan and the rolled brim black cloche Gigi :A line skirt, Cal King quilt, Halloween quilt . and the Vitamin D cardigan Events:(29:03) Stash Dash is being hosted on DISCORD 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece in July (Sat, Jun 26, 2021 – Sun, Jul 18, 2021) Fiberworld July 21-25! Mother Knows Best:(30:18) When knitting for others, watch their body language when they try it on. When Knitting Attacks:(35:11) Gigi: measurements have changed since she started the Rocky Coast cardigan Tour de Fleece(37:08) #panatheknit #Team Sasquatch Knits in Space:(38:06) Grocery run Knitting group in person Jaws at Kitchen Cinema And Sew On: (45:45) Gigi worked on the leopard print A-line skirt, and on the shiny A-line skirt
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:36) Genevieve finished her weaving project on the Schacht Cricket loom Jasmin is nearly finished with the Flying Home (the tucks tank) test knit for Ainur Berkambayeva in raw silk from TessYarns. Gigi finished the Musselberg hat by Ysolda with the Emily Ocker cast on. Cast on another one in LolaBean Yarn Co in Thunder and Lightning . We mention the Yarniacs Jasmin mentions the artist who does Persian and Arabic calligraphy Ocean by the Sea has the best wool wash. Genevieve is working on her Lisa Souza BFL "Spruce" colored Beanie Jasmin is still plugging along down the body of her Theano test knit. Genevieve is going to order a custom label. Gigi: all winter socks are washed In Stitches:(16:52) Genevieve wore her Hogwarts socks, and runway modeled her new hand woven scarf Gigi: wore her A line skirt Jasmin wore her Modern Art Pullover Events:(19:32) Stash Dash in May is hosted on DISCORD LINK 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece in July (Sat, Jun 26, 2021 – Sun, Jul 18, 2021) Fiberworld July 21-25! Mother Knows Best:(21:51) Put your feet up! When Knitting Attacks:(29:54) Gigi: Split a stitch 15 rows down on the Musselburg hat By Ysolda Dropped down and fixed it Pan-Athenic Games:(28:49) and Tour de Fleece training #panatheknit Team Sasquatch Week 6: Find your support system Knits in Space:(32:14) Dressed podcast Mr. Rogers neighborhood The Good Neighbor, Fred Rogers Biography. (Here's the audiobook link with LeVar Burton reading it.) Brian Posehn comedy And Sew On: (39:18) Why an A-line skirt ? Janome refused to sew. Singer was cooperative. Gigi sewed a few steps on the skirt, and did some cutting
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:38) Genevieve has been balling up yarn for Jasmin Jasmin finished the reversed Rebound cowl/poncho thing in LolaBean Yarn Co Diptera, contrasted with the Black Trillium Fibres Indra 2.0 gradient set. Gigi finished weaving in ends on all the socks Genevieve knitted a few stitches onto Jasmin's Rebound cowl. Jasmin cast on the tucks tank test knit for Ainur Berkambayeva in raw silk from TessYarns. On size 0s. Gigi finished the Musselberg hat by Ysolda with the Emily Ocker cast on, in Oink Pigments Deux or Dye, then, she cast on another one Genevieve wound her mom's Seismic yarn in the "Black Widow" colorway, and is working on her Lisa Souza BFL "Spruce" colored Beanie Jasmin is still plugging along down the body of her Theano test knit. In Stitches:(12:37) Genevieve wore her Hearthstone, coronation cardigan and the Zebra socks Gigi wore her A-line Skirt from the Beginning Clothing Construction class, and slept on a Burrito pillow case Events:(17:18) Stash Dash in May is hosted in other places besides ravelry. DISCORD LINK Juneteenth events with Abstract Fiber and Lady Dye Yarns June 19th 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece in July (Sat, Jun 26, 2021 – Sun, Jul 18, 2021) Fiberworld July 21-25! Mother Knows Best:(20:02) We are not powerLESS; we are powerFUL. Use your skills to make meaningful change Spray on Sidewalk Chalk When Knitting Attacks:(28:38) Jasmin's swatch lied Pan-Athenic Games (32:31) and Tour de Fleece Training: #panatheknit Team Sasquatch Week 5: Read directions, highlight size, update your Knitcompanion (ICLOUD SYNC!!!). Clean your wheel. Jasmin recommends Murphy's Oil Soap, and Wood Beams wood conditioner. Song of the Sirens from Brother Where Art Thou? (Didn't Leave Nobody But The Baby (From “O Brother, Where Art Thou” Soundtrack) Eva's instructions on the "classic ravelry" theme. Knits in Space:(35:12) In the Heights! And Sew On: (38:30) Sewing with Genevieve Gigi :Threaded serger with black thread. A line skirt in black on black leopard print. Invisible zipper. Kenneth d King video tutorial
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. Go to HelloFresh dot com slash knitmore12 and use code knitmore12 for 12 free meals, including free shipping! When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:44) Genevieve worked on Jasmin's victory speech for Stash Dash. Jasmin finished-finished her “Modern Art” pullover (cover sweater from the Art of Circular Yokes) in Magpie Fibers Swanky DK in “Bougie Beaver” for the #MagpieBeavalong2021 Gigi is weaving in ends on all the socks Genevieve has been balling up yarn for Jasmin Jasmin is nearly finished with the reversed Rebound cowl/poncho thing in LolaBean Yarn Co Diptera, contrasted with the Black Trillium Fibres Indra 2.0 gradient set. Jasmin mentions Lisa Souza's Polwarth/Silk in the "Styx" colorway that she knit her Jurek pullover in. Jasmin is swatching raw silk from TessYarns Jasmin mentions Tyvek Tags. Gigi finished two pairs of blue socks for high school class mate Genevieve knitted a few stitches onto Jasmin's Rebound cowl. Jasmin is past the armholes and is plugging along down the body of her Theano test knit. Gigi cast on the Musselberg hat by Ysolda with the Emily Ocker cast on, in Oink Pigments Deux or Dye Genevieve is working on her Lisa Souza BFL "Spruce" colored Beanie Jasmin blocked her worsted sock arms cardigan in Knitcircus Yarn Ringmaster in “We scare because we care” with the “Monstropolis” gradient for the sleeves In Stitches:(30:02) Genevieve wore her pink beanie, Hearthstone, coronation cardigan, the sweet impeachment hat, and the Griffindor scarf #MeMadeMay Gigi wore the Flannel shirt from clothing construction class, and night shirts, Halloween quilt She washed all the socks Events:(33:50) Stash Dash in May is hosted in other places besides ravelry. DISCORD LINK Juneteenth events with Abstract Fiber and Lady Dye Yarns June 19th 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece in July (Sat, Jun 26, 2021 – Sun, Jul 18, 2021) Fiberworld July 21-25! Mother Knows Best:(35:12) Plant seeds and nurture them to grow. (A little interest + support to try new things) Pan-Athenic Games and Tour de Fleece training:(47:04) #panatheknit Team Sasquatch Week 4: Hunt for errata Eva's instructions on the "classic ravelry" theme. Knits in Space:(51:59) Persian cooking!!!! Anything by Najmieh Batmanglij We also mention the Persian Cuisine cookbook. The Revenge Meatloaf recipe/post. We all like Amazon Soup cubes And Sew On: (50:45) Hemming Sam's pants. Ultimate 32 Piece Presser Feet Set by Madame Sews
This week's episode is sponsored by: Carry your creativity with Erin Lane Bags! Whether you show your fiber fandom with the woolly wonder Sheepleverse, or dive into history with the Curiosities collection, our project bags, totes, and hook and needle organizers are at the ready to keep your hobby happy. When was the last time your knitting yarn was a work of art? Infinite Twist produces one-of-a-kind semi-solid gradients featuring speckles, high-lights, low-lights, and gorgeous color transitions. From 700 y Giant Gradients to 200 y matching sock sets, Infinite Twist Gradients will hold your interest from cast on to bind off. See the currently available gradients at infinitetwist.com, or be the first to know when new colors are posted by signing up for our newsletter at infinitetwist.com/newsletter-signup Have you ever had to frog because you forgot a step several rows back? Or lost your spot because you dropped your magnet board or lost track with your highlighter tape? Instead of wrestling with paper, use the knitCompanion app. It keeps you on track so you can knit more and frog less. knitCompanion works with ALL your patterns and is available for Apple, Android, and Kindle Fire Devices Are you feeling dis-GRUNT-eled about your stash? Are you browsing Insta-HAM looking for knitting inspiration? Is color "kind of a PIG deal" in your life? Oink Pigments offers over one hundred forty PIG-ture perfect colorways to make you SQUEAL with delight. For a limited time only, bring home the bacon with code KNITMORE and get fifteen percent off in-stock yarns and fibers at oinkpigments dot com. Shop soon, because these pigs will FLY! On the Needles:(0:39) Gigi finished socks out of ancient Panda Silk. Genevieve worked on her hat. Jasmin finished her worsted sock arms cardigan in Knitcircus Yarn Ringmaster in “We scare because we care” with the “Monstropolis” gradient for the sleeves, except for the zipper facings and collar/hood. Gigi cast on three socks in the Vanilla is the new black pattern, from the Oink Yarn of the Month club in Sirens Call, Woolly Bully, and Live Laugh Lava. Jasmin cast on the Theano test knit for Subito Farms in their yarn. Jasmin has made some progress on the body of her “Modern Art” pullover (cover sweater from the Art of Circular Yokes) in Magpie Fibers Swanky DK in “Bougie Beaver” for the #MagpieBeavalong2021 Project file in Google Drive, project template pages. All public! In Stitches:(30:17) #MeMadeMay Genevieve wore her Hearthstone pullover, red and gold woven scarf and hat, Zebra socks, the Olivia hat, and her beanie. Gigi wore the Flannel shirt night shirts, Camp shirt, and A-line skirt from clothing construction class Jasmin wore her Carli, and her Odds and Ends cardigan, and her $27 Sweater Events:(36:23) Knitgirllls Fail- Along is ongoing Stash Dash in May is hosted in other places besides ravelry. 2020 Summer Olympics will begin on Friday, July 23, 2021 Tour de Fleece in July (Sat, Jun 26, 2021 – Sun, Jul 18, 2021) Fiberworld July 21-25! Mother Knows Best:(38:01) Know your own strength (Jasmin bent size 0 circs) Upholstery needle set When Knitting Attacks:(48:56) Genevieve found my missing flexi-flip. They mention the Couch covers Genevieve's weaving shuttles got tangled Panathenic Games and Tour de Fleece training:(48:40) #panatheknit Week 2: Buy/Unearth materials Knits in Space:(50:16) REX LEARNED TO KNIT! Rex will talk about his knitting in future episodes, Genevieve will interview Rex in Short Rows Straw into Gold:(52:16) Drive bands, dusting, and oiling. Japanese kitchen twine And Sew On: (54:34) Classes at Cañada continue, the classes are recorded. Learning techniques on how to apply elastic. Edging technique Picked final project, ordered silk and notions. I need to make a muslin and not ruin silk charmeuse.
Welcome to Episode 290 of the Yeukai Business Show! In this episode, Theano Evagelou and Trevor Stockwell discuss relationship strategies that work. So if you want to learn how to identify your pleasure points, create meaningful connections, and nurture relationships so you can have a fruitful and successful life, tune in now! In this episode, you'll discover: Advantages of women in leadershipRelationships and confidence in life and in workplaceCommunication strategies that always work Streamlining desires to nurture your business relationships About Theano Theano Evagelou is an expert in connections and relationships whose accomplishments include: A certified Authentic Tantra Practitioner and Sex, Love, & Relationship coach – heralded as a ‘modern-day love doctor’ for purpose driven leaders who want to be master their love life to the same expert level they have mastered their professional life.In Theano’s hands, these men and women finally learn that the answer to their intimacy problems is not to be found in endless, empty escape – but in extraordinary, delicious surrender. A surrender that Tia masterfully imparts with graceful guidance and uncompromising, penetrating courage.Practioner of Natural Healing and certified Sex, Love & Relationship coach – heralded as a ‘modern-day love doctor’ for alpha individuals who want to create their most powerful and sexually alive relationships. More Information Learn more about how you can improve your results with connections and relationships: https://www.thetheano.com https://www.instagram.com/theanoevagelou Thanks for Tuning In! Thanks so much for being with us this week. Have some feedback you'd like to share? Please leave a note in the comments section below! If you enjoyed this episode on relationship strategies that work , please share it with your friends by using the social media buttons you see at the bottom of the post. Don't forget to subscribe to the show on iTunes to get automatic episode updates for our "Yeukai Business Show!" And, finally, please take a minute to leave us an honest review and rating on iTunes. They really help us out when it comes to the ranking of the show and I make it a point to read every single one of the reviews we get. Please leave a review right now. Thanks for listening!
PRE-FACE: This episode is not for the 'faint of heart.' You're not going to want to skip through this conversation. Theano Evagelou is a certified Authentic Tantra Practitioner and Sex, Love, & Relationship Coach – heralded as a ‘modern-day love doctor’ guiding the love lives of entrepreneurs and leaders who want to connect to their sexual essence and create their most powerful and thriving relationships from the bedroom to the boardroom. She shares her personal journey to self-love after marriage ended and navigating through relationships. I ask her about sex, intimacy and we even openly discuss common challenges in relationships. In this episode we talk about: What intimacy means How to enhance your relationship How to add more pleasure to your life and much more.... Theano is an Authentic Tantra Practitioner (provisional) with the Institute of Authentic Tantra Education, the only Government-accredited professional-training institute using the Tibetan Five Element Tantric Practices for holistic healing. To learn more about Theano's practice and her programs, visit: www.thetheano.com
Youla Pandazis spoke to Mrs Theano Fygkiori, music leader, about the Saheti Music Department.
This episode was In Conversation With Theano Evagelou, Sex, Love and Relationship Coach who spoke with Jenn from Canada about losing sight of intimacy and what fun it can be to slow our roll, and opt for flirting and courting our desires. The chat flows into other areas as well. Theano's specialty is authentic Tantra practices within her coaching to guide busy, purpose-driven leaders to master their personal lives to the same expert level they’ve mastered their professional lives. Announcement: Jenn and Theano are launching a monthly morning series on March 16th, 2021. You will be able to grab the podcasts here or join the ladies live ON AIR from Facebook, YouTube or Twitch.
It’s funny how powerful symbols are, right? The Eiffel Tower makes you think of Paris, the Statue of Liberty is New-York, and the Trevi Fountain… is Rome of course! Just with one symbol, you can invoke multiple concepts and ideas. You probably know that symbols are omnipresent in mathematics — but did you know that they are also very important in statistics, especially probabilistic programming? Rest assured, I didn’t really know either… until I talked with Brandon Willard! Brandon is indeed a big proponent of relational programming and symbolic computation, and he often promotes their use in research and industry. Actually, a few weeks after our recording, Brandon started spearheading the revival of Theano through the JAX backend that we’re currently working on for the future version of PyMC3! As you guessed it, Brandon is a core developer of PyMC, and also a contributor to Airflow and IPython, just to name a few. His interests revolve around the means and methods of mathematical modeling and its automation. In a nutshell, he’s a Bayesian statistician: he likes to use the language and logic of probability to quantify uncertainty and frame problems. After a Bachelor’s in physics and mathematics, Brandon got a Master’s degree in statistics from the University of Chicago. He’s worked in different areas in his career – from finance, transportation and energy to start-ups, gov-tech and academia. Brandon particularly loves projects where popular statistical libraries are inadequate, where sophisticated models must be combined in non-trivial ways, or when you have to deal with high-dimensional and discrete processes. Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ (https://bababrinkman.com/) ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Brian Huey, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, Demetri Pananos, James Ahloy, Jon Berezowski, Robin Taylor, Thomas Wiecki, Chad Scherrer, Vincent Arel-Bundock, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho and Colin Carroll. Visit https://www.patreon.com/learnbayesstats (https://www.patreon.com/learnbayesstats) to unlock exclusive Bayesian swag ;) Links from the show: Brandon's website: https://brandonwillard.github.io/ (https://brandonwillard.github.io/) Brandon on GitHub: https://github.com/brandonwillard (https://github.com/brandonwillard) The Future of PyMC3, or "Theano is Dead, Long Live Theano": https://pymc-devs.medium.com/the-future-of-pymc3-or-theano-is-dead-long-live-theano-d8005f8a0e9b (https://pymc-devs.medium.com/the-future-of-pymc3-or-theano-is-dead-long-live-theano-d8005f8a0e9b) New Theano-PyMC library: https://github.com/pymc-devs/Theano-PyMC (https://github.com/pymc-devs/Theano-PyMC) Symbolic PyMC: https://pymc-devs.github.io/symbolic-pymc/ (https://pymc-devs.github.io/symbolic-pymc/) A Role for Symbolic Computation in the General Estimation of Statistical Models: https://brandonwillard.github.io/a-role-for-symbolic-computation-in-the-general-estimation-of-statistical-models.html (https://brandonwillard.github.io/a-role-for-symbolic-computation-in-the-general-estimation-of-statistical-models.html) Symbolic Math in PyMC3: https://brandonwillard.github.io/symbolic-math-in-pymc3.html (https://brandonwillard.github.io/symbolic-math-in-pymc3.html) Dynamic Linear Models in Theano: https://brandonwillard.github.io/dynamic-linear-models-in-theano.html (https://brandonwillard.github.io/dynamic-linear-models-in-theano.html) Symbolic PyMC Radon Example in PyMC4: https://brandonwillard.github.io/symbolic-pymc-radon-example-in-pymc4.html Support this podcast
List of Python Programming Language Libraries You Should Know in 2020. • Numpy. • TensorFlow. • Theano. • SciPy. • eli5 0.10.1. • PyTorch. • LightGBM. • Keras. • Pandas. --- Send in a voice message: https://anchor.fm/the-ddsry-show/message
The podcast where we get personal with notable Winnipeggers Nicolas Bueno and Kanen Ling are back this week with a new episode of Winnipeg's Finest! Today's guest is Makeup Artist and entrepreneur Taina Theano, who talks about a lot of race-related issues with us. Catch a new episode every Monday and Friday! We are growing so fast and appreciate every single fan and act of support and appreciation! Make sure to get more content by following us on social media: Twitter and Instagram: @wpgsfinestpod Unity Underwear: https://unityunderwear.com/ Use our code "WPGSFINEST" for 20% off all products!! JELLYFISH FLOAT SPA: CODE FOR 15% OFF EVERY FLOAT: "wpgsfinest" Instagram: @jellyfishfloatspa Twitter: @floatwinnipeg website: jellyfishfloatspa.com (204) 294-9890 894 St. Mary's Interviewee's social media: Instagram: @tainatheano @taina_mua @shop.tainascloset BEATS: Kav Gandhi: (https://soundcloud.com/kavgandhi) --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/nicolas-bueno/support
Montréal isn't just a beautiful city, it's also a hotbed of activity for deep learning. Fueled by ambitious entrepreneurs and some ground-breaking research coming out of the University of Montréal, the city is at the forefront of machine learning, deep learning, and artificial intelligence. In this episode, Jon Prial is joined by Georgian Partners' Ben Wilde for a fascinating discussion with two machine learning practitioners: Nicolas Chapados, Chief Science Officer at Imagia, and Jean-François Gagné, Entrepreneur in Residence at Real Ventures. Learn from these experts about what it takes to successfully leverage this technology in your own business. You'll hear about: Why Montréal is one of the world's most dynamic cities for machine learning (0:50) The types and stage of companies that Real Ventures typically invests in (7:51) What it takes to incorporate machine learning into an early stage company (8:53) Data acquisition and the transferability of deep learning models to other domains (12:04) The University of Montréal's use of models for and research around image analysis (15:40) How the University's research is coming to market and the opportunities that creates (19:17) The availability of data and its implications and opportunities are for early-stage companies (22:10) How deep learning can facilitate smarter decision-making (24:57) What the open sourcing of Google's TensorFlow means for deep learning (27:40) Typical applications for Theano (29:45)
En este capítulo presento qué son los Módulos, cómo se usan, los disponibles directamente con el lenguaje de programación Python, y una mención a las más destacables disponibles en la comunidad. . Pagina de información oficial: https://docs.python.org/3/library/index.html . Modulos de Python: TIME, DATETIME, RANDOM, MATH, STATISTICS, OS, OS.PATH, PATHLIB, SYS, SQLITE3, HASHLIB, CSV, GZIP, ZLIB, BZ2, LZMA, ZIPFILE, TARFILE, TKINTER,... . Módulos para Python: NumPy, SciPy, SymPy, BioPython, SQLAlchemi, Colorama, wxPython, PyQT, PyGTK, Kivy, Matplotlib, Seaborn, Bokeh, PyGame, PyGlet, Twisted, Scrapy, NLTK, Request, Pillow, Keras, Pytorch, Scikit-Learn, Pandas, Theano, TensorFlow,... . Aquí tenéis mi página web: https://unosycerospatxi.wordpress.com/ . UN SALUDO!!!!! Espero que os guste!!!
This summer (aka Australian winter) a new Cloud Region was announced in Australia and today Francesc and Mark talk to two Australian engineers, Andrew Walker founder of 3wks and Graham Polley, about how this new region has changed the way they think about the cloud down under. About Andrew Walker Andrew is the founder of 3wks who have delivered 190 projects on Google Cloud platform for enterprise customers in Australia. He loves everything serverless, from App Engine through to BigQuery. About Graham Polley Graham is a senior software engineer based out of Melbourne Australia, and works for Shine Solutions. Shine are a enterprise digital consultancy with offices in Melbourne & Sydney. Being an official Google Developer Expert, he's passionate about promoting the adoption of cloud technologies into software development, and regularly blogs and gives presentations. He has extensive experience in building big data solutions for clients using the Google technology stack, and in particular with BigQuery & Dataflow. Graham works very closely with the Google cloud engineering teams in the US, where he is a member of their cloud platform trusted tester program, and the solutions he helps build are used as internal exemplars of developer use cases. Cool things of the week How we built a brand new bank on GCP and Cloud Spanner: Shine blog post Now shipping: Compute Engine machine types with up to 96 vCPUs and 624GB of memory announcement Google Cloud Dataprep - Data Handling Made Easier Medium Interview Sydney Cloud Region docs Google Cloud Platform expands to Australia with new Sydney region - open now announcement Google Cloud Platform Geography and Regions docs Google Cloud Dataflow docs Google BigQuery docs Question of the week Is Tensorflow good for general math computation? Yes! It's great for any linear algebra programs. Linear Algebra Shootout: NumPy vs. Theano vs. TensorFlow blog post Where can you find us next? Francesc just released the second part of this #justforfunc code review. Next week he will be presenting at Go Meetup London, Velocity London, and Google Cloud Summit Paris. Mark is heading to Australia for GDG Devfest Melbourne and Game Connect Asia Pacific and will be hanging out at Unite Melbourne and PAX Australia.
Taro Minowa さんをゲストに迎えて、ボット、機械学習、AI などについて話しました。 Show Notes seq2seq の chatbot を日本語で動かしてみた - Higepon’s blog ひげみbot (@higepon_bot) Convolutional neural network Sequence-to-Sequence Models ゼロから作るDeep Learning ―Pythonで学ぶディープラーニングの理論と実装 TensorFlow Keras Theano Chainer 意味分からない。最初からKeras使った方が良くない?流石日本人。Chainer好きすぎでしょ。 MeCab: Yet Another Part-of-Speech and Morphological Analyzer りんな Twitter taught Microsoft’s AI chatbot to be a racist asshole deepmind/sonnet: TensorFlow-based neural network library FaceApp apologizes for building a racist AI Google Photos labeled black people 'gorillas' 新海誠監督の映画から無断使用 「Everfilter」をアニメ会社が調査 Is Expensify using Mechanical Turk for reading my receipts? Introducing Echo Look - Hands-Free Camera and Style Assistant Your Samsung TV is eavesdropping on your private conversations Google Home now supports multiple users Google shuts down Burger King's cunning TV ad Facebook is developing a way to read your mind
Languages & frameworks comparison. Languages: Python, R, MATLAB/Octave, Julia, Java/Scala, C/C++. Frameworks: Hadoop/Spark, Deeplearning4J, Theano, Torch, TensorFlow. ocdevel.com/mlg/10 for notes and resources
Machine learning is fast becoming a part of our lives. From the order in which your search results and news feeds are ordered to the image classifiers and speech recognition features on your smartphone. Machine learning may even have had a hand in choosing your spouse or driving you to work. As with cars, only the mechanics need to understand what happens under the hood, but all drivers need to know how to operate the steering wheel. Listen to this podcast to learn how to interact with machines that can learn, and about the implications for humanity. My guest is Dr. Pedro Domingos, Professor of Computer Science at Washington University. He is the author or co-author of over 200 technical publications in machine learning and data mining, and the author of my new favourite book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Here’s the outline of this interview with Dr. Pedro Domingos, PhD: [00:01:55] Deep Learning. [00:02:21] Machine learning is affecting everyone's lives. [00:03:45] Recommender systems. [00:03:57] Ordering newsfeeds. [00:04:25] Text prediction and speech recognition in smart phones. [00:04:54] Accelerometers. [00:04:54] Selecting job applicants. [00:05:05] Finding a spouse. [00:05:35] OKCupid.com. [00:06:49] Robot scientists. [00:07:08] Artificially-intelligent Robot Scientist ‘Eve’ could boost search for new drugs. [00:08:38] Cancer research. [00:10:27] Central dogma of molecular biology. [00:10:34] DNA microarrays. [00:11:34] Robb Wolf at IHMC: Darwinian Medicine: Maybe there IS something to this evolution thing. [00:12:29] It costs more to find the data than to do the experiment again (ref?) [00:13:11] Making connections people could never make. [00:14:00] Jeremy Howard’s TED talk: The wonderful and terrifying implications of computers that can learn. [00:14:14] Pedro's TED talk: The Quest for the Master Algorithm. [00:15:49] Craig Venter: your immune system on the Internet. [00:16:44] Continuous blood glucose monitoring and Heart Rate Variability. [00:17:41] Our data: DUTCH, OAT, stool, blood. [00:19:21] Supervised and unsupervised learning. [00:20:11] Clustering dimensionality reduction, e.g. PCA and T-SNE. [00:21:44] Sodium to potassium ratio versus cortisol. [00:22:24] Eosinophils. [00:23:17] Clinical trials. [00:24:35] Tetiana Ivanova - How to become a Data Scientist in 6 months a hacker’s approach to career planning. [00:25:02] Deep Learning Book. [00:25:46] Maths as a barrier to entry. [00:27:09] Andrew Ng Coursera Machine Learning course. [00:27:28] Pedro's Data Mining course. [00:27:50] Theano and Keras. [00:28:02] State Farm Distracted Driver Detection Kaggle competition. [00:29:37] Nearest Neighbour algorithm. [00:30:29] Driverless cars. [00:30:41] Is a robot going to take my job? [00:31:29] Jobs will not be lost, they will be transformed [00:33:14] Automate your job yourself! [00:33:27] Centaur chess player. [00:35:32] ML is like driving, you can only learn by doing it. [00:35:52] A Few Useful Things to Know about Machine Learning. [00:37:00] Blood chemistry software. [00:37:30] We are the owners of our data. [00:38:49] Data banks and unions. [00:40:01] The distinction with privacy. [00:40:29] An ethical obligation to share. [00:41:46] Data vulcanisation. [00:42:40] Teaching the machine. [00:43:07] Chrome incognito mode. [00:44:13] Why can't we interact with the algorithm? [00:45:33] New P2 Instance Type for Amazon EC2 – Up to 16 GPUs. [00:46:01] Why now? [00:46:47] Research breakthroughs. [00:47:04] The amount of data. [00:47:13] Hardware. [00:47:31] GPUs, Moore’s law. [00:47:57] Economics. [00:48:32] Google TensorFlow. [00:49:05] Facebook Torch. [00:49:38] Recruiting. [00:50:58] The five tribes of machine learning: evolutionaries, connectionists, Bayesians, analogizers, symbolists. [00:51:55] Grand unified theory of ML. [00:53:40] Decision tree ensembles (Random Forests). [00:53:45] XGBoost. [00:53:54] Weka. [00:54:21] Alchemy: Open Source AI. [00:56:16] Still do a computer science degree. [00:56:54] Minor in probability and statistics.
What can we teach machines? And what can they teach us? Chantal, Kenneth & Len are joined by Guillaume Belrose to chat broadly about machine learning. Guillaume is currently living in Johannesburg, but hails from the Caribbean. After having studied in France he went onto an internship at HP in Bristol, before moving to Durban and finally up to the city of gold. Guillaume is very involved in the community, being a regular attendee at meetups and having presented at local conferences like Devconf and Tech4Africa. We had a great time meandering through the field of machine learning, talking about everything from machines learning to play games (and win), to how big companies like Google apply machine learning to their data centers to save millions of dollars in cooling expenses to interesting medical applications. Guillaume also offered Kenneth some tips for getting started, including some great online courses to follow. He even gave some examples of practical tools that could be built to help us in our day jobs! We also round it out with a mention of how TensorFlow is being used in farming! Follow Guillaume online: * https://twitter.com/gbelrose * https://github.com/kafecho * http://kafecho.blogspot.co.za/ Some resources mentioned during the show: * This AI "solves" Super Mario Bros and other NES games - http://arstechnica.com/gaming/2013/04/this-ai-solves-super-mario-bros-and-other-classic-nes-games/ * Deep Reinforcement Learning: Pong from Pixels * Google uses DeepMind AI to cut data center energy bills - http://www.theverge.com/2016/7/21/12246258/google-deepmind-ai-data-center-cooling * Tasting the Light: Device Lets the Blind "See" with Their Tongues - http://www.scientificamerican.com/article/device-lets-blind-see-with-tongues/ * TensorFlow - https://www.tensorflow.org/ * Theano - http://deeplearning.net/software/theano/ * Theano - https://deepmind.com/ * Big Data's Mathematical Mysteries - https://www.quantamagazine.org/20151203-big-datas-mathematical-mysteries/ * Machine Learning on Coursera - https://www.coursera.org/learn/machine-learning * IBM's artificial phase-change neurons - http://arstechnica.com/gadgets/2016/08/ibm-phase-change-neurons/ * Cucumber sorting with TensorFlow - http://qz.com/771921/the-ultimate-promise-of-artificial-intelligence-lies-in-sorting-cucumbers/ And finally our picks Guillaume: * Batman and Ethan - http://www.bbc.co.uk/programmes/b0709v4m * Neural Networks and Deep Learning - http://neuralnetworksanddeeplearning.com/ * Chasing Cats - http://myplace.frontier.com/~r.bond/cats/cats.htm Len: * Yetibot - yetibot.com Chantal: * DESIGN for HACKERS - http://designforhackers.com/ Kenneth: * Deep Learning for Java - http://deeplearning4j.org/ * The Man Who Knew Infinity - https://en.wikipedia.org/wiki/The_Man_Who_Knew_Infinity_(film) Thanks for listening! Stay in touch: * Website & newsletter - https://zadevchat.io * Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/ * Suggestions and feedback - https://github.com/zadevchat/ping * Subscribe and rate in iTunes - http://bit.ly/zadevchat-itunes
Deepjazz is a project from Ji-Sung Kim, a computer science student at Princeton University. It is built using Theano, Keras, music21, and Evan Chow's project jazzml. Deepjazz is a computational music project that creates original jazz compositions using recurrent neural networks trained on Pat Metheny's "And Then I Knew". You can hear some of deepjazz's original compositions on soundcloud.
Après Nipdev 37 où nous avons vu la théorie des algorithmes de Machine Learning Antoine nous décrit quelques librairies Pythons utilisées pour implémenter ces algorithmes.
In episode sixteen we chat with Danny Tarlow of Microsoft Research Cambridge (in the UK not MA). Danny (along with Chris Maddison and Tom Minka) won best paper at NIPS 2014 for his paper A* Sampling. We talk with him about his work in applying machine learning to sports and politics. Plus we take a listener question on making real time predictions using machine learning, and we demystify backpropagation. You can use Torch, Theano or Autograd to explore backprop more.
McDonald's si vaccinarea http://imgur.com/SB8cVmn Skeptic's Guide to the Universe sunt dați în judecată http://www.theskepticsguide.org/legaldefense Pericolele lipsei de scepticism Mai bine moartă decât să facă avort http://adevarul.ro/locale/iasi/o-tanara-fost-pas-moarte-pastorul-i-a-interzis-faca-avort-medicii-trag-semnal-alarma-nu-trebuie-intoarcem-evul-mediu-1_54378cc00d133766a8d196ec/index.html Femei în știință: Theano http://en.wikipedia.org/wiki/Theano_(philosopher) Subiecte: Religiozitatea este în corelație inversă cu inovația (Ascultă doar acest segment aici) http://www.motherjones.com/environment/2014/09/religion-quashes-innovation-patents http://www.princeton.edu/~rbenabou/papers/Religion%20December%201i_snd1.pdf ... și uneori în conflict cu natura http://spp.sagepub.com/content/early/2011/08/15/1948550611420303 Târgul ...continue reading "Ep.101 – Andulația și inovația"
İnsan esasen ne erkektir ne de dişi. Cinsiyetin farklı olmasının amacı, cinse özgü biçim farkını oluşturmak olmayıp yalnızca üremeye yarar... ” (Marie Le Jars de Gournay: Kadınlarla Erkeklerin Eşitliği Üzerine) Bu satırlar, Rönesans döneminin filozoflarından; Marie Le Jars de Gournay’a ait. Bu ismi belki duymamış olabilirsiniz. Gournay, erkek egemen bir dönemde geri planda kalan kadın düşünürlerden sadece birisi. Günümüzde kadın ve erkek eşitliğini savunan bizler, kadınların pek çok konuda ön planda olmasını savunuyoruz fakat tarihte yolculuk yaptığımızda savunduğumuz bu fikirlerden uzak yaşantılar karşımıza çıkıyor. Okuyacağınız bu yazı dizisinde felsefe dünyasının kadın düşünürlerine kısaca değineceğiz. Bilim dünyasına baktığımızda ünlü fizikçi Marie Curie gibi az da olsa kadın bilim insanları karşımıza çıkıyor. Peki, felsefe dünyasına baktığımızda durum nasıl acaba? Aristotales, Platon, Descartes… Filozoflar dediğimizde aklımıza gelen ilk isimlerin ortak bir özelliği var; hepsinin erkek olması. Bunun nedenini, düşünebilmenin erkeklere atfedilen bir yetenek olması ile değil, kadınların düşüncelerini erkekler kadar sistematik bir şekilde aktarabilmek için yeterli zaman ve imkânlara sahip olamaması şeklinde açıklamamız mümkündür. Diğer bir neden de kadınların ürettikleri belgelere erkeklerden daha az özen gösterilmesine bağlı olarak savundukları fikirlerin yitip gitmesi olarak açıklanabilir. Daha önce yayınladığımız “Cadılık mı kötü yoksa insanlık mı? Avrupa’da cadı (kadın) avları” yazısında, kadının cadı olarak yakılmasının ardındaki nedenler üzerinde dururken, kadının nasıl ikinci plana itildiğini ifade etmiştik. Aslında bu yazı da kadın olmanın erkeklere göre daha zor koşullarda mücadele etmek olduğunu anlatıyor. Bu konuda araştırma yaparken faydalandığım en önemli kaynağım; “Kadın Filozoflar Tarihi” isimli eserde, toplamda 44 kadın filozofun hayatından kesitlere ulaşabildim. Bu düşünürlerin tamamını ele almaktan ziyade dikkat çekici isimler üzerinde durmak istiyorum. Kadın filozofları ele alırken kuşkusuz, dönemler bazında incelemek daha iyi olacaktır. Antik Çağ’ın Güçlü Kadınları: Felsefenin başlangıcı pek çoğumuzun bildiğin gibi Antik Çağ dönemine dayanmaktadır. Yaşadığı hayatı sorgulama, insan ve dünyayı nitelendirme çabaları ilk bu dönemde, Eski Yunan’da başlar. Bu dönemde hem kadın hem erkek filozoflar için önemli sorular vardır, şöyle ki insan neden dünyaya gönderilmiştir, dünyadaki görevi nedir, düşünme ve eylem ilişkisi nedir?… vb. gibi. Bu sorulara bağlı olarak oluşan düşünceler felsefenin temeli olarak tarih sayfalarında yerini almıştır. Krotonlu Theano ve eşi Pisagor'un temsili bir resmi. Daha önce de belirttiğimiz gibi bu dönemin ünlü düşünürleri denildiğinde karşımıza erkek düşünürler çıkmaktadır ama bu dönemde düşünce dünyasının önemli kadın isimleri de var olmuştur. Bu isimlere ilk vereceğimiz örnek; Krotonlu Theano’dur. Krotonlu Theano en ünlü Pisagorcu kadın olarak tarihteki yerini almıştır. Neden Pisagorcu olduğunu belirtirsek; ilk kadın düşünürlerin Pisagor’un çevresinden çıktığı inancı vardır.Bu çevredeki düşünürlerin, onun matematik bilgilerini ve felsefeye dair düşüncelerinin destekleyicisi ve yayıcısı olduğu kabul edilmektedir. İşte, Krotonlu Theano da bu düşünürlerden birisidir. İÖ. 550 yılından sonra yaşayan Theano, aynı zamanda Pisagor’un eşidir. Eşi gibi matematiğe meraklı olan Theano Pisagor’dan felsefe dersleri de almış ve eşinin ölümünden sonra Pisagor Okulu’nu yönetmiştir. Peki, Krotonlu Theano’nun düşünceleri nasıldır? Theano’ya göre, ruh yeniden doğacaktır, bunun için kişi erdemli bir hayat sürmelidir. Sırf madde diye bir şey yoktur, ruh ön planda olmalıdır. Matematik ve müzik önemlidir çünkü ikisinde de sayılar vardır. Öyle ki sayılar düzeni sağlayan tek unsurdur. Theano Pisagor Okulu’nda kızlara ders vermiştir. Bu derslerinin büyük bir bölümünü ahlak üzerinedir.