Podcasts about gradio

  • 25 podcasts
  • 240 episodes
  • 49m average duration
  • Infrequent episodes
  • Latest episode: Jan 4, 2025

Popularity trend: 2017–2024 (chart)



Latest podcast episodes about gradio

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the NYC AI Engineer Summit, focused on Agents at Work, are open!

When we first started Latent Space, in the lightning round we'd always ask guests: "What's your favorite AI product?" The majority would say Midjourney. The simple UI of prompt → very aesthetic image turned it into a $300M+ ARR bootstrapped business as it rode the first wave of AI image generation.

In open source land, Stable Diffusion was congregating around AUTOMATIC1111 as the de facto web UI. Unlike Midjourney, which offered some flags but was mostly prompt-driven, A1111 let users play with a lot more parameters, supported additional modalities like img2img, and allowed users to load in custom models. If you're interested in some of the SD history, you can look at our episodes with Lexica, Replicate, and Playground.

One of the people involved with that community was comfyanonymous, who was also part of the Stability team in 2023. He decided to build an alternative called ComfyUI, now one of the fastest growing open source projects in generative images and the preferred partner for folks like Black Forest Labs's Flux Tools on Day 1. The idea behind it was simple: "Everyone is trying to make easy to use interfaces. Let me try to make a powerful interface that's not easy to use."

Unlike its predecessors, ComfyUI does not have an input text box. Everything is based around the idea of a node: there's a text input node, a CLIP node, a checkpoint loader node, a KSampler node, a VAE node, etc. While daunting for simple image generation, the tool is amazing for more complex workflows, since you can break down every step of the process and then chain many of them together rather than manually switching between tools. You can also restart execution halfway instead of from the beginning, which can save a lot of time when using larger models.

To give you an idea of some of the new use cases that this type of UI enables:

* Sketch something → generate an image with SD from the sketch → feed it into SD Video to animate
* Generate an image of an object → turn it into a 3D asset → feed it into interactive experiences
* Input audio → generate audio-reactive videos

Their Examples page also includes some of the more common use cases like AnimateDiff, etc. They recently launched the Comfy Registry, an online library of different nodes that users can pull from rather than having to build everything from scratch. The project has >60,000 GitHub stars, and as the community grows, some of the projects that people build have gotten quite complex.

The most interesting thing about Comfy is that it's not a UI, it's a runtime. You can build full applications on top of image models simply by using Comfy. You can expose Comfy workflows as an endpoint and chain them together just like you chain a single node (see the sketch after the timestamps below). We're seeing the rise of AI Engineering applied to art.

Major Tom's ComfyUI Resources from the Latent Space Discord

Major shoutouts to Major Tom on the LS Discord, who is an image generation expert and offered these pointers:

* "best thing about comfy is the fact it supports almost immediately every new thing that comes out - unlike A1111 or forge, which still don't support flux cnet for instance. It will be perfect tool when conflicting nodes will be resolved"
* AP Workflows from Alessandro Perili are a nice example of an all-in-one train-evaluate-generate system built atop Comfy
* ComfyUI YouTubers to learn from: @sebastiankamph, @NerdyRodent, @OlivioSarikas, @sedetweiler, @pixaroma
* ComfyUI nodes to check out:
* https://github.com/kijai/ComfyUI-IC-Light
* https://github.com/MrForExample/ComfyUI-3D-Pack
* https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait
* https://github.com/pydn/ComfyUI-to-Python-Extension
* https://github.com/THtianhao/ComfyUI-Portrait-Maker
* https://github.com/ssitu/ComfyUI_NestedNodeBuilder
* https://github.com/longgui0318/comfyui-magic-clothing
* https://github.com/atmaranto/ComfyUI-SaveAsScript
* https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID
* https://github.com/AIFSH/ComfyUI-FishSpeech
* https://github.com/coolzilj/ComfyUI-Photopea
* https://github.com/lks-ai/anynode
* Sarav: https://www.youtube.com/@mickmumpitz/videos (applied stuff)
* Sarav: https://www.youtube.com/@latentvision (technical, but infrequent)
* look for a ComfyUI node for https://github.com/magic-quill/MagicQuill
* "Comfy for Video" resources:
* Kijai (https://github.com/kijai) pushing out support for Mochi, CogVideoX, AnimateDiff, LivePortrait etc.
* ComfyUI node support like LTX https://github.com/Lightricks/ComfyUI-LTXVideo , and HunyuanVideo
* FloraFauna AI
* Communities: https://www.reddit.com/r/StableDiffusion/, https://www.reddit.com/r/comfyui/

Full YouTube Episode

As usual, you can find the full video episode on our YouTube (and don't forget to like and subscribe!)

Timestamps

* 00:00:04 Introduction of hosts and anonymous guest
* 00:00:35 Origins of ComfyUI and early Stable Diffusion landscape
* 00:02:58 Comfy's background and development of high-res fix
* 00:05:37 Area conditioning and compositing in image generation
* 00:07:20 Discussion on different AI image models (SD, Flux, etc.)
* 00:11:10 Closed source model APIs and community discussions on SD versions
* 00:14:41 LoRAs and textual inversion in image generation
* 00:18:43 Evaluation methods in the Comfy community
* 00:20:05 CLIP models and text encoders in image generation
* 00:23:05 Prompt weighting and negative prompting
* 00:26:22 ComfyUI's unique features and design choices
* 00:31:00 Memory management in ComfyUI
* 00:33:50 GPU market share and compatibility issues
* 00:35:40 Node design and parameter settings in ComfyUI
* 00:38:44 Custom nodes and community contributions
* 00:41:40 Video generation models and capabilities
* 00:44:47 ComfyUI's development timeline and rise to popularity
* 00:48:13 Current state of the ComfyUI team and future plans
* 00:50:11 Discussion on other Comfy startups and potential text generation support
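The "Comfy as a runtime" point in the notes above is easy to see in practice: the UI itself just queues node graphs against an HTTP endpoint, and you can do the same from a script. A minimal sketch, assuming a local ComfyUI server on its default port and a graph exported from the UI via "Save (API Format)"; the node id "6" is a placeholder that depends on your exported graph:

```python
# Minimal sketch: driving ComfyUI as a runtime rather than a UI.
import json
import requests

with open("workflow_api.json") as f:
    workflow = json.load(f)  # node graph keyed by node id

# Override an input on a node, e.g. the prompt text of a CLIPTextEncode node.
workflow["6"]["inputs"]["text"] = "a mountain at sunrise, a fox in the foreground"

# Queue the graph for execution, exactly as the UI does.
resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
print(resp.json())  # contains a prompt_id
```

The returned prompt_id can then be polled via the server's /history endpoint to fetch outputs once the graph finishes, which is how workflows get chained into larger applications.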
Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

swyx [00:00:12]: Hey everyone, we are in the Chroma Studio again, but with our first ever anonymous guest, Comfy Anonymous, welcome.

Comfy [00:00:19]: Hello.

swyx [00:00:21]: I feel like that's your full name, you just go by Comfy, right?

Comfy [00:00:24]: Yeah, well, a lot of people just call me Comfy, even when they know my real name. Hey, Comfy.

Alessio [00:00:32]: Swyx is the same. You know, not a lot of people call you Shawn.

swyx [00:00:35]: Yeah, you have a professional name, right, that people know you by, and then you have a legal name. Yeah, it's fine. How do I phrase this? I think people who are in the know, know that Comfy is like the tool for image generation and now other multimodality stuff. I would say that when I first got started with Stable Diffusion, the star of the show was Automatic1111, right? And I actually looked back at my notes from 2022-ish, like Comfy was already getting started back then, but it was kind of like the up and comer, and your main feature was the flowchart. Can you just kind of rewind to that moment, that year and like, you know, how you looked at the landscape there and decided to start Comfy?

Comfy [00:01:10]: Yeah, I discovered Stable Diffusion in 2022, in October 2022. And, well, I kind of started playing around with it. Yes, I, and back then I was using Automatic, which was what everyone was using back then. And so I started with that because I had, it was when I started, I had no idea like how Diffusion works. I didn't know how Diffusion models work, how any of this works, so.

swyx [00:01:36]: Oh, yeah. What was your prior background as an engineer?

Comfy [00:01:39]: Just a software engineer. Yeah. Boring software engineer.

swyx [00:01:44]: But like any, any image stuff, any orchestration, distributed systems, GPUs?

Comfy [00:01:49]: No, I was doing basically nothing interesting. Crud, web development? Yeah, a lot of web development, just, yeah, some basic, maybe some basic like automation stuff. Okay. Just. Yeah, no, like, no big companies or anything.

swyx [00:02:08]: Yeah, but like already some interest in automations, probably a lot of Python.

Comfy [00:02:12]: Yeah, yeah, of course, Python. But I wasn't actually used to like the Node graph interface before I started Comfy UI. It was just, I just thought it was like, oh, like, what's the best way to represent the Diffusion process in the user interface? And then like, oh, well. Well, like, naturally, oh, this is the best way I've found. And this was like with the Node interface. So how I got started was, yeah, so basically October 2022, just like I hadn't written a line of PyTorch before that. So it's completely new. What happened was I kind of got addicted to generating images.

Alessio [00:02:58]: As we all did. Yeah.

Comfy [00:03:00]: And then I started. I started experimenting with like the high-res fix in auto, which was for those that don't know, the high-res fix is just since the Diffusion models back then could only generate that low-resolution. So what you would do, you would generate low-resolution image, then upscale, then refine it again. And that was kind of the hack to generate high-resolution images. I really liked generating. Like higher resolution images. So I was experimenting with that. And so I modified the code a bit. Okay. What happens if I, if I use different samplers on the second pass, I was edited the code of auto. So what happens if I use a different sampler? What happens if I use a different, like a different settings, different number of steps? And because back then the. The high-res fix was very basic, just, so. Yeah.

swyx [00:04:05]: Now there's a whole library of just, uh, the upsamplers.

Comfy [00:04:08]: I think, I think they added a bunch of, uh, of options to the high-res fix since, uh, since, since then. But before that was just so basic. So I wanted to go further. I wanted to try it. What happens if I use a different model for the second, the second pass? And then, well, then the auto code base was, wasn't good enough for. Like, it would have been, uh, harder to implement that in the auto interface than to create my own interface. So that's when I decided to create my own.
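The "high-res fix" described above is a two-pass trick: sample at the model's native resolution, upscale, then run a partial second diffusion pass over the result. A rough sketch with diffusers, where the model id, sizes, and strength are illustrative assumptions, not anyone's actual code:

```python
# Hedged sketch of the two-pass "high-res fix": txt2img at base resolution,
# naive upscale, then a partial img2img pass to refine the detail.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# Reuse the same components for the second pass (could be a different model).
img2img = StableDiffusionImg2ImgPipeline(**pipe.components).to("cuda")

prompt = "a mountain landscape, detailed"
low = pipe(prompt, width=512, height=512).images[0]        # pass 1: native res
big = low.resize((1024, 1024))                             # naive upscale
final = img2img(prompt, image=big, strength=0.5).images[0] # pass 2: refine
```

The `strength` parameter controls how much of the second diffusion process runs; the experiments Comfy describes (different samplers, steps, or even a different model on the second pass) are all variations on this loop.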
And you were doing that mostly on your own when you started, or did you already have kind of like a subgroup of people? No, I was, uh, on my own because, because it was just me experimenting with stuff. So yeah, that was it. Then, so I started writing the code January 1st, 2023, and then I released the first version on GitHub, January 16th, 2023. That's how things got started.

Alessio [00:05:11]: And what's, what's the name? Comfy UI right away or? Yeah.

Comfy [00:05:14]: Comfy UI. The reason the name, my name is Comfy is people thought my pictures were comfy, so I just, uh, just named it, uh, uh, it's my Comfy UI. So yeah, that's, uh,

swyx [00:05:27]: Is there a particular segment of the community that you targeted as users? Like more intensive workflow artists, you know, compared to the automatic crowd or, you know,

Comfy [00:05:37]: This was my way of like experimenting with, uh, with new things, like the high-res fix thing I mentioned, which was like in Comfy, the first thing you could easily do was just chain different models together. And then one of the first things, I think the first times it got a bit of popularity was when I started experimenting with the different, like applying. Prompts to different areas of the image. Yeah. I called it area conditioning, posted it on Reddit and it got a bunch of upvotes. So I think that's when, like, when people first learned of Comfy UI.

swyx [00:06:17]: Is that mostly like fixing hands?

Comfy [00:06:19]: Uh, no, no, no. That was just, uh, like, let's say, well, it was very, well, it still is kind of difficult to like, let's say you want a mountain, you have an image and then, okay. I'm like, okay. I want the mountain here and I want the, like a, a Fox here.

swyx [00:06:37]: Yeah. So compositing the image. Yeah.

Comfy [00:06:40]: My way was very easy. It was just like, oh, when you run the diffusion process, you kind of generate, okay. You do pass one pass through the diffusion, every step you do one pass. Okay. This place of the image with this prompt, this place of the image with the other prompt. And then. The entire image with another prompt and then just average everything together, every step, and that was, uh, area composition, which I call it. And then, then a month later, there was a paper that came out called MultiDiffusion, which was the same thing, but yeah, that's, uh,

Alessio [00:07:20]: could you do area composition with different models or because you're averaging out, you kind of need the same model.

Comfy [00:07:26]: Could do it with, but yeah, I hadn't implemented it. For different models, but, uh, you, you can do it with, uh, with different models if you want, as long as the models share the same latent space, like we, we're supposed to ring a bell every time someone says, yeah, like, for example, you couldn't use like SDXL and SD 1.5, because those have a different latent space, but like, uh, yeah, like SD 1.5 models, different ones. You could, you could do that.

swyx [00:07:59]: There's some models that try to work in pixel space, right?

Comfy [00:08:03]: Yeah. They're very slow. Of course. That's the problem. That that's the, the reason why stable diffusion actually became like popular, like, cause was because of the latent space.

swyx [00:08:14]: Small and yeah. Because it used to be latent diffusion models and then they trained it up.

Comfy [00:08:19]: Yeah. Cause pixel diffusion models are just too slow. So. Yeah.
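The area-composition idea Comfy describes above (one noise prediction per regional prompt, blended together every step) can be sketched in a few lines; the model call signature and mask handling here are assumptions for illustration, not ComfyUI's implementation:

```python
# Sketch of per-region "area conditioning": run one denoising prediction per
# (prompt, region) pair each step and blend the predictions by their masks.
import torch

def area_conditioned_step(model, x, t, region_conds):
    # region_conds: list of (cond_embedding, mask) pairs;
    # masks are broadcastable to x and should roughly tile the image.
    blended = torch.zeros_like(x)
    weight = torch.zeros_like(x)
    for cond, mask in region_conds:
        eps = model(x, t, cond)     # one pass per prompt, as described above
        blended += eps * mask
        weight += mask
    return blended / weight.clamp(min=1e-8)  # average where regions overlap
```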
swyx [00:08:25]: Have you ever tried to talk to like, like stability, the latent diffusion guys, like, you know, Robin Rombach, that, that crew. Yeah.

Comfy [00:08:32]: Well, I used to work at stability.

swyx [00:08:34]: Oh, I actually didn't know. Yeah.

Comfy [00:08:35]: I used to work at stability. I got, uh, I got hired, uh, in June, 2023.

swyx [00:08:42]: Ah, that's the part of the story I didn't know about. Okay. Yeah.

Comfy [00:08:46]: So the, the reason I was hired is because they were doing, uh, SDXL at the time and they were basically SDXL. I don't know if you remember it was a base model and then a refiner model. Basically they wanted to experiment, like chaining them together. And then, uh, they saw, oh, right. Oh, this, we can use this to do that. Well, let's hire that guy.

swyx [00:09:10]: But they didn't, they didn't pursue it for like SD3. What do you mean? Like the SDXL approach. Yeah.

Comfy [00:09:16]: The reason for that approach was because basically they had two models and then they wanted to publish both of them. So they, they trained one on lower timesteps, which was the refiner model. And then they, the first one was trained normally. And then they went during their test, they realized, oh, like if we string these models together are like quality increases. So let's publish that. It worked. Yeah. But like right now, I don't think many people actually use the refiner anymore, even though it is actually a full diffusion model. Like you can use it on its own. And it's going to generate images. I don't think anyone, people have mostly forgotten about it. But, uh.

Alessio [00:10:05]: Can we talk about models a little bit? So stable diffusion, obviously is the most known. I know flux has gotten a lot of traction. Are there any underrated models that people should use more or what's the state of the union?

Comfy [00:10:17]: Well, the, the latest, uh, state of the art, at least, yeah, for images there's, uh, yeah, there's flux. There's also SD3.5. SD3.5 is two models. There's a, there's a small one, 2.5B and there's the bigger one, 8B. So it's, it's smaller than flux. So, and it's more, uh, creative in a way, but flux, yeah, flux is the best. People should give SD3.5 a try cause it's, uh, it's different. I won't say it's better. Well, it's better for some like specific use cases. Right. If you want to make something more like creative, maybe SD3.5. If you want to make something more consistent, flux is probably better.

swyx [00:11:06]: Do you ever consider supporting the closed source model APIs?

Comfy [00:11:10]: Uh, well, they, we do support them as custom nodes. We actually have some, uh, official custom nodes from, uh, different. Ideogram.

swyx [00:11:20]: Yeah. I guess DALL-E would have one. Yeah.

Comfy [00:11:23]: That's, uh, it's just not, I'm not the person that handles that. Sure.

swyx [00:11:28]: Sure. Quick question on, on SD. There's a lot of community discussion about the transition from SD1.5 to SD2 and then SD2 to SD3. People still like, you know, very loyal to the previous generations of SDs?

Comfy [00:11:41]: Uh, yeah. SD1.5 then still has a lot of, a lot of users.

swyx [00:11:46]: The last base model.

Comfy [00:11:49]: Yeah. Then SD2 was mostly ignored. It wasn't, uh, it wasn't a big enough improvement over the previous one. Okay.

swyx [00:11:58]: So SD1.5, SD3, flux and whatever else. SDXL. SDXL.

Comfy [00:12:03]: That's the main one. Stable cascade. Stable cascade. That was a good model.
But, uh, that's, uh, the problem with that one is, uh, it got, uh, like SD3 was announced one week after. Yeah.

swyx [00:12:16]: It was like a weird release. Uh, what was it like inside of stability actually? I mean, statute of limitations. Yeah. The statute of limitations expired. You know, management has moved. So it's easier to talk about now. Yeah.

Comfy [00:12:27]: And inside stability, actually that model was ready, uh, like three months before, but it got, uh, stuck in, uh, red teaming. So basically the product, if that model had released or was supposed to be released by the authors, then it would probably have gotten very popular since it's a, it's a step up from SDXL. But it got all of its momentum stolen. It got stolen by the SD3 announcement. So people kind of didn't develop anything on top of it, even though it's, uh, yeah. It was a good model, at least, uh, completely mostly ignored for some reason. Like

swyx [00:13:07]: I think the naming as well matters. It seemed like a branch off of the main, main tree of development. Yeah.

Comfy [00:13:15]: Well, it was different researchers that did it. Yeah. Yeah. Very like, uh, good model. Like it's the Würstchen authors. I don't know if I'm pronouncing it correctly. Yeah. Yeah. Yeah.

swyx [00:13:28]: I actually met them in Vienna. Yeah.

Comfy [00:13:30]: They worked at stability for a bit and they left right after the Cascade release.

swyx [00:13:35]: This is Dustin, right? No. Uh, Dustin's SD3. Yeah.

Comfy [00:13:38]: Dustin is SD3, SDXL. That's, uh, Pablo and Dome. I think I'm pronouncing his name correctly. Yeah. Yeah. Yeah. Yeah. That's very good.

swyx [00:13:51]: It seems like the community is very, they move very quickly. Yeah. Like when there's a new model out, they just drop whatever the current one is. And they just all move wholesale over. Like they don't really stay to explore the full capabilities. Like if, if the stable cascade was that good, they would have AB tested a bit more. Instead they're like, okay, SD3 is out. Let's go. You know?

Comfy [00:14:11]: Well, I find the opposite actually. The community doesn't like, they only jump on a new model when there's a significant improvement. Like if there's a, only like a incremental improvement, which is what, uh, most of these models are going to have, especially if you, cause, uh, stay the same parameter count. Yeah. Like you're not going to get a massive improvement, uh, into like, unless there's something big that, that changes. So, uh. Yeah.

swyx [00:14:41]: And how are they evaluating these improvements? Like, um, because there's, it's a whole chain of, you know, comfy workflows. Yeah. How does, how does one part of the chain actually affect the whole process?

Comfy [00:14:52]: Are you talking on the model side specific?

swyx [00:14:54]: Model specific, right? But like once you have your whole workflow based on a model, it's very hard to move.

Comfy [00:15:01]: Uh, not, well, not really. Well, it depends on your, uh, depends on their specific kind of the workflow. Yeah.

swyx [00:15:09]: So I do a lot of like text and image. Yeah.

Comfy [00:15:12]: When you do change, like most workflows are kind of going to be complete. Yeah. It's just like, you might have to completely change your prompt completely change. Okay.

swyx [00:15:24]: Well, I mean, then maybe the question is really about evals. Like what does the comfy community do for evals? Just, you know,

Comfy [00:15:31]: Well, that they don't really do that. It's more like, oh, I think this image is nice.
So that's, uh,

swyx [00:15:38]: They just subscribe to Fofr AI and just see like, you know, what Fofr is doing. Yeah.

Comfy [00:15:43]: Well, they just, they just generate like it. Like, I don't see anyone really doing it. Like, uh, at least on the comfy side, comfy users, they, it's more like, oh, generate images and see, oh, this one's nice. It's like, yeah, it's not, uh, like the, the more, uh, like, uh, scientific, uh, like, uh, like checking that's more on specifically on like model side. If, uh, yeah, but there is a lot of, uh, vibes also, cause it is a like, uh, artistic, uh, you can create a very good model that doesn't generate nice images. Cause most images on the internet are ugly. So if you, if that's like, if you just, oh, I have the best model, it's super smart. I created on all the, like I've trained on just all the images on the internet. The images are not going to look good. So yeah.

Alessio [00:16:42]: Yeah.

Comfy [00:16:43]: They're going to be very consistent. But yeah. People like, it's not going to be like the, the look that people are going to be expecting from, uh, from a model. So. Yeah.

swyx [00:16:54]: Can we talk about LoRAs? Cause we thought we talked about models then like the next step is probably LoRAs. Before, I actually, I'm kind of curious how LoRAs entered the tool set of the image community because the LoRA paper was 2021. And then like, there was like other methods like textual inversion that was popular at the early SD stage. Yeah.

Comfy [00:17:13]: I can't even explain the difference between that. Yeah. Textual inversions. That's basically what you're doing is you're, you're training a, cause well, yeah. Stable diffusion. You have the diffusion model, you have text encoder. So basically what you're doing is training a vector that you're going to pass to the text encoder. It's basically you're training a new word. Yeah.

swyx [00:17:37]: It's a little bit like representation engineering now. Yeah.

Comfy [00:17:40]: Yeah. Basically. Yeah. You're just, so yeah, if you know how like the text encoder works, basically you have, you take your, your words of your prompt, you convert those into tokens with the tokenizer and those are converted into vectors. Basically. Yeah. Each token represents a different vector. So each word presents a vector. And those, depending on your words, that's the list of vectors that get passed to the text encoder, which is just. Yeah. Yeah. I'm just a stack of, of attention. Like basically it's a very close to LLM architecture. Yeah. Yeah. So basically what you're doing is just training a new vector. We're saying, well, I have all these images and I want to know which word does that represent? And it's going to get like, you train this vector and then, and then when you use this vector, it hopefully generates. Like something similar to your images. Yeah.

swyx [00:18:43]: I would say it's like surprisingly sample efficient in picking up the concept that you're trying to train it on. Yeah.

Comfy [00:18:48]: Well, people have kind of stopped doing that even though back as like when I was at Stability, we, we actually did train internally some like textual inversions on like T5 XXL actually worked pretty well. But for some reason, yeah, people don't use them.
And also they might also work like, like, yeah, this is something and probably have to test, but maybe if you train a textual inversion, like on T5 XXL, it might also work with all the other models that use T5 XXL because same thing with like, like the textual inversions that, that were trained for SD 1.5, they also kind of work on SDXL because SDXL has the, has two text encoders. And one of them is the same as the, as the SD 1.5 CLIP-L. So those, they actually would, they don't work as strongly because they're only applied to one of the text encoders. But, and the same thing for SD3. SD3 has three text encoders. So it works. It's still, you can still use your textual inversion SD 1.5 on SD3, but it's just a lot weaker because now there's three text encoders. So it gets even more diluted. Yeah.

swyx [00:20:05]: Do people experiment a lot on, just on the CLIP side, there's like Siglip, there's Blip, like do people experiment a lot on those?

Comfy [00:20:12]: You can't really replace. Yeah.

swyx [00:20:14]: Because they're trained together, right? Yeah.

Comfy [00:20:15]: They're trained together. So you can't like, well, what I've seen people experimenting with is a long CLIP. So basically someone fine tuned the CLIP model to accept longer prompts.

swyx [00:20:27]: Oh, it's kind of like long context fine tuning. Yeah.

Comfy [00:20:31]: So, so like it's, it's actually supported in Core Comfy.

swyx [00:20:35]: How long is long?

Comfy [00:20:36]: Regular CLIP is 77 tokens. Yeah. Long CLIP is 256. Okay. So, but the hack that like you've, if you use stable diffusion 1.5, you've probably noticed, oh, it still works if I, if I use long prompts, prompts longer than 77 words. Well, that's because the hack is to just, well, you split, you split it up in chunks of 77, your whole big prompt. Let's say you, you give it like the massive text, like the Bible or something, and it would split it up in chunks of 77 and then just pass each one through the CLIP and then just concat everything together at the end. It's not ideal, but it actually works.

swyx [00:21:26]: Like the positioning of the words really, really matters then, right? Like this is why order matters in prompts. Yeah.

Comfy [00:21:33]: Yeah. Like it, it works, but it's, it's not ideal, but it's what people expect. Like if, if someone gives a huge prompt, they expect at least some of the concepts at the end to be like present in the image. But usually when they give long prompts, they, they don't, they like, they don't expect like detail, I think. So that's why it works very well.

swyx [00:21:58]: And while we're on this topic, prompt weighting, negative prompting. All, all sort of similar part of this layer of the stack. Yeah.

Comfy [00:22:05]: The, the hack for that, which works on CLIP, like it, basically it's just for SD 1.5, well, for SD 1.5, the prompt weighting works well because CLIP-L is a, is not a very deep model. So you have a very high correlation between, you have the input token, the index of the input token vector. And the output token, they're very, the concepts are very close, closely linked. So that means if you interpolate the vector from what, well, the, the way Comfy UI does it is it has, okay, you have the vector, you have an empty prompt. So you have a, a chunk, like a CLIP output for the empty prompt, and then you have the one for your prompt. And then it interpolates from that, depending on your prompt. Yeah.

Comfy [00:23:07]: So that's how it, how it does prompt weighting. But this stops working the deeper your text encoder is. So on T5 XXL, it doesn't work at all. So.
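A hedged sketch of the interpolation trick Comfy just described, where `encode` stands in for a CLIP text encoder call; this is an illustration of the idea, not ComfyUI's exact code:

```python
# Prompt weighting by interpolating between the empty-prompt CLIP output
# and the real prompt's CLIP output, as described above.
import torch

def weighted_cond(encode, prompt: str, weight: float) -> torch.Tensor:
    empty = encode("")        # CLIP output for the empty prompt
    full = encode(prompt)     # CLIP output for the real prompt
    # weight 1.0 returns the prompt unchanged; <1.0 pulls it back toward
    # the empty prompt, >1.0 extrapolates past it.
    return empty + weight * (full - empty)
```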
swyx [00:23:20]: Wow. Is that a problem for people? I mean, cause I'm used to just move, moving up numbers. Probably not. Yeah.

Comfy [00:23:25]: Well.

swyx [00:23:26]: So you just use words to describe, right? Cause it's a bigger language model. Yeah.

Comfy [00:23:30]: Yeah. So. Yeah. So honestly it might be good, but I haven't seen many complaints on Flux that it's not working. So, cause I guess people can sort of get around it with, with language. So. Yeah.

swyx [00:23:46]: Yeah. And then coming back to LoRAs, now the, the popular way to, to customize models is LoRAs. And I saw you also support LoCon and LoHa, which I've never heard of before.

Comfy [00:23:56]: There's a bunch of, cause what, what the LoRA is essentially is. Instead of like, okay, you have your, your model and then you want to fine tune it. So instead of like, what you could do is you could fine tune the entire thing, but that's a bit heavy. So to speed things up and make things less heavy, what you can do is just fine tune some smaller weights, like basically two, two matrices that when you multiply like two low rank matrices and when you multiply them together, gives a, represents a difference between trained weights and your base weights. So by training those two smaller matrices, that's a lot less heavy. Yeah.

Alessio [00:24:45]: And they're portable. So you're going to share them. Yeah. It's like easier. And also smaller.

Comfy [00:24:49]: Yeah. That's the, how LoRAs work. So basically, so when, when inferencing you, you get an inference with them pretty efficiently, like how ComfyUI does it. It just, when you use a LoRA, it just applies it straight on the weights so that there's only a small delay at the base, like before the sampling to when it applies the weights and then it just same speed as, as before. So for, for inference, it's, it's not that bad, but, and then you have, so basically all the LoRA types like LoHa, LoCon, everything, that's just different ways of representing that like. Basically, you can call it kind of like compression, even though it's not really compression, it's just different ways of represented, like just, okay, I want to train a different on the difference on the weights. What's the best way to represent that difference? There's the basic LoRA, which is just, oh, let's multiply these two matrices together. And then there's all the other ones, which are all different algorithms. So. Yeah.
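The low-rank idea above fits in a few lines. Here is a sketch of merging a trained LoRA delta straight onto a base weight, the way Comfy says ComfyUI applies it before sampling; the sizes and scaling are illustrative assumptions:

```python
# A LoRA is two low-rank matrices whose product is the weight delta.
# Merging it into the base weight costs one add up front, after which
# inference runs at the original speed, as described above.
import torch

d_out, d_in, rank = 768, 768, 16          # illustrative sizes
W = torch.randn(d_out, d_in)              # frozen base weight
A = torch.randn(rank, d_in) * 0.01        # trained low-rank factor
B = torch.randn(d_out, rank) * 0.01       # trained low-rank factor
alpha = 1.0                               # user-facing LoRA strength

W_merged = W + alpha * (B @ A)            # apply the delta on the weights
```

Variants like LoHa and LoCon keep the same "represent the weight difference compactly" idea but use different factorizations in place of the single `B @ A` product.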
Alessio [00:25:57]: So let's talk about LoRA. Let's talk about what comfy UI actually is. I think most people have heard of it. Some people might've seen screenshots. I think fewer people have built very complex workflows. So when you started, automatic was like the super simple way. What were some of the choices that you made? So the node workflow, is there anything else that stands out as like, this was like a unique take on how to do image generation workflows?

Comfy [00:26:22]: Well, I feel like, yeah, back then everyone was trying to make like easy to use interface. Yeah. So I'm like, well, everyone's trying to make an easy to use interface.

swyx [00:26:32]: Let's make a hard to use interface.

Comfy [00:26:37]: Like, so like, I like, I don't need to do that, everyone else doing it. So let me try something like, let me try to make a powerful interface that's not easy to use. So.

swyx [00:26:52]: So like, yeah, there's a sort of node execution engine. Yeah. Yeah. And it actually lists, it has this really good list of features of things you prioritize, right? Like let me see, like sort of re-executing from, from any parts of the workflow that was changed, asynchronous queue system, smart memory management, like all this seems like a lot of engineering that. Yeah.

Comfy [00:27:12]: There's a lot of engineering in the back end to make things, cause I was always focused on making things work locally very well. Cause that's cause I was using it locally. So everything. So there's a lot of, a lot of thought and working by getting everything to run as well as possible. So yeah. ComfyUI is actually more of a back end, at least, well, now all the front end's getting a lot more development, but, but before, before it was, I was pretty much only focused on the backend. Yeah.

swyx [00:27:50]: So v0.1 was only August this year. Yeah.

Comfy [00:27:54]: With the new front end. Before there was no versioning. So yeah. Yeah. Yeah.

swyx [00:27:57]: And so what was the big rewrite for the 0.1 and then the 1.0?

Comfy [00:28:02]: Well, that's more on the front end side. That's cause before that it was just like the UI, what, cause when I first wrote it, I just, I said, okay, how can I make, like, I can do web development, but I don't like doing it. Like what's the easiest way I can slap a node interface on this. And then I found this library. Yeah. Like JavaScript library.

swyx [00:28:26]: Litegraph?

Comfy [00:28:27]: Litegraph.

swyx [00:28:28]: Usually people will go for like react flow for like a flow builder. Yeah.

Comfy [00:28:31]: But that seems like too complicated. So I didn't really want to spend time like developing the front end. So I'm like, well, oh, litegraph. This has the whole node interface. So, okay. Let me just plug that into, to my backend.

swyx [00:28:49]: I feel like if Streamlit or Gradio offered something that you would have used Streamlit or Gradio cause it's Python. Yeah.

Comfy [00:28:54]: Yeah. Yeah. Yeah.

Comfy [00:29:00]: Yeah.

Comfy [00:29:14]: Yeah. It takes your front end logic and your backend logic and just sticks them together.

swyx [00:29:20]: It's supposed to be easy for you guys. If you're a Python main, you know, I'm a JS main, right? Okay. If you're a Python main, it's supposed to be easy.

Comfy [00:29:26]: Yeah, it's easy, but it makes your whole software a huge mess.

swyx [00:29:30]: I see, I see. So you're mixing concerns instead of separating concerns?

Comfy [00:29:34]: Well, it's because... Like frontend and backend. Frontend and backend should be well separated with a defined API. Like that's how you're supposed to do it. Smart people disagree. It just sticks everything together. It makes it easy to like a huge mess. And also it's, there's a lot of issues with Gradio. Like it's very good if all you want to do is just get like slap a quick interface on your, like to show off your ML project. Like that's what it's made for. Yeah. Like there's no problem using it. Like, oh, I have my, I have my code. I just wanted a quick interface on it. That's perfect. Like use Gradio. But if you want to make something that's like a real, like real software that will last a long time and will be easy to maintain, then I would avoid it. Yeah.

swyx [00:30:32]: So your criticism is Streamlit and Gradio are the same. I mean, those are the same criticisms.

Comfy [00:30:37]: Yeah, Streamlit I haven't used as much. Yeah, I just looked a bit.

swyx [00:30:43]: Similar philosophy.

Comfy [00:30:44]: Yeah, it's similar. It's just, it just seems to me like, okay, for quick, like AI demos, it's perfect.
swyx [00:30:51]: Yeah. Going back to like the core tech, like asynchronous queues, slow re-execution, smart memory management, you know, anything that you were very proud of or was very hard to figure out?

Comfy [00:31:00]: Yeah. The thing that's the biggest pain in the ass is probably the memory management. Yeah.

swyx [00:31:05]: Were you just paging models in and out or? Yeah.

Comfy [00:31:08]: Before it was just, okay, load the model, completely unload it. Then, okay, that, that works well when you, your models are small, but if your models are big and it takes sort of like, let's say someone has a, like a, a 4090, and the model size is 10 gigabytes, that can take a few seconds to like load and load, load and load, so you want to try to keep things like in memory, in the GPU memory as much as possible. What Comfy UI does right now is it. It tries to like estimate, okay, like, okay, you're going to sample this model, it's going to take probably this amount of memory, let's remove the models, like this amount of memory that's been loaded on the GPU and then just execute it. But so there's a fine line, because you try to remove the least amount of models that are already loaded. And one other problem is the NVIDIA driver on Windows: there's an option to disable that feature, but by default, like, if you start loading, you can overflow your GPU memory and then the driver's going to automatically start paging to RAM. But the problem with that is it's, it makes everything extremely slow. So when you see people complaining, oh, this model, it works, but oh, s**t, it starts slowing down a lot, that's probably what's happening. So it's basically you have to just try to get, use as much memory as possible, but not too much, or else things start slowing down, or people get out of memory, and then just find, try to find that line where, oh, like the driver on Windows starts paging and stuff. Yeah. And the problem with PyTorch is it's, it's high level, you don't have that much fine-grained control over, like, specific memory stuff, so you kind of have to leave, like, the memory freeing to, to Python and PyTorch, which is, can be annoying sometimes.

swyx [00:33:32]: So, you know, I think one thing is, as a maintainer of this project, like, you're designing for a very wide surface area of compute, like, you even support CPUs.

Comfy [00:33:42]: Yeah, well, that's... That's just, for PyTorch, PyTorch supports CPUs, so, yeah, it's just, that's not, that's not hard to support.

swyx [00:33:50]: First of all, is there a market share estimate, like, is it, like, 70% NVIDIA, like, 30% AMD, and then, like, miscellaneous on Apple Silicon, or whatever?

Comfy [00:33:59]: For Comfy? Yeah. Yeah, and, yeah, I don't know the market share.

swyx [00:34:03]: Can you guess?

Comfy [00:34:04]: I think it's mostly NVIDIA. Right. Because, because AMD, the problem, like, AMD works horribly on Windows. Like, on Linux, it works fine. It's, it's lower than the price equivalent NVIDIA GPU, but it works, like, you can use it, you generate images, everything works. On Linux, on Windows, you might have a hard time, so, that's the problem, and most people, I think most people who bought AMD probably use Windows. They probably aren't going to switch to Linux, so... Yeah. So, until AMD actually, like, ports their, like, ROCm to, to Windows properly, and then there's actually PyTorch, I think they're, they're doing that, they're in the process of doing that, but, until they get it, they get a good, like, PyTorch ROCm build that works on Windows, it's, like, they're going to have a hard time. Yeah.
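The paging heuristic Comfy describes above (estimate what the next sampling job needs, evict only as much as necessary, and stay under the point where the Windows driver starts paging to RAM) might look roughly like this; every name and the safety margin are assumptions for illustration, not ComfyUI's actual code:

```python
# Illustrative sketch of VRAM-budgeted model eviction: keep models resident,
# free the least amount needed before executing the next job.
import torch

loaded = []  # list of (model, vram_bytes), oldest first

def ensure_free_vram(needed_bytes: int, margin: float = 0.9):
    free, total = torch.cuda.mem_get_info(torch.device("cuda"))
    budget = int(total * margin)          # stay below the driver-paging cliff
    used = total - free
    while loaded and used + needed_bytes > budget:
        model, size = loaded.pop(0)       # evict the oldest resident model
        model.to("cpu")
        torch.cuda.empty_cache()
        used -= size
```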
Alessio [00:35:06]: We got to get George on it. Yeah. Well, he's trying to get Lisa Su to do it, but... Let's talk a bit about, like, the node design. So, unlike all the other text-to-image, you have a very, like, deep, so you have, like, a separate node for, like, CLIP encode, you have a separate node for, like, the KSampler, you have, like, all these nodes. Going back to, like, the making it easy versus making it hard, but, like, how much do people actually play with all the settings, you know? Kind of, like, how do you guide people to, like, hey, this is actually going to be very impactful versus this is maybe, like, less impactful, but we still want to expose it to you?

Comfy [00:35:40]: Well, I try to... I try to expose, like, I try to expose everything or, but, yeah, at least for the, but for things, like, for example, for the samplers, like, there's, like, yeah, four different sampler nodes, which go in easiest to most advanced. So, yeah, if you go, like, the easy node, the regular sampler node, that's, you have just the basic settings. But if you use, like, the sampler advanced... If you use, like, the custom advanced node, that, that one you can actually, you'll see you have, like, different nodes.

Alessio [00:36:19]: I'm looking it up now. Yeah. What are, like, the most impactful parameters that you use? So, it's, like, you know, you can have more, but, like, which ones, like, really make a difference?

Comfy [00:36:30]: Yeah, they all do. They all have their own, like, they all, like, for example, yeah, steps. Usually you want steps, you want them to be as low as possible. But you want, if you're optimizing your workflow, you want to, you lower the steps until, like, the images start deteriorating too much. Because that, yeah, that's the number of steps you're running the diffusion process. So, if you want things to be faster, lower is better. But, yeah, CFG, that's more, you can kind of see that as the contrast of the image. Like, if your image looks too bursty. Then you can lower the CFG. So, yeah, CFG, that's how, yeah, that's how strongly the, like, the negative versus positive prompt. Because when you sample a diffusion model, it's basically a negative prompt. It's just, yeah, positive prediction minus negative prediction.

swyx [00:37:32]: Contrastive loss. Yeah.

Comfy [00:37:34]: It's positive minus negative, and the CFG does the multiplier. Yeah. Yeah. Yeah, so.

Alessio [00:37:41]: What are, like, good resources to understand what the parameters do? I think most people start with automatic, and then they move over, and it's, like, steps, CFG, sampler name, scheduler, denoise. Reddit.

Comfy [00:37:53]: But, honestly, well, it's more, it's something you should, like, try out yourself. I don't know, you don't necessarily need to know how it works to, like, what it does. Because even if you know, like, CFG, it's, like, positive minus negative prompt. Yeah. So the only thing you know at CFG is if it's 1.0, then that means the negative prompt isn't applied. It also means sampling is two times faster. But, yeah. But other than that, it's more, like, you should really just see what it does to the images yourself, and you'll probably get a more intuitive understanding of what these things do.
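The CFG arithmetic spelled out above is essentially one line in a sampler. A hedged sketch with generic tensors, not ComfyUI's code:

```python
# Classifier-free guidance as described above: predict noise for the positive
# and negative prompt, and CFG scales the difference. At cfg == 1.0 the
# negative term cancels, so the second model call can be skipped entirely,
# which is why sampling gets roughly 2x faster.
import torch

def cfg_step(model, x, t, cond, uncond, cfg: float) -> torch.Tensor:
    pos = model(x, t, cond)
    if cfg == 1.0:
        return pos                    # negative prompt has no effect
    neg = model(x, t, uncond)
    return neg + cfg * (pos - neg)    # positive minus negative, times CFG
```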
Alessio [00:38:34]: Any other nodes or things you want to shout out? Like, I know AnimateDiff, IP-Adapter. Those are, like, some of the most popular ones. Yeah. What else comes to mind?

Comfy [00:38:44]: Not nodes, but there's, like, what I like is when some people, sometimes they make things that use ComfyUI as their backend. Like, there's a plugin for Krita that uses ComfyUI as its backend. So you can use, like, all the models that work in Comfy in Krita. And I think I've tried it once. But I know a lot of people use it, and it's probably really nice, so.

Alessio [00:39:15]: What's the craziest node that people have built, like, the most complicated?

Comfy [00:39:21]: Craziest node? Like, yeah. I know some people have made, like, video games in Comfy with, like, stuff like that. So, like, someone, like, I remember, like, yeah, last, I think it was last year, someone made, like, a, like, Wolfenstein 3D in Comfy. Of course. And then one of the inputs was, oh, you can generate a texture, and then it changes the texture in the game. So you can plug it to, like, the workflow. And there's a lot of, if you look there, there's a lot of crazy things people do, so. Yeah.

Alessio [00:39:59]: And now there's, like, a node registry that people can use to, like, download nodes. Yeah.

Comfy [00:40:04]: Like, well, there's always been the, like, the ComfyUI manager. Yeah. But we're trying to make this more, like, I don't know, official, like, with, yeah, with the node registry. Because before the node registry, the, like, okay, how did your custom node get into ComfyUI manager? That's the guy running it who, like, every day he searched GitHub for new custom nodes and added them manually to his custom node manager. So we're trying to make it more effortless for him, basically. Yeah.

Alessio [00:40:40]: Yeah. But I was looking, I mean, there's, like, a YouTube download node. There's, like, this is almost like, you know, a data pipeline more than, like, an image generation thing at this point. It's, like, you can get data in, you can, like, apply filters to it, you can generate data out.

Comfy [00:40:54]: Yeah. You can do a lot of different things. Yeah. So I'm thinking, I think what I did is I made it easy to make custom nodes. So I think that helped a lot. I think that helped a lot for, like, the ecosystem because it is very easy to just make a node. So, yeah, a bit too easy sometimes. Then we have the issue where there's a lot of custom node packs which share similar nodes. But, well, that's, yeah, something we're trying to solve by maybe bringing some of the functionality into the core. Yeah. Yeah. Yeah.

Alessio [00:41:36]: And then there's, like, video. People can do video generation. Yeah.

Comfy [00:41:40]: Video, that's, well, the first video model was, like, stable video diffusion, which was last, yeah, exactly last year, I think. Like, one year ago. But that wasn't a true video model. So it was...

swyx [00:41:55]: It was, like, moving images? Yeah.

Comfy [00:41:57]: It generated video. What I mean by that is it's, like, it's still 2D Latents. It's basically what I'm trying to do. So what they did is they took SD2, and then they added some temporal attention to it, and then trained it on videos and all. So it's kind of, like, AnimateDiff, like, same idea, basically.
Why I say it's not a true video model is that you still have, like, the 2D Latents. Like, a true video model, like Mochi, for example, would have 3D Latents. Mm-hmm.

Alessio [00:42:32]: Which means you can, like, move through the space, basically. It's the difference. You're not just kind of, like, reorienting. Yeah.

Comfy [00:42:39]: And it's also, well, it's also because you have a temporal VAE. Mm-hmm. Also, like, Mochi has a temporal VAE that compresses on, like, the temporal direction, also. So that's something you don't have with, like, yeah, AnimateDiff and stable video diffusion. They only, like, compress spatially, not temporally. Mm-hmm. Right. So, yeah. That's why I call that, like, true video models. There's, yeah, there's actually a few of them, but the one I've implemented in comfy is Mochi, because that seems to be the best one so far. Yeah.

swyx [00:43:15]: We had AJ come and speak at the stable diffusion meetup. The other open one I think I've seen is CogVideo. Yeah.

Comfy [00:43:21]: CogVideo. Yeah. That one's, yeah, it also seems decent, but, yeah. Chinese, so we don't use it. No, it's fine. It's just, yeah, I could. Yeah. It's just that there's a, it's not the only one. There's also a few others, which I.

swyx [00:43:36]: The rest are, like, closed source, right? Like, Kling. Yeah.

Comfy [00:43:39]: Closed source, there's a bunch of them. But I mean, open. I've seen a few of them. Like, I can't remember their names, but there's CogVideo, the big, the big one. Then there's also a few of them that released at the same time. There's one that released at the same time as SD 3.5, same day, which is why I don't remember the name.

swyx [00:44:02]: We should have a release schedule so we don't conflict on each of these things. Yeah.

Comfy [00:44:06]: I think SD 3.5 and Mochi released on the same day. So everything else was kind of drowned, completely drowned out. So for some reason, lots of people picked that day to release their stuff.

Comfy [00:44:21]: Yeah. Which is, well, shame for those. And I think Omnijet also released the same day, which also seems interesting. Yeah. Yeah.

Alessio [00:44:30]: What's Comfy? So you are Comfy. And then there's like, comfy.org. I know we do a lot of things with, like, Nous Research, and those guys also have kind of like a more open source thing going on. How do you work? Like you mentioned, you mostly work on like, the core piece of it. And then what...

Comfy [00:44:47]: Maybe I should fill it in because I, yeah, I feel like maybe, yeah, I only explained part of the story. Right. Yeah. Maybe I should explain the rest. So yeah. So yeah. Basically, January, that's when the first January 2023, January 16, 2023, that's when ComfyUI was first released to the public. Then, yeah, did a Reddit post about the area composition thing somewhere in, I don't remember exactly, maybe end of January, beginning of February. And then someone, a YouTuber, made a video about it, like Olivio, he made a video about ComfyUI in March 2023. I think that's when it was a real burst of attention. And by that time, I was continuing to develop it and it was getting, people were starting to use it more, which unfortunately meant that I had first written it to do like experiments, but then my time to do experiments went down. It started going down, because people were actually starting to use it then. Like, I had to, and I said, well, yeah, time to add all these features and stuff. Yeah, and then I got hired by Stability June, 2023.
Then I made, basically, yeah, they hired me because they wanted SDXL. So I got SDXL working very well with the UI, because they were experimenting with ComfyUI in-house. Actually, how the SDXL release worked is they released, for some reason, like they released the code first, but they didn't release the model checkpoint. So they released the code. And then, well, since the release was just the code, I implemented it in ComfyUI too. And then the checkpoints were basically early access. People had to sign up and they only allowed, like, people from edu emails. Like if you had an edu email, like they gave you access basically to the SDXL 0.9. And, well, that leaked. Right. Of course, because of course it's going to leak if you do that. Well, the only way people could easily use it was with Comfy. So, yeah, people started using. And then I fixed a few of the issues people had. So then the big 1.0 release happened. And, well, Comfy UI was the only way a lot of people could actually run it on their computers. Because it just like automatic was so like inefficient and bad that most people couldn't actually, like it just wouldn't work. Like because he did a quick implementation. So people were forced to use Comfy UI, and that's how it became popular because people had no choice.

swyx [00:47:55]: The growth hack.

Comfy [00:47:56]: Yeah.

swyx [00:47:56]: Yeah.

Comfy [00:47:57]: Like everywhere, like people who didn't have the 4090, they had like, who had just regular GPUs, they didn't have a choice.

Alessio [00:48:05]: So yeah, I got a 4070. So think of me. And so today, what's, is there like a core Comfy team or?

Comfy [00:48:13]: Uh, yeah, well, right now, um, yeah, we are hiring. Okay. Actually, so right now core, like, um, the core core itself, it's, it's me. Uh, but because, uh, the reason is, like, all the focus has been mostly on the front end right now, because that's the thing that's been neglected for a long time. So, uh, so most of the focus right now is, uh, all on the front end, but we are, uh, yeah, we will soon get, uh, more people to like help me with the actual backend stuff. Yeah. So, no, I'm not going to say a hundred percent because that's why once the, once we have our V1 release, which would be the packaged ComfyUI with the nice interface and easy to install on Windows and hopefully Mac. Uh, yeah. Yeah. Once we have that, uh, we're going to have to, lots of stuff to do on the backend side and also the front end side, but, uh.

Alessio [00:49:14]: What's the release date? I'm on the wait list. What's the timing?

Comfy [00:49:18]: Uh, soon. Uh, soon. Yeah, I don't want to promise a release date. We do have a release date we're targeting, but I'm not sure if it's public. Yeah, and we're still going to continue doing the open source, making ComfyUI the best way to run stable diffusion models. At least the open source side, it's going to be the best way to run models locally. But we will have a few things to make money from it, like cloud inference or that type of thing. And maybe some things for some enterprises.

swyx [00:50:08]: I mean, a few questions on that. How do you feel about the other comfy startups?

Comfy [00:50:11]: I mean, I think it's great. They're using your name. Yeah, well, it's better they use comfy than they use something else. Yeah, that's true. It's fine. We're going to try not to... We don't want to... We want people to use comfy. Like I said, it's better that people use comfy than something else.
So as long as they use comfy, I think it helps the ecosystem. Because more people, even if they don't contribute directly, the fact that they are using comfy means that people are more likely to join the ecosystem. So, yeah.

swyx [00:50:57]: And then would you ever do text?

Comfy [00:50:59]: Yeah, well, you can already do text with some custom nodes. So, yeah, it's something we like. Yeah, it's something I've wanted to eventually add to core, but it's, like, not a very high priority. But because a lot of people use text for prompt enhancement and other things like that. So, yeah, it's just that my focus has always been on diffusion models. Yeah, unless some text diffusion model comes out.

swyx [00:51:30]: Yeah, David Holz is investing a lot in text diffusion.

Comfy [00:51:34]: Yeah, well, if a good one comes out, then we'll probably implement it since it fits with the whole...

swyx [00:51:39]: Yeah, I mean, I imagine it's going to be closed source at Midjourney. Yeah.

Comfy [00:51:43]: Well, if an open one comes out, then I'll probably implement it.

Alessio [00:51:54]: Cool, comfy. Thanks so much for coming on. This was fun. Bye.

Get full access to Latent Space at www.latent.space/subscribe

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
AI Magic: Shipping 1000s of successful products with no managers and a team of 12 — Jeremy Howard of Answer.ai


Aug 16, 2024 · 58:56


Disclaimer: We recorded this episode ~1.5 months ago, timed for the FastHTML release. It then got bottlenecked by the Llama 3.1, Winds of AI Winter, and SAM 2 episodes, so we're a little late. Since then FastHTML was released, swyx is building an app in it for AINews, and Anthropic has also released their prompt caching API.

Remember when Dylan Patel of SemiAnalysis coined the GPU Rich vs GPU Poor war? (If not, see our pod with him.) The idea was that if you're GPU poor you shouldn't waste your time trying to solve GPU rich problems (i.e. pre-training large models) and are better off working on fine-tuning, optimized inference, etc. Jeremy Howard (see our "End of Finetuning" episode to catch up on his background) and Eric Ries founded Answer.AI to do exactly that: "Practical AI R&D", which is very in line with the GPU poor needs. For example, one of their first releases was a system based on FSDP + QLoRA that let anyone train a 70B model on two NVIDIA 4090s. Since then, they have come out with a long list of super useful projects (in no particular order, and non-exhaustive):

* FSDP QDoRA: just as memory efficient and scalable as FSDP/QLoRA, and critically also as accurate for continued pre-training as full weight training.
* Cold Compress: a KV cache compression toolkit that lets you scale sequence length without impacting speed.
* colbert-small: state of the art retriever at only 33M params
* JaColBERTv2.5: a new state-of-the-art retriever on all Japanese benchmarks.
* gpu.cpp: portable GPU compute for C++ with WebGPU.
* Claudette: a better Anthropic API SDK.

They also recently released FastHTML, a new way to create modern interactive web apps. Jeremy recently released a 1 hour "Getting started" tutorial on YouTube; while this isn't AI related per se, it's close to home for any AI Engineer looking to iterate quickly on new products (see the sketch below).

In this episode we broke down 1) how they recruit, 2) how they organize what to research, and 3) how the community comes together. At the end, Jeremy gave us a sneak peek at something new that he's working on that he calls dialogue engineering:

"So I've created a new approach. It's not called prompt engineering. I'm creating a system for doing dialogue engineering. It's currently called AI magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it."
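FastHTML, mentioned above, is deliberately tiny to start with; its hello-world looks roughly like this (treat the exact imports and helpers as assumptions to verify against the current docs):

```python
# A minimal FastHTML app, roughly the "Getting started" hello world.
from fasthtml.common import *

app, rt = fast_app()

@rt("/")
def get():
    # Components are plain Python callables that render to HTML.
    return Titled("Hello", P("Hello from FastHTML!"))

serve()  # starts a local dev server
```

Run it with `python main.py` and open the printed local URL; the whole app, routing and markup included, stays in one Python file.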
He explains it a bit more ~44:53 in the pod, but we'll just have to wait for the public release to figure out exactly what he means.

Timestamps

* [00:00:00] Intro by Suno AI
* [00:03:02] Continuous Pre-Training is Here
* [00:06:07] Schedule-Free Optimizers and Learning Rate Schedules
* [00:07:08] Governance and Structural Issues within OpenAI and Other AI Labs
* [00:13:01] How Answer.ai works
* [00:23:40] How to Recruit Productive Researchers
* [00:27:45] Building a new BERT
* [00:31:57] FSDP, QLoRA, and QDoRA: Innovations in Fine-Tuning Large Models
* [00:36:36] Research and Development on Model Inference Optimization
* [00:39:49] FastHTML for Web Application Development
* [00:46:53] AI Magic & Dialogue Engineering
* [00:52:19] AI wishlist & predictions

Show Notes

* Jeremy Howard
* Previously on Latent Space: The End of Finetuning, NeurIPS Startups
* Answer.ai
* Fast.ai
* FastHTML
* answerai-colbert-small-v1
* gpu.cpp
* Eric Ries
* Aaron DeFazio
* Yi Tay
* Less Wright
* Benjamin Warner
* Benjamin Clavié
* Jono Whitaker
* Austin Huang
* Eric Gilliam
* Tim Dettmers
* Colin Raffel
* Sebastian Raschka
* Carson Gross
* Simon Willison
* Sepp Hochreiter
* Llama3.1 episode
* Snowflake Arctic
* Ranger Optimizer
* Gemma.cpp
* HTMX
* UL2
* BERT
* DeBERTa
* Efficient finetuning of Llama 3 with FSDP QDoRA
* xLSTM

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: And today we're back with Jeremy Howard, I think your third appearance on Latent Space. Welcome.

Jeremy [00:00:19]: Wait, third? Second?

Swyx [00:00:21]: Well, I grabbed you at NeurIPS.

Jeremy [00:00:23]: I see.

Swyx [00:00:24]: Very fun, standing outside street episode.

Jeremy [00:00:27]: I never heard that, by the way. You've got to send me a link. I've got to hear what it sounded like.

Swyx [00:00:30]: Yeah. Yeah, it's a NeurIPS podcast.

Alessio [00:00:32]: I think the two episodes are six hours, so there's plenty to listen, we'll make sure to send it over.

Swyx [00:00:37]: Yeah, we're trying this thing where at the major ML conferences, we, you know, do a little audio tour of, give people a sense of what it's like. But the last time you were on, you declared the end of fine tuning. I hope that I sort of editorialized the title a little bit, and I know you were slightly uncomfortable with it, but you just own it anyway. I think you're very good at the hot takes. And we were just discussing in our pre-show that it's really happening, that the continued pre-training is really happening.

Jeremy [00:01:02]: Yeah, absolutely. I think people are starting to understand that treating the three ULMFiT steps of like pre-training, you know, and then the kind of like what people now call instruction tuning, and then, I don't know if we've got a general term for this, DPO, RLHF step, you know, or the task training, they're not actually as separate as we originally suggested they were in our paper, and when you treat it more as a continuum, and that you make sure that you have, you know, more of kind of the original data set incorporated into the later stages, and that, you know, we've also seen with Llama 3, this idea that those later stages can be done for a lot longer. These are all of the things I was kind of trying to describe there.
It wasn't the end of fine tuning, but more that we should treat it as a continuum, and we should have much higher expectations of how much you can do with an already trained model. You can really add a lot of behavior to it, you can change its behavior, you can do a lot. So a lot of our research has been around trying to figure out how to modify the model by a larger amount rather than starting from random weights, because I get very offended at the idea of starting from random weights.Swyx [00:02:14]: Yeah, I saw that at ICLR in Vienna, there was an outstanding paper about starting transformers from data-driven priors. I don't know if you saw that one, they called it sort of never trained from scratch, and I think it was kind of rebelling against like the sort of random initialization.Jeremy [00:02:28]: Yeah, I've, you know, that's been our kind of continuous message since we started Fast.ai, is if you're training from random weights, you better have a really good reason, you know, because it seems so unlikely to me that nobody has ever trained on data that has any similarity whatsoever to the general class of data you're working with, and that's the only situation in which I think starting from random weights makes sense.Swyx [00:02:51]: The other trend since our last pod that I would point people to is I'm seeing a rise in multi-phase pre-training. So Snowflake released a large model called Snowflake Arctic, where they detailed three phases of training where they had like a different mixture: there was like 75% web in the first instance, and then they reduced the percentage of the web text by 10% each time and increased the amount of code in each phase. And I feel like multi-phase is being called out in papers more. I feel like it's always been a thing, like changing data mix is not something new, but calling it a distinct phase is new, and I wonder if there's something that you're seeing on your end.Jeremy [00:03:32]: Well, so they're getting there, right? So the point at which they're doing proper continued pre-training is the point at which that becomes a continuum rather than a phase. So the only difference with what I was describing last time is to say like, oh, there's a function or whatever, which is happening every batch. It's not a huge difference. You know, I always used to get offended when people had learning rates that like jumped. And so one of the things I started doing early on in Fast.ai was to say to people like, no, your learning rate schedule should actually be a function, not a list of numbers. So now I'm trying to give the same idea about training mix.Swyx [00:04:07]: There's been pretty public work from Meta on schedule-free optimizers. I don't know if you've been following Aaron DeFazio and what he's doing, just because you mentioned learning rate schedules, you know, what if you didn't have a schedule?Jeremy [00:04:18]: I don't care very much, honestly. I don't think that schedule-free optimizer is that exciting. It's fine. We've had non-scheduled optimizers for ages, like Less Wright, who's now at Meta, who was part of the Fast.ai community there, created something called the Ranger optimizer. I actually like having more hyperparameters. You know, as soon as you say schedule-free, then like, well, now I don't get to choose. And there isn't really a mathematically correct way of, like, I actually try to schedule more parameters rather than less. So like, I like scheduling my epsilon in my Adam, for example. I schedule all the things.
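(Editor's note: to make the "schedules should be functions" idea concrete, here's a minimal PyTorch sketch. The cosine shape, the epsilon ramp, and all of the numbers are our illustrative choices for this writeup, not fastai's actual defaults or API.)

```python
import math
import torch

def cosine_lr(step: int, total_steps: int, lr_max: float = 1e-3, lr_min: float = 1e-5) -> float:
    """The learning rate as a function of training progress, not a list of numbers."""
    t = step / max(1, total_steps - 1)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

model = torch.nn.Linear(10, 1)
opt = torch.optim.AdamW(model.parameters(), lr=cosine_lr(0, 1000))

for step in range(1000):
    for group in opt.param_groups:
        group["lr"] = cosine_lr(step, 1000)      # recompute the learning rate every batch
        group["eps"] = 1e-8 + 1e-6 * step / 999  # nothing stops you scheduling epsilon too
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same shape works for a training mix: replace the scalar with a function that returns, say, the fraction of web text versus code for the current batch.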
But then the other thing we always did with the Fast.ai library was make it so you don't have to set any schedules. So Fast.ai always supported, like, you didn't even have to pass a learning rate. Like, it would always just try to have good defaults and do the right thing. But to me, I like to have more parameters I can play with if I want to, but you don't have to.Alessio [00:05:08]: And then on the, I guess, less technical side, your issue with the market was some of the large research labs taking all this innovation kind of behind closed doors and whether or not that's good, which it isn't. And now we could maybe make it more available to people. And then a month after we released the episode, there was the whole Sam Altman drama and like all the OpenAI governance issues. And maybe people started to think more, okay, what happens if some of these kind of labs, you know, start to break from within, so to speak? And the alignment of the humans is probably going to fall before the alignment of the models. So I'm curious, like, if you have any new thoughts, and maybe we can also tie in some of the way that we've been building Answer.AI as like a public benefit corp and some of those aspects.Jeremy [00:05:51]: Sure. So, yeah, I mean, it was kind of uncomfortable because two days before Altman got fired, I did a small public video interview in which I said, I'm quite sure that OpenAI's current governance structure can't continue and that it was definitely going to fall apart. And then it fell apart two days later and a bunch of people were like, what did you know, Jeremy?Alessio [00:06:13]: What did Jeremy see?Jeremy [00:06:15]: I didn't see anything. It's just obviously true. Yeah. So my friend Eric Ries and I spoke a lot before that about, you know, Eric's, I think probably most people would agree, the top expert in the world on startup and AI governance. And you know, we could both clearly see that it didn't make sense to have like a so-called non-profit where then there are people working at a company, a commercial company that's owned by or controlled nominally by the non-profit, where the people in the company are being given the equivalent of stock options, like everybody there was working there expecting to make money largely from their equity. So the idea that then a board could exercise control by saying like, oh, we're worried about safety issues and so we're going to do something that decreases the profit of the company, when every stakeholder in the company, their remuneration pretty much is tied to that profit, it obviously couldn't work. So I mean, that was a huge oversight there by someone. I guess part of the problem is that the kind of people who work at non-profits, and in this case the board, are kind of academics and, you know, people who are kind of true believers. I think it's hard for them to realize that 99.999% of the world is driven very heavily by money, especially huge amounts of money. So yeah, Eric and I had been talking for a long time before that about what could be done differently, because also companies are sociopathic by design and so the alignment problem as it relates to companies has not been solved. Like, companies become huge, they devour their founders, they devour their communities and they do things where even the CEOs, you know, often of big companies tell me like, I wish our company didn't do that thing.
You know, I know that if I didn't do it, then I would just get fired and the board would put in somebody else, and the board knows if they don't do it, then their shareholders can sue them because they're not maximizing profitability or whatever. So what Eric's spent a lot of time doing is trying to think about how do we make companies less sociopathic, or, you know, maybe a better way to think of it is like, how do we make it so that the founders of companies can ensure that their companies continue to actually do the things they want them to do? You know, when we started a company, hey, we very explicitly decided we got to start a company, not an academic lab, not a nonprofit, you know, we created a Delaware C corp, you know, the most company kind of company. But when we did so, we told everybody, you know, including our first investors, which was you, Alessio: we are going to run this company on the basis of maximizing long-term value. And in fact, when we did our second round, which was an angel round, we had everybody invest through a long-term SPV, which we set up where everybody had to agree to vote in line with long-term value principles. It's never enough just to say to people, okay, we're trying to create long-term value here for society as well as for ourselves and everybody's like, oh, yeah, yeah, I totally agree with that. But when it comes to like, okay, well, here's a specific decision we have to make, which will not maximize short-term value, people suddenly change their mind. So you know, it has to be written into the legal documents of everybody so that there's no question that that's the way the company has to be managed. So then you mentioned the PBC aspect, Public Benefit Corporation, which I never quite understood previously. And it turns out it's incredibly simple, like it took, you know, like one paragraph added to our corporate documents to become a PBC. It was cheap, it was easy, but it's got this huge benefit, which is if you're not a public benefit corporation, then somebody can come along and offer to buy you with a stated description of like turning your company into the thing you most hate, right? And if they offer you more than the market value of your company and you don't accept it, then you are not necessarily meeting your fiduciary responsibilities. So the way Eric always described it to me is like, if Philip Morris came along and said that you've got great technology for marketing cigarettes to children, so we're going to pivot your company to do that entirely, and we're going to pay you 50% more than the market value, you're going to have to say yes. If you have a PBC, then you are more than welcome to say no, if that offer is not in line with your stated public benefit. So our stated public benefit is to maximize the benefit to society through using AI. So given that more children smoking doesn't do that, then we can say like, no, we're not selling to you.Alessio [00:11:01]: I was looking back at some of our emails. You sent me an email on November 13th about talking, and then on the 14th I sent you an email where "working together to free AI" was the subject line. And then that was kind of the start of the seed round. And then two days later, someone got fired. So you know, you were having these thoughts even before we had like a public example of like why some of the current structures didn't work. So yeah, you were very ahead of the curve, so to speak.
You know, people can read your awesome introduction blog on Answer.AI and the idea of having an R&D lab versus an R lab here and then a D lab somewhere else. I think to me, the most interesting thing has been hiring and some of the awesome people that you've been bringing on that maybe don't fit the central casting of Silicon Valley, so to speak. Like sometimes it's like playing baseball cards, you know, people are like, oh, what teams was this person on, where did they work, versus focusing on ability. So I would love for you to give a shout out to some of the awesome folks that you have on the team.Jeremy [00:11:58]: So, you know, there's like a graphic going around describing like the people at xAI, you know, the Elon Musk thing. And like they are all connected to like multiple of Stanford, Meta, DeepMind, OpenAI, Berkeley, Oxford. Look, these are all great institutions and they have good people. And I'm definitely not at all against that, but damn, there's so many other people. And one of the things I found really interesting is almost any time I see something which I think like this is really high quality work and it's something I don't think would have been built if that person hadn't built the thing right now, I nearly always reach out to them and ask to chat. And I tend to dig in to find out like, okay, you know, why did you do that thing? Everybody else has done this other thing, your thing's much better, but it's not what other people are working on. And like 80% of the time, I find out the person has a really unusual background. So like often they'll have like, either they like came from poverty and didn't get an opportunity to go to a good school or had dyslexia and, you know, got kicked out of school in year 11, or they had a health issue that meant they couldn't go to university or something happened in their past and they ended up out of the mainstream. And then they kind of succeeded anyway. Those are the people that throughout my career, I've tended to kind of accidentally hire more of, but it's not exactly accidental. It's like, when I see two people who have done extremely well: one of them did extremely well in exactly the normal way, from the background entirely pointing in that direction, and they cleared all the hurdles to get there. And like, okay, that's quite impressive, you know, but another person who did just as well, despite lots of constraints and doing things in really unusual ways and came up with different approaches. That's normally the person I'm likely to find useful to work with because they're often like risk-takers, they're often creative, they're often extremely tenacious, they're often very open-minded. So that's the kind of folks I tend to find myself hiring. So now at Answer.ai, it's a group of people that are strong enough that nearly every one of them has independently come to me in the past few weeks and told me that they have imposter syndrome and they're not convinced that they're good enough to be here. And I'd heard it enough at that point that I was like, okay, I don't think it's possible that all of you are so far behind your peers that you shouldn't get to be here. But I think part of the problem is as an R&D lab, the great developers look at the great researchers and they're like, wow, these big-brained, crazy research people with all their math and s**t, they're too cool for me, oh my God.
And then the researchers look at the developers and they're like, oh, they're killing it, making all this stuff with all these people using it and talking on Twitter about how great it is. I think they're both a bit intimidated by each other, you know. And so I have to kind of remind them like, okay, there are lots of things in this world where you suck compared to lots of other people in this company, but also vice versa, you know, for all things. And the reason you came here is because you wanted to learn about those other things from those other people and have an opportunity to like bring them all together into a single unit. You know, it's not reasonable to expect you're going to be better at everything than everybody else. I guess the other part of it is for nearly all of the people in the company, to be honest, they have nearly always been better than everybody else at nearly everything they're doing nearly everywhere they've been. So it's kind of weird to be in this situation now where it's like, gee, I can clearly see that I suck at this thing that I'm meant to be able to do compared to these other people, where I'm like the worst in the company at this thing for some things. So I think that's a healthy place to be, you know, as long as you keep reminding each other that that's actually why we're here. And like, it's all a bit of an experiment, like we don't have any managers. We don't have any hierarchy from that point of view. So for example, I'm not a manager, which means I don't get to tell people what to do or how to do it or when to do it. Yeah, it's been a bit of an experiment to see how that would work out. And it's been great. So for instance, Ben Clavié, who you might have come across, he's the author of RAGatouille, he's the author of rerankers, super strong information retrieval guy. And a few weeks ago, you know, this additional channel called Bert24 appeared on our private Discord, in our collab sections (we have a collab section for like collaborating with outsiders). And these people started appearing, all these names that I recognize, and they're all talking about like the next generation of BERT. And I start following along, it's like, okay, Ben decided, I think quite rightly, that we need a new BERT. Because, like, so many people are still using BERT, and it's still the best at so many things, but it actually doesn't take advantage of lots of best practices. And so he just went out and found basically everybody who's created better BERTs in the last four or five years, brought them all together, suddenly there's this huge collaboration going on. So yeah, I didn't tell him to do that. He didn't ask my permission to do that. And then, like, Benjamin Warner dived in, and he's like, oh, I created a whole transformers from scratch implementation designed to be maximally hackable. He originally did it largely as a teaching exercise to show other people, but he was like, I could, you know, use that to create a really hackable BERT implementation. In fact, he didn't say that. He said, I just did do that, you know, and I created a repo, and then everybody starts using it. They're like, oh my god, this is amazing. I can now implement all these other BERT things. And it's not just Answer.AI folks there, you know, there's lots of folks, you know, who have like contributed new data set mixes and blah, blah, blah. So, I mean, I can help in the same way that other people can help.
So like, then Ben Clavié reached out to me at one point and said, can you help me, like, what have you learned over time about how to manage intimidatingly capable and large groups of people who you're nominally meant to be leading? And so, you know, I like to try to help, but I don't direct. Another great example was Kerem, who, after our FSDP QLoRA work, decided quite correctly that it didn't really make sense to use LoRA in today's world. You want to use the normalized version, which is called DoRA. Like two or three weeks after we did FSDP QLoRA, he just popped up and said, okay, I've just converted the whole thing to DoRA, and I've also created these vLLM extensions, and I've got all these benchmarks, and, you know, now I've got training of quantized models with adapters that are as fast as LoRA and, weirdly, actually better than fine tuning. Just like, okay, that's great, you know. And yeah, so the things we've done to try to help make these things happen as well is we don't have any required meetings, you know, but we do have a meeting for each pair of major time zones that everybody's invited to, and, you know, people see their colleagues doing stuff that looks really cool and say, like, oh, how can I help, you know, or how can I learn or whatever. So another example is Austin, who, you know, has an amazing background. He ran AI at Fidelity, he ran AI at Pfizer, he ran browsing and retrieval for Google's DeepMind stuff, created Gemma.cpp, and he's been working on a new system to make it easier to do WebGPU programming, because, again, he quite correctly identified a gap there. So I said to him, like, okay, I want to learn about that. Not an area that I have much expertise in, so, you know, he's going to show me what he's working on and teach me a bit about it, and hopefully I can help contribute. I think one of the key things that's happened in all of these is everybody understands what Eric Gilliam, who wrote the second blog post in our series, the R&D historian, describes as a large yard with narrow fences. Everybody has total flexibility to do what they want. We all understand kind of roughly why we're here, you know, we agree with the premises around, like, everything's too expensive, everything's too complicated, people are building too many vanity foundation models rather than taking better advantage of fine-tuning, like, there's this kind of general, like, sense of we're all on the same wavelength about, you know, all the ways in which current research is fucked up, and, you know, all the ways in which we're worried about centralization. We all care a lot about not just research for the point of citations, but research that actually wouldn't have happened otherwise, and actually is going to lead to real-world outcomes. And so, yeah, with this kind of, like, shared vision, people understand, like, you know, so when I say, like, oh, well, you know, tell me, Ben, about BERT24, what's that about? And he's like, you know, like, oh, well, you know, you can see it from an accessibility point of view, or you can see it from a kind of actual practical impact point of view, there's far too much focus on decoder-only models, and, you know, like, BERT's used in all of these different places in industry, and so I can see, like, in terms of our basic principles, what we're trying to achieve, this seems like something important. And so I think that's, like, really helpful, that we have that kind of shared perspective, you know?Alessio [00:21:14]: Yeah.
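(Editor's note: for readers who haven't seen the adapter idea up close, here's a minimal, illustrative LoRA layer in PyTorch. This is our sketch of the general technique, not Answer.AI's code; DoRA, the variant Kerem switched to, additionally decomposes the frozen weight into magnitude and direction before applying the low-rank update.)

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base layer plus a trainable low-rank delta: y = Wx + scale * B(Ax)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the base stays frozen (in QLoRA it would also be quantized)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8192: only the two small adapter matrices train, not the ~263k frozen base weights
```

Because only A and B get gradients, you can fine-tune, ship, and stack adapters while everyone shares the same frozen (and possibly quantized) base model.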
And before we maybe talk about some of the specific research, when you're, like, reaching out to people, interviewing them, what are some of the traits, like, how do these things come out, you know, usually? Is it working on side projects that you, you know, you're already familiar with? Is there anything, like, in the interview process that, like, helps you screen for people that are less pragmatic and more research-driven versus some of these folks that are just gonna do it, you know? They're not waiting for, like, the perfect process.Jeremy [00:21:40]: Everybody who comes through the recruiting is interviewed by everybody in the company. You know, our goal is 12 people, so it's not an unreasonable amount. So the other thing to say is everybody so far who's come into the recruiting pipeline, everybody bar one, has been hired. Which is to say our original curation has been good. And that's actually pretty easy, because nearly everybody who's come in through the recruiting pipeline are people I know pretty well. So Jono Whitaker and I, you know, he worked on the stable diffusion course we did. He's outrageously creative and talented, and he's a super enthusiastic tinkerer, just likes making things. Benjamin was one of the strongest parts of the fast.ai community, which is now the alumni community. It's, like, hundreds of thousands of people. And you know, again, like, they're not people who a normal interview process would pick up, right? So Benjamin doesn't have any qualifications in math or computer science. Jono was living in Zimbabwe, you know, he was working on, like, helping some African startups, you know, but not FAANG kind of credentials. But yeah, I mean, when you actually see people doing real work and they stand out above, you know, we've got lots of Stanford graduates and OpenAI people and whatever in our alumni community as well. You know, when you stand out above all of those people anyway, obviously you've got something going for you. You know, Austin, him and I worked together on the masks study we did in the Proceedings of the National Academy of Sciences. You know, we had worked together, and again, that was a group of, like, basically the 18 or 19 top experts in the world on public health and epidemiology and research design and so forth. And Austin, you know, was one of the strongest people in that collaboration. So yeah, you know, like, I've been lucky enough to have had opportunities to work with some people who are great and, you know, I'm a very open-minded person, so I kind of am always happy to try working with pretty much anybody and some people stand out. You know, there have been some exceptions, people I haven't previously known, like Ben Clavié, actually, I didn't know before. But you know, with him, you just read his code, and I'm like, oh, that's really well-written code. And like, it's not written exactly the same way as everybody else's code, and it's not written to do exactly the same thing as everybody else's code. So yeah, and then when I chatted to him, it's just like, I don't know, I felt like we'd known each other for years, like we just were on the same wavelength, but I could pretty much tell that was going to happen just by reading his code. I think you express a lot in the code you choose to write and how you choose to write it, I guess. You know, or another example, a guy named Vik, who was previously the CEO of Dataquest, and like, in that case, you know, he's created a really successful startup.
He won the first, basically, Kaggle NLP competition, which was automatic essay grading. He's got the current state-of-the-art OCR system, Surya. Again, he's just a guy who obviously just builds stuff, you know, he doesn't ask for permission, he doesn't need any, like, external resources. Actually, Kerem's another great example of this, I mean, I already knew Kerem very well because he was my best ever master's student, but it wasn't a surprise to me then when he went off to create the world's state-of-the-art language model in Turkish on his own, in his spare time, with no budget, from scratch. This is not fine-tuning or whatever, he, like, went back to Common Crawl and did everything. Yeah, it's kind of, I don't know what I'd describe that process as, but it's not at all based on credentials.Swyx [00:25:17]: Assembled based on talent, yeah. We wanted to dive in a little bit more on, you know, turning from the people side of things into the technical bets that you're making. Just a little bit more on BERT. I was actually, we just did an interview with Yi Tay from Reka, I don't know if you're familiar with his work, but also another encoder-decoder bet, and one of his arguments was actually people kind of over-index on the decoder-only GPT-3 type paradigm. I wonder if you have thoughts there that are maybe non-consensus as well. Yeah, no, absolutely.Jeremy [00:25:45]: So I think it's a great example. So one of the people we're collaborating with a little bit on BERT24 is Colin Raffel, who is the guy behind, yeah, most of that stuff, you know, between that and UL2, there's a lot of really interesting work. And so one of the things I've been encouraging the BERT group to do, Colin has as well, is to consider using a T5 pre-trained encoder backbone as a thing you fine-tune, which I think would be really cool. You know, Colin was also saying actually just use encoder-decoder as your BERT, you know, why don't you like use that as a baseline, which I also think is a good idea. Yeah, look.Swyx [00:26:25]: What technical arguments are people under-weighting?Jeremy [00:26:27]: I mean, Colin would be able to describe this much better than I can, but I'll give my slightly non-expert attempt. Look, I mean, think about like diffusion models, right? Like in stable diffusion, like we use things like UNet. You have this kind of downward path and then in the upward path you have the cross connections, which, it's not attention, but it's like a similar idea, right? You're inputting the original encoding path into your decoding path. It's critical to make it work, right? Because otherwise in the decoding part, the model has to do so much kind of from scratch. So like if you're doing translation, like that's a classic kind of encoder-decoder example. If it's decoder only, you never get the opportunity to find the right, you know, feature engineering, the right feature encoding for the original sentence. And it kind of means then on every token that you generate, you have to recreate the whole thing, you know? So if you have an encoder, it's basically saying like, okay, this is your opportunity, model, to create a really useful feature representation for your input information. So I think there's really strong arguments for encoder-decoder models anywhere that there is this kind of like context or source thing. And then why encoder only? Well, because so much of the time what we actually care about is a classification, you know? It's an output; it's not generating an arbitrary length sequence of tokens.
So anytime you're not generating an arbitrary length sequence of tokens, decoder models don't seem to make much sense. Now the interesting thing is, you see on like Kaggle competitions, that decoder models still are at least competitive with things like DeBERTa v3. They have to be way bigger to be competitive with things like DeBERTa v3. And the only reason they are competitive is because people have put a lot more time and money and effort into training the decoder-only ones, you know? There isn't a recent DeBERTa. There isn't a recent BERT. Yeah, it's a whole part of the world that people have slept on a little bit. And this is just what happens. This is how trends happen rather than like, to me, everybody should be like, oh, let's look at the thing that has shown signs of being useful in the past, but nobody really followed up with properly. That's the more interesting path, you know, where people tend to be like, oh, I need to get citations. So what's everybody else doing? Can I make it 0.1% better, you know, or 0.1% faster? That's what everybody tends to do. Yeah. So I think it's like, Yi Tay's work commercially now is interesting because here's like a whole model that's been trained in a different way. So there's probably a whole lot of tasks it's probably better at than GPT and Gemini and Claude. So that should be a good commercial opportunity for them if they can figure out what those tasks are.Swyx [00:29:07]: Well, if rumors are to be believed, and he didn't comment on this, but, you know, Snowflake may figure out the commercialization for them. So we'll see.Jeremy [00:29:14]: Good.Alessio [00:29:16]: Let's talk about FSDP, QLoRA, QDoRA, and all of that awesome stuff. One of the things we talked about last time, some of these models are meant to run on systems that nobody can really own, no single person. And then you were like, well, what if you could fine tune a 70B model on like a 4090? And I was like, no, that sounds great, Jeremy, but like, can we actually do it? And then obviously you all figured it out. Can you maybe tell us some of the war stories behind that, like the idea behind FSDP, which is kind of fully sharded data parallel computation, and then QLoRA, which is, do not touch all the weights, just go quantize some of the model, and then within the quantized model only do certain layers instead of doing everything.Jeremy [00:29:57]: Well, do the adapters. Yeah.Alessio [00:29:59]: Yeah. Yeah. Do the adapters. Yeah. I will leave the floor to you. I think before you published it, nobody thought this was like a short term thing that we're just going to have. And now it's like, oh, obviously you can do it, but it's not that easy.Jeremy [00:30:12]: Yeah. I mean, to be honest, it was extremely unpleasant work to do. It's like not at all enjoyable. I kind of did version 0.1 of it myself before we had launched the company, or at least the kind of like the pieces. They're all pieces that are difficult to work with, right? So for the quantization, you know, I chatted to Tim Dettmers quite a bit and, you know, he very much encouraged me by saying like, yeah, it's possible. He actually thought it'd be easy. It probably would be easy for him, but I'm not Tim Dettmers. And, you know, so he wrote bitsandbytes, which is his quantization library. You know, he wrote that for a paper. He didn't write that to be production-like code. It's now like everybody's using it, at least the CUDA bits. So like, it's not particularly well structured.
There's lots of code paths that never get used. There's multiple versions of the same thing. You have to try to figure it out. So trying to get my head around that was hard. And you know, because the interesting bits are all written in CUDA, it's hard to like step through it and see what's happening. And then, you know, FSDP is this very complicated library in PyTorch, which is not particularly well documented. So the only real way to understand it properly is again, just read the code and step through the code. And then like bitsandbytes doesn't really work in practice unless it's used with PEFT, the HuggingFace library, and PEFT doesn't really work in practice unless you use it with other things. And there's a lot of coupling in the HuggingFace ecosystem where like none of it works separately. You have to use it all together, which I don't love. So yeah, trying to just get a minimal example that I can play with was really hard. And so I ended up having to rewrite a lot of it myself to kind of create this like minimal script. One thing that helped a lot was Meta had this llama-recipes repo that came out just a little bit before I started working on that. And like they had a kind of role model example of like, here's how to train with FSDP and LoRA (it didn't work with QLoRA) on Llama. A lot of the stuff I discovered, the interesting stuff, had been put together by Less Wright, who's, he was actually the guy in the Fast.ai community I mentioned who created the Ranger Optimizer. So he's doing a lot of great stuff at Meta now. So yeah, I kind of, that helped get some minimum stuff going and then it was great once Benjamin and Jono joined full time. And so we basically hacked at that together and then Kerem joined like a month later or something. And it was like, gee, it was just a lot of like fiddly detailed engineering on like barely documented bits of obscure internals. So my focus was to see if it kind of could work and I kind of got a bit of a proof of concept working and then the rest of the guys actually did all the work to make it work properly. And, you know, every time we thought we had something, you know, we needed to have good benchmarks, right? So we'd like, it's very easy to convince yourself you've done the work when you haven't, you know, so then we'd actually try lots of things and be like, oh, and in these like really important cases, the memory use is higher, you know, or it's actually slower. And we'd go in and we'd just find like all these things that were nothing to do with our library that just didn't work properly. And nobody had noticed they hadn't worked properly because nobody had really benchmarked it properly. So we ended up, you know, trying to fix a whole lot of different things. And even as we did so, new regressions were appearing in like transformers and stuff that Benjamin then had to go away and figure out like, oh, how come flash attention doesn't work in this version of transformers anymore with this set of models and like, oh, it turns out they accidentally changed this thing, so it doesn't work. You know, there's just, there's not a lot of really good performance-type evals going on in the open source ecosystem. So there's an extraordinary amount of like things where people say like, oh, we built this thing and it has this result. And when you actually check it, so yeah, there's a shitload of war stories from getting that thing to work.
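(Editor's note: for orientation, the single-GPU pieces Jeremy is describing fit together roughly like this in today's HuggingFace stack: bitsandbytes for the 4-bit base, PEFT for the adapters. The checkpoint name and hyperparameters are placeholders, and the hard part Answer.AI built, sharding the quantized base across GPUs with FSDP, is exactly what this sketch does not show.)

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                  # quantize the frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",       # placeholder checkpoint; any causal LM works
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)     # small trainable adapters over the 4-bit base
model.print_trainable_parameters()
```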
And it did require a particularly like tenacious group of people and a group of people who don't mind doing a whole lot of kind of like really janitorial work, to be honest, to get the details right, to check them. Yeah.Alessio [00:34:09]: We had Tri Dao on the podcast and we talked about how a lot of it is like systems work to make some of these things work. It's not just like beautiful, pure math that you do on a blackboard. It's like, how do you get into the nitty gritty?Jeremy [00:34:22]: I mean, flash attention is a great example of that. Like it's, it basically is just like, oh, let's just take the attention and just do the tiled version of it, which sounds simple enough, you know, but then implementing that is challenging at lots of levels.Alessio [00:34:36]: Yeah. What about inference? You know, obviously you've done all this amazing work on fine tuning. Do you have any research you've been doing on the inference side, how to make local inference really fast on these models too?Jeremy [00:34:47]: We're doing quite a bit on that at the moment. We haven't released too much there yet. But one of the things I've been trying to do is also just to help other people. And one of the nice things that's happened is that a couple of folks at Meta, including Mark Saroufim, have done a nice job of creating this CUDA mode community of people working on like CUDA kernels or learning about that. And I tried to help get that going well as well and did some lessons to help people get into it. So there's a lot going on in both inference and fine tuning performance. And a lot of it's actually happening kind of related to that. So the PyTorch team have created this Torch AO project on quantization. And so there's a big overlap now between kind of the FastAI and AnswerAI and CUDA mode communities of people working on stuff for both inference and fine tuning. But we're getting close now. You know, our goal is that nobody should be merging models, nobody should be downloading merged models, everybody should be using basically quantized plus adapters for almost everything and just downloading the adapters. And that should be much faster. So that's kind of the place we're trying to get to. It's difficult, you know, because like Kerem's been doing a lot of work with vLLM, for example. These inference engines are pretty complex bits of code. They have a whole lot of custom kernel stuff going on as well, as do the quantization libraries. So we've been working on, we're also doing quite a bit of collaborating with the folks who do HQQ, which is a really great quantization library and works super well. So yeah, there's a lot of other people outside AnswerAI that we're working with a lot who are really helping on all this performance optimization stuff, open source.Swyx [00:36:27]: Just to follow up on merging models, I picked up there that you said nobody should be merging models. That's interesting because obviously a lot of people are experimenting with this and finding interesting results. I would say in defense of merging models, you can do it without data. That's probably the only thing it has going for it.Jeremy [00:36:45]: To explain, it's not that you shouldn't merge models. You shouldn't be distributing a merged model. You should distribute a merged adapter 99% of the time. And actually often one of the best things happening in the model merging world is that often merging adapters works better anyway.
The point is, Sean, that once you've got your new model, if you distribute it as an adapter that sits on top of a quantized model that somebody's already downloaded, then it's a much smaller download for them. And also the inference should be much faster because you're not having to transfer FP16 weights from HBM at all or ever load them off disk. You know, all the main weights are quantized and the only floating point weights are in the adapters. So that should make both inference and fine tuning faster. Okay, perfect.Swyx [00:37:33]: We're moving on a little bit to the rest of the fast universe. I would have thought that, you know, once you started Answer.ai, that the sort of fast universe would be kind of on hold. And then today you just dropped fastlite and it looks like, you know, there's more activity going on in sort of Fastland.Jeremy [00:37:49]: Yeah. So Fastland and Answerland are not really distinct things. Answerland is kind of like the Fastland grown up and funded. They both have the same mission, which is to maximize the societal benefit of AI broadly. We want to create thousands of commercially successful products at Answer.ai. And we want to do that with like 12 people. So that means we need a pretty efficient stack, you know, like quite a few orders of magnitude more efficient, not just for creation, but for deployment and maintenance than anything that currently exists. People often forget about the D part of our R&D firm. So we've got to be extremely good at creating, deploying and maintaining applications, not just models. Much to my horror, the story around creating web applications is much worse now than it was 10 or 15 years ago in terms of, if I say to a data scientist, here's how to create and deploy a web application, you know, either you have to learn JavaScript or TypeScript and about all the complex libraries like React and stuff, and all the complex like details around security and web protocol stuff around how you then talk to a backend and then all the details about creating the backend. You know, if that's your job and, you know, you have specialists who work in just one of those areas, it is possible for that to all work. But compared to like, oh, write a PHP script and put it in the home directory that you get when you sign up to this shell provider, which is what it was like in the nineties, you know, here are those 25 lines of code and you're done and now you can pass that URL around to all your friends, or put this, you know, .pl file inside the cgi-bin directory that you got when you signed up to this web host. So yeah, the thing I've been mainly working on the last few weeks is fixing all that. And I think I fixed it. I don't know if this is an announcement, but I tell you guys, so yeah, there's this thing called FastHTML, which basically lets you create a complete web application in a single Python file. Unlike excellent projects like Streamlit and Gradio, you're not working on top of a highly abstracted thing that's got nothing to do with web foundations. You're working with web foundations directly, but you're able to do it by using pure Python. There are no templates, there's no Jinja, there are no separate, like, CSS and JavaScript files. It looks and behaves like a modern SPA web application. And you can create components for, like, DaisyUI, or Bootstrap, or Shoelace, or whatever fancy JavaScript and/or CSS Tailwind etc. library you like, but you can write it all in Python.
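(Editor's note: the canonical FastHTML hello world, lightly commented, looks like this at the time of writing; consult the FastHTML docs for the current API.)

```python
from fasthtml.common import *

app, rt = fast_app()  # an app object plus a route decorator, with sensible defaults

@rt("/")
def get():
    # Components are plain Python callables that render to HTML
    return Titled("Hello", P("A complete web application in a single Python file."))

serve()  # run with: python main.py, then open http://localhost:5001
```

Handlers return Python components rather than template strings, and HTMX attributes turn those same components into the interactive, SPA-feeling parts.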
You can pip install somebody else's set of components and use them entirely from Python. You can develop and prototype it all in a Jupyter notebook if you want to. It all displays correctly, so you can like interactively do that. And then you mentioned fastlite, so specifically now if you're using SQLite in particular, it's like ridiculously easy to have that persistence, and all of your handlers will be passed database-ready objects automatically, that you can just call .delete, .update, .insert on. Yeah, you get sessions, you get security, you get all that. So again, like with most everything I do, it's very little code. It's mainly tying together really cool stuff that other people have written. You don't have to use it, but a lot of the best stuff comes from its incorporation of HTMX, which to me is basically the thing that changes your browser to make it work the way it always should have. So it just does four small things, but those four small things are the things that are basically unnecessary constraints that HTML should never have had, so it removes the constraints. It sits on top of Starlette, which is a very nice kind of lower level platform for building these kind of web applications. The actual interface matches as closely as possible to FastAPI, which is a really nice system for creating the kind of classic JavaScript type applications. And Sebastian, who wrote FastAPI, has been kind enough to help me think through some of these design decisions, and so forth. I mean, everybody involved has been super helpful. Actually, I chatted to Carson, who created HTMX, you know, about it. Some of the folks involved in Django, like everybody in the community I've spoken to definitely realizes there's a big gap to be filled around, like, a highly scalable, web foundation-based, pure Python framework with a minimum of fuss. So yeah, I'm getting a lot of support and trying to make sure that FastHTML works well for people.Swyx [00:42:38]: I would say, when I heard about this, I texted Alessio. I think this is going to be pretty huge. People consider Streamlit and Gradio to be the state of the art, but I think there's so much to improve, and having what you call web foundations and web fundamentals at the core of it, I think, would be really helpful.Jeremy [00:42:54]: I mean, it's based on 25 years of thinking and work for me. So like, FastMail was built on a system much like this one, but that was in Perl. And so I spent, you know, 10 years working on that. We had millions of people using that every day, really pushing it hard. And I really always enjoyed working in that. Yeah. So, you know, and obviously lots of other people have done like great stuff, and particularly HTMX. So I've been thinking about like, yeah, how do I pull together the best of the web framework I created for FastMail with HTMX? There's also things like Pico CSS, which is the CSS framework which FastHTML comes with by default. Although, as I say, you can pip install anything you want to, but it makes it like super easy to, you know, so we try to make it so that just out of the box, you don't have any choices to make. Yeah. You can make choices, but for most people, you just, you know, it's like the PHP in your home directory thing. You just start typing and just by default, you'll get something which looks and feels, you know, pretty okay. And if you want to then write a version of Gradio or Streamlit on top of that, you totally can.
And then the nice thing is if you then write it in kind of the Gradio equivalent, which will be, you know, I imagine we'll create some kind of pip installable thing for that. Once you've outgrown it, or if you outgrow that, it's not like, okay, throw that all away and start again in this like whole separate language. It's like this kind of smooth, gentle path that you can take step-by-step, because it's all just standard web foundations all the way, you know.Swyx [00:44:29]: Just to wrap up the sort of open source work that you're doing, you're aiming to create thousands of projects with a very, very small team. I haven't heard you mention once AI agents or AI developer tooling or AI code maintenance. I know you're very productive, but you know, what is the role of AI in your own work?Jeremy [00:44:47]: So I'm making something. I'm not sure how much I want to say just yet.Swyx [00:44:52]: Give us a nibble.Jeremy [00:44:53]: All right. I'll give you the key thing. So I've created a new approach. It's not called prompt engineering. It's called dialogue engineering. But I'm creating a system for doing dialogue engineering. It's currently called AI magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it. So I always just build stuff for myself and hope that it'll be useful for somebody else. Think about ChatGPT with Code Interpreter, right? The basic UX is the same as a 1970s teletype, right? So if you wrote APL on a teletype in the 1970s, you typed onto a thing, your words appeared at the bottom of a sheet of paper and you'd like hit enter and it would scroll up. And then the answer from APL would be printed out, scroll up, and then you would type the next thing. And like, which is also the way, for example, a shell works like bash or zsh or whatever. It's not terrible, you know, like we all get a lot done in these like very, very basic teletype style REPL environments, but I've never felt like it's optimal and everybody else has just copied ChatGPT. So it's also the way Bard and Gemini work. It's also the way the Claude web app works. And then you add Code Interpreter. And the most you can do is to like plead with ChatGPT to write the kind of code you want. It's pretty good for very, very, very beginner users who like can't code at all, like by default now the code's even hidden away, so you never even have to see it ever happened. But for somebody who's like wanting to learn to code or who already knows a bit of code or whatever, it seems really not ideal. So okay, that's one end of the spectrum. The other end of the spectrum, which is where Sean's work comes in, is, oh, you want to do more than ChatGPT? No worries. Here is Visual Studio Code. I run it. There's an empty screen with a flashing cursor. Okay, start coding, you know, and it's like, okay, you can use systems like Sean's or like Cursor or whatever to be like, okay, Cmd-K in Cursor is like a little form that, blah, blah, blah. But in the end, it's like a convenience over the top of this incredibly complicated system that full-time sophisticated software engineers have designed over the past few decades in a totally different environment as a way to build software, you know. And so we're trying to like shoehorn AI into that. And it's not easy to do. And I think there are like much better ways of thinking about the craft of software development in a language model world to be much more interactive, you know.
So the thing that I'm building is neither of those things. It's something between the two. And it's built around this idea of crafting a dialogue, you know, where the outcome of the dialogue is the artifacts that you want, whether it be a piece of analysis or whether it be a Python library or whether it be a technical blog post or whatever. So as part of building that, I've created something called Claudette, which is a library for Claude. I've created something called Cosette, which is a library for OpenAI. They're libraries which are designed to make those APIs much more usable, much easier to use, much more concise. And then I've written AI magic on top of those. And that's been an interesting exercise because I did Claudette first, and I was looking at what Simon Willison did with his fantastic LLM library. And his library is designed around like, let's make something that supports all the LLM inference engines and commercial providers. I thought, okay, what if I did something different, which is like make something that's as Claude-friendly as possible and forget everything else. So that's what Claudette was. So for example, one of the really nice things in Claude is prefill. By telling the assistant that this is what your response starts with, there are a lot of powerful things you can take advantage of. So yeah, I created Claudette to be as Claude-friendly as possible. And then after I did that, and then particularly with GPT-4o coming out, I kind of thought, okay, now let's create something that's as OpenAI-friendly as possible. And then I tried to look to see, well, where are the similarities and where are the differences? And now can I make them compatible in places where it makes sense for them to be compatible without losing out on the things that make each one special for what they are. So yeah, those are some of the things I've been working on in that space. And I'm thinking we might launch AI magic via a course called How To Solve It With Code. The name is based on the classic Polya book, How to Solve It, which is, you know, one of the classic math books of all time, where we're basically going to try to show people how to solve challenging problems that they didn't think they could solve without doing a full computer science course, by taking advantage of a bit of AI and a bit of like practical skills, particularly for this like whole generation of people who are learning to code with and because of ChatGPT. Like I love it, I know a lot of people who didn't really know how to code, but they've created things because they use ChatGPT, but they don't really know how to maintain them or fix them or add things to them that ChatGPT can't do, because they don't really know how to code. And so this course will be designed to show you how you can like either become a developer who can supercharge their capabilities by using language models, or become a language-model-first developer who can supercharge their capabilities by understanding a bit about process and fundamentals.Alessio [00:50:19]: Nice. That's a great spoiler. You know, I guess the fourth time you're going to be on Latent Space, we're going to talk about AI magic. Jeremy, before we wrap, this was just a great run through everything. What are the things that when you next come on the podcast in nine, 12 months, we're going to be like, man, Jeremy was like really ahead of it? Like, is there anything that you see in the space that maybe people are not talking about enough?
You know, what's the next company that's going to fall, like have drama internally, anything in your mind?Jeremy [00:50:47]: You know, hopefully we'll be talking a lot about FastHTML and hopefully the international community that at that point has come up around that. And also about AI magic and about dialogue engineering. Hopefully dialogue engineering catches on because I think it's the right way to think about a lot of this stuff. What else? Just trying to think about it all on the research side. Yeah. I think, you know, I mean, we've talked about a lot of it. Like I think encoder-decoder architectures, encoder-only architectures, hopefully we'll be talking about like the whole renewed interest in BERT that BERT24 stimulated.Swyx [00:51:17]: There's a state space model that came out today that might be interesting for this general discussion. One thing that stood out to me with Cartesia's blog post was that they were talking about real time ingestion, billions and trillions of tokens, and keeping that context, obviously in the state space that they have.Jeremy [00:51:34]: Yeah.Swyx [00:51:35]: I'm wondering what your thoughts are because you've been entirely transformers the whole time.Jeremy [00:51:38]: Yeah. No. So obviously my background is RNNs and LSTMs. Of course. And I'm still a believer in the idea that state is something you can update, you know? So obviously Sepp Hochreiter just came out with xLSTM recently. Oh my God. Okay. Another whole thing we haven't talked about, just somewhat related. I've been going crazy for like a long time about like, why can I not pay anybody to save my KV cache? I just ingested the Great Gatsby or the documentation for Starlette or whatever, you know, I'm sending it as my prompt context. Why are you redoing it every time? So Gemini is about to finally come out with KV caching, and this is something that Austin actually in Gemma.cpp had had on his roadmap for years, well not years, months, a long time. The idea that the KV cache is like a thing that, it's a third thing, right? So there's RAG, you know, there's in-context learning, you know, and prompt engineering, and there's KV cache creation. I think it creates like a whole new class almost of applications or techniques where, you know, for me, for example, I very often work with really new libraries or I've created my own library that I'm now writing with rather than on. So I want all the docs in my new library to be there all the time. So I want to upload them once, and then we have a whole discussion about building this application using FastHTML. Well nobody's got FastHTML in their language model yet, I don't want to send all the FastHTML docs across every time. So one of the things I'm looking at doing in AI Magic actually is taking advantage of some of these ideas so that you can have the documentation of the libraries you're working on be kind of always available. Something over the next 12 months people will be spending time thinking about is how to like, where to use RAG, where to use fine-tuning, where to use KV cache storage, you know. And how to use state, because in state space models and xLSTM, again, state is something you update. So how do we combine the best of all of these worlds?Alessio [00:53:46]: And Jeremy, I know before you talked about how some of the autoregressive models are not maybe a great fit for agents.
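(Editor's note: as mentioned in the intro, Anthropic has since shipped exactly this kind of prompt caching, and it composes with the prefill trick Jeremy describes for Claudette above. Here's a hedged sketch using the raw Anthropic SDK as of the prompt-caching beta; the beta header, model name, and docs file are assumptions that may have changed by the time you read this.)

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

big_context = open("starlette_docs.md").read()  # hypothetical stand-in for "the docs I always want loaded"

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},  # beta opt-in at time of writing
    system=[{
        "type": "text",
        "text": big_context,
        "cache_control": {"type": "ephemeral"},  # ask the API to reuse this prefix across calls
    }],
    messages=[
        {"role": "user", "content": "Summarize the routing docs as JSON."},
        {"role": "assistant", "content": "{"},   # prefill: the reply must continue from this "{"
    ],
)
print("{" + response.content[0].text)
```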
Any other thoughts on like JEPA, diffusion for text, any interesting thing that you've seen pop up?Jeremy [00:53:58]: In the same way that we probably ought to have state that you can update, i.e. xLSTM and state space models, in the same way that a lot of things probably should have an encoder, JEPA and diffusion both seem like the right conceptual mapping for a lot of things we probably want to do. So the idea of like, there should be a piece of the generative pipeline, which is like thinking about the answer and coming up with a sketch of what the answer looks like before you start outputting tokens. That's where it kind of feels like diffusion ought to fit, you know. And diffusion, because it's not autoregressive, it's like, let's try to like gradually de-blur the picture of how to solve this. So this is also where dialogue engineering fits in, by the way. So with dialogue engineering, one of the reasons it's working so well for me is I use it to kind of like craft the thought process before I generate the code, you know. So yeah, there's a lot of different pieces here and I don't know how they'll all kind of exactly fit together. I don't know if JEPA is going to actually end up working in the text world. I don't know if diffusion will end up working in the text world, but they seem to be trying to solve a class of problem which is currently unsolved.Alessio [00:55:13]: Awesome, Jeremy. This was great, as usual. Thanks again for coming back on the pod and thank you all for listening. Yeah, that was fantastic. Get full access to Latent Space at www.latent.space/subscribe

GPT Reviews
Figure AI Robots

GPT Reviews

Play Episode Listen Later Mar 18, 2024 14:17


Figure, a leading AI robotics company, is making significant advancements in creating robots that can perceive their environment, make decisions, and take action, all in a way that aligns with human expectations. OpenAI may have accidentally leaked details about a new AI model called GPT-4.5 Turbo, which could level the playing field with Google's AI model Gemini. Two papers explore, respectively, holistic and contamination-free evaluation of large language models (LLMs) for code-related tasks and simple, scalable strategies to continually pre-train LLMs to save on compute. Another paper investigates scaling in the over-trained regime and relates language model perplexity to downstream task performance via a power law, providing useful insights into how language models can be scaled and evaluated more effectively. Contact: sergi@earkind.com Timestamps: 00:34 Introduction 01:28 Figure AI 03:02 Did OpenAI just accidentally leak the next big ChatGPT upgrade? 04:48 Gradio's Grog 05:47 Fake sponsor 07:37 Simple and Scalable Strategies to Continually Pre-train Large Language Models 09:31 LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code 11:11 Language models scale reliably with over-training and on downstream tasks 13:07 Outro
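
To make the perplexity-to-downstream power law concrete, here is a purely illustrative curve fit. The data points below are synthetic, and the paper's actual functional form and coefficients are not reproduced here; this only shows the shape of relationship being described.

```python
# Illustration only: fit a power law relating LM perplexity to a downstream
# error rate. All numbers are made up for the sake of the example.
import numpy as np
from scipy.optimize import curve_fit

perplexity = np.array([20.0, 15.0, 12.0, 10.0, 8.0, 6.0])          # synthetic
downstream_error = np.array([0.62, 0.55, 0.50, 0.46, 0.41, 0.35])  # synthetic

def power_law(ppl, a, b):
    # error = a * perplexity**b; b > 0 means lower perplexity -> lower error
    return a * ppl ** b

(a, b), _ = curve_fit(power_law, perplexity, downstream_error)
print(f"fit: error ~= {a:.3f} * perplexity**{b:.3f}")
print(f"extrapolated error at perplexity 5: {power_law(5.0, a, b):.3f}")
```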

CADE_DIVERSIDADE EDUCATIVA
Generative AI | Learning LLM Modeling with Deployment in Gradio

CADE_DIVERSIDADE EDUCATIVA

Play Episode Listen Later Feb 26, 2024 4:43


The top AI news from the past week, every ThursdAI

Holy SH*T! These two words have been said on this episode multiple times, way more than ever before I want to say, and it's because we got 2 incredibly exciting breaking news announcements in a very, very short amount of time (in the span of 3 hours), and the OpenAI announcement came as we were recording the space, so you'll get to hear our live reaction to this insanity. We also had 3 deep-dives, which I am posting in this week's episode: we chatted with Yi Tay and Max Bane from Reka, which trained and released a few new foundational multimodal models this week, and with Dome and Pablo from Stability, who released a new diffusion model called Stable Cascade, and we finally got a chance to turn the microphone back at Swyx (from Latent Space) for a conversation about his background, Latent Space, and AI Engineer. I was also very happy to be in SF today of all days, as my day is not over yet: there's still an event which we co-host together with A16Z, folks from Nous Research, Ollama, and a bunch of other great folks. Just look at all these logos! Open Source FTW

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Dan Kovalik, Joan Russow November 29, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Nov 29, 2023 59:59


Welcome to Gorilla Radio, recorded November 29th, 2023 Following fifty days of Israel's scorched earth attacks against Gaza, a four-day "pause" was brokered into being. As much as allowing humanitarian relief to the people of Gaza, the temporary respite in aerial and artillery bombing is meant too to interrupt the growing and increasingly adamant demonstrations of outrage around the World - most importantly within Western countries still unanimously supporting Israel despite the carnage. Meanwhile, in an unmistakable sign of cynicism, Israel's military is taking as many new prisoner hostages in West Bank home raids as it releases in exchange for its citizens captured October 7th. Dan Kovalik is a lawyer, educator, labour, peace, and justice activist, democracy defender, journalist, author, and filmmaker. His book titles include: 'Cancel This Book: The Progressive Case Against Cancel Culture', the "Plot to" series on American efforts to undermine the governments and economies of Iran, Venezuela, and Russia, 'No More War: How the West Violates International Law by Using 'Humanitarian' Intervention to Advance Economic and Strategic Interest', 'Nicaragua: A History of US Intervention & Resistance', and his latest, 'The Case for Palestine: Why It Matters and Why You Should Care'. Dan Kovalik in the first half. And; The 2023 United Nations Climate Change Conference, or Conference of the Parties of the UNFCCC, or COP 28, is ready to get underway tomorrow, November 30th. The two-week confab is hosted this year in Dubai, one of the Arab Gulf States' fossil-fuel superpowers. That fact is not lost on critics, who already charge that plans to allow oil companies into the meetings effectively make the COPs less an environmental emergency meeting than an oil dealers' bazaar. Dr. Joan Russow is former leader of the Green Party of Canada who since stepping down from the Greens has spent her time keeping the United Nations' feet to the fire as a reporter and filmmaker recording past climate change conferences. She's also producer of the film, 'Cooperatives: Counterpoint to Capitalism', and has served as the editor and driving force behind Peace Earth and Justice News, aka PEJNews. Joan Russow, and is COP 28 the Conference of the Parties' shark-jumping moment, in the second half. But first, Dan Kovalik making the Case for Palestine. Song: Humanitarian Pause Artist: David Rovics Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Kathy Kelly, Victoria Gaza Solidarity Rally & March November 22, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Nov 22, 2023 59:43


Welcome to Gorilla Radio, recorded November 22nd, 2023 This assault being waged against human decency in Palestine is a test; a test to gauge how much atrocity we watching will countenance. What we've allowed befall others in Yugoslavia, and Afghanistan, Libya, Somalia, Yemen, Syria, Ukraine, and elsewhere has metastasized, becoming finally the full-blown horror of Gaza. And it's a horror that will, if we allow it continue over there, in due course return to be visited upon us as well. So, why is it allowed time and again? And, who profits this belittlement of humanity? Kathy Kelly is a long-time peace and justice activist, essayist, author, and recipient of numerous awards for her peace service, including multiple nominations for the Nobel Peace prize. Kathy's book titles include, ‘Prisoners on Purpose: a Peacemakers Guide to Jails and Prison,' and ‘Other Lands Have Dreams: from Baghdad to Pekin Prison.' These days she's serving as Board President at World BEYOND War, where among other things, she's been busy co-coordinating the November 2023 Merchants of Death War Crimes Tribunal. The Tribunal launched Sunday, November 12, with the first segment examining the wanton and repeated criminality of the destruction of Gaza. Kathy Kelly in the first half. And; for the last six weeks, millions have gathered in cities and towns across the World to express their collective outrage at what is happening right now in Palestine, and in an effort to pressure their respective governments to demand Israel stop its indefensible destruction of Gaza and its people. Victoria, British Columbia is no different, where every week since Israel's onslaught began citizens have come to the Legislative Buildings, seat of the provincial government, to raise their voices against the statuary and granite facades in hopes of moving their representatives. Soundscapes from the Palestine solidarity manifestations in the second half. But first, Kathy Kelly and trying the profiteers at 2023's Merchants of Death War Crimes Tribunal. Song: Nails in the Wall Artist: Speedy J. - Kait Gray From Grant Wakefield's, The Fire This Time, 2002. Photograph: Munition workers painting shells at the National Shell Filling Factory No.6, Chilwell, Nottinghamshire in 1917. This was one of the largest shell factories in the country, circa 1917. Photo by Horace Nicholls, Public domain via Wikimedia Commons. Photograph: https://www.instagram.com/khaledbeydoun/p/CzG4c2tATXn/   Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Jeremy Kuzmarov, James Bissett November 15, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Nov 15, 2023 58:45


Welcome to Gorilla Radio, recorded November 15th, 2023 If there is any value in staying informed: reading the papers, watching television reports, subscribing to online journalists and aggregators, it is to be prepared. But even the most assiduous, meticulously matriculated self-education can lead to where the familiar becomes alien, and once confident knowledge incognizance. It's then, when the world fails to make sense, we ask, "How did we get here?" Answering that requires a broader perspective than is possible with our noses pressed against the present; for that, a little historical distance is prescriptive. For example, we can't appreciate why the United States is where it is in November, 2023 without knowing what happened November 22nd, 1963. Likewise, understanding Israel's actions today means revealing the real events of November 4th, 1995. Jeremy Kuzmarov is a journalist and author who also serves as Managing Editor at CovertActionMagazine.com. His book titles include, ‘Obama's Unending Wars', ‘The Russians Are Coming, Again', written with John Marciano, and his latest, fresh from the printer's, 'Warmonger: How Clinton's Malign Foreign Policy Launched the US Trajectory from Bush II to Biden'. Jeremy's recent article at CAM, 'Yigal Amir is Israel's Oswald' examines the day prime minister, Yitzhak Rabin was assassinated and how that foul deed helped make Israel what it is today. Jeremy Kuzmarov in the first half. And; for millions inside the country and out, Canada seems unrecognizable today. From saluting Nazis in Parliament, to standing in opposition to peace and human rights resolutions at the United Nations, whither the familiar northern beacon of bland? James Bissett is a former Canadian Ambassador whose tenure in Yugoslavia coincided with that country's 1991 dissolution. And, at century's end he was one of the very few government insiders to oppose NATO's 78 day bombardment of Serbia in the name of “humanitarian intervention.” James Bissett and finding Canada in the second half. But first, Jeremy Kuzmarov and Israel's infamous sacrifice. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, William S. Geimer, Andy Worthington November 8, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Nov 8, 2023 59:18


Welcome to Gorilla Radio, recorded November 8th, 2023 A tapestry of Pablo Picasso's haunting Guernica hangs mutely in the foyer of the United Nations Security Council. The 25 by 11 foot recreation of the Spanish artist's cri de coeur recalling the ruthless bombing of civilians in that Basque town in 1937 had served as backdrop for press conferences by that august body until February 3rd, 2003 when, fearing life imitating art in the form of George Bush's coming "shock and awe" destruction of Baghdad, the UN custodians of decorum covered it from the cameras, preemptively shielding Secretary of State, Colin Powell from any embarrassment the image might cause America and its accomplices as he laid forth his bogus casus belli for the second Iraq war. The "Blitzkrieg" of Guernica inspiring Picasso so long ago has been reenacted daily in Gaza for the last month; with many times more men, women, children, and animals killed, injured, made homeless, and traumatized. Thousands are dead. Hundreds of thousands wounded, but the great resistance feared by Bush and his claque way back when is apparently not a worry for current leaders of the Western world, who remain unanimous and unapologetic in their support of Israel and its project for the new Israel despite the horrendous cost. William S. Geimer is a peace activist, Professor Emeritus of Law at Washington and Lee University, military veteran who resigned his 82nd Airborne commission in opposition to the war against Vietnam, and author of the book, 'Canada: The Case for Staying Out of Other People's Wars'. In 2020 Bill founded the Greater Victoria Peace School, which he says has, since his retirement, "been a success and my board members are carrying on admirably with peace education. William S. Geimer in the first half. And; most here have probably forgotten entirely about Guantanamo Bay prison, set up so long ago to confine "the worst of the worst" in George Bush junior's Global War on Terror. Incredibly, all these years later, prisoners still languish there, without the benefit of the protection of laws that were once the pride of Western Civilization. Andy Worthington remembers though; and more, he's been working all these years to both publicize the plight of those held, and get them real justice. Last week he attended in London one of the "ten coordinated global vigils for the closure of Guantanamo". He reminds: Sixteen of the thirty men are still held, after having been approved for release, as of November 1st, between 404 and 5,031 days. Andy Worthington and remembering Guantanamo in the second half. But first, Bill Geimer and the Gaza atrocity happening before our eyes. Links: Combatants for Peace American Friends of Combatants for Peace World Beyond War Merchants of Death Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, John Helmer (extended) October 29, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Oct 30, 2023 58:08


Welcome to Gorilla Radio, recorded October 29th, 2023 Is there method in this madness? Was the ruthless destruction of decency and the foundational principles of the West, and presumably Israel, the intention of the current onslaught in Gaza (and increasingly underscored by the marauding bands of murderous "settlers" in the West Bank)? Or has it all just gotten away from the perennial felon, prime minister Benjamin Netanyahu, originally hoping perhaps just for a distraction from the determined - and massive - demonstrations against his government? Whither Israel then, and where does the disaster it has wrought for Palestine's people lead it now? John Helmer is a journalist and author who's spent decades living in and reporting from Russia. Principal behind the web news site, Dances with Bears, Helmer has too been a professor of political science, sociology, and journalism, and served as advisor to governments at the highest levels. Among his many book titles are: 'Skripal in Prison,' 'The Man Who Knows Too Much About Russia,' 'The Jackals' Wedding: American Power, Arab Revolt', 'The Lie That Shot Down MH-17', and his latest, 'SOVCOMPLOT: How Pirates Tried to Capture the Treasure of the Russian Seas, and Were Caught Out'. John's latest article at Dances with Bears examines the question of what becomes of "the only democracy in the Middle East" now that its long-promised war is begun. John Helmer and 'THE GODS ARE GOING AGAINST THE CHOSEN PEOPLE — MAMMON AGAINST ISRAEL, MARS AGAINST THE PENTAGON'. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Dimitri Lascaris, John Helmer October 28/29, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Oct 30, 2023 59:43


Welcome to Gorilla Radio, recorded October 28th and 29th, 2023 Having watched the incremental disintegration of Western Civilization, such as it was, over the last forty and something years, it still comes as a surprise to witness the depravity, viciousness, and yes, banality of the evil now run rampant. And this is without regarding the unspeakable Israel - enacting a literal genocide even now, every moment - but the revelation of the West's moral bankruptcy that Israel's ultimate crime illustrates. Certainly the blame for the killing of Gaza is on the governments and institutions of America, Europe, the NATO nations - Canada, the rest of the Anglo-American Alliance, and their satraps around the World - but the blood of innocents stains our hands too; we the people who allowed this happen, and allow it continue. So, now what are you and I and we to do? Dimitri Lascaris is a Montreal-based activist, journalist, and lawyer. He served as Justice Critic in the Shadow Cabinet of the Green Party of Canada and likewise for the Green Party of Quebec. Dimitri's interviews for TRNN are at TheRealNews.com. Dimitri's articles appear at various sites online and at his website, DimitriLascaris.org, where I found his latest piece, 'If We Have Any Honour, We Will Defend the Palestinian People'. Dimitri Lascaris in the first half. And; is there method in this madness? Was the ruthless destruction of decency and the foundational principles of the West, and presumably Israel, the intention of the current onslaught in Gaza (and increasingly underscored by the marauding bands of murderous "settlers" in the West Bank)? Or has it all just gotten away from the perennial felon, prime minister Benjamin Netanyahu, originally hoping perhaps just for a distraction from the determined - and massive - demonstrations against his government? Whither Israel then, and where does the disaster it has wrought for Palestine's people lead it now? John Helmer is a journalist and author who's spent decades living in and reporting from Russia. Principal behind the web news site, Dances with Bears, Helmer has too been a professor of political science, sociology, and journalism, and served as advisor to governments at the highest levels. Among his many book titles are: 'Skripal in Prison,' 'The Man Who Knows Too Much About Russia,' 'The Jackals' Wedding: American Power, Arab Revolt', 'The Lie That Shot Down MH-17', and his latest, 'SOVCOMPLOT: How Pirates Tried to Capture the Treasure of the Russian Seas, and Were Caught Out'. John's latest article at Dances with Bears examines the question of what becomes of "the only democracy in the Middle East" now that its long-promised war is begun. John Helmer and 'THE GODS ARE GOING AGAINST THE CHOSEN PEOPLE — MAMMON AGAINST ISRAEL, MARS AGAINST THE PENTAGON' in the second half. But first, Dimitri Lascaris and the great unraveling beginning in the navel of the World. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

The top AI news from the past week, every ThursdAI

ThursdAI October 26th. Timestamps and full transcript for your convenience: [00:00:00] Intro and brief updates [00:02:00] Interview with Bo Weng, author of Jina Embeddings V2 [00:33:40] Hugging Face open sourcing a fast Text Embeddings [00:36:52] Data Provenance Initiative at dataprovenance.org [00:39:27] LocalLLama effort to compare 39 open source LLMs + [00:53:13] Gradio Interview with Abubakar, Xenova, Yuichiro [00:56:13] Gradio effects on the open source LLM ecosystem [01:02:23] Gradio local URL via Gradio Proxy [01:07:10] Local inference on device with Gradio-Lite [01:14:02] Transformers.js integration with Gradio-Lite [01:28:00] Recap and bye bye. Hey everyone, welcome to ThursdAI, this is Alex Volkov, I'm very happy to bring you another weekly installment of ThursdAI.
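
The "Gradio local URL via Gradio Proxy" segment refers to Gradio's share links: a locally running app gets tunneled through Gradio's proxy so that other people get a temporary public URL. A minimal sketch; the echo function is a placeholder, not anything from the episode.

```python
# Minimal Gradio app illustrating the share-link feature discussed above.
import gradio as gr

def echo(text: str) -> str:
    return text  # placeholder for a real model call

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# share=False (the default) serves only a local URL like http://127.0.0.1:7860.
# share=True additionally requests a temporary public *.gradio.live link,
# tunneled through Gradio's proxy, so the app stays running on your machine.
demo.launch(share=True)
```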

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Dan Kovalik, Tarek Loubani October 25, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Oct 25, 2023 59:39


Welcome to Gorilla Radio, recorded October 25th, 2023 Eighteen days into Israel's campaign of destruction in Gaza, thousands are dead, tens of thousands wounded, orphaned, and rendered homeless. And still the bombings continue, with the full support of every Western government. The slaughter of innocents through air attacks and distant artillery parallels the experience of civilians in Donbas, but elicits the opposite reaction in establishment media and Parliaments like Canada's, where unconditional public support of Israel is, as of this date, unanimous. The federal government's minority coalition partner, the NDP has proven most strident in its statements encouraging Israel, with its foreign affairs critic, Heather McPherson saying in the House, "Israel has every right to eradicate Hamas." Dan Kovalik is a lawyer, educator, labour, peace, and justice activist, democracy defender, journalist, author, and filmmaker. His book titles include: 'Cancel This Book: The Progressive Case Against Cancel Culture', the "Plot to" series on American efforts to undermine the governments and economies of Iran, Venezuela, Russia, 'No More War: How the West Violates International Law by Using 'Humanitarian' Intervention to Advance Economic and Strategic Interest', and his latest, 'Nicaragua: A History of US Intervention & Resistance', which "explores the pernicious nature of US engagement with Nicaragua from the mid-19th century to the present in pursuit of control and domination rather than in defense of democracy". Dan Kovalik in the first half. And; Israel's stated determined erasure of Hamas - and the apparent complete destruction of civic life on the Gaza Strip it says that ambition necessitates - is destroying more than the lives of the tens of thousands of captured Palestinians living in the besieged enclave. Across the Western nations allied with Israel's project, draconian anti-democratic laws are being drafted forbidding demonstrating in support of the Palestinian people and their just resistance to the brutal occupation, while social media outlets cancel those in support, and employers are pressured to fire people who attend rallies, or exercise their rights to free speech online. The latter is just such a case, where physician Dr. Ben Thomson was recently suspended by his employer, Ontario-based Mackenzie Richmond Hill Hospital for, as they put it, "social media posts... that do not reflect our views or values as an organization." Tarek Loubani is a London, Ontario-based doctor and humanitarian. Tarek runs the Glia Project, which seeks to provide medical supplies to impoverished locations, one of which is the al-Shifa Hospital in Gaza. Tarek Loubani and the silencing of Canadian humanitarian dissenters in the second half. But first, Dan Kovalik on war, more war, and nothing but war. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Ken Stone September 20, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Sep 20, 2023 30:05


Welcome to Gorilla Radio, recorded September 20th, 2023 Earlier this year, Britain decided to include depleted uranium shells with its deliveries of Challenger II tanks gifted to Ukraine. Russia charged the so-called CHARM3 munitions are in effect "dirty bombs" and the decision would effectively "nuclearize" the war there. DU munitions were first used in the Gulf War of 1991, again in the Former Yugoslavia, Afghanistan, in the 2003 invasion of Iraq, and Syria. The deleterious, long-term effects of DU are well documented, yet still contested by NATO and America, which recently announced it too would send depleted uranium weaponry to Ukraine. Ken Stone is an activist and author working for peace with the Syria Support Movement International and Hamilton Coalition to Stop the War. His book, 'Defiant Syria: Dispatches from the Second International Tour of Peace to Syria' is a compilation of his dispatches from the Second International Tour of Peace which visited Syria from April 12-18, 2016. Ken and the Hamilton Coalition are hosting the webinar, 'Cluster Bombs & Depleted Uranium Weapons in Ukraine: 2 More Reasons to End the War Now' today, September 20th. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Talk Python To Me - Python conversations for passionate developers
#430: Delightful Machine Learning Apps with Gradio

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Sep 19, 2023 59:43


So, you've got this amazing machine learning model you created. And you want to share it and let your colleagues and users experiment with it on the web. How do you get started? Learning Flask or Django? Great frameworks, but you might consider Gradio, which is a rapid-development UI framework for ML models. On this episode, we have Freddy Boulton to introduce us all to Gradio. Links from the show Freddy on Twitter: @freddy_alfonso_ Gradio: gradio.app Use as API Example: huggingface.co Components: gradio.app Svelte: svelte.dev Flutter UI/Code structure: docs.flutter.dev XKCD Matplotlib Theme: matplotlib.org Gradio XKCD Full Theme: huggingface.co PrivateGPT: ai.meta.com Langchain: docs.langchain.com pipdeptree: pypi.org Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to us on YouTube: youtube.com Follow Talk Python on Mastodon: talkpython Follow Michael on Mastodon: mkennedy Sponsors PyCharm Sentry Error Monitoring, Code TALKPYTHON Talk Python Training
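
The "Use as API" link above points at a Gradio feature worth spelling out: every running Gradio app also exposes a programmatic API. A minimal sketch with the gradio_client package; the URL is a placeholder for wherever your app happens to be running.

```python
# Call a running Gradio app programmatically instead of through its web UI.
from gradio_client import Client

# Point at a local app or a Hugging Face Space ID like "user/space-name".
client = Client("http://127.0.0.1:7860")  # placeholder address

client.view_api()  # prints the endpoints the app exposes and their signatures

# For a simple gr.Interface, the default endpoint is "/predict".
result = client.predict("hello", api_name="/predict")
print(result)
```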

Python Bytes
#353 Hatching Another Episode

Python Bytes

Play Episode Listen Later Sep 19, 2023 29:27


Topics covered in this episode: OverflowAI Switching to Hatch Alpha release of the Ruff formatter What is wrong with TOML? Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training Python Testing with pytest, full course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org Brian: @brianokken@fosstodon.org Show: @pythonbytes@fosstodon.org Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too. Michael #1: OverflowAI Integration of generative AI into our public platform, Stack Overflow for Teams, and brand new product areas, like an IDE integration. Have a conversation about the search results and proposed answer with GenAI Coming with IDE integration too. Check out the video on their page for some more detail than the article. Brian #2: Switching to Hatch Oliver Andrich Hatch has some interesting features The template built from hatch new myproject includes isolated dev, test, and lint virtual environments. Each env can have scripts Test matrix a la tox, but possibly easier to express complex matrices. May not even need tox then, but then you have hatch. A way to specify which optional dependencies are needed for the default environment. Notes from Brian One premise is that lots of projects are now using hatch. I don't know if that's true. A quick spot check of a few projects includes projects that use hatchling. While hatchling is the back end to hatch, they are not the same. I use hatchling a lot now, but haven't picked up using hatch. But I do want to try it more after reading this article. Michael #3: Alpha release of the Ruff formatter via Sky Kasko Charlie Marsh announced that an alpha version of a Ruff formatter has been released in Ruff v0.0.289. The formatter is designed to be a drop-in replacement for Black, but with an excessive focus on performance and direct integration with Ruff. Sky says: I can't find any benchmarks that have been released yet, but I did some extremely unscientific testing and found the Ruff formatter to be around 5 to 10 times faster than Black when running on already-formatted code or in a small codebase, and 75 times faster when running on a large codebase of unformatted code. (The second outcome probably isn't very important since most people would not often be formatting thousands of lines of completely unformatted code.) For more info, see the README: https://github.com/astral-sh/ruff/blob/main/crates/ruff_python_formatter/README.md Brian #4: What is wrong with TOML? Colm O'Connor Suggested by Will McGugan This is a comparison of TOML vs StrictYAML under the use case of "readable story tests". TL;DR: For smallish things like pyproject.toml, TOML is fine. For huge files, something like StrictYAML may be less horrible. From Brian: Short answer: Nothing, unless you're doing crazy things with it. Re "readable story tests": WTF? Neither of these are something I'd like to maintain. Extras Brian: Python Testing with pytest, the course New intro video to explain what the course is about Using Teachable video like notes, mini-viewer, and speed controls Chapter on "Testing Strategy" is next Michael: HTMX + Django: Modern Python Web Apps, Hold the JavaScript Course Coding in Rust? Here's a New IDE by JetBrains Delightful Machine Learning Apps with Gradio out on Talk Python Joke: The 5 stages of debugging

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
"Third wave" of AI: machines talking to machines and people; AI for hyper-personalized Maps; The Rise and Potential of LLM-Based Agents; Humans have five senses. How many does AI have?; AI artists banned by Google

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Sep 19, 2023 15:40


In today's episode, we'll cover DeepMind's prediction of a "third wave" of AI, Google and DeepMind's AI algorithm for personalized route suggestions, a comprehensive survey on LLM-based agents, MIT's Style2Fab AI tool for 3D printing, Meta's efforts in multimodal learning, Google's restriction on Gradio for free users, DeepMind's Optimization by PROmpting, Meta's AI-powered tools for marketers, SoftBank's consideration of investment or partnership with OpenAI, the partnership between Anthropic and BCG for enterprise AI solutions, DeepMind's vision of interactive AI chatbots, a roundup of AI-related news, and a recommendation of the book "AI Unraveled." Video: https://youtu.be/15OaAO9qsHA Mustafa Suleyman, co-founder of DeepMind, believes that we are on the cusp of a new era in artificial intelligence (AI). In what he refers to as the "third wave" of AI evolution, machines will not only communicate with humans but also with other machines. To understand this progression, let's take a quick look at the previous phases. The initial phase was focused on classification, specifically deep learning algorithms that could classify different types of data. Then came the generative phase, where AI systems used input data to create new information. But now, we're heading into the interactive phase. This is where machines will be capable of carrying out tasks by conversing not only with humans but also with other AI systems. Users will be able to provide high-level objectives to their AI and let it take the necessary actions, involving dialogue with both machines and individuals. This interactive AI has the potential to be more than just a tool for automation. It will possess the freedom and agency to execute tasks, bringing us closer to the AI we see in science fiction. Instead of being static, it will be dynamic and adaptable, much like the depictions of AI in movies. Interestingly, despite the excitement surrounding generative AI, there seems to be a decline in its popularity. User growth and web traffic for tools like ChatGPT have decreased. Suleyman's current company, Inflection AI, has released a rival to ChatGPT called Pi, which emphasizes its polite and conversational nature. Overall, it's clear that AI is rapidly advancing, and the future holds great promise for machines that can interact not only with humans but also with their own kind. Full transcript at: https://enoumen.com/2023/09/02/emerging-ai-innovations-top-trends-shaping-the-landscape-in-september-2023/ Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book "AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence," available at Apple, Google, or Amazon today!

TNT Radio
Chris Cook on War of the Worlds - 29 April 2023

TNT Radio

Play Episode Listen Later Apr 29, 2023 55:39


On today's show we discuss what has happened to dissent, political resistance, truth telling in the Canadian media - a history of descent into Hades? GUEST OVERVIEW: After being disenchanted with corporate news, Chris Cook began a career producing alternative media; first in radio, then migrating into the nascent online world through the Independent Media Center movement, and Peace Earth & Justice (PEJNews) site. In 2006 he took on the editor's role at Pacific Free Press news, sister site to Richard Kastelein's Atlantic Free Press. Now cancelled from his long-time radio sinecure, the Ape endures, producing weekly programs at Gorilla-Radio.com and GRadio.Substack.com 

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Yves Engler (extended) March 29, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 29, 2023 60:00


Welcome to Gorilla Radio, recorded March 29th and April 2nd, 2023. And welcome to the Spring of discontent! While leaders in the capitals of Europe and America promise to carry on the Ukraine proxy war no matter the price, the People are already in the streets protesting the real and rising costs of that belligerence. For Canadians, whose government is already full-throatedly behind: the bellicose stance against Russia, prospects of an expanded war against China and Iran, battles by sanction in Venezuela, Nicaragua, and elsewhere, there is too Haiti. Last week, the Trudeau Liberals blithely announced another $100 million to be delivered to the tiny island nation's military and police effort to keep the restive population under heel. This on top of millions already delivered. It's almost as if Ottawa believes the fatted tax goose's golden eggs will never flag. Yves Engler is an independent, Montreal-based journalist and author. He's written twelve books on Canadian foreign policy, including 'Canada in Haiti: Waging War on the Poor Majority', co-authored with Anthony Fenton. His recent article, 'Canadian government prioritizes war over climate crisis' is a troubling portrait of a country few here in our home and native land would recognize. Yves Engler and Trudeau of the Tropics, taking Haiti. Song: The Panic Is On Artist: David Rovics Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Jillian Maguire, Paul Watson March 22, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 23, 2023 58:51


Welcome to Gorilla Radio, recorded March 22, 2023 This weekend past, the British Columbia Federation of Teachers held its Annual General Meeting. On the agenda: the election of a new president, and an opportunity for the BCTF Divest Now campaign to platform its efforts to pursue a practical pathway to cease their pension plan's investments in "fossil fuels, and other life-killing industries." Jillian Maguire was a presidential candidate in the poll, held March 21st. Maguire's a longtime teacher who over a 27+ year career estimates she's launched more than 5,000 students into the World. She says she was shocked to discover the teachers' pension plan was heavily invested in some of the most ecologically destructive businesses, and so co-founded BCTF Divest Now. Jillian Maguire in the first half. And; we often forget, everything beginning on the land ends in the sea. As desperate as the situation of terrestrial pollution is, in the ocean the effects are amplified, concentrated, and brought back to us as inevitably as the tides. Paul Watson has spent a lifetime on and in the service of the ocean. A founding member of Greenpeace, he left that iconic organization at the height of its effectiveness to found the Sea Shepherd Society. Renowned for its direct action against whaling, most spectacularly intercepting the massive factory fleets of Japan, Norway, and Russia, Sea Shepherd has also exposed and shut down hundreds of other ecologically destructive commercial ventures and activities. Now Watson is helming another venture, the Captain Paul Watson Foundation, whose mission to save life in the sea is simply stated: "If the Ocean Dies, We Die!" Paul Watson and finding a way to protect and defend life in the sea in the second half. But first, Jillian Maguire and British Columbia's teachers instructing their union to listen and learn to divest. Song: Captain Artist: Tiny Milkshake Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: http://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Dimitri Lascaris, Brad Wolf March 15, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 15, 2023 61:16


Welcome to Gorilla Radio, recorded March 15th, 2023. Last week, Canada's Ministry of National Defence announced a "review of our Defence Policy". Though only five years into the Liberals' 20-year Defence Policy Update, or "DPU", DoD insists a new DPU is necessary if Canada is to, "be ready, resilient, and relevant to meet any threat in this changed global security environment". Dimitri Lascaris is a Montreal-based activist, journalist, and lawyer. He served as Justice Critic in the Shadow Cabinet of the Green Party of Canada and likewise for the Green Party of Quebec (PVQ). In 2020, Dimitri very nearly became leader of the Green Party of Canada, finishing second in a tightly-contested race with the now-departed Annamie Paul. Dimitri's interviews for TRNN are at TheRealNews.com, and his articles appear at his website, DimitriLascaris.org. This Saturday, March 18th, Dimitri will host 'The Art of Peace: Seeing the World Through the Eyes of Our 'Enemies'. The Special Webinar is his way of engaging with Canadians before embarking on a mission of peace to Russia next month. Dimitri Lascaris in the first half. And; there was a time, not so long ago, when war profiteers were held to be exemplars of humanity's basest instincts: Antitheses of Virtue, the very worst of the worst of Evil Doers, they were rightly and roundly despised. Now though, CEOs of Lockheed Martin and Raytheon and Boeing and Northrop Grumman and General Dynamics and all their lesser factotums are welcome and well-treated in the halls of power and major media teevee studios alike. Today, the profiteers need never fear official opprobrium, or being called to account for the bitter harvest of their dark seeds - at least not in the courts of the land. But there's another weighing of justice at hand. The Merchant of Death War Crimes Tribunal is coming - soon - and it promises to hold accountable the manufacturers of the weapons that kill combatants and non-combatants alike through the testimony of witnesses to the destruction wreaked and the crimes committed against Humanity with them. Brad Wolf is co-founder of the Peace Action Network of Lancaster, Pennsylvania, an affiliate of Peace Action and a partner of World BEYOND War. He's a lawyer, former prosecutor, professor, community college dean, and full-time activist for peace and justice. His writings are published at The Progressive, Common Dreams, CounterPunch, Antiwar.com, Consortium News, and Dappled Things among others. He recently authored a book on former priest, Philip Berrigan's collected writings, 'A Ministry of Risk'. He's also a key organizer with the Tribunal. Brad Wolf and attaching the human costs to the Merchants of Death in the second half. But first, Dimitri Lascaris and seeing the World and ourselves through the eyes of our "enemies". Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Ken Stone (extended) March 8th, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 9, 2023 59:59


Welcome to Gorilla Radio, recorded March 8th, 2023. March 20th marks the twentieth anniversary of America's second invasion of Iraq. Despite the tens of millions of people across the globe coming into the streets to hold at bay the dogs of George Bush's "generational war", Operation Iraqi Freedom's "shock and awe" - called "blitzkrieg" in another era - was launched. We all know what happened, and the failure of the People to stop the slaughter then and in Afghanistan in 2001 seemed to be the end of hope for the Peace Movement; but the flame for a World without War didn't die, and has in fact recently been spotted flickering in the capitals of Europe, Canada, and even in Washington, D.C. Ken Stone is an executive member of both the Syria Support Movement International and Hamilton Coalition to Stop the Wars. Today, Ken Stone and the smouldering desire for peace. Song: Work for Peace Artist: Gil Scott-Heron Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/ Correction: The U.S. House APPROVED a resolution to MAINTAIN the Caesar Syria Civilian Protection Act (2019) by an "overwhelming" margin (414-2), and not DEFEAT a motion to LIFT sanctions as reported. See: https://scheerpost.com/2023/03/02/house-overwhelmingly-approves-resolution-to-maintain-syria-sanctions-after-earthquake/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Ken Stone, Dan Kovalik March 8, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 8, 2023 59:59


Welcome to Gorilla Radio, recorded March 8th, 2023. March 20th marks the twentieth anniversary of America's second invasion of Iraq. Despite the tens of millions of people across the globe coming into the streets to hold at bay the dogs of George Bush's "generational war", Operation Iraqi Freedom's "shock and awe" - called "blitzkrieg" in another era - was launched. We all know what happened, and the failure of the People to stop the slaughter then and in Afghanistan in 2001 seemed to be the end of hope for the Peace Movement; but the flame for a World without War didn't die, and has in fact recently been spotted flickering in the capitals of Europe, Canada, and even in Washington, D.C. Ken Stone is an executive member of both the Syria Support Movement International and Hamilton Coalition to Stop the War. Ken Stone in the first half. And; far from fulfilling its mandate to first be an agent opposing war in the World, the United Nations' repeated failures in that seminal mission are now manifest in its endeavoring the opposite, the promotion of economic sanctions and military intervention. At least the recently released 'Group of Experts on Human Rights on Nicaragua' report leaves little else to conclude. Dan Kovalik is a lawyer, educator, labour, peace, and justice activist, democracy defender, journalist, author, and filmmaker. His book titles include: 'Cancel This Book: The Progressive Case Against Cancel Culture,' the "Plot to" series on American efforts to undermine the governments and economies of Iran, Venezuela, Russia, (and the World generally) and 'No More War: How the West Violates International Law by Using 'Humanitarian' Intervention to Advance Economic and Strategic Interest.' His latest is the recently released, 'Nicaragua: A History of US Intervention & Resistance', which "explores the pernicious nature of US engagement with Nicaragua from the mid-19th century to the present in pursuit of control and domination rather than in defense of democracy". Dan Kovalik and the latest chapter in the hybrid war against Nicaragua in the second half. But first, Ken Stone and the smouldering desire for peace. Song: Work for Peace Artist: Gil Scott-Heron Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/ Correction: The U.S. House APPROVED a resolution to MAINTAIN the Caesar Syria Civilian Protection Act (2019) by an "overwhelming" margin (414-2), and not DEFEAT a motion to LIFT sanctions as reported. See: https://scheerpost.com/2023/03/02/house-overwhelmingly-approves-resolution-to-maintain-syria-sanctions-after-earthquake/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Ray McGinnis (extended) March 4th, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 5, 2023 59:26


Welcome to Gorilla Radio, recorded March 1st and 4th, 2023. Last month, Justice Paul Rouleau held his nose and blessed Justin Trudeau's invocation of the Emergencies Act to shut down the Ottawa Anti-Vaccine Mandate protest in 2022. This though none of the “tests” within the Act for doing so were met by the actions of ‘Freedom Convoy'. And, though providing a post-imprimatur to the government's actions, even Rouleau says the means used to punish Canadians exercising their “democratic rights” to express political opposition to government policy, (like freezing bank accounts of participants' non-participating spouses) was “flawed”, he provides no legal remedies. Ray McGinnis is an author and retired educator. He says he “became concerned” with the disconnect between mainstream media and alternative livestream coverage of the Freedom Convoy. He subsequently attended the Public Order Emergency Commission hearings for a week in Ottawa last November, and his article on the event and its aftermath, ‘Commission Reveals that Trudeau Government Lied About Nature of Truckers Protests in Ottawa Last February to Justify Invocation of Emergencies Act‘ is published at CovertAction Magazine.   Today, Ray McGinnis and the Freedom Convoy's hard-learnt lessons for Canadians. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: http://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Ray McGinnis, Robert Freeman March 4th, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 4, 2023 59:59


Welcome to Gorilla Radio, recorded March 1st and 4th, 2023. Last month, Justice Paul Rouleau held his nose and blessed Justin Trudeau's invocation of the Emergencies Act to shut down the Ottawa Anti-Vaccine Mandate protest in 2022. This though none of the "tests" within the Act for doing so were met by the actions of 'Freedom Convoy'. And, though providing a post-imprimatur to the government's actions, even Rouleau says the means used to punish Canadians exercising their "democratic rights" to express political opposition to government policy, (like freezing bank accounts of participants' non-participating spouses) was "flawed", he provides no legal remedies. Ray McGinnis is an author and retired educator. He says he "became concerned" with the disconnect between mainstream media and alternative livestream coverage of the Freedom Convoy. He subsequently attended the Public Order Emergency Commission hearings for a week in Ottawa last November, and his article on the event and its aftermath, 'Commission Reveals that Trudeau Government Lied About Nature of Truckers Protests in Ottawa Last February to Justify Invocation of Emergencies Act' is published at CovertAction Magazine. Ray McGinnis in the first half. And; much has been made of the first anniversary of Russia's so-called "Special Operation" in Ukraine by the western press. Countless hours of television, and oceans of ink have been spilt to convince citizens in NATO nations of the righteousness of Kyiv's cause - and more importantly - of "our" noble motives in supplying its army with billions of dollars and an incomprehensible amount of high-tech weaponry. Robert Freeman is Founder and Executive Director of The Global Uplift Project. He's a past educator, and author of 'The Best One Hour History' series of books covering history from 'The Renaissance' and 'The Scientific Revolution' to 'The Protestant Reformation', 'French Revolution' and great wars of the last century. Robert's recent article, published at CommonDreams.org, 'Ukraine and the Tunnel at the End of the Light' is a hard-eyed assessment of both the disaster that is the Ukraine/Russia war, and the doomed political and economic dynamics behind the conflict. Robert Freeman and shedding the rosy aura around Ukraine's war prospects in the second half. But first, Ray McGinnis and the Freedom Convoy's hard-learnt lessons for Canadians.   Song: After the Revolution (from the album Return) Artist: David Rovics   Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: http://gorillaradioblog.blogspot.com/  

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Robert Freeman March 1st, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Mar 1, 2023 27:34


Welcome to Gorilla Radio, recorded March 1st, 2023. Much has been made of the first anniversary of Russia's so-called "Special Operation" in Ukraine by the western press. Countless hours of television and oceans of ink have been spilt to convince citizens of NATO nations of the righteousness of Kyiv's cause, and more importantly, our noble motives in supplying its army with billions of dollars and an incomprehensible amount of high-tech weaponry. In fact, Washington et al has invested so much in the successful outcome of the war that - for now - officially accepting that the effort has been a tragic folly is all but impossible; but, just off-camera, the reality chorus is growing more voluble. Robert Freeman is Founder and Executive Director of The Global Uplift Project. He's a past educator, and author of 'The Best One Hour History' series of books, covering history from 'The Renaissance' and 'The Scientific Revolution', to 'The Protestant Reformation', 'French Revolution', and the great wars of the last century. Robert's recent article, published at CommonDreams.org, 'Ukraine and the Tunnel at the End of the Light' is a hard-eyed assessment of both the disaster that is the Ukraine/Russia war, and the doomed political and economic dynamics behind the conflict. Today, Robert Freeman and shedding the rosy aura around Ukraine's war prospects. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: http://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, David Rovics (Extended) February 25th, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Feb 27, 2023 60:00


Welcome to Gorilla Radio, recorded February 25th, 2023 On February 9th, a train derailed outside East Palestine, Ohio. The toxic cargo was purposefully ignited in a disastrously ill-advised attempt at environmental remediation. David Rovics' new song, East Palestine, is a modern ballad of an old American story of the venality and base corruption culminating in the Ohio catastrophe. David's recent article too, 'Communications for Indy Musicians, Then and Now' is a reflection not only on his quarter century making a living as a touring musician, but of living and working in our mediated times. His new album, Killing the Messenger is due out any day, telling the stories of East Palestine and others. Today, David Rovics on Killing the Messenger in these end days of media in an extended talk. Artist: David Rovics Song: East Palestine from the album, Killing the Messenger Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: http://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Jeremy Kuzmarov, David Rovics February 25, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Feb 26, 2023 60:00


Welcome to Gorilla Radio, recorded February 25th, 2023 Last weekend saw mass anti-war demonstrations around the United States and Europe. In Washington, DC, 'Rage Against the War Machine' manifested in the very heart - if not the soul - of the nation with speakers and activists from a variety of pursuits and political persuasions agreeing to forego their disagreements on other issues for the moment to focus on the rolling disaster that is the global war machine. Jeremy Kuzmarov is a journalist and author who also serves as Managing Editor at CovertActionMagazine.com and he was there. Jeremy Kuzmarov and dispatch from DC in the first half. And; On February 9th, a train derailed outside East Palestine, Ohio. The toxic cargo was purposefully ignited in a disastrously ill-advised attempt at environmental remediation. David Rovics' new song, East Palestine, is a modern ballad of an old American story of the venality and base corruption culminating in the Ohio catastrophe. David's recent article too, 'Communications for Indy Musicians, Then and Now' is a reflection not only on his quarter century making a living as a touring musician, but of living and working in our mediated times. His new album, Killing the Messenger is due out any day, telling the stories of East Palestine and others. David Rovics on Killing the Messenger in these end days of media in the second half. But first, Jeremy Kuzmarov and raging for peace in America. Artist: David Rovics Song: East Palestine from the album, Killing the Messenger Artist: Robert Hoyt Song: This Star from the album, Mind's Eye Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: http://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, John Helmer February 19th, 2023

Gorilla Radio from Pacific Free Press

Play Episode Listen Later Feb 20, 2023 60:00


Welcome to Gorilla Radio, recorded February 19th, 2023. On February 8th of this year, Dutch police presented new evidence to the District Court of The Hague regarding the July 17th, 2014 downing of Malaysia Airlines Flight MH17. Upon entering their evidence, the Dutch government and an Australian investigator on the case admitted to having withheld American military satellite images from the court, the press, and defence lawyers throughout the two-year trial for the wrongful deaths of the two hundred and ninety-eight people who perished aboard that ill-fated flight. John Helmer is a journalist and author who's spent decades living in and reporting from Russia. The principal behind the web news site, Dances with Bears, Helmer has too been a professor of political science, sociology and journalism, and served as advisor to governments at the highest levels. Among his many book titles are: 'Skripal in Prison,' 'The Man Who Knows Too Much About Russia,' 'The Jackals' Wedding: American Power, Arab Revolt', his latest, 'Australian Fascism: How It Destroyed the Courts,' and 'The Lie That Shot Down MH-17'. His recent article on the latest from The Hague and the MH-17 disaster is, 'US SATELLITE PHOTOS REVEALED AT LAST – NOW THEY INCRIMINATE THE DUTCH POLICE, PROSECUTORS, AND JUDGES IN THE MH17 SHOW TRIAL'. Today, John Helmer and the JIT's latest shot in the West's war against Russia. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Jennifer Tynan February 18th, 2023

Feb 19, 2023 · 25:34


Welcome to Gorilla Radio, recorded February 18th. BC's Ministry of Environment and Climate Change Strategy has granted permits for the Ministry of Forests to begin a Spring spraying campaign across swathes of Vancouver Island. The plan is to use Foray 48B (better known as Btk) in an attempt to eradicate the "Spongy Moth"; and though the ministries say Btk is "harmless", those claims are contested. Dr. Jennifer Tynan is a physician and radiology specialist. Jennifer's also a mom whose child's school is in one of the proposed spray zones, and she serves as spokesperson for Communities United for Clean Air, a grassroots initiative to stop the spray. Today, Jennifer Tynan, fighting to keep Vancouver Island's air clear of Btk. Contact: communitiesunitedforcleanair@gmail.com Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Yves Engler, Tyan Cherepuschak, Tyson Strandlund February 11th, 2023

Feb 12, 2023 · 60:00


Welcome to Gorilla Radio, recorded February 11th, 2023. This past week, Canadian Forces redeployed a CP-140 Aurora surveillance plane from its dedicated mission assisting U.S.-led drug-smuggling interdiction efforts in the Caribbean to spend two days flying reconnaissance over Haiti. Foreign Affairs Minister Melanie Joly characterized the escalation of Canada's military involvement in the country as a "demonstration of Canada's commitment to Haiti". Last month, Canada demonstrated that commitment in the form of an unspecified number and type of armored vehicles sold to a government that has already called for foreign military intervention to quell widespread public discontent with its corrupt and entirely unelected leadership. Yves Engler is an independent, Montreal-based journalist and author. He's written twelve books on Canadian foreign policy, including ‘Canada in Haiti: Waging War on the Poor Majority', co-authored with Anthony Fenton. His recent article, ‘Ottawa's support for repressive Haitian police grows as democracy fades', appears at his website, YvesEngler.com, and gives the lie - again - to Canada's "concern for the people" of that benighted island. Yves Engler in the first half. And; the University of Victoria's Student Union has become another front in the Ukraine/Russia war. Last month, UVic's Ukrainian Students' Society alleged harassment of its members and intimidation in the form of "hate crime" graffiti scrawled on one of its posters. They've also made allegations of "ongoing hate and harassment demonstrated by the YCL [Young Communist League]..." UVic's USS, like similar student societies on campuses across the country, is associated with the politically influential Ukrainian Canadian Congress. Tyan Cherepuschak is an undergraduate student at the University of Victoria, and a Ukrainian-Canadian who until recently served as Vice-President of the UVic chapter of the Ukrainian Canadian Students Union (SUSK). And; Tyson Strandlund is a member of both the UVic chapter of the Young Communist League and the Vancouver Island Peace Council, a local chapter of the Canadian Peace Congress. He too is a Canadian of Ukrainian descent, who has studied in and visited Ukraine both before and after the 2014 Maidan coup. Tyan Cherepuschak & Tyson Strandlund on bringing the Ukraine conflict to Canadian campuses in the second half. But first, Yves Engler and Canada's caring military gestures to Haiti. Song: Watch the Buildings Crumble Album: May Day Artist: David Rovics Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Yves Engler February 11th, 2023

Feb 11, 2023 · 59:25


Welcome to Gorilla Radio, recorded February 11th, 2023. This past week, Canadian Forces redeployed a CP-140 Aurora surveillance plane from its dedicated mission assisting U.S.-led drug-smuggling interdiction efforts in the Caribbean to spend two days flying reconnaissance over Haiti. Foreign Affairs Minister Melanie Joly characterized the escalation of Canada's military involvement in the country as a "demonstration of Canada's commitment to Haiti". Last month, Canada demonstrated that commitment in the form of an unspecified number and type of armored vehicles sold to a government that has already called for foreign military intervention to quell widespread public discontent with its corrupt and entirely unelected leadership. Yves Engler is an independent, Montreal-based journalist and author. He's written twelve books on Canadian foreign policy, including ‘Canada in Haiti: Waging War on the Poor Majority', co-authored with Anthony Fenton. His recent article, ‘Ottawa's support for repressive Haitian police grows as democracy fades', appears at his website, YvesEngler.com, and gives the lie - again - to Canada's "concern for the people" of that benighted island. Today, Yves Engler and Canada's caring military gestures to Haiti. Song: Watch the Buildings Crumble Album: May Day Artist: David Rovics Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Glenn Michalchuk, Tamara Lorincz February 4, 2023

Feb 5, 2023 · 59:27


Welcome to Gorilla Radio, recorded February 4th, 2023. Last month, the German government of Olaf Scholz relented to NATO pressure, agreeing to donate Leopard 2 tanks to the Kiev regime's war effort. The United States was quick to follow, delivering the first 60 of a promised 109 Bradley M2A2-ODS Fighting Vehicles to Ukraine. Canada's Trudeau administration in the last week too pledged tanks, Minister of Defence Anita Anand saying four Leopard 2s would be given over, including ammunition, parts, and a deployment of Canadian soldiers to "train" Ukrainian tank crews. This while countries like Turkey work for a ceasefire agreement to end the near year-long conflict between Ukraine and its neighbour, Russia. Glenn Michalchuk is chair of Peace Alliance Winnipeg, and president of the Association of United Ukrainian Canadians. Glenn's been active in the peace movement since the Dirty Wars of Ronald Reagan in the 1980s. Glenn Michalchuk in the first half. And; throwing tanks, ammunition, spare parts, and soldiers onto the fires of a hot war is not the only evidence of Canada's new self-understanding as a warrior nation. In January, too, the contentious purchase of a fleet of American F-35 jets was approved by the Trudeau government. A press release from Public Services and Procurement Canada reads in part, "Through Canada's defence policy, Strong, Secure, Engaged, the Government of Canada is acquiring modern military equipment to keep Canadians safe and protected, and to support the security of our international allies and partners." The release did not explain just how expediting nuclear winter makes Canadians strong, safe, or protected. Tamara Lorincz is a fellow with the Canadian Foreign Policy Institute, a member of Canadian Voice of Women for Peace, and a PhD candidate in Global Governance at the Balsillie School of International Affairs at Wilfrid Laurier University, where her thesis focuses on the climate and environmental impact of the military. Tamara's also served on the advisory committee of the Global Network Against Weapons and Nuclear Power in Space, and contributed her knowledge and talents to the organizations World Beyond War and the No to NATO Network. Tamara Lorincz and militarism's two ecological doomsday scenarios in the second half. But first, Glenn Michalchuk and Canada's all-in move on Ukraine. Artist: David Rovics Song: 116 Degrees Album: May Day Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Saul Arbess, Ken Stone January 28th, 2023

Jan 28, 2023 · 60:00


Welcome to Gorilla Radio, recorded January 22 & 28th, 2023. More than two years after former BC premier John Horgan's re-election promise to implement the recommendations of his own government's Old Growth Strategy Review panel, it's business as usual in the woods. While some of the OGSR's 14-point plan was implemented, big trees are still falling and the forests are in peril. After Horgan's snap election win, the NDP also ceded policing authority over old growth forest defenders near the capital to the federal RCMP, whose paramilitary tactics and brutality at the Fairy Creek encampments elicited international expressions of disgust and condemnation. Now, a grand manifestation of First Nations, conservation organizations, ecological agencies, and citizens concerned about the loss of an irreplaceable biological legacy is planned to surround the seat of government and state demands for policy change in more fervent language. Saul Arbess is a long-time peace, justice, and environmental champion. He describes himself as a cultural anthropologist and futurist, dedicated to "creating a new architecture of peace in the world". Saul served as National Co-chair of the Canadian Department of Peace Initiative, was co-founder and chair of the Global Alliance for Ministries and Departments of Peace, and currently works for peace, non-violence, and protecting the wildlands around his home city of Victoria. Photo: Bill Johnston Saul Arbess in the first half. And, January on Canada's west coast means Season's change is soon; but even as we ready for Spring, east of here the cruelest months of Winter still lie ahead. That cold reality isn't, however, deterring citizen demonstrations of dissatisfaction with the ongoing NATO wars and occupations. Monday, January 23, as part of the week-long protests around North America, Hamiltonians picketed the prime minister's cabinet meeting taking place in the Hamilton Convention Centre. One focus of their vigil was the "purchase of the obscenely-expensive F-35 fighter jets". Ken Stone is an executive member of both the Syria Support Movement and the Hamilton Coalition to Stop the War. Ken Stone and Canada's Winter war resisters in the second half. But first, Saul Arbess and Uniting for Old Growth in British Columbia. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, John Helmer January 22, 2023

Jan 24, 2023 · 59:36


Welcome to Gorilla Radio, recorded January 22, 2023. Approaching the first anniversary of Russia's "Special Operations" in the Donbass region of Ukraine, prospects for Russia's proposed ceasefire are tenuous, and the chance of peace breaking out soon remote. The problem for those seeking an end to the killing is, as ever, finding willing partners on both sides of the conflict. For now, Ukraine's backers in the United States and the NATO countries seem content to allow the war to continue, sapping Russia's political and economic resources while providing a lucrative market for their weapons-making industries. But - and it is an enormous but - elements within Germany's upper military echelons are pushing America to put an end to its project in Ukraine - before it's too late! John Helmer is a journalist, author, and the principal behind the web news site, Dances with Bears. He's a past professor of political science, sociology, and journalism and has served as advisor to governments at the highest level. Helmer has spent decades living in and reporting from Russia, and among his many book titles are: ‘The Lie That Shot Down MH-17,' ‘Skripal in Prison,' ‘The Man Who Knows Too Much About Russia,' ‘The Jackals' Wedding: American Power, Arab Revolt', and his latest, ‘Australian Fascism: How It Destroyed the Courts.' John's recent article at Dances with Bears, ‘GERMAN GENERAL TELLS U.S. GENERALS TO LOSE THE UKRAINE WAR AS SOON AS POSSIBLE TO PREVENT LOSING THE EMPIRE IN EUROPE‘ reveals Germany's evolving position on the Ukraine conflict. Today, John Helmer and abandoning the Ukraine battle to save the Russia war. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Ken Stone January 22, 2023

Jan 22, 2023 · 30:30


Welcome to Gorilla Radio, recorded January 22, 2023. January on Canada's west coast means Season's change is soon; but even as we ready for Spring, east of here the cruelest months of Winter still lie ahead. That cold reality isn't, however, deterring citizen demonstrations of dissatisfaction with the ongoing NATO wars and occupations. Monday, January 23, as part of week-long protests around North America, Hamiltonians will picket the prime minister's cabinet meeting taking place at the Hamilton Convention Centre. One focus of their vigil will be the "purchase of the obscenely-expensive F-35 fighter jets". Ken Stone is an executive member of both the Syria Support Movement and the Hamilton Coalition to Stop the War. He'll be there Monday, and insists, “Canada needs an independent foreign policy” and “should get out of NATO, the aggressive US-led military alliance that drags us into every conflict of the US empire, including the war in Yemen and the occupation of Haiti.” Today, Ken Stone and Canada's Winter war resisters. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Mickey Z. and Me, Kim Goldberg January 15, 2023

Jan 17, 2023 · 59:53


Welcome to Gorilla Radio, recorded January 14th and 15th, 2023. My old friend and frequent interview victim, Mickey Z., proposed turning the tables and interviewing me for his excellent Substack podcast, Post-Woke. The entire interview is available at https://mickeyz.substack.com/ Mickey Z. talking to an Ape in the first half. And; it's been three years now since the great global unraveling wrought by the Covid-19 pandemic began. In Canada, part of that meant changes to the laws of the land - changes Canadians are yet to resolve properly in Parliament and the courts, or among a polity divided in unprecedented ways. Kim Goldberg is a Nanaimo-based poet, journalist, and author. She's written eight books of poetry and nonfiction, including her most recent collection, ‘DEVOLUTION: poems and fables of eco-pocalypse'. She covered BC current affairs for Canadian Dimension Magazine for many years, most recently on the Fairy Creek blockade. Kim's writing is featured at Substack, and she can be found again, after some rude interruption, on Twitter @KimPigSquash. Kim Goldberg and emerging the emergency in the second half. But first, me and Mickey Z. talking about alternative media and Gorilla Radio at twenty-four. Song: Pure Mirror of the Beloved Artist: Siddartha Corsus Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Kim Goldberg January 14th, 2023

Jan 14, 2023 · 28:06


Welcome to Gorilla Radio, recorded January 14th, 2023. It's been three years now since the great global unraveling wrought by the Covid-19 pandemic began. In Canada, part of that meant changes to the laws of the land - changes Canadians are yet to resolve properly in Parliament and the courts, or among a polity divided in unprecedented ways. Kim Goldberg is a Nanaimo-based poet, journalist, and author. She's written eight books of poetry and nonfiction, including her most recent collection, ‘DEVOLUTION: poems and fables of eco-pocalypse'. Kim's also covered BC current affairs for Canadian Dimension Magazine for many years, most recently on the Fairy Creek blockade, and now her writing is featured at Substack. She can also be found again, after some rude interruption, on Twitter @KimPigSquash. Kim Goldberg and emerging the emergency. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Tom Secker, David Rovics January 7, 2023

Jan 7, 2023 · 59:59


Welcome to Gorilla Radio, recorded January 7th, 2023. It's been almost thirty-one years since UNPROFOR, the UN Protection Force's mission to the former Yugoslavia, began. Canada's military, then famed for peacekeeping, played a role in standing between the warring parties in hopes of a brokered truce. That mission failed, but not for the reasons believed then. In fact, according to records recently declassified, little about the conflict that led to the destruction of thousands of lives and ultimately redrew the political map of the Balkans occurred either why or how we were told. Tom Secker is a UK-based private researcher, journalist, and frequently featured commentator on security and intelligence issues. He's the host of the podcast, ClandesTime, principal behind Spyculture.com, “the world's premier online archive about government involvement in the entertainment industry”, and author, with Matthew Alford, of the book, 'National Security Cinema: The Shocking New Evidence of Government Control in Hollywood'. Tom recently collaborated with Kit Klarenberg of the Grayzone on the article, 'Declassified Intelligence Files Expose Inconvenient Truths of Bosnian War'. Tom Secker in the first half. And; though it may be difficult to imagine now, back in the day people came out en masse to give voice to the notion of a World without war. They marched and sang, colourfully costumed and carrying clever signs, while massive puppets, designed to attract the attention of a media more normalized to war footage, danced along the boulevards and in the High Street. They were then called "The Left"; now they're simply known as departed. David Rovics' frequent essays on political issues and societal observation are featured at CounterPunch and DissidentVoice.org, among other places. He's a broadcaster, musician, blogger, and author of the novel, ‘A Busker's Adventures'. His weekly program, This Week with David Rovics, can be found through his website, DavidRovics.com - and on Substack - where you can read his essays, listen to his hundreds of original songs, and catch some of his hundreds of interviews. His recent article, 'An Autopsy on the US Left', verifies what many of us have known but may not have admitted: the fact that "that parrot is dead!" David Rovics and 'An Autopsy on the US Left' in the second half. But first, Tom Secker and "CIA black ops, illegal weapon shipments, imported jihadist fighters, potential false flags, and stage-managed atrocities" revealed in Canada's declassified Yugoslavia cables. Song: At the End of World War III Artist: Chet Gardiner David sez: "If you've never heard of a musician named Chet Gardiner, here's a fine introduction. This is his solo acoustic version of a song I wrote a few months ago, which he recorded at his home studio in Hawaii. Both the bassy resonance of Chet's voice and his delivery remind me very much of the last recordings Johnny Cash made, which I think were brilliant. Chet's fingerstyle DADGAD guitar playing is so evocative as well." Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Mickey Z. December 31, 2022

Dec 31, 2022 · 59:25


Welcome to Gorilla Radio, recorded December 31st, 2022. And welcome to the beginning of the end - of 2022. As we all resolve to make ourselves and the next year better, today a look back at the stories of the year past with someone needing no calendrical prompt to do "the hard work" of staring clear-eyed into the face of our times. Mickey Z. is the author of Post-Woke, a weekly podcast emanating from his native New York City. He is also a past lecturer and political activist, a current martial artist and physical trainer, the author of a dozen books, and most importantly perhaps, a sensei teaching "the art of intellectual self-defense". Mickey Z.'s book titles include: ‘Darker Shade of Green,' ‘Self-Defense for Radicals: A to Z[ed] Guide for Subversive Struggle,' ‘CPR for Dummies,' ‘50 American Revolutions You're Not Supposed to Know: Reclaiming American Patriotism,' ‘The Seven Deadly Spins: Exposing the Lies Behind War Propaganda,' and ‘Occupy This Book: Mickey Z. on Activism.' Today, Mickey Z. on following the story, while always keeping your intellectual guard up! Songs: Smoke and Mirrors; Run Rabbit Run! Artist: When Humans Had Wings Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, John Helmer Xmas Special December 20, 2022

Dec 21, 2022 · 58:56


Welcome to Gorilla Radio, recorded December 20th, 2022. "Fascism" has become, in the post-totalitarian west, a term traduced through overuse to the point of triteness. So much so, it begs questioning whether this trivialization is done by some nefarious design rather than the more usual intellectual laziness of the purveyors of news and other so-called "information". But - and it is an enormous but - fascism did not die with Herr Hitler; its surviving spore has found purchase in the body politic of liberal democracies across the World, in some more readily than others. John Helmer, the self-titled "doyen" of the foreign press corps in Russia, is the principal of the web news site, Dances with Bears. He's a native-born Australian, educated there and at Harvard University in America. He's also a prolific author, some of whose titles include: ‘The Lie That Shot Down MH-17,' ‘Skripal in Prison,' ‘The Man Who Knows Too Much About Russia,' ‘The Jackals' Wedding: American Power, Arab Revolt', and his latest, just hours off the presses, 'Australian Fascism: How It Destroyed the Courts.' Today, a special Christmas Season observance of the persistent survival of a political philosophy as perniciously adaptive, and destructive, as any holly and ivy ever was. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Diana Johnstone, James Bissett December 17th, 2022

Dec 18, 2022 · 59:47


Welcome to Gorilla Radio, recorded December 17th, 2022. Looking at Europe's tattered union today, it would be easy to mistake just who the target of America's proxy war in Ukraine is. Germany, the continent's economic engine, has sputtered through the last half year, its industrial base shuttered by gas shortages and soaring electricity costs. And yet, Germany's political leaders are backing the broadening of a war ostensibly meant to send Russia back to its agrarian-age roots, no matter the cost to its own citizenry. How to make sense of it? Diana Johnstone is an author and journalist. Her book titles include ‘From MAD to Madness: Inside Pentagon Nuclear War Planning,' ‘Circle in the Darkness: Memoirs of a World Watcher,' and ‘Fools' Crusade: Yugoslavia, NATO and Western Delusion.' Diana's articles can be found at ConsortiumNews.com, where her recent piece, 'The Specter of Germany is Rising', appears. Diana Johnstone in the first half. And; a major part of Germany and the EU's expansion plans lies in the small and impoverished Western Balkan countries. Last month, European Union big wigs met in Albania for the 'EU-Western Balkans Summit'. European Commission head Ursula von der Leyen used the platform to express the EU's "strong partnership" with the Balkans, saying, "...we are taking every opportunity to bring our regions and our people closer together." But togetherness is the last thing desired for Western Balkans neighbours Serbia and Kosovo - the latter hived out of Serbia during NATO's war there in the nineties, and later recognized - in some quarters - as an independent state. James Bissett is a former Canadian Ambassador whose tenure in Yugoslavia coincided with its 1991 dissolution. At century's end, he was one of the very few government insiders to oppose NATO's 78-day bombardment of Serbia in the name of “humanitarian intervention.” James Bissett and flaws in the EU's recipe for Balkan integration in the second half. But first, Diana Johnstone and what Germany's eastward attentions mean for Europe's future. Song: How Much is a Life Worth? Artist: The Four Fathers Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Dimitri Lascaris, Dan Kovalik December 10th, 2022

Dec 10, 2022 · 59:59


Welcome to Gorilla Radio, recorded December 10th, 2022. As the war in Ukraine grinds on, one casualty of the conflict rarely discussed is its toll on the environment, both in the war zone and beyond it. War and the preparation for war are among the most ecologically costly of all human endeavours - not only for the vast amounts of nature laid waste in the production of tanks and planes, and bombs and bullets, but also for the diversion of the time and talents of legions of scientists, engineers, and others who would be better occupied working on solutions to the precarious moment humanity finds itself in. Dimitri Lascaris is a Montreal-based activist, journalist, and lawyer. He very nearly became leader of the Green Party of Canada, finishing second in a tightly-contested race with the now-departed Annamie Paul. Dimitri's interviews for TRNN are at TheRealNews.com, and his articles appear at his website, DimitriLascaris.org, where I found his recent piece, 'As Ukraine War Escalates, the Climate Movement Goes AWOL'. Dimitri Lascaris in the first half. And; the nature of the conflict in Ukraine has been mischaracterized from the start. How and why we find ourselves at the precipice of perhaps the final war cannot be gleaned reading, watching, or listening to a western press which has, by turns, acted as propagandist and cheerleader for World War III. And understanding that has never been more vital than it is now. Dan Kovalik is a lawyer, educator, labour, peace, and justice activist, democracy defender, journalist, author, and filmmaker. His book titles include: ‘Cancel This Book: The Progressive Case Against Cancel Culture,' the “Plot to” series on American efforts to undermine the governments and economies of Iran, Venezuela, and Russia (and to control the World entirely), and his latest, ‘No More War: How the West Violates International Law by Using ‘Humanitarian' Intervention to Advance Economic and Strategic Interest.' Dan's just back from a fact-finding mission to Russia and the Eastern Republics of Ukraine. Dan Kovalik and life in World War time in the second half. But first, Dimitri Lascaris and Canada's Green movement, missing in action when needed most. Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: https://gorillaradioblog.blogspot.com/

Svelte Radio
Talking Gradio and AI with pngwn

Dec 1, 2022 · 75:06


Sponsor: Vercel is the platform for frontend developers, providing the speed and reliability innovators need to create at the moment of inspiration. Founded by the creators of Next.js, Vercel has zero-configuration support for 35+ frontend frameworks, including SvelteKit. We enable the world's largest brands, like Under Armour, eBay, and Nintendo, to iterate faster and create quality software. Try out Vercel today to experience the easiest way to use Svelte. Description: In this episode we FINALLY manage to catch the pngwn

Odbita do bita
Žiga Emeršič on the technological and ethical questions of image generation

Oct 27, 2022 · 35:24


Imagine a computer program that can draw anything you can think of - animals, objects, and people - in situations that don't exist in real life, yet still look real. A dog holding a film camera and a cat piloting a plane can be amusing in a generated image, but when human faces enter the mix of imagination and artificial intelligence, the situation becomes more serious and frightening. The technology isn't new, but it is ever more ubiquitous and accessible, says Assistant Professor Dr. Žiga Emeršič of the University of Ljubljana's Faculty of Computer and Information Science. Show notes: This Person Does Not Exist Gradio Stable Diffusion Online Welcome to Colaboratory How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile - YouTube Quiz! We also send curiosities from the world of technology to email inboxes; the sign-up form for the Odbito pismo newsletter is here. You can join the discussion of the show's topics on Twitter. We can also be reached at: odbita@rtvslo.si. The Odbita do bita podcast is available for free in all podcast apps. Subscribe and rate the show.
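For the curious, generators like the ones discussed above are typically driven from a few lines of Python. Here is a minimal sketch using the Hugging Face diffusers library; the checkpoint name and prompt are illustrative assumptions rather than anything from the episode, and a CUDA GPU such as a Colab runtime is assumed:

```python
# Minimal sketch: generating an image with Stable Diffusion via Hugging Face diffusers.
# Assumes a CUDA GPU (e.g., a Colab runtime) and network access to fetch the weights.
import torch
from diffusers import StableDiffusionPipeline

# "runwayml/stable-diffusion-v1-5" is an illustrative, commonly used checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# One of the episode's own examples: a cat piloting a plane.
image = pipe("a cat piloting an airplane, photorealistic").images[0]
image.save("cat_pilot.png")  # the pipeline returns PIL images
```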

Gorilla Radio from Pacific Free Press
Gorilla Radio with Chris Cook, Jeremy Kuzmarov, Dan Kovalik October 9, 2022

Oct 10, 2022 · 57:21


Welcome to Gorilla Radio, recorded October 6th and 9th, 2022. Whether by design or merely the inevitable, organic result of societal organization in this first quarter of the 21st Century, Covid has transformed the world in these last two years. Now the fear of illness engendered by a disease few understand is being deliberately manipulated, used as a tool to turn the screws of what can only be described as the machinery of totalitarian control of the people. The leading actors in this drive may seem at first blush to be the billionaire class that has profited so handsomely throughout the crisis - but according to my first guest, it goes much deeper than that. Jeremy Kuzmarov is a journalist and author who also serves as Managing Editor of CovertActionMagazine.com. He's the author of four books on U.S. foreign policy, including ‘Obama's Unending Wars' and ‘The Russians Are Coming, Again', co-authored with John Marciano. His recent piece at CovertAction, 'How Much is Covid Being Used as a Pretext for Imposing Ever Greater Levels of Social Control?', addresses the largely unasked question lurking behind the Pandemic Response. Jeremy Kuzmarov in the first half. And; the United States and its allies have uniformly condemned the referenda in Donetsk, Luhansk, Zaporizhia, and Kherson, just as the same chorus shouted down the Crimean referendum to join Russia in 2014. As has been the case throughout the Ukraine troubles, six is nine in Western media, but in this case challenging the legitimacy of the political will of the people Kyiv has bombed, rocketed, and starved for eight years takes the cake, cherry-topped and all. Dan Kovalik is a lawyer, educator, labour, peace, and justice activist, democracy defender, journalist, author, and filmmaker. Dan's observed elections in Venezuela, Nicaragua, and Colombia – where he witnessed the 2016 peace plebiscite promising an end to the generational war there. Kovalik's book titles include: ‘Cancel This Book: The Progressive Case Against Cancel Culture,' the "Plot to" series on American efforts to undermine the governments and economies of Iran, Venezuela, and Russia (and to control the World entirely), and his latest, ‘No More War: How the West Violates International Law by Using ‘Humanitarian' Intervention to Advance Economic and Strategic Interest.' Dan Kovalik and searching for legitimacy amid Ukraine's disintegration in the second half. But first, Jeremy Kuzmarov and Covid, a crisis by design or opportunity? Chris Cook hosts Gorilla Radio, broad/webcasting since 1999. Check out the Archive at Gorilla-Radio.com, GRadio.Substack.com, and the GR blog at: http://gorillaradioblog.blogspot.com/

Practical AI
Quick, beautiful web UIs for ML apps

Apr 5, 2022 · 42:08 (transcript available)


Abubakar Abid joins Daniel and Chris for a tour of Gradio and tells them about the project joining Hugging Face. What's Gradio? The fastest way to demo your machine learning model with a friendly web interface, allowing non-technical users to access, use, and give feedback on models.
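To make that one-sentence pitch concrete, here is a minimal sketch of a Gradio demo; the greet function and its behavior are invented for illustration, while the gradio calls themselves are the library's documented entry points:

```python
# Minimal sketch: wrapping a plain Python function in a Gradio web UI.
import gradio as gr

def greet(name: str) -> str:
    # Stand-in for a real ML model; any Python callable can be demoed this way.
    return f"Hello, {name}!"

demo = gr.Interface(
    fn=greet,        # the function to expose
    inputs="text",   # shorthand for a Textbox input component
    outputs="text",  # shorthand for a Textbox output component
)

# launch() serves the UI locally; share=True would also create a temporary public link.
demo.launch()
```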

Changelog Master Feed
Quick, beautiful web UIs for ML apps (Practical AI #174)

Apr 5, 2022 · 42:08 (transcript available)


Abubakar Abid joins Daniel and Chris for a tour of Gradio and tells them about the project joining Hugging Face. What's Gradio? The fastest way to demo your machine learning model with a friendly web interface, allowing non-technical users to access, use, and give feedback on models.

The Gradient Podcast
Abubakar Abid on AI for Genomics, Gradio, and the Fatima Fellowship

Jul 6, 2021 · 44:40


Subscribe to The Gradient Podcast: iTunes | RSS | Spotify. In episode 3 of The Gradient Podcast, we interview researcher and entrepreneur Abubakar Abid. Follow him on Twitter and check out the websites of his company Gradio and his side project the Fatima Fellowship. Abubakar is an entrepreneur and researcher focused on AI and its applications to medicine. He is currently running the company Gradio, which is developing a product to generate an easy-to-use UI for any ML model, function, or API. He is also running the Fatima Al-Fihri Predoctoral Fellowship, which is a 9-month program for computer science students from around the world who are planning on applying to PhD programs in the United States. Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Get full access to The Gradient at thegradientpub.substack.com/subscribe
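As a rough illustration of that product idea, here is a sketch of putting a Gradio UI over an off-the-shelf model; the sentiment-analysis pipeline is a stand-in chosen for brevity, not something named in the episode:

```python
# Sketch: exposing a Hugging Face model through an auto-generated Gradio UI.
import gradio as gr
from transformers import pipeline

# An illustrative text-classification model; any checkpoint would do here.
classifier = pipeline("sentiment-analysis")

def classify(text: str) -> dict:
    result = classifier(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    return {result["label"]: float(result["score"])}

demo = gr.Interface(
    fn=classify,
    inputs=gr.Textbox(label="Text to analyze"),
    outputs=gr.Label(),  # renders label/confidence pairs
)
demo.launch()
```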