Applications for the NYC AI Engineer Summit, focused on Agents at Work, are open!

When we first started Latent Space, in the lightning round we'd always ask guests: “What's your favorite AI product?” The majority would say Midjourney. The simple UI of prompt → very aesthetic image turned it into a $300M+ ARR bootstrapped business as it rode the first wave of AI image generation.

In open source land, the Stable Diffusion community was congregating around AUTOMATIC1111 as the de facto web UI. Unlike Midjourney, which offered some flags but was mostly prompt-driven, A1111 let users play with many more parameters, supported additional modalities like img2img, and allowed users to load in custom models. If you're interested in some of the SD history, you can look at our episodes with Lexica, Replicate, and Playground.

One of the people involved with that community was comfyanonymous, who also joined the Stability team in 2023. He decided to build an alternative called ComfyUI, now one of the fastest growing open source projects in generative images, and the preferred Day 1 partner for releases like Black Forest Labs' Flux Tools. The idea behind it was simple: “Everyone is trying to make easy to use interfaces. Let me try to make a powerful interface that's not easy to use.”

Unlike its predecessors, ComfyUI does not have an input text box. Everything is based around the idea of a node: there's a text input node, a CLIP node, a checkpoint loader node, a KSampler node, a VAE node, etc. While daunting for simple image generation, the tool is amazing for more complex workflows, since you can break down every step of the process and then chain many of them together rather than manually switching between tools.
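To make the node idea concrete, here's a minimal sketch of the kind of graph ComfyUI executes, in the JSON-ish style its workflows export to: each node has a class_type and inputs, and an input is either a literal or a [node_id, output_index] reference to another node's output. The class names below match ComfyUI's built-in txt2img nodes, but the node ids, checkpoint filename, and parameter values are illustrative assumptions:

```python
import json

# A text prompt -> checkpoint loader -> KSampler -> VAE decode graph.
# CheckpointLoaderSimple has three outputs: MODEL (0), CLIP (1), VAE (2).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",          # positive prompt
          "inputs": {"text": "a fox on a mountain", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",          # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
}

# Every [node_id, slot] input is an edge in the graph; collecting them
# shows the dependency structure the runtime walks when executing.
edges = [(nid, ref[0]) for nid, node in workflow.items()
         for ref in node["inputs"].values() if isinstance(ref, list)]
print(json.dumps(sorted(edges)))
```

Chaining more steps (img2img, upscalers, video models) is just adding more nodes whose inputs point at earlier nodes' outputs, which is what makes the flowchart double as a runtime.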
You can also restart execution halfway through instead of from the beginning, which can save a lot of time when using larger models.

To give you an idea of some of the new use cases that this type of UI enables:

* Sketch something → generate an image with SD from the sketch → feed it into SD Video to animate
* Generate an image of an object → turn it into a 3D asset → feed it into interactive experiences
* Input audio → generate audio-reactive videos

Their Examples page also includes some of the more common use cases like AnimateDiff. They recently launched the Comfy Registry, an online library of nodes that users can pull from rather than having to build everything from scratch. The project has >60,000 GitHub stars, and as the community grows, some of the projects that people build have gotten quite complex.

The most interesting thing about Comfy is that it's not a UI, it's a runtime. You can build full applications on top of image models simply by using Comfy. You can expose Comfy workflows as an endpoint and chain them together just like you chain a single node. We're seeing the rise of AI Engineering applied to art.

Major Tom's ComfyUI Resources from the Latent Space Discord

Major shoutouts to Major Tom on the LS Discord, an image generation expert, who offered these pointers:

* “best thing about comfy is the fact it supports almost immediately every new thing that comes out - unlike A1111 or forge, which still don't support flux cnet for instance.
It will be perfect tool when conflicting nodes will be resolved”
* AP Workflows from Alessandro Perilli are a nice example of an all-in-one train-evaluate-generate system built atop Comfy
* ComfyUI YouTubers to learn from: @sebastiankamph, @NerdyRodent, @OlivioSarikas, @sedetweiler, @pixaroma
* ComfyUI nodes to check out:
* https://github.com/kijai/ComfyUI-IC-Light
* https://github.com/MrForExample/ComfyUI-3D-Pack
* https://github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait
* https://github.com/pydn/ComfyUI-to-Python-Extension
* https://github.com/THtianhao/ComfyUI-Portrait-Maker
* https://github.com/ssitu/ComfyUI_NestedNodeBuilder
* https://github.com/longgui0318/comfyui-magic-clothing
* https://github.com/atmaranto/ComfyUI-SaveAsScript
* https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID
* https://github.com/AIFSH/ComfyUI-FishSpeech
* https://github.com/coolzilj/ComfyUI-Photopea
* https://github.com/lks-ai/anynode
* Sarav: https://www.youtube.com/@mickmumpitz/videos (applied stuff)
* Sarav: https://www.youtube.com/@latentvision (technical, but infrequent)
* Look for a ComfyUI node for https://github.com/magic-quill/MagicQuill
* “Comfy for Video” resources:
* Kijai (https://github.com/kijai) pushing out support for Mochi, CogVideoX, AnimateDiff, LivePortrait, etc.
* ComfyUI node support like LTX (https://github.com/Lightricks/ComfyUI-LTXVideo) and HunyuanVideo
* FloraFauna AI
* Communities: https://www.reddit.com/r/StableDiffusion/, https://www.reddit.com/r/comfyui/

Full YouTube Episode

As usual, you can find the full video episode on our YouTube (and don't forget to like and subscribe!)

Timestamps

* 00:00:04 Introduction of hosts and anonymous guest
* 00:00:35 Origins of ComfyUI and early Stable Diffusion landscape
* 00:02:58 Comfy's background and development of high-res fix
* 00:05:37 Area conditioning and compositing in image generation
* 00:07:20 Discussion on different AI image models (SD, Flux, etc.)
* 00:11:10 Closed source model APIs and community discussions on SD versions
* 00:14:41
LoRAs and textual inversion in image generation
* 00:18:43 Evaluation methods in the Comfy community
* 00:20:05 CLIP models and text encoders in image generation
* 00:23:05 Prompt weighting and negative prompting
* 00:26:22 ComfyUI's unique features and design choices
* 00:31:00 Memory management in ComfyUI
* 00:33:50 GPU market share and compatibility issues
* 00:35:40 Node design and parameter settings in ComfyUI
* 00:38:44 Custom nodes and community contributions
* 00:41:40 Video generation models and capabilities
* 00:44:47 ComfyUI's development timeline and rise to popularity
* 00:48:13 Current state of ComfyUI team and future plans
* 00:50:11 Discussion on other Comfy startups and potential text generation support

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

swyx [00:00:12]: Hey everyone, we are in the Chroma Studio again, but with our first ever anonymous guest, Comfy Anonymous, welcome.

Comfy [00:00:19]: Hello.

swyx [00:00:21]: I feel like that's your full name, you just go by Comfy, right?

Comfy [00:00:24]: Yeah, well, a lot of people just call me Comfy, even when they know my real name. Hey, Comfy.

Alessio [00:00:32]: Swyx is the same. You know, not a lot of people call you Shawn.

swyx [00:00:35]: Yeah, you have a professional name, right, that people know you by, and then you have a legal name. Yeah, it's fine. How do I phrase this? I think people who are in the know, know that Comfy is like the tool for image generation and now other multimodality stuff. I would say that when I first got started with Stable Diffusion, the star of the show was AUTOMATIC1111, right? And I actually looked back at my notes from 2022-ish, like Comfy was already getting started back then, but it was kind of like the up-and-comer, and your main feature was the flowchart.
Can you just kind of rewind to that moment, that year, and like, you know, how you looked at the landscape there and decided to start Comfy?

Comfy [00:01:10]: Yeah, I discovered Stable Diffusion in 2022, in October 2022. And, well, I kind of started playing around with it. Yes, I, and back then I was using Automatic, which was what everyone was using back then. And so I started with that because I had, it was when I started, I had no idea like how diffusion works. I didn't know how diffusion models work, how any of this works, so.

swyx [00:01:36]: Oh, yeah. What was your prior background as an engineer?

Comfy [00:01:39]: Just a software engineer. Yeah. Boring software engineer.

swyx [00:01:44]: But like any, any image stuff, any orchestration, distributed systems, GPUs?

Comfy [00:01:49]: No, I was doing basically nothing interesting. CRUD, web development? Yeah, a lot of web development, just, yeah, some basic, maybe some basic like automation stuff. Okay. Just. Yeah, no, like, no big companies or anything.

swyx [00:02:08]: Yeah, but like already some interest in automations, probably a lot of Python.

Comfy [00:02:12]: Yeah, yeah, of course, Python. But I wasn't actually used to like the node graph interface before I started ComfyUI. It was just, I just thought it was like, oh, like, what's the best way to represent the diffusion process in the user interface? And then like, oh, well. Well, like, naturally, oh, this is the best way I've found. And this was like with the node interface. So how I got started was, yeah, so basically October 2022, just like I hadn't written a line of PyTorch before that. So it's completely new. What happened was I kind of got addicted to generating images.

Alessio [00:02:58]: As we all did. Yeah.

Comfy [00:03:00]: And then I started. I started experimenting with like the high-res fix in Auto, which was, for those that don't know, the high-res fix is just, since the diffusion models back then could only generate at low resolution.
So what you would do, you would generate a low-resolution image, then upscale, then refine it again. And that was kind of the hack to generate high-resolution images. I really liked generating like higher resolution images. So I was experimenting with that. And so I modified the code a bit. Okay, what happens if I use different samplers on the second pass? I edited the code of Auto. So what happens if I use a different sampler? What happens if I use different settings, a different number of steps? And because back then the high-res fix was very basic, just, so. Yeah.

swyx [00:04:05]: Now there's a whole library of just, uh, the upsamplers.

Comfy [00:04:08]: I think, I think they added a bunch of, uh, of options to the high-res fix since, uh, since then. But before, it was just so basic. So I wanted to go further. I wanted to try it. What happens if I use a different model for the second pass? And then, well, then the Auto code base wasn't good enough for that. Like, it would have been, uh, harder to implement that in the Auto interface than to create my own interface. So that's when I decided to create my own. And you were doing that mostly on your own when you started, or did you already have kind of like a subgroup of people? No, I was, uh, on my own because, because it was just me experimenting with stuff. So yeah, that was it. Then, so I started writing the code January 1st, 2023, and then I released the first version on GitHub, January 16th, 2023. That's how things got started.

Alessio [00:05:11]: And what's, what's the name? ComfyUI right away or? Yeah.

Comfy [00:05:14]: ComfyUI. The reason the name, my name is Comfy is people thought my pictures were comfy, so I just, uh, just named it, uh, uh, it's my ComfyUI. So yeah, that's, uh,

swyx [00:05:27]: Is there a particular segment of the community that you targeted as users?
Like more intensive workflow artists, you know, compared to the Automatic crowd or, you know,

Comfy [00:05:37]: This was my way of like experimenting with, uh, with new things, like the high-res fix thing I mentioned, which was like in Comfy, the first thing you could easily do was just chain different models together. And then one of the first things, I think the first time it got a bit of popularity was when I started experimenting with the different, like applying prompts to different areas of the image. Yeah. I called it area conditioning, posted it on Reddit and it got a bunch of upvotes. So I think that's when, like, when people first learned of ComfyUI.

swyx [00:06:17]: Is that mostly like fixing hands?

Comfy [00:06:19]: Uh, no, no, no. That was just, uh, like, let's say, well, it was very, well, it still is kind of difficult to like, let's say you want a mountain, you have an image and then, okay. I'm like, okay. I want the mountain here and I want the, like a, a fox here.

swyx [00:06:37]: Yeah. So compositing the image. Yeah.

Comfy [00:06:40]: My way was very easy. It was just like, oh, when you run the diffusion process, you kind of generate, okay. You do pass one pass through the diffusion, every step you do one pass. Okay. This part of the image with this prompt, this part of the image with the other prompt, and then the entire image with another prompt, and then just average everything together, every step, and that was, uh, area composition, which is what I called it. And then, then a month later, there was a paper that came out called MultiDiffusion, which was the same thing, but yeah, that's, uh,

Alessio [00:07:20]: Could you do area composition with different models? Or because you're averaging out, you kind of need the same model.

Comfy [00:07:26]: Could do it with, but yeah, I hadn't implemented it.
For different models, but, uh, you, you can do it with, uh, with different models if you want, as long as the models share the same latent space. Like, we're supposed to ring a bell every time someone says that. Yeah. Like, for example, you couldn't use like SDXL and SD 1.5, because those have a different latent space, but like, uh, yeah, like SD 1.5 models, different ones, you could, you could do that.

swyx [00:07:59]: There's some models that try to work in pixel space, right?

Comfy [00:08:03]: Yeah. They're very slow. Of course. That's the problem. That's the reason why Stable Diffusion actually became like popular, like, cause was because of the latent space.

swyx [00:08:14]: Small and yeah. Because it used to be latent diffusion models and then they trained it up.

Comfy [00:08:19]: Yeah. Cause pixel diffusion models are just too slow. So. Yeah.

swyx [00:08:25]: Have you ever tried to talk to like, like Stability, the latent diffusion guys, like, you know, Robin Rombach, that, that crew. Yeah.

Comfy [00:08:32]: Well, I used to work at Stability.

swyx [00:08:34]: Oh, I actually didn't know. Yeah.

Comfy [00:08:35]: I used to work at Stability. I got, uh, I got hired, uh, in June 2023.

swyx [00:08:42]: Ah, that's the part of the story I didn't know about. Okay. Yeah.

Comfy [00:08:46]: So the, the reason I was hired is because they were doing, uh, SDXL at the time and they were basically... SDXL, I don't know if you remember, it was a base model and then a refiner model. Basically they wanted to experiment, like chaining them together. And then, uh, they saw, oh, right. Oh, this, we can use this to do that. Well, let's hire that guy.

swyx [00:09:10]: But they didn't, they didn't pursue it for like SD3. What do you mean? Like the SDXL approach. Yeah.

Comfy [00:09:16]: The reason for that approach was because basically they had two models and then they wanted to publish both of them. So they, they trained one on lower timesteps, which was the refiner model.
And then they, the first one was trained normally. And then during their tests, they realized, oh, like if we string these models together, the quality increases. So let's publish that. It worked. Yeah. But like right now, I don't think many people actually use the refiner anymore, even though it is actually a full diffusion model. Like you can use it on its own. And it's going to generate images. I don't think anyone, people have mostly forgotten about it. But, uh.

Alessio [00:10:05]: Can we talk about models a little bit? So Stable Diffusion, obviously, is the most known. I know Flux has gotten a lot of traction. Are there any underrated models that people should use more, or what's the state of the union?

Comfy [00:10:17]: Well, the, the latest, uh, state of the art, at least, yeah, for images there's, uh, yeah, there's Flux. There's also SD3.5. SD3.5 is two models. There's a, there's a small one, 2.5B, and there's the bigger one, 8B. So it's, it's smaller than Flux. So, and it's more, uh, creative in a way, but Flux, yeah, Flux is the best. People should give SD3.5 a try cause it's, uh, it's different. I won't say it's better. Well, it's better for some like specific use cases. Right. If you want to make something more creative, maybe SD3.5. If you want to make something more consistent, Flux is probably better.

swyx [00:11:06]: Do you ever consider supporting the closed source model APIs?

Comfy [00:11:10]: Uh, well, they, we do support them as custom nodes. We actually have some, uh, official custom nodes from, uh, different... Ideogram.

swyx [00:11:20]: Yeah. I guess DALL-E would have one. Yeah.

Comfy [00:11:23]: That's, uh, it's just not, I'm not the person that handles that. Sure.

swyx [00:11:28]: Sure. Quick question on, on SD. There's a lot of community discussion about the transition from SD1.5 to SD2 and then SD2 to SD3. People still like, you know, very loyal to the previous generations of SDs?

Comfy [00:11:41]: Uh, yeah.
SD1.5 still has a lot of, a lot of users.

swyx [00:11:46]: The last base model.

Comfy [00:11:49]: Yeah. Then SD2 was mostly ignored. It wasn't, uh, it wasn't a big enough improvement over the previous one. Okay.

swyx [00:11:58]: So SD1.5, SD3, Flux and whatever else. SDXL. SDXL.

Comfy [00:12:03]: That's the main one. Stable Cascade. Stable Cascade. That was a good model. But, uh, that's, uh, the problem with that one is, uh, it got, uh, like SD3 was announced one week after. Yeah.

swyx [00:12:16]: It was like a weird release. Uh, what was it like inside of Stability actually? I mean, statute of limitations. Yeah. The statute of limitations expired. You know, management has moved. So it's easier to talk about now. Yeah.

Comfy [00:12:27]: And inside Stability, actually that model was ready, uh, like three months before, but it got, uh, stuck in, uh, red teaming. So basically, if that model had been released when it was supposed to be released by the authors, then it would probably have gotten very popular since it's a, it's a step up from SDXL. But it got all of its momentum stolen. It got stolen by the SD3 announcement. So people kind of didn't develop anything on top of it, even though it's, uh, yeah. It was a good model, at least, uh, completely mostly ignored for some reason.

swyx [00:13:07]: I think the naming as well matters. It seemed like a branch off of the main, main tree of development. Yeah.

Comfy [00:13:15]: Well, it was different researchers that did it. Yeah. Yeah. Very like, uh, good model. Like it's the Würstchen authors. I don't know if I'm pronouncing it correctly. Yeah. Yeah. Yeah.

swyx [00:13:28]: I actually met them in Vienna. Yeah.

Comfy [00:13:30]: They worked at Stability for a bit and they left right after the Cascade release.

swyx [00:13:35]: This is Dustin, right? No. Uh, Dustin's SD3. Yeah.

Comfy [00:13:38]: Dustin is SD3 and SDXL. That's, uh, Pablo and Dome. I think I'm pronouncing his name correctly. Yeah. Yeah. Yeah. Yeah.
That's very good.

swyx [00:13:51]: It seems like the community is very... they move very quickly. Yeah. Like when there's a new model out, they just drop whatever the current one is. And they just all move wholesale over. Like they don't really stay to explore the full capabilities. Like if, if Stable Cascade was that good, they would have A/B tested a bit more. Instead they're like, okay, SD3 is out. Let's go. You know?

Comfy [00:14:11]: Well, I find the opposite actually. The community doesn't like... they only jump on a new model when there's a significant improvement. If there's only like an incremental improvement, which is what, uh, most of these models are going to have, especially if you, cause, uh, stay at the same parameter count. Yeah. Like you're not going to get a massive improvement, uh, unless there's something big that, that changes. So, uh. Yeah.

swyx [00:14:41]: And how are they evaluating these improvements? Like, um, because there's, it's a whole chain of, you know, Comfy workflows. Yeah. How does, how does one part of the chain actually affect the whole process?

Comfy [00:14:52]: Are you talking on the model side specific?

swyx [00:14:54]: Model specific, right? But like once you have your whole workflow based on a model, it's very hard to move.

Comfy [00:15:01]: Uh, not, well, not really. Well, it depends on your, uh, depends on their specific kind of the workflow. Yeah.

swyx [00:15:09]: So I do a lot of like text and image. Yeah.

Comfy [00:15:12]: When you do change, like most workflows are kind of going to be compatible. Yeah. It's just like, you might have to completely change your prompt, completely change. Okay.

swyx [00:15:24]: Well, I mean, then maybe the question is really about evals. Like what does the Comfy community do for evals? Just, you know,

Comfy [00:15:31]: Well, that they don't really do that. It's more like, oh, I think this image is nice.
So that's, uh,

swyx [00:15:38]: They just subscribe to Fofr AI and just see like, you know, what Fofr is doing. Yeah.

Comfy [00:15:43]: Well, they just, they just generate like it. Like, I don't see anyone really doing it. Like, uh, at least on the Comfy side, Comfy users, it's more like, oh, generate images and see, oh, this one's nice. It's like, yeah, it's not, uh, like the more, uh, scientific, uh, like, uh, checking. That's more specifically on the model side. If, uh, yeah, but there is a lot of, uh, vibes also, cause it is, like, uh, artistic. Uh, you can create a very good model that doesn't generate nice images. Cause most images on the internet are ugly. So if you, if that's like, if you just say, oh, I have the best model, it's gigantic, it's super smart, I trained it on all the images on the internet: the images are not going to look good. So yeah.

Alessio [00:16:42]: Yeah.

Comfy [00:16:43]: They're going to be very consistent. But yeah. People like, it's not going to be like the, the look that people are going to be expecting from, uh, from a model. So. Yeah.

swyx [00:16:54]: Can we talk about LoRAs? Cause we thought, we talked about models, then like the next step is probably LoRAs. Before, I actually, I'm kind of curious how LoRAs entered the tool set of the image community, because the LoRA paper was 2021. And then like, there was like other methods like textual inversion that was popular at the early SD stage. Yeah.

Comfy [00:17:13]: I can't even explain the difference between that. Yeah. Textual inversions. That's basically what you're doing is you're, you're training a, cause well, yeah, Stable Diffusion, you have the diffusion model, you have the text encoder. So basically what you're doing is training a vector that you're going to pass to the text encoder. It's basically you're training a new word. Yeah.

swyx [00:17:37]: It's a little bit like representation engineering now.
Yeah.

Comfy [00:17:40]: Yeah. Basically. Yeah. You're just, so yeah, if you know how like the text encoder works, basically you have, you take the words of your prompt, you convert those into tokens with the tokenizer, and those are converted into vectors. Basically. Yeah. Each token represents a different vector. So each word represents a vector. And those, depending on your words, that's the list of vectors that get passed to the text encoder, which is just. Yeah. Yeah. It's just a stack of, of attention. Like basically it's very close to an LLM architecture. Yeah. Yeah. So basically what you're doing is just training a new vector. We're saying, well, I have all these images and I want to know which word does that represent? And it's going to get like, you train this vector and then, and then when you use this vector, it hopefully generates like something similar to your images. Yeah.

swyx [00:18:43]: I would say it's like surprisingly sample efficient in picking up the concept that you're trying to train it on. Yeah.

Comfy [00:18:48]: Well, people have kind of stopped doing that, even though back as like when I was at Stability, we, we actually did train internally some like textual inversions on like T5 XXL that actually worked pretty well. But for some reason, yeah, people don't use them. And also they might also work like, like, yeah, this is something I probably have to test, but maybe if you train a textual inversion, like on T5 XXL, it might also work with all the other models that use T5 XXL. Because same thing with like, like the textual inversions that, that were trained for SD 1.5, they also kind of work on SDXL, because SDXL has the, has two text encoders. And one of them is the same as the, as the SD 1.5 CLIP-L. So those, they actually would, they don't work as strongly because they're only applied to one of the text encoders. But, and the same thing for SD3. SD3 has three text encoders. So it works.
It's still, you can still use your textual inversion from SD 1.5 on SD3, but it's just a lot weaker because now there are three text encoders. So it gets even more diluted. Yeah.

swyx [00:20:05]: Do people experiment a lot on, just on the CLIP side, there's like SigLIP, there's BLIP, like do people experiment a lot on those?

Comfy [00:20:12]: You can't really replace. Yeah.

swyx [00:20:14]: Because they're trained together, right? Yeah.

Comfy [00:20:15]: They're trained together. So you can't like, well, what I've seen people experimenting with is Long CLIP. So basically someone fine-tuned the CLIP model to accept longer prompts.

swyx [00:20:27]: Oh, it's kind of like long context fine tuning. Yeah.

Comfy [00:20:31]: So, so like it's, it's actually supported in core Comfy.

swyx [00:20:35]: How long is long?

Comfy [00:20:36]: Regular CLIP is 77 tokens. Yeah. Long CLIP is 256. Okay. So, but the hack that like you've, if you use Stable Diffusion 1.5, you've probably noticed, oh, it still works if I, if I use long prompts, prompts longer than 77 words. Well, that's because the hack is to just, well, you split it up in chunks of 77, your whole big prompt. Let's say you, you give it like a massive text, like the Bible or something, and it would split it up in chunks of 77 and then just pass each one through the CLIP and then just concat everything together at the end. It's not ideal, but it actually works.

swyx [00:21:26]: Like the positioning of the words really, really matters then, right? Like this is why order matters in prompts. Yeah.

Comfy [00:21:33]: Yeah. Like it, it works, but it's, it's not ideal, but it's what people expect. Like if, if someone gives a huge prompt, they expect at least some of the concepts at the end to be like present in the image. But usually when they give long prompts, they, they don't, they like, they don't expect like detail, I think.
So that's why it works very well.

swyx [00:21:58]: And while we're on this topic, prompt weighting, negative prompting, all, all sort of similar part of this layer of the stack. Yeah.

Comfy [00:22:05]: The, the hack for that, which works on CLIP, like it, basically it's just for SD 1.5, well, for SD 1.5, the prompt weighting works well because CLIP-L is a, is not a very deep model. So you have a very high correlation between, you have the input token, the index of the input token vector, and the output token. They're very, the concepts are very closely linked. So that means if you interpolate the vector from what, well, the, the way ComfyUI does it is it has, okay, you have the vector, you have an empty prompt. So you have a, a chunk, like a CLIP output for the empty prompt, and then you have the one for your prompt. And then it interpolates from that, depending on your prompt weight. Yeah.

Comfy [00:23:07]: So that's how it, how it does prompt weighting. But this stops working the deeper your text encoder is. So on T5 XXL itself, it doesn't work at all. So. Wow.

swyx [00:23:20]: Is that a problem for people? I mean, cause I'm used to just move, moving up numbers. Probably not. Yeah.

Comfy [00:23:25]: Well.

swyx [00:23:26]: So you just use words to describe, right? Cause it's a bigger language model. Yeah.

Comfy [00:23:30]: Yeah. So. Yeah. So honestly it might be good, but I haven't seen many complaints on Flux that it's not working. So, cause I guess people can sort of get around it with, with language. So. Yeah.

swyx [00:23:46]: Yeah. And then coming back to LoRAs, now the, the popular way to, to customize models is LoRAs. And I saw you also support LoCon and LoHa, which I've never heard of before.

Comfy [00:23:56]: There's a bunch of, cause what, what a LoRA essentially is: instead of like, okay, you have your, your model and then you want to fine-tune it.
So instead of like, what you could do is you could fine-tune the entire thing, but that's a bit heavy. So to speed things up and make things less heavy, what you can do is just fine-tune some smaller weights, like basically two, two matrices, two low-rank matrices, that when you multiply them together represent the difference between the trained weights and your base weights. So by training those two smaller matrices, that's a lot less heavy. Yeah.

Alessio [00:24:45]: And they're portable. So you're going to share them. Yeah. It's like easier. And also smaller.

Comfy [00:24:49]: Yeah. That's how LoRAs work. So basically, so when, when inferencing you, you can inference with them pretty efficiently, like how ComfyUI does it. It just, when you use a LoRA, it just applies it straight on the weights, so that there's only a small delay at the base, like before the sampling, to when it applies the weights, and then it's just the same speed as, as before. So for, for inference, it's, it's not that bad. And then you have, so basically all the LoRA types like LoHa, LoCon, everything, that's just different ways of representing that. Like, basically, you can call it kind of like compression, even though it's not really compression, it's just different ways of representing it, like just, okay, I want to train the difference on the weights. What's the best way to represent that difference? There's the basic LoRA, which is just, oh, let's multiply these two matrices together. And then there's all the other ones, which are all different algorithms. So. Yeah.

Alessio [00:25:57]: So that's LoRAs. Let's talk about what ComfyUI actually is. I think most people have heard of it. Some people might've seen screenshots. I think fewer people have built very complex workflows. So when you started, Automatic was like the super simple way. What were some of the choices that you made?
So the node workflow, is there anything else that stands out as like, this was like a unique take on how to do image generation workflows?

Comfy [00:26:22]: Well, I feel like, yeah, back then everyone was trying to make like an easy to use interface. Yeah. So I'm like, well, everyone's trying to make an easy to use interface.

swyx [00:26:32]: Let's make a hard to use interface.

Comfy [00:26:37]: Like, so like, I like, I don't need to do that, everyone else is doing it. So let me try something like, let me try to make a powerful interface that's not easy to use. So.

swyx [00:26:52]: So like, yeah, there's a sort of node execution engine. Yeah. Yeah. And it actually lists, it has this really good list of features of things you prioritize, right? Like let me see, like sort of re-executing from, from any parts of the workflow that was changed, asynchronous queue system, smart memory management, like all this seems like a lot of engineering that. Yeah.

Comfy [00:27:12]: There's a lot of engineering in the back end to make things, cause I was always focused on making things work locally very well. Cause that's cause I was using it locally. So everything. So there's a lot of, a lot of thought and work put into getting everything to run as well as possible. So yeah. ComfyUI is actually more of a back end, at least, well, now the front end is getting a lot more development, but, but before, before it was, I was pretty much only focused on the backend. Yeah.

swyx [00:27:50]: So v0.1 was only August this year. Yeah.

Comfy [00:27:54]: With the new front end. Before there was no versioning. So yeah. Yeah. Yeah.

swyx [00:27:57]: And so what was the big rewrite for the 0.1 and then the 1.0?

Comfy [00:28:02]: Well, that's more on the front end side. That's cause before that it was just like the UI, what, cause when I first wrote it, I just, I said, okay, how can I make, like, I can do web development, but I don't like doing it. Like what's the easiest way I can slap a node interface on this.
And then I found this JavaScript library.swyx [00:28:26]: Litegraph?Comfy [00:28:27]: Litegraph.swyx [00:28:28]: Usually people will go for something like React Flow for a flow builder.Comfy [00:28:31]: But that seemed too complicated, and I didn't really want to spend time developing the frontend. So I'm like, oh, Litegraph has the whole node interface. Okay, let me just plug that into my backend.swyx [00:28:49]: I feel like if Streamlit or Gradio offered something like that, you would have used them, because it's Python.Comfy [00:28:54]: Yeah.Comfy [00:29:14]: It takes your frontend logic and your backend logic and just sticks them together.swyx [00:29:20]: It's supposed to be easy for you guys. If you're a Python main, you know, I'm a JS main, right? If you're a Python main, it's supposed to be easy.Comfy [00:29:26]: Yeah, it's easy, but it makes your whole software a huge mess.swyx [00:29:30]: I see. So you're mixing concerns instead of separating concerns?Comfy [00:29:34]: Well, frontend and backend should be well separated with a defined API. That's how you're supposed to do it. Smart people disagree. It just sticks everything together, which makes it easy to end up with a huge mess. And also, there are a lot of issues with Gradio. It's very good if all you want to do is slap a quick interface on your ML project to show it off. That's what it's made for, and there's no problem using it for that: oh, I have my code, I just want a quick interface on it. That's perfect, use Gradio. But if you want to make something that's real software, that will last a long time and be easy to maintain, then I would avoid it.swyx [00:30:32]: So your criticism is that Streamlit and Gradio are the same.
I mean, those are the same criticisms.Comfy [00:30:37]: Yeah, Streamlit I haven't used as much. I just looked at it a bit.swyx [00:30:43]: Similar philosophy.Comfy [00:30:44]: Yeah, it's similar. It just seems to me like, okay, for quick AI demos, it's perfect.swyx [00:30:51]: Yeah. Going back to the core tech: asynchronous queues, partial re-execution, smart memory management. Anything that you were very proud of, or that was very hard to figure out?Comfy [00:31:00]: Yeah. The thing that's the biggest pain in the ass is probably the memory management.swyx [00:31:05]: Were you just paging models in and out?Comfy [00:31:08]: Before, it was just: load the model, completely unload it. That works well when your models are small, but if your models are big (let's say someone has a 4090 and the model size is 10 gigabytes), that can take a few seconds to load and unload each time, so you want to try to keep things in GPU memory as much as possible. What ComfyUI does right now is it tries to estimate: okay, you're going to sample this model, it's going to take probably this amount of memory, so let's unload that amount of already-loaded models from the GPU and then execute. And you try to unload the least amount of models that are already loaded. One other problem is the NVIDIA driver on Windows: by default (there's an option to disable this feature), if you overflow your GPU memory, the driver will automatically start paging to RAM. The problem with that is it makes everything extremely slow.
So when you see people complaining, oh, this model works, but oh, s**t, it starts slowing down a lot, that's probably what's happening. So you basically have to try to use as much memory as possible, but not too much, or else things start slowing down or people run out of memory. You try to find that line where the driver on Windows starts paging and stuff. Yeah. And the problem with PyTorch is that it's high-level; you don't have that much fine-grained control over specific memory stuff, so you kind of have to leave the memory freeing to Python and PyTorch, which can be annoying sometimes.swyx [00:33:32]: So, you know, as a maintainer of this project, you're designing for a very wide surface area of compute. You even support CPUs.Comfy [00:33:42]: Yeah, well, that's just PyTorch. PyTorch supports CPUs, so that's not hard to support.swyx [00:33:50]: First of all, is there a market share estimate? Is it, like, 70% NVIDIA, 30% AMD, and then miscellaneous on Apple Silicon or whatever?Comfy [00:33:59]: For Comfy? Yeah, I don't know the market share.swyx [00:34:03]: Can you guess?Comfy [00:34:04]: I think it's mostly NVIDIA. Because the problem is, AMD works horribly on Windows. On Linux, it works fine. It's slower than the price-equivalent NVIDIA GPU, but it works: you can use it, you can generate images, everything works. But on Windows, you might have a hard time. That's the problem, and I think most people who bought AMD probably use Windows. They probably aren't going to switch to Linux, so... Yeah.
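The memory policy described above can be sketched as a small eviction routine. This is a hedged illustration of the idea (not ComfyUI's actual memory manager): before sampling, estimate how much VRAM the job needs and unload just enough already-loaded models to make room, keeping a safety margin so the Windows driver never starts paging to system RAM.

```python
def free_for(required_bytes, loaded, total_vram, margin=1 << 30):
    """Evict already-loaded models until required_bytes fits under
    total_vram minus a safety margin. loaded is a list of
    (name, bytes) pairs, least-recently-used first."""
    budget = total_vram - margin
    used = sum(size for _, size in loaded)
    kept = list(loaded)
    while kept and used + required_bytes > budget:
        _, size = kept.pop(0)   # evict the oldest model first
        used -= size
    return kept

GB = 1 << 30
loaded = [("vae", 1 * GB), ("clip", 2 * GB), ("unet", 10 * GB)]
# A 24 GB card (e.g. a 4090) asked to sample a 12 GB model:
print(free_for(12 * GB, loaded, 24 * GB))
```

The hard part he points at is the budget itself: too tight and you reload models constantly; too loose and the driver silently pages to RAM and everything crawls.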
So, until AMD actually ports ROCm to Windows properly, and there's a good PyTorch ROCm build that works on Windows (I think they're in the process of doing that), they're going to have a hard time. Yeah.Alessio [00:35:06]: We've got to get George on it. Yeah. Well, he's trying to get Lisa Su to do it, but... Let's talk a bit about the node design. So, unlike all the other text-to-image tools, you go very deep: you have a separate node for the CLIP encode, a separate node for the KSampler, all these nodes. Going back to making it easy versus making it hard: how much do people actually play with all the settings, you know? How do you guide people, like, hey, this is actually going to be very impactful, versus, this is maybe less impactful but we still want to expose it to you?Comfy [00:35:40]: Well, I try to expose everything. But for some things, like the samplers, for example, there are four different sampler nodes, which go from easiest to most advanced. If you use the easy one, the regular sampler node, you have just the basic settings. But if you use the sampler custom advanced node, you'll see you have many different nodes.Alessio [00:36:19]: I'm looking it up now. Yeah. What are the most impactful parameters that you use? You can have more, but which ones really make a difference?Comfy [00:36:30]: Yeah, they all do. They all have their own effect. For example, steps: usually you want steps to be as low as possible.
But if you're optimizing your workflow, you lower the steps until the images start deteriorating too much. That's the number of steps you're running the diffusion process, so if you want things to be faster, lower is better. And CFG, you can kind of see that as the contrast of the image. If your image looks too burnt, you can lower the CFG. CFG is how strongly the negative versus positive prompt is weighted. Because when you sample a diffusion model, it's basically the positive prediction minus the negative prediction.swyx [00:37:32]: Contrastive loss. Yeah.Comfy [00:37:34]: It's positive minus negative, and CFG is the multiplier. Yeah.Alessio [00:37:41]: What are good resources to understand what the parameters do? I think most people start with Automatic1111 and then move over, and it's like: steps, CFG, sampler name, scheduler, denoise. Reddit.Comfy [00:37:53]: Honestly, it's more something you should try out yourself. You don't necessarily need to know how it works to know what it does. Even if you know CFG is, like, positive minus negative prediction, the only thing that tells you about CFG is that if it's 1.0, the negative prompt isn't applied, which also means sampling is two times faster. But other than that, you should really just see what it does to the images yourself, and you'll probably get a more intuitive understanding of what these things do.Alessio [00:38:34]: Any other nodes or things you want to shout out? I know AnimateDiff and the IPAdapter are some of the most popular ones. What else comes to mind?Comfy [00:38:44]: Not nodes, but what I like is when people make things that use ComfyUI as their backend.
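The "positive minus negative, and CFG is the multiplier" formula can be written out directly. This is a sketch of classifier-free guidance as described in the conversation, with toy arrays standing in for real model outputs:

```python
import numpy as np

def cfg_combine(pos_pred, neg_pred, cfg):
    """Classifier-free guidance: start from the negative-prompt
    prediction and push cfg times along (positive - negative)."""
    return neg_pred + cfg * (pos_pred - neg_pred)

pos = np.array([1.0, 2.0])   # toy "positive prompt" noise prediction
neg = np.array([0.5, 0.5])   # toy "negative prompt" noise prediction

# At cfg = 1.0 the negative term cancels out entirely, which is why
# cfg 1.0 also lets you skip the negative pass and sample ~2x faster.
print(cfg_combine(pos, neg, 1.0))
print(cfg_combine(pos, neg, 7.5))
```

Higher `cfg` pushes the result further from the negative prediction, which is why very high values make images look over-contrasted, or "burnt".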
Like, there's a plugin for Krita that uses ComfyUI as its backend, so you can use all the models that work in Comfy in Krita. I think I've only tried it once, but I know a lot of people use it, and it's probably really nice.Alessio [00:39:15]: What's the craziest node that people have built, the most complicated?Comfy [00:39:21]: Craziest node? I know some people have made video games in Comfy, stuff like that. I remember, I think it was last year, someone made, like, Wolfenstein 3D in Comfy. Of course. And one of the inputs was: you can generate a texture, and then it changes the texture in the game. So you can plug the game into the workflow. And if you look around, there are a lot of crazy things people do.Alessio [00:39:59]: And now there's a node registry that people can use to download nodes.Comfy [00:40:04]: Well, there's always been the ComfyUI Manager. But we're trying to make this more, I don't know, official, with the node registry. Because before the node registry, how did your custom node get into ComfyUI Manager? The guy running it would search GitHub for new custom nodes every day and add them manually to his custom node manager. So we're trying to make it less effort for him, basically.Alessio [00:40:40]: Yeah. But I was looking, and there's, like, a YouTube download node. This is almost like a data pipeline more than an image generation thing at this point. You can get data in, apply filters to it, generate data out.Comfy [00:40:54]: Yeah, you can do a lot of different things. I think what I did is I made it easy to make custom nodes, and I think that helped a lot.
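The rough shape of a ComfyUI custom node, based on the common community pattern, looks something like this. The field names follow what custom node packs typically use; verify them against the current ComfyUI documentation before relying on them, and the node itself is a made-up example:

```python
# A minimal, hypothetical custom node: one float in, one float out.
class InvertValue:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares the node's input sockets and their widget defaults.
        return {"required": {"value": ("FLOAT", {"default": 0.0})}}

    RETURN_TYPES = ("FLOAT",)
    FUNCTION = "run"        # name of the method the engine will call
    CATEGORY = "example"

    def run(self, value):
        # Nodes return a tuple, one item per declared output.
        return (1.0 - value,)

# Registration is just exporting a mapping from the pack's __init__.py.
NODE_CLASS_MAPPINGS = {"InvertValue": InvertValue}

print(InvertValue().run(0.25))
```

Because a node is just a plain class with a few declarative attributes, the barrier to writing one is very low, which is the "a bit too easy sometimes" he mentions next.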
I think that helped a lot for the ecosystem, because it's very easy to just make a node. A bit too easy, sometimes. Then we have the issue where there are a lot of custom node packs which share similar nodes. But that's something we're trying to solve, maybe by bringing some of the functionality into the core.Alessio [00:41:36]: And then there's video. People can do video generation.Comfy [00:41:40]: Video. Well, the first video model was Stable Video Diffusion, which was exactly last year, I think. One year ago. But that wasn't a true video model.swyx [00:41:55]: It was, like, moving images?Comfy [00:41:57]: It generated video, but what I mean is it still used 2D latents. What they did is they took SD2, added some temporal attention to it, and then trained it on videos. So it's kind of the same idea as AnimateDiff, basically. Why I say it's not a true video model is that you still have the 2D latents. A true video model, like Mochi, for example, has 3D latents.Alessio [00:42:32]: Which means you can move through the space, basically. That's the difference. You're not just kind of reorienting.Comfy [00:42:39]: And it's also because you have a temporal VAE. Mochi has a temporal VAE that compresses in the temporal direction too. That's something you don't have with AnimateDiff and Stable Video Diffusion: they only compress spatially, not temporally. So that's why I call those true video models. There are actually a few of them, but the one I've implemented in Comfy is Mochi, because that seems to be the best one so far.swyx [00:43:15]: We had AJ come and speak at the Stable Diffusion meetup.
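The 2D-versus-3D-latent distinction he draws can be shown with tensor shapes. This is a toy illustration only; the shapes and compression factor are made up, not the real SVD or Mochi dimensions:

```python
import numpy as np

frames, C, H, W = 16, 4, 64, 64

# "2D latents" (SVD/AnimateDiff style): one image latent per frame,
# so the latent count grows linearly with the frame count.
spatial_only = np.zeros((frames, C, H, W))

# A temporal VAE (Mochi style) also compresses along time, producing
# a genuinely 3D latent volume. t_compress is a hypothetical factor.
t_compress = 4
temporal_vae = np.zeros((C, frames // t_compress, H, W))

print(spatial_only.shape, temporal_vae.shape)
```

Compressing the time axis is what lets a "true" video model reason about motion in latent space instead of treating a clip as a stack of independent images.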
The other open one I think I've seen is CogVideo.Comfy [00:43:21]: CogVideo. Yeah, that one also seems decent, but, yeah. Chinese, so we don't use it. No, it's fine. It's just that it's not the only one; there are also a few others.swyx [00:43:36]: The rest are closed source, right? Like Kling.Comfy [00:43:39]: Closed source, there's a bunch of them. But I've seen a few open ones. I can't remember their names, but there's CogVideo, the big one. Then there are also a few that released at the same time. There's one that released the same day as SD 3.5, which is why I don't remember the name.swyx [00:44:02]: We should have a release schedule so we don't conflict on these things.Comfy [00:44:06]: I think SD 3.5 and Mochi released on the same day, so everything else was completely drowned out. For some reason, lots of people picked that day to release their stuff.Comfy [00:44:21]: Which is a shame for them. And I think OmniGen also released the same day, which also seems interesting.Alessio [00:44:30]: What's Comfy? So you are Comfy, and then there's comfy.org. I know we do a lot of things with, like, Nous Research, and those guys also have kind of a more open source thing going on. How do you work? You mentioned you mostly work on the core piece of it, and then what?Comfy [00:44:47]: Maybe I should fill that in, because, yeah, I feel like I only explained part of the story. Maybe I should explain the rest. So basically, January 16, 2023, that's when ComfyUI was first released to the public. Then I did a Reddit post about the area composition thing somewhere around, I don't remember exactly, maybe end of January, beginning of February.
And then a YouTuber, Olivio, made a video about ComfyUI in March 2023. I think that's when there was a real burst of attention. By that time, I was continuing to develop it and people were starting to use it more, which unfortunately meant that, while I had first written it to do experiments, my time to do experiments went down, because people were actually starting to use it. I said, well, time to add all these features and stuff. Yeah, and then I got hired by Stability in June 2023. Basically, they hired me because they wanted SDXL, so I got SDXL working very well with the UI, because they were experimenting with ComfyUI in-house. Actually, how the SDXL release worked is that, for some reason, they released the code first, but they didn't release the model checkpoint. So they released the code, and since the release was code-only, I got it supported in ComfyUI. The checkpoints were basically early access: people had to sign up, and they mostly allowed people with edu emails. If you had an edu email, they gave you access to SDXL 0.9. And, well, that leaked. Right. Of course it's going to leak if you do that. And the only way people could easily use it was with Comfy. So people started using it, and then I fixed a few of the issues people had. Then the big 1.0 release happened, and ComfyUI was the only way a lot of people could actually run it on their computers. Because Automatic1111's implementation was so inefficient that for most people it just wouldn't work; he'd done a quick implementation. So people were forced.
To use ComfyUI, and that's how it became popular: because people had no choice.swyx [00:47:55]: The growth hack.Comfy [00:47:56]: Yeah.Comfy [00:47:57]: People who didn't have a 4090, who had just regular GPUs, they didn't have a choice.Alessio [00:48:05]: So yeah, I got a 4070, so think of me. And today, is there a core Comfy team, or?Comfy [00:48:13]: Yeah, well, right now we are hiring. Actually, right now the core itself, it's me. But the reason is that all the focus has been mostly on the frontend right now, because that's the thing that's been neglected for a long time. So most of the focus is on the frontend, but we will soon get more people to help me with the actual backend stuff. Once we have our V1 release, which is the packaged ComfyUI with the nice interface, easy to install on Windows and hopefully Mac, we're going to have lots of stuff to do on the backend side and also the frontend side.Alessio [00:49:14]: I'm on the waitlist for that release. What's the timing?Comfy [00:49:18]: Soon. Soon. Yeah, I don't want to promise a release date. We do have a release date we're targeting, but I'm not sure if it's public. And we're still going to continue doing the open source, making ComfyUI the best way to run stable diffusion models. At least on the open source side, it's going to be the best way to run models locally. But we will have a few things to make money from it, like cloud inference or that type of thing. And maybe some things for some enterprises.swyx [00:50:08]: I mean, a few questions on that.
How do you feel about the other Comfy startups?Comfy [00:50:11]: I mean, I think it's great. They're using your name. Yeah, well, it's better they use Comfy than something else. It's fine. We want people to use Comfy. Like I said, it's better that people use Comfy than something else, so as long as they use Comfy, I think it helps the ecosystem. Because even if they don't contribute directly, the fact that they are using Comfy means that people are more likely to join the ecosystem. So, yeah.swyx [00:50:57]: And then would you ever do text?Comfy [00:50:59]: Yeah, well, you can already do text with some custom nodes. It's something I've wanted to eventually add to core, but it's not a very high priority, even though a lot of people use text for prompt enhancement and other things like that. My focus has always been on diffusion models. Unless some text diffusion model comes out.swyx [00:51:30]: Yeah, David Holz is investing a lot in text diffusion.Comfy [00:51:34]: Yeah, well, if a good one comes out, then we'll probably implement it, since it fits with the whole...swyx [00:51:39]: Yeah, I imagine it's going to be closed source at Midjourney.Comfy [00:51:43]: Well, if an open one comes out, then I'll probably implement it.Alessio [00:51:54]: Cool, Comfy. Thanks so much for coming on. This was fun. Bye. Get full access to Latent Space at www.latent.space/subscribe
Både och istället för antingen eller - en podd om integrativ medicin och hälsa
Dr. Jörgen Herlofson har vid det här laget varit med många gånger i podden, och samtalat om olika viktiga ämnen, denna gång tillsammans med Ida Ali-Lindqvist om Minoritets stress. Jörgen är legitimerad läkare, specialist i psykiatri, legitimerad psykoterapeut, författare, föreläsare och ansvarig utgivare för DSM på svenska. Ida Ali Lindqvist är journalist, författare, föreläsare och samtalsterapeut. Hon har tidigare arbetat inom missbruksvården, men idag så arbetar Ida på Amphi, en organisation som arbetar med jämlikhet, inkludering och där Ida främst arbetar för att skapa förändrings processer kopplat bland annat till frågor om våld i nära relation, beroende problematik och relationella trauma. Ljudet på denna inspelning blev inte som önskat, men vi har valt att sända ändå, utifrån samtalets ämne, och svårigheten att göra om ett samtal som blev både intensivt, uttömmande och stundtals hett.. men viktigt på riktigt! Tack för att just du lyssnar på ”Både och, istället för antingen eller – en podd om Integrativ medicin och hälsa. Följ oss på sociala medier, och ge oss gärna fem stjärnor på iTunes om det här var givande för dig. Prenumerera gärna på vår podd! ♥ Facebook: https://www.facebook.com/integrativmedicin ♥ Youtube: https://www.youtube.com/user/integrativMedicin
Latin Prefixes *Hook Word (mnemonic for memorization) A-, Ab-; away from Absent (word hook) Ad-; to, towards addition Ambi- (Ambo-, amb-, am-, an-); around, on both sides ambivalent Ante-; before, forward antebellum Antero-, anterior; before, front or forward part (Anterior) (see: antero-) Bi- (bis-, bin-); twice Bicycle Circum-; around circumference Contra- (Counter- often before vowels); against, opposite contraindicated De-; down from, away deficient Extra-, extro-; beyond, on the other side, outer (extrapulmonary, extraligamentous) external, extraordinary Infra-; below, lower (Inferior, infraspinatus, infrascapular, infracostal) Inferior Inter-; between (intervascular, interosseous) interloper Intra-; within, inside, during (intracervical) Intra-racism Juxta-; beside (juxtarticular, juxtaspinal) Juxtaposition Ob- (o-, oc-, op-); against, in the way, facing (occiput) object, objective Post-; after, behind (Posterior, Postcerebral, postnatal) poster Postero-; behind (Posterior, posterolateral, posterosuperior) postscript Prae- (pre); before, in front of (precordium) preclude Retro-; backward, behind (retroflexion, retronasal) retrograde Semi-; half (semiorbicular) semicircle Sub- (suc-, suf-, sup, sus-); under, below, near, somewhat (subdorsal, subcutis) submarine Super- (sur-); over, above (Superior) superior Supra-; Above, upon (supra-axillary, supraspinatus, suprapelvic) (see: super-) Trans- (often tran- before ‘s'); across, through (Transverse, transverse Abdominous, tranverse plane) Transcript Ultra-, ultro-; beyond, excessive (ultraligation, ultrasetaceous) ultra-runner, ultraviolet Greek Prefixes An, A-; not, without (anemia) absent Amphi-; around about (amphibian) amphibian Ana-, ano-; up, back, again (aneurysm) analysis Anti- (anth-); against, resisting antagonist Apo-; from, separation (apostasies: STA- “to stand”) apostle Arche-, archi-; first, chief (archiplasm) architype Cata- (cath-) or kata-; down, lower, under (catabolic) catastrophe Di-; twice, 
twofold (dimorphic) diameter (two radii) Dia-; through, across, apart (diameter, diagnosis) diameter Dicha-; in two, double (dichotomy) dichotomy Dys-; bad, difficult, hard, disordered (dysfunction) dysfunction Ek-, ex-; out form, outside (exophthalamus) eccentric Ekto-; without, on the outside (ectomere, ectopia) (see: ek-, ex-) En- (em-); in, within, among (endemic) energy Endo-; within (endothelium) endocrine Ento-; within (entotic: OT- “ear”) entropic (see: endo-) Epi-; upon, on (epidermis) epidemic Eso-; inward, within (esotropic, esotoxin) esoteric Eu-; we, good, normal (eugenics) euphony Exo-; outside, outward, outer (exocolitis) exotic Hemi-; half, partly (hemialgia: ALGOS – “pain”) hemisphere Hyper-; above, over, excessive (hyperemic, hypertrophy) hyperactive Hypo-; low, under, below (hyponatremia) hypocrite Meta-, (meth-); after, among, beyond, behind (metacarpal: KARPOS – “wrist”) metaphysical, metaphor, method Opistho-; behind (opithognathism) optic Pali(n)-; back, again, once more (palikinesia: KINE – “movement”) palindrome Para-; by the side of, near (paranoia: NOOS, “mind”) paranoia Peri-; around (periosteum) Periodontal (bone or tissue AROUND tooth) Pro-; before, in front of, forward (prophase) professional Pros-; to, in addition, near (prosencephalon: ENCEPHALON, brain) prosthetic Proso-, prostho-; forward, before, in front of (prosoplasia) prosthetic My links: My patreon: https://www.patreon.com/user?u=103280827 My Ko-fi: https://ko-fi.com/rhetoricrevolution Send me a voice message!: https://podcasters.spotify.com/pod/show/liam-connerly TikTok: https://www.tiktok.com/@mrconnerly?is_from_webapp=1&sender_device=pc Email: rhetoricrevolution@gmail.com Instagram: https://www.instagram.com/connerlyliam/ Podcast | Latin in Layman's - A Rhetoric Revolution https://open.spotify.com/show/0EjiYFx1K4lwfykjf5jApM?si=b871da6367d74d92 Gut Guardian Discount Code: https://www.feelgoods.co/discount/LIAM64728
Dans son zapping, Laurent Tessier revient sur les évènements politiques de la semaine.
Horse racing ending in Sonoita. Why? Amphi employee way out of line with minors
Julie Hammett fait le tour des images marquantes et des déclarations fortes de la journée et fait vivre l'info du soir, entouré de chroniqueurs, d'invités et d'éditorialistes.
Things have sunk so low in Tucson that we have grave robbers. Not remains but decorations a mother puts at a son's grave and mementos embedded in grave markers. Amphi says no more election polling places. Why? Chris Faddis, co-founder of Solidarity Healthshare
En chorégraphe, plasticien ou curateur, Christian Rizzo poursuit sans relâche l'élasticité et la mise en tension entre les corps et l'espace dans des récits où la fiction émerge de l'abstraction. Il fait ses débuts artistiques à Toulouse où il monte un groupe de rock et crée une marque de vêtements, avant de se former aux arts plastiques à la Villa Arson à Nice et de se diriger ensuite vers la danse. Dans les années 1990, il est interprète en Europe auprès de nombreux chorégraphes contemporains, signant aussi parfois les bandes son ou la création des costumes. En 1996, il fonde l'association fragile et crée des performances, des installations, des pièces solos ou de groupes en alternance avec d'autres commandes pour l'opéra, la mode et les arts plastiques. Depuis, plus d'une quarantaine de productions ont vu le jour, auxquelles s'ajoutent les activités pédagogiques. Christian Rizzo intervient régulièrement dans des écoles d'art en France et à l'étranger, ainsi que dans des structures dédiées à la danse contemporaine. En 2015, Christian Rizzo prend la direction du Centre chorégraphique national de Montpellier - Occitanie. Désormais renommé ICI (Institut Chorégraphique International), le CCN propose une vision transversale de la création, de la formation, de l'éducation artistique et de l'ouverture aux publics. Prenant support sur les pratiques et les territoires, le projet ICI—CCN est avant tout un espace prospectif qui englobe en un seul mouvement, l'invitation d'artistes, l'écriture du geste chorégraphique et les manifestations de son partage. En conversation avec Emmanuelle Huynh, artiste et cheffe d'atelier aux Beaux-Arts de Paris, et Alain Berland, responsable de la programmation culturelle. Penser le Présent est réalisé avec le soutien de Société Générale. Amphi des LogesJeudi 12 octobre 2023 Crédits photo : Mario Sinistaj
You're basically creating miracles that fit in the palm of your hand and you're helping reimagine life for patients.20 YEARS IN PERSPECTIVE:Givers are not good takers.I don't think of the 2000s as twenty years ago, I still think of the 80s, or the 70s as twenty years ago.I started in Singapore, then went back to the US and lived and worked in New Jersey and Chicago, then Munich, London and back in the US. I was fortunate enough to have those opportunities with the same company, so I have actually been with one company since I left INSEAD.The biggest challenge is, I'll call it kind of “recognizing my own power” and what I can contribute versus being told or accepting my situation. I think the biggest challenge for a lot of women in leadership, from a career standpoint, is stretching without hesitation for those bigger roles.For the last several years I've made mentoring both within the company, as well as outside the company a priority for me.Being able to talk about yourself is something that I'm not comfortable doing, but I've had to do it, so I've learned to do it.Carving out a half a day or even an hour to go do something that I want to do, it doesn't come easily, it does not come easily at all.ON TOPIC: Pharma, healthcare, lifespan, longevityMy biggest frustration with how the industry is viewed is the cost of drugs in the overall healthcare system is maybe 10% of the total cost of health care. Yet, where we try to push for savings is on the cost of drugs, which doesn't fundamentally solve the problem of access to care and cost of care.The frustration is, we're not picking the right battles to fight to change the health care system model. 
But that doesn't change the fact that what we do every day as an industry is work on products that hopefully just provide more for patients.The process that works is, in the early stages of development you pick certain disease areas that you feel like have a strong unmet need and you try to find targets to battle that disease.What we're seeing now in the industry, which is different from before is, the FDA is much more open to looking at real-world evidence and shorter-term studies to give access to products for patients compared to before.So even the regulatory pathways are trying to allow for faster innovation than what we had before.It is a very competitive industry, even in spaces like oncology, because 10 - 15 years ago you might have had 1 option, now you have multiple options, which is great for the patient, because now you can have your choice of products whether it's better side effect profile, even potentially better efficacy, and companies will continue to invest in it.Luckily in developed countries you do see an increase in lifespan, but you still have a lack of access to care in a lot of, whether it's sub-Saharan Africa, even India.What I'm hopeful for is that we will have more longevity. What I'm worried about is that we don't appreciate that and we don't take advantage of that extra time that we will hopefully all have right and about what we're going to do with that time.I'd rather see us be able to enjoy that extra time rather than just continue to work our asses off during that time.ON TOPIC: corporate life, cultural differences“Do not schedule a meeting during lunchtime, because nobody will come to your meeting and they will be pissed off at you.”All THINGS INSEAD AND GIVING BACKMilena: So you're not buying an Amphi.Guest: I'm not buying an Amphi, no.I feel like I'm doing some small part to ensure that the future is more positive for some of those individuals.
A graduate of the Beaux-Arts de Paris and the Glasgow School of Art, Thu Van Tran was born in 1979 in Ho Chi Minh City and fled with her family to France in 1981. Marked by this exile, the artist has created a body of work comprising photographs, drawings, films, sculptures, and installations, which she assembles in situ to form a grand narrative that raises questions about our capacity to remember history, and the rituals and legacies of countries and nations. She is interested in how yesterday hangs over today, and in how historical consciousness and collective consciousness interact. Addressing Vietnam's postcolonial relationships with France and the United States, her practice draws constantly on literature, history, and nature, with an emphasis on how the cultural concepts of contamination, identity, and language are explored in these fields. In 2018 she was nominated for the prestigious Prix Marcel Duchamp in France. In conversation with Alain Berland. Amphi des Loges, Wednesday, February 15, 2023. Penser le Présent is produced with the support of Société Générale. Photo credit: © Leslie Martinelli
Ballot drop boxes... the Amphi School Board... TuSimple hits the Wall Street Journal: the company is under investigation by three federal agencies... and $120K from the City of Tucson.
My guest today is Mark Rzepczynski. Prior to co-founding AMPHI, Mark was the CEO of the fund group at FourWinds Capital Mgmt. Mark was also President and CIO at John W. Henry & Co., an iconic Commodity Trading Advisor. Mark has headed fixed income research at Fidelity Management and Research, served as the senior economist for the CME, and taught as a finance professor at the University of Houston Bauer College of Business. The topic is Trend Following. In this episode of Trend Following Radio we discuss: how to build a strong trading career; how inflation causes distortions; inflation in financial markets; leverage and portfolio management; how the Fed would protect the stock market; VUCA (volatility, uncertainty, complexity, and ambiguity); and accuracy from a trend following perspective. Jump in! --- I'm MICHAEL COVEL, the host of TREND FOLLOWING RADIO, and I'm proud to have delivered 10+ million podcast listens since 2012. Investments, economics, psychology, politics, decision-making, human behavior, entrepreneurship, and trend following are all passionately explored and debated on my show. To start, I'd like to give you a great piece of advice you can use in your life and trading journey: cut your losses! You will find much more about that philosophy here: https://www.trendfollowing.com/trend/ You can watch a free video here: https://www.trendfollowing.com/video/ Can't get enough of this episode? You can choose from my thousand-plus episodes here: https://www.trendfollowing.com/podcast My social media platforms: Twitter: @covel Facebook: @trendfollowing LinkedIn: @covel Instagram: @mikecovel Hope you enjoy my never-ending podcast conversation!
Today we talk about drones at the Amphi, questions about festival lineups; we even clear up some technical questions, and much more. Once again we have an "Ohrenbluten" segment and a Plattdütsch round... So this has turned out to be a somewhat more regular (?) episode again. Keep sending us questions, quick-fire rounds, topics for the podcast, or just nice emails to podcast@solarfake.de The application window for Solar Care (for members of the Shadowplay fan club) is August 7-14; applications go to gewinnspiel@shadowplay-fanclub.de
Today we talk about our trip to Seattle, which turned out to be quite eventful. We also answer many of your questions about it. There are also previews of the Black Lower Castle Festival tomorrow and the Amphi Festival next week. Of course we have plenty of other topics as well. By the way, the recording took place on Monday; in the meantime, Jeans has tested negative again. Unfortunately we had a few technical difficulties during the recording, so there are a few crackles; we apologize for that. Next week the podcast unfortunately has to be skipped once more; the week after, things will continue as usual. Keep sending us questions, quick-fire rounds, topics for the podcast, or just nice emails to podcast@solarfake.de
An-, a-; not, without (anemia): absent
Amphi-; around, about (amphibian): amphibian
Ana-, ano-; up, back, again (aneurysm): analysis
Anti- (anth-); against, resisting: antagonist
Apo-; from, separation (apostasis: STA-, "to stand"): apostle
Arche-, archi-; first, chief (archiplasm): archetype
Cata- (cath-), kata-; down, lower, under (catabolic): catastrophe
Di-; twice, twofold (dimorphic): diameter (two radii)
Dia-; through, across, apart (diameter, diagnosis): diameter
Dicha-; in two, double (dichotomy): dichotomy
Dys-; bad, difficult, hard, disordered (dysfunction): dysfunction
Ek-, ex-; out from, outside (exophthalmos): eccentric
Ekto-; without, on the outside (ectomere, ectopia) (see: ek-, ex-)
En- (em-); in, within, among (endemic): energy
Endo-; within (endothelium): endocrine
Ento-; within (entotic: OT-, "ear"): entropic (see: endo-)
Epi-; upon, on (epidermis): epidemic
Eso-; inward, within (esotropic, esotoxin): esoteric
Eu-; well, good, normal (eugenics): euphony
Exo-; outside, outward, outer (exocolitis): exotic
Hemi-; half, partly (hemialgia: ALGOS, "pain"): hemisphere
Hyper-; above, over, excessive (hyperemic, hypertrophy): hyperactive
Hypo-; low, under, below (hyponatremia): hypocrite
Meta- (meth-); after, among, beyond, behind (metacarpal: KARPOS, "wrist"): metaphysical, metaphor, method
Opistho-; behind (opisthognathism)
Pali(n)-; back, again, once more (palikinesia: KINE-, "movement"): palindrome
Para-; by the side of, near (paranoia: NOOS, "mind"): paranoia
Peri-; around (periosteum): periodontal
Pro-; before, in front of, forward (prophase): professional
Pros-; to, in addition, near (prosencephalon: ENCEPHALON, "brain"): prosthetic
Proso-, prostho-; forward, before, in front of (prosoplasia): prosthetic
Syn- (sym- before "b" or "p"); with, together (syndrome, symbiosis: BIOS, "life"): synonym, sympathy
--- Support this podcast: https://anchor.fm/liam-connerly/support
Jeff Utsch, candidate for Amphi School Board sits down with Chris to discuss his background and what he would like to bring to school board governance if elected.
This episode is in Swedish. In this episode of SAQMI Play we meet BlatteQueers, in a recording from November 17, 2021 at Bio Valand in Gothenburg. The conversation took place after the film screening Queer Evidens och Bipoc Existens, arranged by SAQMI under the banner Nightfall - Queer Talks & Screenings, SAQMI's platform where queer filmmakers and artists meet through screenings and conversations about process and creation. During the evening, short films were shown and the panel dove into a conversation about taking up space and existing as queer BIPOC people in film and audiovisual storytelling, about how film activism is expressed, and about the vision for an inclusive film Sweden. Curator and moderator for the selection and the conversation is Tine Alavi, and the participants are Ese Ejodame, Lisa Nagano Holm, Lasmi Belmar, and Jon Ely Xiuming Aagaard Gao. About BlatteQueers i film: BlatteQueers i film, also called BQ i film, is a project by Interfem aimed at LGBTQI+ people with experiences of racism, with film as the project's central focus. The goal of BlatteQueers i film is to create broader representation and inclusion in the film industry. History: BlatteQueers i film started in 2020 (but existed as early as 2015 as a development project) out of the need to be able to identify with films and with characters in films. The question raised was how to find queer films with characters who have both queer experiences and experiences of racism. In the beginning it was very hard to find that specific representation, and so the project was started. In 2016, BlatteQueers i film launched a school film package consisting of three short films aimed at school pupils. The films have been used by both Amphi Produktion and the Mix festival in Copenhagen through their school film programs. During 2020-2021, BQ i film has held network meetings and events and made four campaign films aimed at actors in the film industry, film companies, and funders. 
Furthermore, BQ i film is about LGBTQI+ people who have experiences of racism and who work in the film industry being able to meet, exchange experiences and knowledge, build networks, and collaborate to make film. Over the years, BlatteQueers i film has taken part in events and festivals, in panel discussions and seminars, to raise questions of broader representation and inclusion of queer BIPOC people in film and the film industry. * The name BlatteQueers i film can be shortened to "BQ i film" for audiences who are not subjected to racism. The name BlatteQueers can be experienced as both provocative and empowering. "Blatte" is, and has been, a word used to oppress people who are, or risk being, subjected to racism. "Queer" is a term that has also been used oppressively against gay, bisexual, trans, and queer people. They are also terms that people have chosen to define themselves with, and to use in order to take power over their own lives. Participants do not need to identify with the terms blatte and queer to join the project, as long as their experiences fit the description above. About the participants: Tine Alavi is a director and producer, including for BlatteQueers i film. They also directed "Dream Job" for the short film anthology "One off incident", produced by Tuffi Film. Most recently they directed the short film campaign "Min mamma sa att vuxna måste lyssna" for Skuggkommittén. Tine moderates the conversation. Ese Ejodame is a non-binary queer film director. They work mainly with LGBTQIA-related subjects, violence carried out by people other than cis men, and mental illness. Ese holds a bachelor's degree in Filmisk Gestaltning (film practice) from HDK-Valand in Gothenburg. Jon Ely Xiuming Aagaard Gao is a poet and artist who works with text in every possible way, photo, film, zines, performance, queer acts and interactions, sort of? According to readers, they write, among other things, hyperintensive-in-your-face love poems. 
They are also part of the trans zine drömsyskon and the art and club collective fake daughter. Lisa Nagano Holm is the director of the documentary Rasuul. She has an educational background in documentary film at Ölands folkhögskola and in photography and digital media, and is currently studying at Hyper Island. The projects she most often lands in reflect queerness, racialization, and identity - projects like Rasuul, with which she has taken part in Malmö Queer Filmfestival and Fake Pride. Lasmi Belmar trained at Öland's documentary film school and works mainly as a director. She has taken part in festivals such as Stockmotion and Malmö Queer Filmfestival. Queerness and ethnicity is a subject she has been working through with Lisa for a while; these thoughts take shape and expression in the film Rasuul. Read more about all the films screened during Nightfall #8 here. Below are links to all the films made within BlatteQueers i film between 2015 and 2022: ALIKI, 2016, length 17:20 min. FÖRSTÅR DU VAD JAG MENAR?, 2016, length 8:52 min. TV-SERIEN, 2016, length 5:51 min. INTRO, 2021, length 3:12 min. OM MAN VILL, 2021, length 3:54 min. SUDI, 2021, length 3:39 min. MANUSFÖRFATTAREN, 2021, length 4:05 min. Other films screened during Nightfall #8: Jon Ely Xiuming Aagaard Gao - A Letter to Myself or the Gender Clinic or White People. Credits SAQMI Play: Producers: Anna Linder and Malin Holgersson. Design and code: Vincent Orback. Composition: Amanda Lindgren. Edit and mix: Malin Holgersson and Niki Yrla. Recorded live by: Anna Linder. Host: Tine Alavi. Original: Nightfall #8 - curator Tine Alavi. Responsible publisher: Anna Linder. SAQMI Play is produced with support from Kulturrådet and the City of Gothenburg.
Rested and recharged, Chris is back in the saddle. Welcome back, and welcome to 2022! Chris thanks all his guest hosts from last week. Mona Gibson, candidate for the Amphi School Board, joins Chris to talk about her background and her priorities for the board seat she is seeking. Chris also discusses reports from gvnews.com and kold.com about Pima County terminating employees over the vaccine mandate. A report from COVID Reality.
Hello hello! It's Richoult AGAIN! You still didn't see it coming, but here is the first bonus of the audio adaptation of "Erika et les princes en détresse"; it adapts Yatuu's latest strip: Erika et les princes en détresse - spécial Noël. We managed to produce this mono mp3 in very little time (~2 weeks). It would not have been possible without the help of Gihellcy and Yatuu, nor without the responsiveness of Erika and Stellatsu. Here is the Christmas bonus: Erika et les princes en détresse - Le Prince des Neiges (12 MB). Click to download - 11597 downloads. Discover the previous episodes. Cast: Richoult: Narrator; Erika: Erika; Stellatsu: Pita; Egg: Aglaglax; Amphi: Prince; Tati: Mère Noël. Thanks to the actors, who showed exceptional speed! https://javras.fr/Audio/SagasMP3/ErikaEtLesPrincesEnDetresse/Erika%20et%20les%20princes%20en%20detresse%20-%20Le%20Prince%20Des%20Neiges.mp3 Google Podcasts Podcast Addict RSS Spotify YouTube iTunes This is the last mp3 file to be released on Javras this year. I take this opportunity to wish you happy holidays, and I'll see you in January for episode 2 of La Mémoire Arrangée. Take care of yourselves! ~ Richoult Follow us on our social networks: Facebook Twitter Instagram Youtube Tiktok You can help us by supporting the Javras team on Tipeee or Patreon! Share this episode, rate it on iTunes, and/or leave a comment. They are 31, and they supported this episode: On Tipeee: Nackrikor, Kiri, Monololo, Richoult, Zenigata, Yotti, Itineris, Sorenran, Lucas D., Stellatsu, Ruby-lin, LaurentDoucet, Thomas S., Titus, LC, Thomas Lecavelier, Neron, JPPJ, Reshkaf, Mopolo, Adelyx, Flamingfox, Lord purple Peignoir, Melectrik, Yohann Lorant, and MrBadger. On Patreon: Lmorel, Clément Simonin, Kauraly, Socolin, and Watchinofoye. Thank you all so much!
Scott and Anne-Marie discuss school funding and all the different programs provided through the Amphi Schools Foundation with its president, Leah Noreng.
Flowing Wells High School football coach Scott Cortese previews his team's opener against Amphi in the Coaches for Charity Kickoff Classic this Thursday at Amphi.
Dave Humphreys, assistant principal at Amphitheater High School, joins the show to preview the Coaches for Charity Kickoff Classic high school football games on Sept. 2, featuring Amphi vs. Flowing Wells and Pueblo at Sahuaro.
As part of our Southern Arizona Coaching Memories series with our All Sports Tucson podcast, our first guest is longtime Amphi defensive coordinator Ed Roman, who was an assistant of Vern Friedli's for 36 years. Co-host Matt Johnson, the head coach at Mountain View High School, and Andy and Javier Morales interview Roman about his experiences with the late Friedli and some of the legends he coached such as the Bates brothers (Michael, Marion and Mario), Riki Ellison and Jon Volpe. He also delves into his coaching style and the blitzing, attacking defenses he developed.
Talk the Talk - a podcast about linguistics, the science of language.
Our Mailbag is once again full of questions, and podcasting luminary Helen Zaltzman is here to help us answer them! Why is the raspberry sound (PBTPBBBBT) not a speech sound in any language? Or is it? How can sounds in a language change so much over time? Am I BURNED OUT? Or BURNT OUT? Why are they called metaphysicians and not metaphysicists? What can we call something besides LAME? Why is AMPHI- so infrequently used in English?
SRO Greg Chmara talks about the programs he has implemented at Amphi schools... and it's not all about policing. Join a great conversation.
"On ne peut pas dissocier le féminisme de l'antiracisme, de la lutte décoloniale, de la lutte écologiste, parce que tout est lié dans nos vies. Ca n'a aucun sens de tout séparer et c'est pour ça qu'on a créé un collectif, le Spicy Devis, parce qu'on ne trouvait pas ce safe space ailleurs." Ce 2ème épisode hors série est consacré à l'asio-féminisme, plus précisément le féminisme sud-asiatique. Enregistré il y a quelques mois, nous profitons de cette semaine où l'on célèbre la journée internationale des droits des femmes pour évoquer l'importance de la lutte féministe. On entend beaucoup parler de féminisme, un peu d'afro-féminisme, beaucoup moins d'asio-féminisme. Pourtant, les femmes asiatiques connaissent des expériences spécifiques qu'il est nécessaire de prendre en compte. Dans cet épisode, Amanda échange avec Pulan Devii, Sabrina, Mina et Amphi, 4 femmes qui forment avec Manuela @angrywomanofcolour (qui n'était pas présente) le collectif sud-asiatique féministe décolonial Spicy Devis : Pulan Devii : créatrice du podcast et du compte instagram Ni ton hindou ni ton pakpak. Sabrina : créatrice du site et du compte instagram @sabrinalikestoread (revues littéraires de textes féministes décoloniaux). Mina : créatrice du compte instagram@lindiennedanslaville et membre de @décolonisonsleféminisme (convergence des luttes antiraciste, féministe, ecologiste). Amphi : @amphi.tryon : artiste, elle travaille sur le genre et comment mélanger la politique, l'art et la spiritualité dans une démarche post-coloniale, et sur la notion de self esteem. Ensemble, elles se demandent : Pourquoi parler d'Asioféminisme et de féminisme sud-asiatique ? Pourquoi la lutte écologique est-elle indispensable à la lutte féministe décoloniale ? Etre féministe c'est être radical.e ? Qu'est-ce qu'un "safe space" ? Par où commencer si l'on veut agir ? Quelles sont les actions du collectif Spicy Devis ? Quelles ressources pour s'éduquer ? Des questions complexes mais si importantes.
If you wait until everything is perfect, you'll never do anything. Even with a packed schedule, I still recorded today's prefixes and roots; I'm still deciding whether to subtitle the video?? For now, here's the audio! Powered by Firstory Hosting
duration: 00:02:12 - Le Billet de Charline Vanhœnacker - Charline hands her microphone to Camille, 19, who is starting her university year... facing Laure Adler, the 7:50 guest.
How can you imagine an imam, a rabbi, a priest, a pastor, a Buddhist, and civil servants talking together about secularism, abortion, the place of women in society, or religious extremism, with total freedom of speech and tone, in terms accessible to everyone? By pushing open the door of Emouna, l'Amphi des religions, and settling in with them for a year. This first interreligious training program in France, and indeed in the world, initiated by Sciences Po after the 2015 attacks, brings together around thirty religious leaders and active lay people each year. The goal: to open dialogue between religions and find the keys to building a good life together in a plural and secular France. An extraordinary story that urgently needs to be told, to show that everything is still possible.
The Amphi-Festival, Cologne's gothic/dark-scene (semi-)open-air on the Deutz bank of the Rhine, with 42 acts on three stages - on land and on water, under the open sky at the Tanzbrunnen, in the theater, and of course "all in one boat" - and with over 12,500 visitors from around the world, took place this year for the 15th time. After an official pre-party with DJs Erk Aicrag, Adrian Hates, Daniel Myer, and MSTH, as well as the always sold-out opening event "Call The Ship To Port" aboard the party catamaran MS Rhein Energie, the festival returns to its beginnings with, among others, In Extremo. The medieval rockers have played the Amphi only once before, at the premiere in 2005. As a further headliner, EBM veterans Nitzer Ebb celebrate a triumphant stage comeback. Alongside numerous scene greats such as Project Pitchfork and Hocico, plus Goth'n'Roll and the finest dark electronics built from grand hymns and driving boot-stomping beats, the festival also looks beyond its own horizons: with White Lies, a heavyweight from the alternative rock/pop world, with unmistakable wave influences and the charm of the 80s, takes a seat at the Tanzbrunnen. Official homepage: AMPHI FESTIVAL Facebook: AMPHI FESTIVAL Facebook: XV. Amphi Festival 2019
In Venezuela, the association Gran Cine screens open-air films despite the crisis. In Abidjan, students at a university prefer sleeping in the lecture hall to going home. In DR Congo, young pupils learn to grow a tree during their schooling.
Episode 10: Ninon grew up in the 19th arrondissement of Paris, between the Buttes Chaumont and La Villette. She lived the Parisian neighborhood life with her schoolteacher mother, her bus-driver father, and her little sister. She has very fond memories of school, of the animal memory game, and of a very particular and very endearing teacher who adored witches! An entire childhood immersed in a magical, adventurous imagination gave Ninon the desire to become an archaeologist, a researcher, or an explorer. In middle school, the "good student, a bit insolent" got by with good grades despite a tense climate with the teachers. Ninon's fantastical imagination developed through the characters she drew during class. Her passion for archaeology remained intact; she already saw herself walking in the footsteps of Indiana Jones. Arriving at the lycée, in the Marais, was a change for her: a change of neighborhood, of social milieu, of rhythm, but she managed to fit in well. After the lycée, and before entering university to study biology, Ninon got her BAFA and went off to summer camp as a counselor! Arriving in a lecture hall of 600 people changed the view she had of higher education, and from then on her academic path evolved a great deal... A path full of changes, from the study of goat teeth, through programming and childcare, to life as a social worker. A fine journey, with struggles and solutions. This podcast is brought to you by Key4Job. Key4Job is a platform for promoting initiatives, projects, and programs that support youth employment. Find them on Instagram and Facebook. Podcast recording and production: Supernatifs.
This final session will highlight how systems thinking now finds expression in political doctrines, legislative texts, and major international declarations (notably those promoted by the UN). The aim is to try to take seriously both the idea of the importance of the relationships between humans and non-humans, and that of the consideration owed to the global system. The Cours publics of the Ecole urbaine de Lyon are devoted to urban and Anthropocene questions. This activity is modeled on the courses of the Collège de France: researchers from the Université de Lyon, from any discipline, present the results of their research to anyone who wishes to attend these sessions (freely, with no degree requirements and no final assessment), with the aim of putting scientific knowledge back at the heart of society. The Cours public "Qu'est-ce que l'anthropocène ?": The term Anthropocene is gaining ever wider currency, among scientists and the general public alike. In both cases it sparks many debates and discussions, which is an index of its power. That power comes from what the term covers: the idea of a bifurcation in the history of the humanization of the planet, a bifurcation that may threaten the habitability of the Earth. That is why it has been placed at the heart of the scientific and educational project of the Ecole urbaine de Lyon, notably with a cycle of public courses entirely devoted to it, taught by Michel Lussault, director of the Ecole urbaine de Lyon. Full program of the Cours public "Qu'est-ce que l'anthropocène ?": For this first year, the course, which will continue in the following academic years, will lay the groundwork for a possible critical theory of the Anthropocene. It will present the different approaches to the Anthropocene, as well as existing challenges to the relevance of the word. The program comprises six 90-minute sessions (each ending with a short question period): 1. Inaugural lecture. Planète, Terre, Monde: the global takes the stage. Thursday, January 24, 5 pm, Halles du Faubourg, 10, impasse des Chalets, Lyon 7ème. Listen to the podcast of the inaugural lecture. 2. The rupture of the 1950s, or the "Great Acceleration". Thursday, February 14, 12:30 pm, amphitheater of the MILC, 35 rue Raulin, Lyon 7ème. Listen to the podcast of the 2nd session. 3. Why should we accept the novelty of the Anthropocene? The rise of systems thinking and its scientific and political consequences. Thursday, February 28, 12:30 pm, salle 110, IUT Lyon 3, 88 Rue Pasteur, Lyon 7ème. Listen to the podcast of the 3rd session. 4. The Planète/Terre/Monde (PTM) system: a survey. Thursday, March 7, 12:30 pm, Bâtiment Thémis, Amphi 8, Université Claude Bernard Lyon 1, Campus LyonTech La Doua, 11 Avenue Gaston Berger, Villeurbanne. Tramway T1: La Doua Gaston Berger stop. Listen to the podcast of the 4th session. 5. The Earth system and its government. Thursday, March 14, 12:30 pm, Université de Lyon, Grand amphithéâtre, 90, rue Pasteur, Lyon 7ème. Tramway T1: Quai Claude Bernard stop. Listen to the podcast of the 5th session. 6. What trajectory for the PTM system? The Anthropocene as a global scientific and political challenge. Wednesday, April 10, 12:30-2 pm, Les Halles du Faubourg. We will close with a question: this system, whose current evolution is worrying, what might its trajectory be, according to recent research, and what political recommendations can or must this reflection lead to?
This session will show that, over the past fifteen years or so, approaches to the Anthropocene have made "Earth System Thinking" the key to understanding the changes underway. A new scientific paradigm is taking hold, one that would make it possible to specify what is new about the Anthropocene relative to all prior anthropogenic modifications. The present course in fact takes this perspective, even though it proposes to consider not the Earth system but the Planète/Terre/Monde system, as explained in sessions 1 and 2. But this systems thinking now also permeates the field of global mobilization against global change, and even the domain of (geo)political reflection. We will examine this through a first, brief analysis of the dialogue established within the UN on "Earth Jurisprudence". The Cours publics of the Ecole urbaine de Lyon are devoted to urban and Anthropocene questions. This activity is modeled on the courses of the Collège de France: researchers from the Université de Lyon, from any discipline, present the results of their research to anyone who wishes to attend these sessions (freely, with no degree requirements and no final assessment), with the aim of putting scientific knowledge back at the heart of society. The Cours public "Qu'est-ce que l'anthropocène ?": The term Anthropocene is gaining ever wider currency, among scientists and the general public alike. In both cases it sparks many debates and discussions, which is an index of its power. That power comes from what the term covers: the idea of a bifurcation in the history of the humanization of the planet, a bifurcation that may threaten the habitability of the Earth. That is why it has been placed at the heart of the scientific and educational project of the Ecole urbaine de Lyon, notably with a cycle of public courses entirely devoted to it, taught by Michel Lussault, director of the Ecole urbaine de Lyon. Full program of the Cours public "Qu'est-ce que l'anthropocène ?": For this first year, the course, which will continue in the following academic years, will lay the groundwork for a possible critical theory of the Anthropocene. It will present the different approaches to the Anthropocene, as well as existing challenges to the relevance of the word. The program comprises six 90-minute sessions (each ending with a short question period): 1. Inaugural lecture. Planète, Terre, Monde: the global takes the stage. Thursday, January 24, 5 pm, Halles du Faubourg, 10, impasse des Chalets, Lyon 7ème. Listen to the podcast of the inaugural lecture. 2. The rupture of the 1950s, or the "Great Acceleration". Thursday, February 14, 12:30 pm, amphitheater of the MILC, 35 rue Raulin, Lyon 7ème. Listen to the podcast of the 2nd session. 3. Why should we accept the novelty of the Anthropocene? The rise of systems thinking and its scientific and political consequences. Thursday, February 28, 12:30 pm, salle 110, IUT Lyon 3, 88 Rue Pasteur, Lyon 7ème. 4. The Planète/Terre/Monde (PTM) system: a survey, part 1. Thursday, March 7, 12:30 pm, Bâtiment Thémis, Amphi 8, Université Claude Bernard Lyon 1, Campus LyonTech La Doua, 11 Avenue Gaston Berger, Villeurbanne. 5. The PTM system: a survey, part 2: the role and place of urbanization. Thursday, March 14, 12:30 pm, Université de Lyon, Grand amphithéâtre, 92, rue Pasteur, Lyon 7ème. 6. What trajectory for the PTM system? The Anthropocene as a global scientific and political challenge. Thursday, April 11, 12:30 pm, venue to be determined.
My guest today is Mark Rzepczynski. Prior to co-founding AMPHI, Mark was the CEO of the fund group at FourWinds Capital Mgmt. Mark was also President and CIO at John W. Henry & Co., an iconic Commodity Trading Advisor. Mark has headed fixed income research at Fidelity Management and Research, served as senior economist for the CME, and as a finance professor at the University of Houston Bauer College of Business. The topic is Trend Following. In this episode of Trend Following Radio we discuss: Controlling volatility Style diversification Long-term vs. short-term managers Simplicity beats complexity Quant trading Jump in! --- I'm MICHAEL COVEL, the host of TREND FOLLOWING RADIO, and I'm proud to have delivered 10+ million podcast listens since 2012. Investments, economics, psychology, politics, decision-making, human behavior, entrepreneurship and trend following are all passionately explored and debated on my show. To start, I'd like to give you a great piece of advice you can use in your life and trading journey… cut your losses! You will find much more about that philosophy here: https://www.trendfollowing.com/trend/ You can watch a free video here: https://www.trendfollowing.com/video/ Can't get enough of this episode? You can choose from my thousand plus episodes here: https://www.trendfollowing.com/podcast My social media platforms: Twitter: @covel Facebook: @trendfollowing LinkedIn: @covel Instagram: @mikecovel Hope you enjoy my never-ending podcast conversation!
In this video, Agnès Benassy-Quéré, economics professor at Université Paris 1, describes how setting up a "pad" (an online collaborative text editor) once again allows her students to ask her questions during her lecture-hall course. She sets aside time to review students' questions at the start of the lecture and at appropriate moments during it. She then sorts them, answers some immediately, and addresses the rest as the course progresses. This simple arrangement lets students get quick answers to their questions, anonymously and without interrupting the lecture, while giving the instructor real-time feedback on comprehension.
We talk about various stuff, including what must be a secret moustache society, getting shat on by a bird, the Amphi festival and related Amsterdam trip stuff, a half metre sausage, a cunty dude, one person moshpits, and close off with an item of interest. If you enjoy the content, don't forget to like & share this post! http://wearenotheretopleaseyou.blogspot.fi http://www.podgodsnetwork.com/
Gregor PRIDUN, Horst JENS and Horst S. chat about free software and other nerd topics. Show notes at http://goo.gl/8lOym6 or http://biertaucher.at
Jeanette has been on vacation for a few days; last weekend she was in Cologne, where among other things she attended Amphi, a gothic festival that has been held regularly for several years and regularly sells out. Last weekend I was invited to a garden party at a friend's place near Munich, where more or less by chance I bought a new car, which I can pick up there next Saturday.
An important reminder about registration!
The title pretty much says it all.
A quick reminder about our 8th Grade Night taking place on March 12, 2008!
Final Announcements for 2007!
Quick update for seniors at Amphi HS!
This is an update given on 10/26/2007 for the following week.
Character Counts Week @ Amphi HS!
Character Counts Week @ Amphi HS
This is day one of Character Counts week at Amphi HS!
This is an update for what to expect the week of September 3rd.
Quick look at two important areas of the AHS Counseling website that will help keep students updated.
This is part two of the Honors Freshman Program presentation that took place during the Amphitheater High School Open House on August 22nd. The following is a slidecast of Mr. Corrigan’s presentation.
This is part one of the Honors Freshman Program presentation that took place during the Amphitheater High School Open House on August 22nd. Part two will be Mr. Corrigan’s presentation in the form of a slidecast.
Important updates for the week of 08/06/2007
A quick update for current Amphi NHS members.
The LINK Crew at Amphitheater High School has a special invitation for the class of 2011!
This podcast will give an overview of the Freshman Transitions Program at Amphitheater High School located in Tucson, AZ.
A quick preview of what to expect during the first days of the Freshman Transitions Class at Amphitheater High School, located in Tucson, AZ.
Here is an important update regarding the registration process at Amphitheater High School.
Important updates and information for incoming freshmen at Amphitheater High School.
A quick update on some important dates for the 2007-08 school year.
The roster of NHS members of the class of 2007 as read at the Honors Convocation on May 14, 2007.
This is a re-broadcast of an earlier NHS announcement.
Our NHS podcast is new and improved! Don't forget that you can also check us out at www.myspace.com/amphinhs!
Amphitheater High School weekly announcements for April 4, 2007. Many of these announcements have to do with senior updates and the Counseling Office.
Important dates for the Amphitheater High School Class of 2007!
Sandy Spiewak, Counseling Dept. Chair, gives details on the "Career Explorations & Transitions" class in our very 1st video!
The counselors at Amphitheater High School will be registering the 8th graders at LaCima Middle School and Amphi Middle School this week.
Updates for the juniors at Amphitheater High School in Tucson, Arizona.
This is important information that our seniors need to know. We talk briefly about financial aid, college, and the AIMS tests.