Podcasts about unet

  • 35 podcasts
  • 41 episodes
  • 45m average episode duration
  • 1 new episode per month
  • Latest episode: Aug 16, 2024

Latest podcast episodes about unet

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
AI Magic: Shipping 1000s of successful products with no managers and a team of 12 — Jeremy Howard of Answer.ai


Aug 16, 2024 • 58:56


Disclaimer: We recorded this episode ~1.5 months ago, timed for the FastHTML release. It then got bottlenecked by the Llama 3.1, Winds of AI Winter, and SAM2 episodes, so we're a little late. Since then FastHTML was released, swyx is building an app in it for AINews, and Anthropic has also released their prompt caching API.

Remember when Dylan Patel of SemiAnalysis coined the GPU Rich vs GPU Poor war? (If not, see our pod with him.) The idea was that if you're GPU poor you shouldn't waste your time trying to solve GPU rich problems (i.e. pre-training large models) and are better off working on fine-tuning, optimized inference, etc. Jeremy Howard (see our "End of Finetuning" episode to catch up on his background) and Eric Ries founded Answer.AI to do exactly that: "Practical AI R&D", which is very in line with GPU poor needs. For example, one of their first releases was a system based on FSDP + QLoRA that let anyone train a 70B model on two NVIDIA 4090s.

Since then, they have come out with a long list of super useful projects (in no particular order, and non-exhaustive):

* FSDP QDoRA: just as memory-efficient and scalable as FSDP/QLoRA, and critically also as accurate for continued pre-training as full-weight training.
* Cold Compress: a KV cache compression toolkit that lets you scale sequence length without impacting speed.
* colbert-small: a state-of-the-art retriever at only 33M params.
* JaColBERTv2.5: a new state-of-the-art retriever on all Japanese benchmarks.
* gpu.cpp: portable GPU compute for C++ with WebGPU.
* Claudette: a better Anthropic API SDK.

They also recently released FastHTML, a new way to create modern interactive web apps.
Jeremy recently released a 1 hour "Getting started" tutorial on YouTube; while this isn't AI related per se, it's close to home for any AI Engineer looking to iterate quickly on new products. In this episode we broke down 1) how they recruit, 2) how they organize what to research, and 3) how the community comes together. At the end, Jeremy gave us a sneak peek at something new that he's working on that he calls dialogue engineering:

"So I've created a new approach. It's not called prompt engineering. I'm creating a system for doing dialogue engineering. It's currently called AI magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it."

He explains it a bit more ~44:53 in the pod, but we'll just have to wait for the public release to figure out exactly what he means.

Timestamps

* [00:00:00] Intro by Suno AI
* [00:03:02] Continuous Pre-Training is Here
* [00:06:07] Schedule-Free Optimizers and Learning Rate Schedules
* [00:07:08] Governance and Structural Issues within OpenAI and Other AI Labs
* [00:13:01] How Answer.ai works
* [00:23:40] How to Recruit Productive Researchers
* [00:27:45] Building a new BERT
* [00:31:57] FSDP, QLoRA, and QDoRA: Innovations in Fine-Tuning Large Models
* [00:36:36] Research and Development on Model Inference Optimization
* [00:39:49] FastHTML for Web Application Development
* [00:46:53] AI Magic & Dialogue Engineering
* [00:52:19] AI wishlist & predictions

Show Notes

* Jeremy Howard
* Previously on Latent Space: The End of Finetuning, NeurIPS Startups
* Answer.ai
* Fast.ai
* FastHTML
* answerai-colbert-small-v1
* gpu.cpp
* Eric Ries
* Aaron DeFazio
* Yi Tay
* Less Wright
* Benjamin Warner
* Benjamin Clavié
* Jono Whitaker
* Austin Huang
* Eric Gilliam
* Tim Dettmers
* Colin Raffel
* Sebastian Raschka
* Carson Gross
* Simon Willison
* Sepp Hochreiter
* Llama 3.1 episode
* Snowflake Arctic
* Ranger Optimizer
* Gemma.cpp
* HTMX
* UL2
* BERT
* DeBERTa
* Efficient finetuning of Llama 3 with FSDP QDoRA
* xLSTM

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: And today we're back with Jeremy Howard, I think your third appearance on Latent Space. Welcome.

Jeremy [00:00:19]: Wait, third? Second?

Swyx [00:00:21]: Well, I grabbed you at NeurIPS.

Jeremy [00:00:23]: I see.

Swyx [00:00:24]: Very fun, standing outside street episode.

Jeremy [00:00:27]: I never heard that, by the way. You've got to send me a link. I've got to hear what it sounded like.

Swyx [00:00:30]: Yeah. Yeah, it's a NeurIPS podcast.

Alessio [00:00:32]: I think the two episodes are six hours, so there's plenty to listen to, we'll make sure to send it over.

Swyx [00:00:37]: Yeah, we're trying this thing where at the major ML conferences, we, you know, do a little audio tour to give people a sense of what it's like. But the last time you were on, you declared the end of fine tuning. I sort of editorialized the title a little bit, and I know you were slightly uncomfortable with it, but you just owned it anyway. I think you're very good at the hot takes. And we were just discussing in our pre-show that it's really happening, that continued pre-training is really happening.

Jeremy [00:01:02]: Yeah, absolutely.
I think people are starting to understand that treating the three ULMFiT steps of, like, pre-training, you know, and then the kind of like what people now call instruction tuning, and then, I don't know if we've got a general term for this, the DPO/RLHF step, you know, or the task training, they're not actually as separate as we originally suggested they were in our paper, and when you treat it more as a continuum, and that you make sure that you have, you know, more of kind of the original data set incorporated into the later stages, and that, you know, we've also seen with Llama 3, this idea that those later stages can be done for a lot longer. These are all of the things I was kind of trying to describe there. It wasn't the end of fine tuning, but more that we should treat it as a continuum, and we should have much higher expectations of how much you can do with an already trained model. You can really add a lot of behavior to it, you can change its behavior, you can do a lot. So a lot of our research has been around trying to figure out how to modify the model by a larger amount rather than starting from random weights, because I get very offended at the idea of starting from random weights.

Swyx [00:02:14]: Yeah, I saw that at ICLR in Vienna, there was an outstanding paper about starting transformers from data-driven priors.
I don't know if you saw that one, they called it sort of never train from scratch, and I think it was kind of rebelling against the sort of random initialization.

Jeremy [00:02:28]: Yeah, you know, that's been our kind of continuous message since we started Fast.ai: if you're training from random weights, you better have a really good reason, you know, because it seems so unlikely to me that nobody has ever trained on data that has any similarity whatsoever to the general class of data you're working with, and that's the only situation in which I think starting from random weights makes sense.

Swyx [00:02:51]: The other trend since our last pod that I would point people to is I'm seeing a rise in multi-phase pre-training. So Snowflake released a large model called Snowflake Arctic, where they detailed three phases of training where they had a different mixture: there was 75% web in the first instance, and then they reduced the percentage of the web text by 10% each time and increased the amount of code in each phase. And I feel like multi-phase is being called out in papers more. I feel like it's always been a thing, like changing data mix is not something new, but calling it a distinct phase is new, and I wonder if there's something that you're seeing on your end.

Jeremy [00:03:32]: Well, so they're getting there, right? So the point at which they're doing proper continued pre-training is the point at which that becomes a continuum rather than a phase. So the only difference with what I was describing last time is to say like, oh, there's a function or whatever, which is happening every batch. It's not a huge difference. You know, I always used to get offended when people had learning rates that, like, jumped. And so one of the things I started doing early on in Fast.ai was to say to people: no, your learning rate schedule should be a function, not a list of numbers.
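A minimal sketch of the idea being described here: both the learning rate and, in the same spirit, the data mix expressed as functions of training progress rather than a list of per-phase values. This is illustrative only (not code from the episode or from Fast.ai); the function names and the specific 75%→55% web-to-code shift are taken from the Snowflake Arctic phases mentioned above, smoothed into a continuum.

```python
import math

def lr_schedule(progress: float, lr_max: float = 3e-4, lr_min: float = 3e-5) -> float:
    """Cosine-decay learning rate as a *function* of progress in [0, 1],
    instead of a hand-written list of per-phase values."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

def data_mix(progress: float) -> dict:
    """Smoothly shift the pre-training mix from mostly web text toward more
    code as training progresses -- a continuum rather than the discrete
    75% / 65% / 55% web phases described for Snowflake Arctic."""
    web = 0.75 - 0.20 * progress   # 75% web at the start, 55% at the end
    code = 0.10 + 0.20 * progress  # code share grows as the web share shrinks
    other = 1.0 - web - code       # everything else stays at 15%
    return {"web": web, "code": code, "other": other}

# Every batch simply asks the functions where it is in training:
for step, total in [(0, 1000), (500, 1000), (999, 1000)]:
    p = step / (total - 1)
    print(round(lr_schedule(p), 6), data_mix(p))
```

The point of the function form is that "phases" fall out as a special case: a step function recovers Arctic-style phases, while any smoother function gives the continuum Jeremy argues for.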
So now I'm trying to give the same idea about training mix.

Swyx [00:04:07]: There's been pretty public work from Meta on schedule-free optimizers. I don't know if you've been following Aaron DeFazio and what he's doing, just because you mentioned learning rate schedules, you know, what if you didn't have a schedule?

Jeremy [00:04:18]: I don't care very much, honestly. I don't think that schedule-free optimizer is that exciting. It's fine. We've had non-scheduled optimizers for ages, like Less Wright, who's now at Meta, who was part of the Fast.ai community there, created something called the Ranger optimizer. I actually like having more hyperparameters. You know, as soon as you say schedule-free, then, like, well, now I don't get to choose. And there isn't really a mathematically correct way of, like, I actually try to schedule more parameters rather than less. So like, I like scheduling my epsilon in my Adam, for example. I schedule all the things. But then the other thing we always did with the Fast.ai library was make it so you don't have to set any schedules. So Fast.ai always supported, like, you didn't even have to pass a learning rate. Like, it would always just try to have good defaults and do the right thing. But to me, I like to have more parameters I can play with if I want to, but you don't have to.

Alessio [00:05:08]: And then the more, less technical side, I guess, of your issue with the market was some of the large research labs taking all this innovation kind of behind closed doors and whether or not that's good, which it isn't. And now we could maybe make it more available to people. And then a month after we released the episode, there was the whole Sam Altman drama and, like, all the OpenAI governance issues. And maybe people started to think more, okay, what happens if some of these kind of labs, you know, start to break from within, so to speak? And the alignment of the humans is probably going to fall before the alignment of the models.
So I'm curious, like, if you have any new thoughts and maybe we can also tie in some of the way that we've been building Answer as, like, a public benefit corp and some of those aspects.

Jeremy [00:05:51]: Sure. So, yeah, I mean, it was kind of uncomfortable because two days before Altman got fired, I did a small public video interview in which I said, I'm quite sure that OpenAI's current governance structure can't continue and that it was definitely going to fall apart. And then it fell apart two days later and a bunch of people were like, what did you know, Jeremy?

Alessio [00:06:13]: What did Jeremy see?

Jeremy [00:06:15]: I didn't see anything. It's just obviously true. Yeah. So my friend Eric Ries and I spoke a lot before that about, you know, Eric's, I think probably most people would agree, the top expert in the world on startup and AI governance. And you know, we could both clearly see that it didn't make sense to have, like, a so-called non-profit where then there are people working at a company, a commercial company that's owned by or controlled nominally by the non-profit, where the people in the company are being given the equivalent of stock options, like everybody there was working there expecting to make money largely from their equity. So the idea that then a board could exercise control by saying like, oh, we're worried about safety issues and so we're going to do something that decreases the profit of the company, when every stakeholder in the company, their remuneration pretty much is tied to their profit, it obviously couldn't work. So I mean, that was a huge oversight there by someone. I guess part of the problem is that the kind of people who work at non-profits and in this case the board, you know, are kind of academics and, you know, people who are kind of true believers. I think it's hard for them to realize that 99.999% of the world is driven very heavily by money, especially huge amounts of money.
So yeah, Eric and I had been talking for a long time before that about what could be done differently, because also companies are sociopathic by design and so the alignment problem as it relates to companies has not been solved. Like, companies become huge, they devour their founders, they devour their communities and they do things where even the CEOs, you know, often of big companies tell me like, I wish our company didn't do that thing. You know, I know that if I didn't do it, then I would just get fired and the board would put in somebody else and the board knows if they don't do it, then their shareholders can sue them because they're not maximizing profitability or whatever. So what Eric's spent a lot of time doing is trying to think about how do we make companies less sociopathic, you know, or, maybe a better way to think of it is like, how do we make it so that the founders of companies can ensure that their companies continue to actually do the things they want them to do? You know, when we started a company, hey, we very explicitly decided we got to start a company, not an academic lab, not a nonprofit, you know. We created a Delaware C-corp, you know, the most company kind of company. But when we did so, we told everybody, you know, including our first investors, which was you, Alessio, that we are going to run this company on the basis of maximizing long-term value. And in fact, when we did our second round, which was an angel round, we had everybody invest through a long-term SPV, which we set up where everybody had to agree to vote in line with long-term value principles. So it's never enough just to say to people, okay, we're trying to create long-term value here for society as well as for ourselves and everybody's like, oh, yeah, yeah, I totally agree with that.
But when it comes to like, okay, well, here's a specific decision we have to make, which will not maximize short-term value, people suddenly change their mind. So you know, it has to be written into the legal documents of everybody so that there's no question that that's the way the company has to be managed. So then you mentioned the PBC aspect, Public Benefit Corporation, which I never quite understood previously. And it turns out it's incredibly simple, like it took, you know, like one paragraph added to our corporate documents to become a PBC. It was cheap, it was easy, but it's got this huge benefit, which is if you're not a public benefit corporation, then somebody can come along and offer to buy you with a stated description of, like, turning your company into the thing you most hate, right? And if they offer you more than the market value of your company and you don't accept it, then you are not necessarily meeting your fiduciary responsibilities. So the way Eric always described it to me is like, if Philip Morris came along and said that you've got great technology for marketing cigarettes to children, so we're going to pivot your company to do that entirely, and we're going to pay you 50% more than the market value, you're going to have to say yes. If you have a PBC, then you are more than welcome to say no, if that offer is not in line with your stated public benefit. So our stated public benefit is to maximize the benefit to society through using AI. So given that more children smoking doesn't do that, then we can say like, no, we're not selling to you.

Alessio [00:11:01]: I was looking back at some of our emails. You sent me an email on November 13th about talking and then on the 14th, I sent you an email with "working together to free AI" as the subject line. And then that was kind of the start of the seed round. And then two days later, someone got fired.
So you know, you were having these thoughts even before we had, like, a public example of why some of the current structures didn't work. So yeah, you were very ahead of the curve, so to speak. You know, people can read your awesome introduction blog on Answer.AI and the idea of having an R&D lab versus an R lab and then a D lab somewhere else. I think to me, the most interesting thing has been hiring and some of the awesome people that you've been bringing on that maybe don't fit the central casting of Silicon Valley, so to speak. Sometimes it's like playing baseball cards, you know, people are like, oh, what teams was this person on, where did they work, versus focusing on ability. So I would love for you to give a shout out to some of the awesome folks that you have on the team.

Jeremy [00:11:58]: So, you know, there's like a graphic going around describing, like, the people at xAI, you know, the Elon Musk thing. And like they are all connected to, like, multiple of Stanford, Meta, DeepMind, OpenAI, Berkeley, Oxford. Look, these are all great institutions and they have good people. And I'm definitely not at all against that, but damn, there's so many other people. And one of the things I found really interesting is almost any time I see something which I think, like, this is really high quality work and it's something I don't think would have been built if that person hadn't built the thing right now, I nearly always reach out to them and ask to chat. And I tend to dig in to find out like, okay, you know, why did you do that thing? Everybody else has done this other thing, your thing's much better, but it's not what other people are working on. And like 80% of the time, I find out the person has a really unusual background.
So like often they'll have like, either they like came from poverty and didn't get an opportunity to go to a good school or had dyslexia and, you know, got kicked out of school in year 11, or they had a health issue that meant they couldn't go to university or something happened in their past and they ended up out of the mainstream. And then they kind of succeeded anyway. Those are the people that throughout my career, I've tended to kind of accidentally hire more of, but it's not exactly accidentally. It's like when I see somebody who's done, two people who have done extremely well, one of them did extremely well in exactly the normal way from the background entirely pointing in that direction and they achieved all the hurdles to get there. And like, okay, that's quite impressive, you know, but another person who did just as well, despite lots of constraints and doing things in really unusual ways and came up with different approaches. That's normally the person I'm likely to find useful to work with because they're often like risk-takers, they're often creative, they're often extremely tenacious, they're often very open-minded. So that's the kind of folks I tend to find myself hiring. So now at Answer.ai, it's a group of people that are strong enough that nearly every one of them has independently come to me in the past few weeks and told me that they have imposter syndrome and they're not convinced that they're good enough to be here. And I kind of heard it at the point where I was like, okay, I don't think it's possible that all of you are so far behind your peers that you shouldn't get to be here. But I think part of the problem is as an R&D lab, the great developers look at the great researchers and they're like, wow, these big-brained, crazy research people with all their math and s**t, they're too cool for me, oh my God. 
And then the researchers look at the developers and they're like, oh, they're killing it, making all this stuff with all these people using it and talking on Twitter about how great it is. I think they're both a bit intimidated by each other, you know. And so I have to kind of remind them like, okay, there are lots of things in this world where you suck compared to lots of other people in this company, but also vice versa, you know, for all things. And the reason you came here is because you wanted to learn about those other things from those other people and have an opportunity to like bring them all together into a single unit. You know, it's not reasonable to expect you're going to be better at everything than everybody else. I guess the other part of it is for nearly all of the people in the company, to be honest, they have nearly always been better than everybody else at nearly everything they're doing nearly everywhere they've been. So it's kind of weird to be in this situation now where it's like, gee, I can clearly see that I suck at this thing that I'm meant to be able to do compared to these other people where I'm like the worst in the company at this thing for some things. So I think that's a healthy place to be, you know, as long as you keep reminding each other about that's actually why we're here. And like, it's all a bit of an experiment, like we don't have any managers. We don't have any hierarchy from that point of view. So for example, I'm not a manager, which means I don't get to tell people what to do or how to do it or when to do it. Yeah, it's been a bit of an experiment to see how that would work out. And it's been great. So for instance, Ben Clavier, who you might have come across, he's the author of Ragatouille, he's the author of Rerankers, super strong information retrieval guy. And a few weeks ago, you know, this additional channel appeared on Discord, on our private Discord called Bert24. 
And these people started appearing, as in our collab sections, we have a collab section for like collaborating with outsiders. And these people started appearing, there are all these names that I recognize, like Bert24, and they're all talking about like the next generation of Bert. And I start following along, it's like, okay, Ben decided that I think, quite rightly, we need a new Bert. Because everybody, like so many people are still using Bert, and it's still the best at so many things, but it actually doesn't take advantage of lots of best practices. And so he just went out and found basically everybody who's created better Berts in the last four or five years, brought them all together, suddenly there's this huge collaboration going on. So yeah, I didn't tell him to do that. He didn't ask my permission to do that. And then, like, Benjamin Warner dived in, and he's like, oh, I created a whole transformers from scratch implementation designed to be maximally hackable. He originally did it largely as a teaching exercise to show other people, but he was like, I could, you know, use that to create a really hackable BERT implementation. In fact, he didn't say that. He said, I just did do that, you know, and I created a repo, and then everybody's like starts using it. They're like, oh my god, this is amazing. I can now implement all these other BERT things. And it's not just answer AI guys there, you know, there's lots of folks, you know, who have like contributed new data set mixes and blah, blah, blah. So, I mean, I can help in the same way that other people can help. So like, then Ben Clavier reached out to me at one point and said, can you help me, like, what have you learned over time about how to manage intimidatingly capable and large groups of people who you're nominally meant to be leading? And so, you know, I like to try to help, but I don't direct. 
Another great example was Kerem, who, after our FSDP QLoRA work, decided quite correctly that it didn't really make sense to use LoRA in today's world. You want to use the normalized version, which is called DoRA. Like two or three weeks after we did FSDP QLoRA, he just popped up and said, okay, I've just converted the whole thing to DoRA, and I've also created these VLLM extensions, and I've got all these benchmarks, and, you know, now I've got training of quantized models with adapters that are as fast as LoRA, and actually better, weirdly, than fine tuning. Just like, okay, that's great, you know. And yeah, so the things we've done to try to help make these things happen as well is we don't have any required meetings, you know, but we do have a meeting for each pair of major time zones that everybody's invited to, and, you know, people see their colleagues doing stuff that looks really cool and say, like, oh, how can I help, you know, or how can I learn or whatever. So another example is Austin, who, you know, amazing background. He ran AI at Fidelity, he ran AI at Pfizer, he ran browsing and retrieval for Google's DeepMind stuff, created Gemma.cpp, and he's been working on a new system to make it easier to do WebGPU programming, because, again, he quite correctly identified a need there. So I said to him, like, okay, I want to learn about that. Not an area that I have much expertise in, so, you know, he's going to show me what he's working on and teach me a bit about it, and hopefully I can help contribute. I think one of the key things that's happened in all of these is everybody understands what Eric Gilliam, who wrote the second blog post in our series, the R&D historian, describes as a large yard with narrow fences. Everybody has total flexibility to do what they want.
We all understand kind of roughly why we're here, you know, we agree with the premises around, like, everything's too expensive, everything's too complicated, people are building too many vanity foundation models rather than taking better advantage of fine-tuning, like, there's this kind of general sense of we're all on the same wavelength about, you know, all the ways in which current research is fucked up, and, you know, all the ways in which we're worried about centralization. We all care a lot about not just research for the point of citations, but research that actually wouldn't have happened otherwise, and actually is going to lead to real-world outcomes. And so, yeah, with this kind of, like, shared vision, people understand, like, you know, so when I say, like, oh, well, you know, tell me, Ben, about BERT 24, what's that about? And he's like, you know, you can see it from an accessibility point of view, or you can see it from a kind of actual practical impact point of view, there's far too much focus on decoder-only models, and, you know, BERT's used in all of these different places in industry, and so I can see, like, in terms of our basic principles, what we're trying to achieve, this seems like something important. And so I think it's really helpful that we have that kind of shared perspective, you know?

Alessio [00:21:14]: Yeah. And before we maybe talk about some of the specific research, when you're, like, reaching out to people, interviewing them, what are some of the traits, like, how do these things come out, you know, usually? Is it working on side projects that you're already familiar with? Is there anything in the interview process that helps you screen for people that are less pragmatic and more research-driven versus some of these folks that are just gonna do it, you know?
They're not waiting for, like, the perfect process.

Jeremy [00:21:40]: Everybody who comes through the recruiting is interviewed by everybody in the company. You know, our goal is 12 people, so it's not an unreasonable amount. So the other thing to say is everybody so far who's come into the recruiting pipeline, everybody bar one, has been hired. Which is to say our original curation has been good. And that's actually pretty easy, because nearly everybody who's come in through the recruiting pipeline are people I know pretty well. So Jono Whitaker and I, you know, he worked on the stable diffusion course we did. He's outrageously creative and talented, and he's a super enthusiastic tinkerer, just likes making things. Benjamin was one of the strongest parts of the fast.ai community, which is now the alumni. It's, like, hundreds of thousands of people. And you know, again, they're not people who a normal interview process would pick up, right? So Benjamin doesn't have any qualifications in math or computer science. Jono was living in Zimbabwe, you know, he was working on, like, helping some African startups, you know, but not FAANG kind of credentials. But yeah, I mean, when you actually see people doing real work and they stand out above, you know, we've got lots of Stanford graduates and OpenAI people and whatever in our alumni community as well. You know, when you stand out above all of those people anyway, obviously you've got something going for you. You know, Austin, him and I worked together on the masks study we did in the Proceedings of the National Academy of Sciences. You know, we had worked together, and again, that was a group of, like, basically the 18 or 19 top experts in the world on public health and epidemiology and research design and so forth. And Austin was, you know, one of the strongest people in that collaboration.
So yeah, you know, like, I've been lucky enough to have had opportunities to work with some people who are great and, you know, I'm a very open-minded person, so I kind of am always happy to try working with pretty much anybody and some people stand out. You know, there have been some exceptions, people I haven't previously known, like Ben Clavier, actually, I didn't know before. But you know, with him, you just read his code, and I'm like, oh, that's really well-written code. And like, it's not written exactly the same way as everybody else's code, and it's not written to do exactly the same thing as everybody else's code. So yeah, and then when I chatted to him, it's just like, I don't know, I felt like we'd known each other for years, like we just were on the same wavelength, but I could pretty much tell that was going to happen just by reading his code. I think you express a lot in the code you choose to write and how you choose to write it, I guess. You know, or another example, a guy named Vik, who was previously the CEO of Dataquest, and like, in that case, you know, he's created a really successful startup. He won the first, basically, Kaggle NLP competition, which was automatic essay grading. He's got the current state-of-the-art OCR system, Surya. Again, he's just a guy who obviously just builds stuff, you know, he doesn't ask for permission, he doesn't need any, like, external resources. Actually, Kerem's another great example of this, I mean, I already knew Kerem very well because he was my best ever master's student, but it wasn't a surprise to me then when he went off to create the world's state-of-the-art language model in Turkish on his own, in his spare time, with no budget, from scratch. This is not fine-tuning or whatever, he, like, went back to Common Crawl and did everything. Yeah, it's kind of, I don't know what I'd describe that process as, but it's not at all based on credentials.

Swyx [00:25:17]: Assembled based on talent, yeah.
We wanted to dive in a little bit more on, you know, turning from the people side of things into the technical bets that you're making. Just a little bit more on BERT. We actually just did an interview with Yi Tay from Reka, I don't know if you're familiar with his work, but also another encoder-decoder bet, and one of his arguments was actually people kind of over-index on the decoder-only GPT-3 type paradigm. I wonder if you have thoughts there that are maybe non-consensus as well.

Jeremy [00:25:45]: Yeah, no, absolutely. So I think it's a great example. So one of the people we're collaborating with a little bit on BERT24 is Colin Raffel, who is the guy behind, yeah, most of that stuff, you know, between that and UL2, there's a lot of really interesting work. And so one of the things I've been encouraging the BERT group to do, Colin has as well, is to consider using a T5 pre-trained encoder backbone as a thing you fine-tune, which I think would be really cool. You know, Colin was also saying actually just use encoder-decoder as your BERT, you know, why don't you use that as a baseline, which I also think is a good idea. Yeah, look.

Swyx [00:26:25]: What technical arguments are people under-weighting?

Jeremy [00:26:27]: I mean, Colin would be able to describe this much better than I can, but I'll give my slightly non-expert attempt. Look, I mean, think about diffusion models, right? Like in stable diffusion, we use things like UNet. You have this kind of downward path and then in the upward path you have the cross connections, which is not attention, but it's a similar idea, right? You're inputting the original encoding path into your decoding path. It's critical to make it work, right? Because otherwise in the decoding part, the model has to do so much kind of from scratch. So like if you're doing translation, that's a classic kind of encoder-decoder example.
If it's decoder only, you never get the opportunity to find the right, you know, feature engineering, the right feature encoding for the original sentence. And it kind of means then on every token that you generate, you have to recreate the whole thing, you know? So if you have an encoder, it's basically saying like, okay, this is your opportunity, model, to create a really useful feature representation for your input information. So I think there's really strong arguments for encoder-decoder models anywhere that there is this kind of like context or source thing. And then why encoder only? Well, because so much of the time what we actually care about is a classification, you know? It's like an output. It's not like generating an arbitrary length sequence of tokens. So anytime you're not generating an arbitrary length sequence of tokens, decoder models don't seem to make much sense. Now the interesting thing is, you see on like Kaggle competitions, that decoder models still are at least competitive with things like DeBERTa v3. They have to be way bigger to be competitive with things like DeBERTa v3. And the only reason they are competitive is because people have put a lot more time and money and effort into training the decoder only ones, you know? There isn't a recent DeBERTa. There isn't a recent BERT. Yeah, it's a whole part of the world that people have slept on a little bit. And this is just what happens. This is how trends happen rather than like, to me, everybody should be like, oh, let's look at the thing that has shown signs of being useful in the past, but nobody really followed up with properly. That's the more interesting path, you know, where people tend to be like, oh, I need to get citations. So what's everybody else doing? Can I make it 0.1% better, you know, or 0.1% faster? That's what everybody tends to do. Yeah.
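Jeremy's point about the encoder's role can be sketched as toy cross-attention: the source is encoded once, and every decoding step just reads from that fixed representation instead of re-deriving it. This is an illustrative numpy sketch I've added, not any particular model's code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(decoder_h, encoder_h):
    """Each decoder position attends over the encoder's representation
    of the source, so the source is encoded once rather than
    reconstructed at every generated token."""
    scores = decoder_h @ encoder_h.T / np.sqrt(encoder_h.shape[-1])
    return softmax(scores) @ encoder_h

rng = np.random.default_rng(0)
src = rng.normal(size=(7, 16))   # encoder output for a 7-token source
tgt = rng.normal(size=(3, 16))   # decoder states for 3 generated tokens
out = cross_attention(tgt, src)
print(out.shape)  # (3, 16)
```

The decoder-only alternative has to rebuild its understanding of the source inside the same token-by-token stream, which is the inefficiency Jeremy is pointing at.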
So I think Yi Tay's work is commercially interesting now, because here's a whole model that's been trained in a different way. So there's probably a whole lot of tasks it's probably better at than GPT and Gemini and Claude. So that should be a good commercial opportunity for them if they can figure out what those tasks are.Swyx [00:29:07]: Well, if rumors are to be believed, and he didn't comment on this, but, you know, Snowflake may figure out the commercialization for them. So we'll see.Jeremy [00:29:14]: Good.Alessio [00:29:16]: Let's talk about FSDP, QLoRA, QDoRA, and all of that awesome stuff. One of the things we talked about last time, some of these models are meant to run on systems that nobody can really own, no single person. And then you were like, well, what if you could fine tune a 70B model on like a 4090? And I was like, no, that sounds great, Jeremy, but like, can we actually do it? And then obviously you all figured it out. Can you maybe tell us some of the war stories behind that, like the idea behind FSDP, which is fully sharded data parallel computation, and then QLoRA, which is do not touch all the weights, just go quantize some of the model, and then within the quantized model only do certain layers instead of doing everything.Jeremy [00:29:57]: Well, do the adapters. Yeah.Alessio [00:29:59]: Yeah. Yeah. Do the adapters. Yeah. I will leave the floor to you. I think before you published it, nobody thought this was like a short term thing that we're just going to have. And now it's like, oh, obviously you can do it, but it's not that easy.Jeremy [00:30:12]: Yeah. I mean, to be honest, it was extremely unpleasant work to do. It's like not at all enjoyable. I kind of did version 0.1 of it myself before we had launched the company, or at least the kind of like the pieces. They're all pieces that are difficult to work with, right?
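Some rough arithmetic (mine, not from the episode) shows why both 4-bit quantization and sharding were needed before a 70B model could fit on two 24 GB 4090s at all. This ignores activations, adapter optimizer state, and quantization block metadata, so the real numbers are tighter:

```python
# Back-of-the-envelope memory math for FSDP + QLoRA on two 4090s:
# a 70B-parameter base model stored at 4 bits per weight, sharded
# across two 24 GB cards, with small LoRA adapters kept in fp16.
params = 70e9
base_bytes = params * 0.5            # 4 bits = half a byte per weight
per_gpu_gb = base_bytes / 2 / 1e9    # sharded over two GPUs
print(round(per_gpu_gb, 1))          # ~17.5 GB of base weights per card

# versus unsharded-equivalent fp16 weights, far beyond any consumer card:
fp16_per_gpu_gb = params * 2 / 2 / 1e9
print(round(fp16_per_gpu_gb, 1))     # ~70.0 GB per card even when sharded
```

17.5 GB per card leaves only a few gigabytes of headroom on a 24 GB 4090, which is why every remaining byte (activations, CUDA context, adapter state) had to be fought for.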
So for the quantization, you know, I chatted to Tim Dettmers quite a bit and, you know, he very much encouraged me by saying like, yeah, it's possible. He actually thought it'd be easy. It probably would be easy for him, but I'm not Tim Dettmers. And, you know, so he wrote bitsandbytes, which is his quantization library. You know, he wrote that for a paper. He didn't write that to be production like code. It's now like everybody's using it, at least the CUDA bits. So like, it's not particularly well structured. There's lots of code paths that never get used. There's multiple versions of the same thing. You have to try to figure it out. So trying to get my head around that was hard. And you know, because the interesting bits are all written in CUDA, it's hard to like to step through it and see what's happening. And then, you know, FSDP is this very complicated library in PyTorch, which is not particularly well documented. So the only really, really way to understand it properly is again, just read the code and step through the code. And then like bitsandbytes doesn't really work in practice unless it's used with PEFT, the HuggingFace library, and PEFT doesn't really work in practice unless you use it with other things. And there's a lot of coupling in the HuggingFace ecosystem where like none of it works separately. You have to use it all together, which I don't love. So yeah, trying to just get a minimal example that I can play with was really hard. And so I ended up having to rewrite a lot of it myself to kind of create this like minimal script. One thing that helped a lot was Meta had this llama-recipes repo that came out just a little bit before I started working on that. And like they had a kind of role model example of like, here's how to train with FSDP and LoRA (it didn't work with QLoRA) on Llama.
A lot of the stuff I discovered, the interesting stuff, would be put together by Les Wright, who's, he was actually the guy in the Fast.ai community I mentioned who created the Ranger optimizer. So he's doing a lot of great stuff at Meta now. So yeah, I kind of, that helped get some minimum stuff going and then it was great once Benjamin and Jono joined full time. And so we basically hacked at that together and then Karim joined like a month later or something. And it was like, gee, it was just a lot of like fiddly detailed engineering on like barely documented bits of obscure internals. So my focus was to see if it kind of could work and I kind of got a bit of a proof of concept working and then the rest of the guys actually did all the work to make it work properly. And, you know, every time we thought we had something, you know, we needed to have good benchmarks, right? So we'd like, it's very easy to convince yourself you've done the work when you haven't, you know, so then we'd actually try lots of things and be like, oh, and in these like really important cases, the memory use is higher, you know, or it's actually slower. And we'd go in and we just find like all these things that were nothing to do with our library that just didn't work properly. And nobody had noticed they hadn't worked properly because nobody had really benchmarked it properly. So we ended up, you know, trying to fix a whole lot of different things. And even as we did so, new regressions were appearing in like transformers and stuff that Benjamin then had to go away and figure out like, oh, how come flash attention doesn't work in this version of transformers anymore with this set of models and like, oh, it turns out they accidentally changed this thing, so it doesn't work. You know, there's just, there's not a lot of really good performance type evals going on in the open source ecosystem.
So there's an extraordinary amount of like things where people say like, oh, we built this thing and it has this result. And when you actually check it, so yeah, there's a shitload of war stories from getting that thing to work. And it did require a particularly like tenacious group of people and a group of people who don't mind doing a whole lot of kind of like really janitorial work, to be honest, to get the details right, to check them. Yeah.Alessio [00:34:09]: We had Tri Dao on the podcast and we talked about how a lot of it is like systems work to make some of these things work. It's not just like beautiful, pure math that you do on a blackboard. It's like, how do you get into the nitty gritty?Jeremy [00:34:22]: I mean, flash attention is a great example of that. Like it's, it basically is just like, oh, let's just take the attention and just do the tiled version of it, which sounds simple enough, you know, but then implementing that is challenging at lots of levels.Alessio [00:34:36]: Yeah. What about inference? You know, obviously you've done all this amazing work on fine tuning. Do you have any research you've been doing on the inference side, how to make local inference really fast on these models too?Jeremy [00:34:47]: We're doing quite a bit on that at the moment. We haven't released too much there yet. But one of the things I've been trying to do is also just to help other people. And one of the nice things that's happened is that a couple of folks at Meta, including Mark Saroufim, have done a nice job of creating this CUDA mode community of people working on like CUDA kernels or learning about that. And I tried to help get that going well as well and did some lessons to help people get into it. So there's a lot going on in both inference and fine tuning performance. And a lot of it's actually happening kind of related to that. So the PyTorch team have created this torchao project on quantization.
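For readers who haven't looked inside a quantization library, the core idea that bitsandbytes and torchao implement (in far more sophisticated, blockwise forms with custom kernels) reduces to absmax scaling. This toy per-tensor version is my own sketch, not their actual code:

```python
import numpy as np

def absmax_quantize(w, bits=4):
    # Scale weights into the signed integer range, round, and keep the
    # per-tensor scale so the weights can be approximately reconstructed.
    qmax = 2 ** (bits - 1) - 1          # 7 for 4-bit
    scale = np.abs(w).max() / qmax
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(64, 64)).astype(np.float32)
q, scale = absmax_quantize(w)
w_hat = dequantize(q, scale)
# Round-to-nearest error is bounded by half a quantization step:
print(np.abs(w - w_hat).max() <= scale / 2 + 1e-8)  # True
```

Real libraries quantize in small blocks with their own scales (and NF4 uses a non-uniform codebook), precisely because a single outlier weight would otherwise blow up the per-tensor scale and crush everything else to a few levels.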
And so there's a big overlap now between kind of the FastAI and AnswerAI and CUDA mode communities of people working on stuff for both inference and fine tuning. But we're getting close now. You know, our goal is that nobody should be merging models, nobody should be downloading merged models, everybody should be using basically quantized plus adapters for almost everything and just downloading the adapters. And that should be much faster. So that's kind of the place we're trying to get to. It's difficult, you know, because like Karim's been doing a lot of work with vLLM, for example. These inference engines are pretty complex bits of code. They have a whole lot of custom kernel stuff going on as well, as do the quantization libraries. So we've been working on, we're also quite a bit of collaborating with the folks who do HQQ, which is a really great quantization library and works super well. So yeah, there's a lot of other people outside AnswerAI that we're working with a lot who are really helping on all this performance optimization stuff, open source.Swyx [00:36:27]: Just to follow up on merging models, I picked up there that you said nobody should be merging models. That's interesting because obviously a lot of people are experimenting with this and finding interesting results. I would say in defense of merging models, you can do it without data. That's probably the only thing that's going for it.Jeremy [00:36:45]: To explain, it's not that you shouldn't merge models. You shouldn't be distributing a merged model. You should distribute a merged adapter 99% of the time. And actually often one of the best things happening in the model merging world is actually that often merging adapters works better anyway. The point is, Sean, that once you've got your new model, if you distribute it as an adapter that sits on top of a quantized model that somebody's already downloaded, then it's a much smaller download for them.
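The "distribute the adapter, not the merged model" point rests on simple linear algebra: a merged LoRA layer computes exactly what base-plus-adapter computes, while the adapter itself is a tiny fraction of the weights. A hedged numpy sketch, with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 8                       # hidden size, LoRA rank
W = rng.normal(size=(d, d))          # frozen (quantized) base weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = rng.normal(size=(d, r)) * 0.01   # trainable up-projection

x = rng.normal(size=(d,))
merged_out = (W + B @ A) @ x         # what a merged checkpoint computes
adapter_out = W @ x + B @ (A @ x)    # base + adapter, no merge needed
print(np.allclose(merged_out, adapter_out))  # True

# The merged model ships d*d new weights; the adapter ships only 2*r*d.
print((2 * r * d) / (d * d))  # ~1.6% of the layer's parameters
```

So everyone keeps the same quantized base on disk and pulls only the small A and B matrices per fine-tune, which is the download-size and inference-speed argument Jeremy is making.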
And also the inference should be much faster because you're not having to transfer FP16 weights from HBM at all or ever load them off disk. You know, all the main weights are quantized and the only floating point weights are in the adapters. So that should make both inference and fine tuning faster. Okay, perfect.Swyx [00:37:33]: We're moving on a little bit to the rest of the fast universe. I would have thought that, you know, once you started Answer.ai, that the sort of fast universe would be kind of on hold. And then today you just dropped fastlite and it looks like, you know, there's more activity going on in sort of Fastland.Jeremy [00:37:49]: Yeah. So Fastland and Answerland are not really distinct things. Answerland is kind of like the Fastland grown up and funded. They both have the same mission, which is to maximize the societal benefit of AI broadly. We want to create thousands of commercially successful products at Answer.ai. And we want to do that with like 12 people. So that means we need a pretty efficient stack, you know, like quite a few orders of magnitude more efficient, not just for creation, but for deployment and maintenance than anything that currently exists. People often forget about the D part of our R&D firm. So we've got to be extremely good at creating, deploying and maintaining applications, not just models. Much to my horror, the story around creating web applications is much worse now than it was 10 or 15 years ago in terms of, if I say to a data scientist, here's how to create and deploy a web application, you know, either you have to learn JavaScript or TypeScript and about all the complex libraries like React and stuff, and all the complex like details around security and web protocol stuff around how you then talk to a backend and then all the details about creating the backend. You know, if that's your job and, you know, you have specialists who work in just one of those areas, it is possible for that to all work.
But compared to like, oh, write a PHP script and put it in the home directory that you get when you sign up to this shell provider, which is what it was like in the nineties, you know, here are those 25 lines of code and you're done and now you can pass that URL around to all your friends, or put this, you know, .pl file inside the cgi-bin directory that you got when you signed up to this web host. So yeah, the thing I've been mainly working on the last few weeks is fixing all that. And I think I fixed it. I don't know if this is an announcement, but I tell you guys, so yeah, there's this thing called FastHTML, which basically lets you create a complete web application in a single Python file. Unlike excellent projects like Streamlit and Gradio, you're not working on top of a highly abstracted thing that's got nothing to do with web foundations. You're working with web foundations directly, but you're able to do it by using pure Python. There's no templates, there's no Jinja, there's no separate like CSS and JavaScript files. It looks and behaves like a modern SPA web application. And you can create components for like DaisyUI, or Bootstrap, or Shoelace, or whatever fancy JavaScript and/or CSS Tailwind etc. library you like, but you can write it all in Python. You can pip install somebody else's set of components and use them entirely from Python. You can develop and prototype it all in a Jupyter notebook if you want to. It all displays correctly, so you can like interactively do that. And then you mentioned fastlite, so specifically now if you're using SQLite in particular, it's like ridiculously easy to have that persistence, and all of your handlers will be passed database ready objects automatically, that you can just call .delete, .update, .insert on. Yeah, you get sessions, you get security, you get all that. So again, like with most everything I do, it's very little code. It's mainly tying together really cool stuff that other people have written.
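To give a flavor of the "write HTML as plain Python" idea, here is a toy renderer I wrote purely for illustration. It is not FastHTML's actual API (check the FastHTML docs for that); it just shows how Python callables can stand in for templates:

```python
# Toy sketch only: components as plain Python functions that render
# to HTML strings, so there is no separate template language at all.
def tag(name):
    def make(*children, **attrs):
        # Trailing underscores let reserved words like `class_` be used.
        attr_s = "".join(f' {k.rstrip("_")}="{v}"' for k, v in attrs.items())
        body = "".join(str(c) for c in children)
        return f"<{name}{attr_s}>{body}</{name}>"
    return make

Div, H1, P = tag("div"), tag("h1"), tag("p")

page = Div(
    H1("Hello"),
    P("Rendered from pure Python", class_="note"),
)
print(page)
# <div><h1>Hello</h1><p class="note">Rendered from pure Python</p></div>
```

Because components are just functions returning strings, they compose, nest, and can be pip-installed and reused like any other Python code, which is the property Jeremy is describing.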
You don't have to use it, but a lot of the best stuff comes from its incorporation of HTMX, which to me is basically the thing that changes your browser to make it work the way it always should have. So it just does four small things, but those four small things are the things that are basically unnecessary constraints that HTML should never have had, so it removes the constraints. It sits on top of Starlette, which is a very nice kind of lower level platform for building these kind of web applications. The actual interface matches as closely as possible to FastAPI, which is a really nice system for creating the kind of classic JSON API type applications. And Sebastián, who wrote FastAPI, has been kind enough to help me think through some of these design decisions, and so forth. I mean, everybody involved has been super helpful. Actually, I chatted to Carson, who created HTMX, you know, about it. Some of the folks involved in Django, like everybody in the community I've spoken to definitely realizes there's a big gap to be filled around, like, highly scalable, web foundation-based, pure Python framework with a minimum of fuss. So yeah, I'm getting a lot of support and trying to make sure that FastHTML works well for people.Swyx [00:42:38]: I would say, when I heard about this, I texted Alessio. I think this is going to be pretty huge. People consider Streamlit and Gradio to be the state of the art, but I think there's so much to improve, and having what you call web foundations and web fundamentals at the core of it, I think, would be really helpful.Jeremy [00:42:54]: I mean, it's based on 25 years of thinking and work for me. So like, FastMail was built on a system much like this one. And so I spent, you know, 10 years working on that. We had millions of people using that every day, really pushing it hard. And I really always enjoyed working in that. Yeah.
So, you know, and obviously lots of other people have done like great stuff, and particularly HTMX. So I've been thinking about like, yeah, how do I pull together the best of the web framework I created for FastMail with HTMX? There's also things like Pico CSS, which is the CSS system which, by default, FastHTML comes with. Although, as I say, you can pip install anything you want to, but it makes it like super easy to, you know, so we try to make it so that just out of the box, you don't have any choices to make. Yeah. You can make choices, but for most people, you just, you know, it's like the PHP in your home directory thing. You just start typing and just by default, you'll get something which looks and feels, you know, pretty okay. And if you want to then write a version of Gradio or Streamlit on top of that, you totally can. And then the nice thing is if you then write it in kind of the Gradio equivalent, which will be, you know, I imagine we'll create some kind of pip installable thing for that. Once you've outgrown, or if you outgrow that, it's not like, okay, throw that all away and start again in this like whole separate language. It's this kind of smooth, gentle path that you can take step-by-step, because it's all just standard web foundations all the way, you know.Swyx [00:44:29]: Just to wrap up the sort of open source work that you're doing, you're aiming to create thousands of projects with a very, very small team. I haven't heard you mention once AI agents or AI developer tooling or AI code maintenance. I know you're very productive, but you know, what is the role of AI in your own work?Jeremy [00:44:47]: So I'm making something. I'm not sure how much I want to say just yet.Swyx [00:44:52]: Give us a nibble.Jeremy [00:44:53]: All right. I'll give you the key thing. So I've created a new approach. It's not called prompt engineering. It's called dialogue engineering. But I'm creating a system for doing dialogue engineering.
It's currently called AI Magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it. So I always just build stuff for myself and hope that it'll be useful for somebody else. Think about ChatGPT with Code Interpreter, right? The basic UX is the same as a 1970s teletype, right? So if you wrote APL on a teletype in the 1970s, you typed onto a thing, your words appeared at the bottom of a sheet of paper and you'd like hit enter and it would scroll up. And then the answer from APL would be printed out, scroll up, and then you would type the next thing. And like, which is also the way, for example, a shell works like bash or zsh or whatever. It's not terrible, you know, like we all get a lot done in these like very, very basic teletype style REPL environments, but I've never felt like it's optimal and everybody else has just copied ChatGPT. So it's also the way Bard and Gemini work. It's also the way the Claude web app works. And then you add Code Interpreter. And the most you can do is to like plead with ChatGPT to write the kind of code I want. It's pretty good for very, very, very beginner users who like can't code at all, like by default now the code's even hidden away, so you never even have to see it ever happened. But for somebody who's like wanting to learn to code or who already knows a bit of code or whatever, it's, it seems really not ideal. So okay, that's one end of the spectrum. The other end of the spectrum, which is where Sean's work comes in, is, oh, you want to do more than ChatGPT? No worries. Here is Visual Studio Code. I run it. There's an empty screen with a flashing cursor. Okay, start coding, you know, and it's like, okay, you can use systems like Sean's or like Cursor or whatever to be like, okay, Cmd-K in Cursor is like, create a function that blah, blah, blah.
But in the end, it's like a convenience over the top of this incredibly complicated system that full-time sophisticated software engineers have designed over the past few decades in a totally different environment as a way to build software, you know. And so we're trying to like shoehorn in AI into that. And it's not easy to do. And I think there are like much better ways of thinking about the craft of software development in a language model world to be much more interactive, you know. So the thing that I'm building is neither of those things. It's something between the two. And it's built around this idea of crafting a dialogue, you know, where the outcome of the dialogue is the artifacts that you want, whether it be a piece of analysis or whether it be a Python library or whether it be a technical blog post or whatever. So as part of building that, I've created something called Claudette, which is a library for Claude. I've created something called Cosette, which is a library for OpenAI. They're libraries which are designed to make those APIs much more usable, much easier to use, much more concise. And then I've written AI Magic on top of those. And that's been an interesting exercise because I did Claudette first, and I was looking at what Simon Willison did with his fantastic LLM library. And his library is designed around like, let's make something that supports all the LLM inference engines and commercial providers. I thought, okay, what if I did something different, which is like make something that's as Claude friendly as possible and forget everything else. So that's what Claudette was. So for example, one of the really nice things in Claude is prefill. So by telling the assistant that this is what your response started with, there's a lot of powerful things you can take advantage of. So yeah, I created Claudette to be as Claude friendly as possible.
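The prefill feature Jeremy mentions comes from Anthropic's Messages API: if the last message in the request is a partial assistant turn, the model continues from that text. The sketch below only constructs such a payload; the helper name is mine, and actually sending it would need the anthropic SDK or an HTTP client:

```python
# Sketch of assistant prefill: constrain the model's reply to
# continue from a given prefix by ending the message list with a
# partial assistant turn.
def with_prefill(user_prompt, prefill):
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": user_prompt},
            # The model's output is forced to continue this text:
            {"role": "assistant", "content": prefill},
        ],
    }

req = with_prefill("List three colors as JSON.", prefill='{"colors": [')
print(req["messages"][-1]["role"])  # assistant
```

Forcing a JSON-opening prefix like this is a common way to get structured output without any extra parsing machinery, which is one of the "powerful things" prefill enables.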
And then after I did that, and then particularly with GPT-4o coming out, I kind of thought, okay, now let's create something that's as OpenAI friendly as possible. And then I tried to look to see, well, where are the similarities and where are the differences? And now can I make them compatible in places where it makes sense for them to be compatible without losing out on the things that make each one special for what they are. So yeah, those are some of the things I've been working on in that space. And I'm thinking we might launch AI Magic via a course called How To Solve It With Code. The name is based on the classic Pólya book, How to Solve It, which is, you know, one of the classic math books of all time, where we're basically going to try to show people how to solve challenging problems that they didn't think they could solve without doing a full computer science course, by taking advantage of a bit of AI and a bit of like practical skills, particularly for this like whole generation of people who are learning to code with and because of ChatGPT. Like I love it, I know a lot of people who didn't really know how to code, but they've created things because they use ChatGPT, but they don't really know how to maintain them or fix them or add things to them that ChatGPT can't do, because they don't really know how to code. And so this course will be designed to show you how you can like either become a developer who can like supercharge their capabilities by using language models, or become a language model first developer who can supercharge their capabilities by understanding a bit about process and fundamentals.Alessio [00:50:19]: Nice. That's a great spoiler. You know, I guess the fourth time you're going to be on Latent Space, we're going to talk about AI Magic. Jeremy, before we wrap, this was just a great run through everything.
What are the things that when you next come on the podcast in nine, 12 months, we're going to be like, man, Jeremy was like really ahead of it. Like, is there anything that you see in the space that maybe people are not talking enough about? You know, what's the next company that's going to fall, like have drama internally, anything in your mind?Jeremy [00:50:47]: You know, hopefully we'll be talking a lot about FastHTML and hopefully the international community that at that point has come up around that. And also about AI Magic and about dialogue engineering. Hopefully dialogue engineering catches on because I think it's the right way to think about a lot of this stuff. What else? Just trying to think about all on the research side. Yeah. I think, you know, I mean, we've talked about a lot of it. Like I think encoder-decoder architectures, encoder only architectures, hopefully we'll be talking about like the whole re-interest in BERT that BERT24 stimulated.Swyx [00:51:17]: There's a state space model that came out today that might be interesting for this general discussion. One thing that stood out to me with Cartesia's blog posts was that they were talking about real time ingestion, billions and trillions of tokens, and keeping that context, obviously in the state space that they have.Jeremy [00:51:34]: Yeah.Swyx [00:51:35]: I'm wondering what your thoughts are because you've been entirely transformers the whole time.Jeremy [00:51:38]: Yeah. No. So obviously my background is RNNs and LSTMs. Of course. And I'm still a believer in the idea that state is something you can update, you know? So obviously Sepp Hochreiter came out with xLSTM recently. Oh my God. Okay. Another whole thing we haven't talked about, just somewhat related. I've been going crazy for like a long time about like, why can I not pay anybody to save my KV cache? I just ingested the Great Gatsby or the documentation for Starlette or whatever, you know, I'm sending it as my prompt context.
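The prefix reuse Jeremy is asking for can be illustrated with a toy single-head attention: compute the keys and values for a long fixed prefix once, store them, and reuse them across later queries. This is my own simplified numpy illustration; real inference engines cache per-layer, per-head projected K/V tensors, not raw embeddings:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, K, V):
    return softmax(q @ K.T / np.sqrt(K.shape[-1])) @ V

rng = np.random.default_rng(0)
d = 16
prefix = rng.normal(size=(100, d))   # e.g. the ingested docs, processed once
K_cache, V_cache = prefix.copy(), prefix.copy()  # saved to reuse later

# A later query reads the cached prefix instead of re-processing it:
q = rng.normal(size=(1, d))
from_cache = attend(q, K_cache, V_cache)
from_scratch = attend(q, prefix, prefix)
print(np.allclose(from_cache, from_scratch))  # True
```

Because the cached path gives bit-for-bit the same attention output as recomputing the prefix, the only cost of caching is storage, which is exactly the trade providers like Gemini's context caching (and, since this recording, Anthropic's prompt caching) sell back to you.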
Why are you redoing it every time? So Gemini is about to finally come out with KV caching, and this is something that Austin actually in gemma.cpp had had on his roadmap for years, well not years, months, a long time. The idea that the KV cache is like a thing that, it's a third thing, right? So there's RAG, you know, there's in-context learning, you know, and prompt engineering, and there's KV cache creation. I think it creates like a whole new class almost of applications or techniques where, you know, for me, for example, I very often work with really new libraries or I've created my own library that I'm now writing with rather than on. So I want all the docs in my new library to be there all the time. So I want to upload them once, and then we have a whole discussion about building this application using FastHTML. Well nobody's got FastHTML in their language model yet, I don't want to send all the FastHTML docs across every time. So one of the things I'm looking at doing in AI Magic actually is taking advantage of some of these ideas so that you can have the documentation of the libraries you're working on be kind of always available. Something over the next 12 months people will be spending time thinking about is how to like, where to use RAG, where to use fine-tuning, where to use KV cache storage, you know. And how to use state, because in state space models and xLSTM, again, state is something you update. So how do we combine the best of all of these worlds?Alessio [00:53:46]: And Jeremy, I know before you talked about how some of the autoregressive models are not maybe a great fit for agents. Any other thoughts on like JEPA, diffusion for text, any interesting thing that you've seen pop up?Jeremy [00:53:58]: In the same way that we probably ought to have state that you can update, i.e.
xLSTM and state space models, in the same way that a lot of things probably should have an encoder, JEPA and diffusion both seem like the right conceptual mapping for a lot of things we probably want to do. So the idea of like, there should be a piece of the generative pipeline, which is like thinking about the answer and coming up with a sketch of what the answer looks like before you start outputting tokens. That's where it kind of feels like diffusion ought to fit, you know. And diffusion is, because it's not autoregressive, it's like, let's try to like gradually de-blur the picture of how to solve this. So this is also where dialogue engineering fits in, by the way. So with dialogue engineering, one of the reasons it's working so well for me is I use it to kind of like craft the thought process before I generate the code, you know. So yeah, there's a lot of different pieces here and I don't know how they'll all kind of exactly fit together. I don't know if JEPA is going to actually end up working in the text world. I don't know if diffusion will end up working in the text world, but they seem to be like trying to solve a class of problem which is currently unsolved.Alessio [00:55:13]: Awesome, Jeremy. This was great, as usual. Thanks again for coming back on the pod and thank you all for listening. Yeah, that was fantastic. Get full access to Latent Space at www.latent.space/subscribe

Papers Read on AI
IMAGDressing-v1: Customizable Virtual Dressing


Jul 22, 2024 · 27:37


Latest advances have achieved realistic virtual try-on (VTON) through localized garment inpainting using latent diffusion models, significantly enhancing consumers' online shopping experience. However, existing VTON technologies neglect the need for merchants to showcase garments comprehensively, including flexible control over garments, optional faces, poses, and scenes. To address this issue, we define a virtual dressing (VD) task focused on generating freely editable human images with fixed garments and optional conditions. Meanwhile, we design a comprehensive affinity metric index (CAMI) to evaluate the consistency between generated images and reference garments. Then, we propose IMAGDressing-v1, which incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE. We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet, ensuring users can control different scenes through text. IMAGDressing-v1 can be combined with other extension plugins, such as ControlNet and IP-Adapter, to enhance the diversity and controllability of generated images. Furthermore, to address the lack of data, we release the interactive garment pairing (IGPair) dataset, containing over 300,000 pairs of clothing and dressed images, and establish a standard pipeline for data assembly. Extensive experiments demonstrate that our IMAGDressing-v1 achieves state-of-the-art human image synthesis performance under various controlled conditions. The code and model will be available at https://github.com/muzishen/IMAGDressing. 2024: Fei Shen, Xin Jiang, Xin He, Hu Ye, Cong Wang, Xiaoyu Du, Zechao Li, Jinhui Tang https://arxiv.org/pdf/2407.12705v1
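As a rough illustration of the hybrid attention idea described in the abstract (a frozen self-attention over the denoising stream plus a trainable cross-attention read of garment features, combined additively), here is a drastically simplified, unlearned numpy sketch. The real module operates inside a denoising UNet with learned projections; every dimension and name below is illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_attention(h, garment, gate=0.5):
    """Toy sketch: self-attention over the denoising stream plus a
    cross-attention read of garment features, blended additively."""
    dk = np.sqrt(h.shape[-1])
    self_out = softmax(h @ h.T / dk) @ h                # "frozen" path
    cross_out = softmax(h @ garment.T / dk) @ garment   # "trainable" path
    return self_out + gate * cross_out

rng = np.random.default_rng(0)
h = rng.normal(size=(64, 32))   # latent image tokens in the denoising UNet
g = rng.normal(size=(77, 32))   # features from the garment UNet
out = hybrid_attention(h, g)
print(out.shape)  # (64, 32)
```

The structural point is that the garment stream only ever enters through the cross-attention term, so the pretrained denoising weights can stay frozen while just that pathway is trained.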

Nikotellen
145. (TB) Vesa realized & a nap at the bar

Nikotellen

Play Episode Listen Later Jul 5, 2024 61:01


* Please note: the episode you're listening to is a throwback episode from the podcast's early days. You can find fresh episodes of the Nikotellen podcast on Podme. On Podme you can binge hundreds and hundreds of previously published Nikotellen episodes, with new content added every week - and best of all, without ads. So if you like the Nikotellen pod and want more, there's plenty of it at podme.com. Jenna has landed happily on Helsinki soil and they are both in the studio today! Let's start counting OUR year 2022 from here! A few stories were left in the back pocket from The Levi.. among other things, the man of Niko's life was indeed there amid the glitter of the disco balls. And what the heck is going on at Vanha Mestari in Tornio? Why on earth did Jenna run through the night half-naked, and why is Niko's Tinder game over (finally)? @nikotellen every day of the year on Instagram. The best pre-parties, after-parties and everything you need from this high-speed duo. See you again next Friday! A new throwback episode every Friday. Click follow on the pod!

Papers Read on AI
Improving Diffusion Models for Virtual Try-on

Papers Read on AI

Play Episode Listen Later May 10, 2024 27:25


This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment, given a pair of images depicting the person and the garment, respectively. Previous works adapt existing exemplar-based inpainting diffusion models for virtual try-on to improve the naturalness of the generated visuals compared to other methods (e.g., GAN-based), but they fail to preserve the identity of the garments. To overcome this limitation, we propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images. Our method, coined IDM-VTON, uses two different modules to encode the semantics of the garment image; given the base UNet of the diffusion model, 1) the high-level semantics extracted from a visual encoder are fused to the cross-attention layer, and then 2) the low-level features extracted from a parallel UNet are fused to the self-attention layer. In addition, we provide detailed textual prompts for both garment and person images to enhance the authenticity of the generated visuals. Finally, we present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity. Our experimental results show that our method outperforms previous approaches (both diffusion-based and GAN-based) in preserving garment details and generating authentic virtual try-on images, both qualitatively and quantitatively. Furthermore, the proposed customization method demonstrates its effectiveness in a real-world scenario. More visualizations are available on our project page: https://idm-vton.github.io 2024: Yisol Choi, Sangkyung Kwak, Kyungmin Lee, Hyungwon Choi, Jinwoo Shin https://arxiv.org/pdf/2403.05139
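The two fusion paths in the abstract (high-level semantics into cross-attention, low-level parallel-UNet features into self-attention) can be sketched as below. This is only an illustration of where each feature type enters, under assumed single-head attention and made-up feature shapes; appending the low-level features to the self-attention keys/values is one common way such a fusion is realized, not necessarily the paper's exact mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def two_path_fusion(person, high_level, low_level):
    # 1) low-level features from the parallel UNet join the
    #    self-attention keys/values, so person tokens attend to them
    kv = np.concatenate([person, low_level], axis=0)
    person = attention(person, kv, kv)
    # 2) high-level semantics from the visual encoder enter via
    #    a separate cross-attention step
    return person + attention(person, high_level, high_level)

person = np.random.randn(16, 8)   # person-image tokens
high = np.random.randn(2, 8)      # visual-encoder semantics
low = np.random.randn(4, 8)       # parallel-UNet garment features
fused = two_path_fusion(person, high, low)
print(fused.shape)  # (16, 8)
```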

Radio Voiman podcastit
Pelicans forward Aatu Jämsen naps in the afternoon - "A nap is always part of a game day"

Radio Voiman podcastit

Play Episode Listen Later Apr 9, 2024 9:04


21-year-old Pelicans forward Aatu Jämsen told Radio Voima on Tuesday that for his game-day breakfast he had enjoyed fruit and an omelette made by his mother. Jämsen also talked about his dream of continuing to make music after the league season. - I still dream of getting to make a song with some bigger artist. That has been my goal, Jämsen said. Who could such an artist be? - Maybe William or Ahti. Jämsen reflected on how his bold, distinctive playing style took shape. - It probably goes back to childhood, since I played a lot of yard games. That's where the eye-hand coordination comes from, so you know how to handle the puck. Jämsen also stressed that he has always liked doing his own thing, regardless of what others think of it. How do you prepare for a game day? - A nap is something that is always part of a game day. I also dream more during my naps than during my night's sleep. Listen to the full interview.

Ihmisiä, siis eläimiä
MDMA therapy. Intergenerational traumas. The path to becoming a therapist. The risks of psychedelics. #74 Mippi Vuorikoski

Ihmisiä, siis eläimiä

Play Episode Listen Later Mar 26, 2024 202:06


Support the show on Patreon: https://www.patreon.com/soinnunmaanhenry Episode description: https://www.patreon.com/posts/101016652 The guest of episode 74 is sex therapist and relationship and interaction specialist Mippi Vuorikoski. The episode was recorded on Feb 3, 2024. Download mp3: https://soundcloud.com/ihmisiis/#74-mippi-vuorikoski Video version: https://youtu.be/7uJEuO07W-Q Spotify: https://spoti.fi/43wMidc Apple Podcasts: https://apple.co/3xeRc2M RSS: http://feeds.soundcloud.com/users/soundcloud:users:358481639/sounds.rss 00:00:00 The path. 00:06:26 Mippi's MDMA therapy experience. 00:20:46 MDMA therapy and family. 00:31:29 Psychedelics, stigma and alienation. 00:44:37 Psychedelics and taking responsibility. 00:55:57 Finland, war and intergenerational traumas. 01:14:32 A conversation with a dead grandfather. 01:21:38 Mental health and psychiatry. 01:25:59 Mippi's 14-year-old self. 01:35:36 Physical discomfort in psychedelic therapy. 01:50:44 Encounters and corrective experiences. 01:58:04 Dreams and mushrooms reveal the task. 02:09:44 Psychedelics and laughing for no reason. 02:16:15 The harsh and negative effects of psychedelics. 02:26:14 Preparing for psychedelic experiences. 02:35:06 The effects of psychedelics are not limited to the duration of the trip. 02:39:46 Psychedelics are not all of one kind. 02:50:12 Abuses and ethical dimensions in MDMA therapy. 03:09:37 Mippi's calling. 03:17:53 Closing quick questions. 
Marc Aixalá's book Psychedelic Integration https://t.ly/AtcDS Hoivakotilo https://kotilo.info The Ecstatic Integration blog https://t.ly/M276u Jules Evans's Psyty webinar on Apr 29, 2024 https://t.ly/8vbBd Challenging psychedelic experiences https://t.ly/hBrdO The dissociated pilot https://t.ly/1UiwQ A text on the psychedelic field and movement https://t.ly/l8tEU Juulia Järvenpää's MDMA thesis https://t.ly/HHEyl Meghan Buisson and the MDMA therapists who violated ethical boundaries https://t.ly/PtOPi Mippi's Instagram https://t.ly/4hYPM Mippi's website http://mippivuorikoski.com If you are wondering why Mippi refers to her psychotherapy studies even though she does not practice under the title of psychotherapist, it is because she trained in a program at the Helsinki Psychotherapy Institute that Valvira, contrary to expectations, has ultimately not approved as training leading to the psychotherapist profession, which in practice means its graduates may not work as psychotherapists in Finland (where "psychotherapist" is a protected professional title). You can read more about this here, for example: https://www.hs.fi/kotimaa/art-2000010215173.html – The Ihmisiä, siis eläimiä podcast loves perspectives that broaden understanding. Driven by a deep thirst for knowledge, the show's vision is to create slower media that digs into the heart of things. The podcast's central themes are science and art, the ordinary and the extraordinary, the individual and society, and humans and the rest of nature. The host, Henry Soinnunmaa, curious though incomplete in his understanding, is a musician, writer and amateur generalist. • Telegram: https://t.me/ihmisiis • Facebook: https://facebook.com/ihmisiis • X: https://x.com/ihmisiis • Instagram: https://instagram.com/ihmisiis • Youtube: https://youtube.com/ihmisiis • Spotify: https://spoti.fi/2MLqNQE • Apple Podcasts https://apple.co/32jaPqX • Soundcloud: https://soundcloud.com/ihmisiis

Traumainformoitu Toivo
Dreams and play as healers of trauma - with guest professor emerita Raija-Leena Punamäki-Gitai

Traumainformoitu Toivo

Play Episode Listen Later Feb 5, 2024 36:04


Can something as everyday as dreams, play and imagination help us recover from the burdens of the past? Psychologist and professor emerita Raija-Leena Punamäki-Gitai talks about her fascinating research on traumatization and recovery. At the University of Tampere, Raija-Leena's research areas were trauma experiences, mental health and developmental psychopathology. A dream can be therapeutic and rehabilitating when several different emotions are present in it, and when the dream becomes a coherent narrative with a beginning and an end, and even some kind of lesson at the end. Dreams can, in a sense, be refined by working on them, for example by writing them down upon waking. You can also craft a better story for your dreams. Research has shown how important all the hours of the day are for recovery, so it is good to put the night's sleeping hours to use for healing as well. It is worth writing dreams down in a notebook, even the horrifying and frightening ones, and after a while examining them and observing what recurs in them and what the dreams may want to tell us. It is also important to reflect on the changes taking place in dreams. The most worrying thing — and, according to research, also significant for an individual's mental health — is if someone never remembers their dreams. The good news is that dreams can, in a sense, be invited. When working with children, it is important to recognize that simply discussing dreams and their recurring elements together with a safe adult works the dreams toward more healing, more nuanced narratives. Through play, rhymes and imagination a person can process emotions and memories that lie very deep inside. Play is often a child's key to handling a difficult matter; through play a traumatic memory, for example, can become more manageable and easier to carry, and healing can thus begin. Songs sung to children are in themselves therapeutic and healing.
The singer doesn't need to be musical or to sing in tune; the most important thing is that the songs carry emotions.

Mangakartta
92: I Want to Hold Aono-kun so Badly I Could Die

Mangakartta

Play Episode Listen Later Nov 16, 2023 173:54


I Want to Hold Aono-kun so Badly I Could Die is Umi Shiina's ghost story blending horror, romance, mystery-solving and humor, in which the protagonist's boyfriend returns as a ghost after his death. In current affairs we talk about how Crunchyroll is shutting down its manga service after ten years, Petteri's visit to the Ghibli episode of Yle's Kulttuuriykkönen, and how Japan's new tax law affects workers in the manga and anime industries. In our reading queue we finally start Natsumi Ando's wagashi-themed murder mystery Something's Wrong With Us and read the second volume of Ogeretsu Tanaka's Happy of the End. --- Comment | Bluesky | Mastodon | X | Instagram --- (01:17) – NEWS: PETTERI VISITED JAPAN - Tokyo Banana - Episode 9, where we discussed the British Museum's manga exhibition - Petteri's food-and-price thread on Mastodon, X and Bluesky (07:45) – AONO-KUN: INTRODUCTION - I Want to Hold Aono-kun so Badly I Could Die - Afternoon magazine (14:16) – AONO-KUN: THE SERIES IN GENERAL - Evil Aono appears to Horie on the door camera as Kariya (image) - Sexuality plays a big part (image) - "Let me in" (image) - Rules are important in ghost stories (image) - The dreams are vaguely unsettling (image) - These words you must never say (image) - Comic facial expressions (image) (26:28) – AONO-KUN: YUURI KARIYA AND RYUUHEI AONO - The romance with normal Aono is sweet… - A secret kiss (image) - Aono can't take off his clothes (image) - A utility-pole hug (image) - …but evil Aono's obsessiveness and his drive to dominate and exploit stand in unpleasant contrast to it (image) - Kariya grows a backbone (image) - Kariya learns to negotiate with evil Aono, who after all is obsessed with Kariya (image) - Kariya demands all the kisses back (image) - Kariya and Aono only really get to know each other after Aono has already died (image) - Aono (whom Fujimoto cannot see here; they talk via a cell phone) has conflicting feelings: on the one hand he would like Kariya to move on in life, but on the other he misses Kariya (image) - Aono (in Kariya's body) tells Fujimoto how he felt guilty about how much Kariya liked him (image) (42:12) – AONO-KUN: FUJIMOTO AND HORIE - Fujimoto is a cute grumpy boy (image) - You can't kiss girls in my body, and you definitely can't put your hand under the boob! (Aono is in Kariya's body) (image) - Horie is a horror movie buff (image) (53:11) – AONO-KUN: ART AND STORYTELLING - A wedding at the graveyard (image) - A kiss at the aquarium (image) - The bouquet in the hallway multiplies and takes over the whole apartment (image) - The confusion of being freed from possession (image) - Land of the Lustrous - Beastars, which we discussed in episode 13 (01:03:03) – AONO-KUN: COVERS - The series' covers (01:07:41) – AONO-KUN: PUBLICATION - Quite a funny translation (Aono is in Fujimoto's body) (image) (01:09:30) – AONO-KUN: SPOILER SECTION - Protecting another person by lying to them is really just protecting yourself (image) - Episode 91, where we discussed The Girl from the Other Side - Virgins as sacrificial offerings (image) - An X-shaped cut (Aono is in Fujimoto's body) (image) - A psycho big sister and indifferent parents (image) - Aono comments on the matter (image) - A flashback to the past: why is it humiliating to be the one who gets hit? (image) - Aono's mother had it rough (image) - Did mother give the order? (image) - A dinosaur (image) - The Song of Saya - The Yotsukubi-sama ritual (image) (01:39:24) – AONO-KUN: SUMMARY - Higurashi: When They Cry - Pan's Labyrinth - Episode 31, where we discussed Kasane - Afternoon magazine (01:44:16) – CRUNCHYROLL SHUTS DOWN ITS DIGITAL MANGA SERVICE - Crunchyroll's announcement about closing the manga service - A list of the series still in the service - Episode 41, where we discussed the HTML5 conversion of Crunchyroll Manga - Episode 52, where we discussed the founding of the manga service Azuki - Crunchyroll's news item about opening the manga service, from 2013: originally the manga service was included in the price only for those who bought the more expensive subscription - Lucifer and the Biscuit Hammer - Sun-Ken Rock - Spirit Circle - Scum's Wish - Investor Z - Girl May Kill - Joshi Kausei - Inside Mari - Insufficient Direction - Memoirs of Amorous Gentlemen - Kodansha pulled its series from other services and founded its own - Crunchyroll's new mobile game offering - The anime streaming service's streaming-box apps have also been updated this summer (01:52:18) – YLE'S KULTTUURIYKKÖNEN AND GHIBLI - Yle's Kulttuuriykkönen, Nov 8, 2023: the world's most famous animator Hayao Miyazaki and Studio Ghibli's new film The Boy and the Heron - Pekka Lehtosaari - Myy Lohi - Maaret's visit to Kulttuuriykkönen on Jul 13, 2023, which we discussed in Mangakartta episode 87 - The Afureko blog's Äänijälki podcast - Mamoru Hosoda - Hiromasa Yonebayashi - Petteri's news item about Ghibli halting production, in Anime magazine 6/2013 (image) - Petteri's news item about Ghibli resuming production, in Anime magazine 4/2017 (image) - Petteri's follow-up article about Ghibli resuming production, in Anime magazine 7/2017 (image) - Producer Toshio Suzuki has started mixing up the company's money and his own - NTV bought Studio Ghibli and made it a subsidiary in September 2023 - The Boy and the Heron (01:59:46) – JAPAN'S TAX REFORM - Fullfrontal.moe's informative Twitter thread on the tax reform - Unseen Japan: Will Voice Actors Quit Over Japan's New Tax Law? - The Japan Times: Freelancers aren't happy with Japan's new invoice system - The Mainichi: 27% of voice actors in Japan may quit due to 'hellish choice' with new invoice system - An English-subtitled animated video about the tax reform by the voice actors' advocacy group Voiction (YouTube) (02:08:38) – STUCK IN OUR TEETH: EISNER AND HARVEY 2023 - Episode 75, where we discussed the 2022 Eisner and Harvey awards - ANN: Chainsaw Man Manga Wins Best Manga Harvey Award for 3rd Straight Year - ANN: Hayao Miyazaki's Shuna's Journey Wins Eisner Award - ANN: The Will Eisner Comic Industry Awards: A Spotlight on Quality or A Sticker for Sales? (02:14:02) – STUCK IN OUR TEETH: MANGA PLUS MAX - Episode 91, where we discussed the Manga Plus service's new monthly subscription model - Henri Björn's comment on X (formerly Twitter) (02:14:37) – LISTENER COMMENT: VANITAKSEN KIRJA AND VAMPIRES - Episode 90, where we discussed how the Finnish translation of the manga Vanitaksen kirja uses the word "vampiiri" - Antti Valkama's comment on X (formerly Twitter) - Episode 47, where we discussed Black Rose Alice, in which vampires are "blood-sucking trees" (吸血樹, kyuuketsuki) rather than "blood-sucking monsters" (吸血鬼, kyuuketsuki) as usual (02:18:13) – LISTENER COMMENT: DEVILMAN AND UNCONVENTIONALITY - Episode 89, where we discussed Devilman - Jarmo's comment on Mastodon - Geekkicast's comics overview of Devilman (YouTube) - Episode 77, where we discussed Chainsaw Man (02:20:43) – LISTENER COMMENT: COMICS CODE - A comment on X (formerly Twitter) by Cilla of the Äänijälki podcast - Comics Code Authority - Seduction of the Innocent (02:24:07) – LISTENER COMMENT: THE GUY SHE WAS INTERESTED IN WASN'T A GUY AT ALL - Episode 91, where we discussed The Guy She Was Interested in Wasn't a Guy at All - Vampirenaomi's comment on X (formerly Twitter) (02:24:44) – READING QUEUE: SOMETHING'S WRONG WITH US - Something's Wrong With Us - Natsumi Ando - Kitchen Princess - Valssin aika, which we discussed in episode 58 - The readership of Be-Love magazine is quite old - Wagashi, traditional Japanese sweets - Seirou Opera, which we discussed in episode 7 (02:39:21) – READING QUEUE: HAPPY OF THE END 2 - Happy of the End - Episode 88, where we discussed Ogeretsu Tanaka's work and the series' first volume - The cover of the series' third and final volume (02:50:51) – CLOSING - Podcast Addict - Pocket Casts

The Jim Gale Show
E27: Full Spectrum Coherent Water for Life Featuring Dolf Zantinge

The Jim Gale Show

Play Episode Listen Later Jan 13, 2023 53:02


In this episode, Dolf Zantinge joins Jim and Matthew to share the importance of coherent water for the body and mind. Jim also shares his concerns about artificial intelligence and his excitement for the many benefits of Food Forests. Dolf is an entrepreneur with a background in fibre optics, telecommunication, artificial intelligence and data mining. Early in his career, he co-founded Syllogic, an international IT firm in the domains of AI, machine learning and database management systems. After Syllogic was acquired by Perot Systems, Dolf was European Director for the company. Later, he was the Director of IT at KPN, the largest Dutch telecom company. He founded and chaired UNET, one of the first fiber optics companies in Europe. After that tech-heavy career, Dolf took an unusual turn when he pursued the study of Chinese medicine and acupuncture and delved into the impact of electromagnetic frequencies on biological systems. This led to further research on photonics, physiology, light, and water. He partnered with Eric to develop the technology to create full spectrum coherent water and measure its positive effects on biological systems. He is passionate about providing holistic and natural solutions to counter the impact of electromagnetic radiation on our health and consciousness.   
The Rainwater Harvesting Masterclass starts on Jan 25th here: https://vergepermaculture.ca/rainwaterharvestingcourse Analemma Water: https://analemma-water.com?ref=3068 Save 10% with Coupon Code: FFA    Follow Analemma: Website: https://analemma-water.com?ref=3068 Facebook: https://www.facebook.com/analemmawater YouTube: https://www.youtube.com/@analemmawater1346 Instagram: https://www.instagram.com/analemmawater_/ TikTok: https://www.tiktok.com/@analemmawater Food Forest Abundance: Website: https://foodforestabundance.com Facebook: https://www.facebook.com/FoodForestAbundance Instagram: https://www.instagram.com/foodforestabundance/ Twitter: https://twitter.com/FFAbundance LinkedIn: https://www.linkedin.com/company/food-forest-abundance/   The Jim Gale Show Podcast: https://linktr.ee/jimgaleshow   Sponsored by The Weston A. Price Foundation: https://www.westonaprice.org

PaperPlayer biorxiv neuroscience
Applying Unet for extraction of vascular metrics from T1-weighted and T2-weighted MRI

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Dec 20, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.12.18.520922v1?rss=1 Authors: Orooji, F., Butler, R. Abstract: We apply deep learning to the problem of segmenting the arterial system from T1w and T2w images. We use the freely available 7-Tesla 'forrest' dataset from OpenNeuro (which contains TOF, T1w, and T2w) and use supervised learning with T1w or T2w as input, and TOF segmentation as ground truth, to train a Unet architecture capable of segmenting arteries and quantifying arterial diameters from T1w or T2w images alone. We demonstrate arterial segmentations from both T1w and T2w images, and show that T2w images have sufficient vessel contrast to estimate arterial diameters comparable to those estimated from TOF. We then apply our Unet to T2w images from a separate dataset (IXI) and show our model generalizes to images acquired at a different field strength. We consider this work proof-of-concept that arterial segmentations can be derived from MRI sequences with poor contrast between arteries and surrounding tissue (T1w and T2w), due to the ability of deep convolutional networks to extract complex features based on local image intensity. Future work will focus on improving the generalizability of the network to non-'forrest' datasets, with the eventual goal of leveraging the entire pre-existing corpus of neuroimaging data for the study of human cerebrovasculature. Copyrights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
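The supervision setup (a TOF-derived binary mask as ground truth for a network that only sees T1w/T2w) and the kind of diameter estimate the abstract mentions can be sketched as follows. The intensity threshold and voxel size here are made-up illustrative values, not from the paper, and a real pipeline would use a trained Unet rather than thresholding to produce the prediction.

```python
import numpy as np

def tof_to_ground_truth(tof, threshold=300.0):
    # TOF angiography renders flowing blood bright, so a simple threshold
    # yields a binary arterial mask to pair with the T1w/T2w input
    return tof > threshold

def equivalent_diameter(mask, voxel_mm=0.6):
    # diameter of a circle with the same area as the segmented cross-section
    area_mm2 = mask.sum() * voxel_mm ** 2
    return 2.0 * np.sqrt(area_mm2 / np.pi)

cross_section = np.zeros((10, 10))
cross_section[4:6, 4:6] = 500.0          # a bright 2x2-voxel "artery"
mask = tof_to_ground_truth(cross_section)
print(round(equivalent_diameter(mask), 3))  # 1.354
```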

Harhaoppia
58. Joseph, Queen of Heaven? (feat. Pasi Schultz)

Harhaoppia

Play Episode Listen Later Dec 9, 2022 61:25


Genesis tells of Joseph, whose brothers threw him into a well and sold him into slavery, but who ultimately rose to lead the kingdom of Egypt thanks to his gift for interpreting dreams. But what does the Mesopotamian goddess Inanna have to do with Joseph's story, and how does each of them relate to the Christian teaching of Jesus as the incarnation of God? The episode is the self-contained third part of a four-episode Advent special on the story of Joseph, which has nothing to do with Advent and is therefore suitable for listening all year round without causing too much anxiety. Joining once again is the Harhaoppia podcast's unofficial official Christmas-season fellow heretic, the Pentecostal-Lutheran religion teacher Pasi Schultz. Also listen to Pasi's earlier episodes, 26. Maria, Taivaan kuningatar and 37. Auringonkierron evankeliumi. Topics of the episode - Dreams, God's speech and intuition - Egypt as the Underworld - The communion symbolism of the dreams of Pharaoh's baker and cupbearer - The molecules of Jesus's body and communion theology - Joseph as a Christ figure - The Mesopotamian goddess Inanna's journey to the Underworld behind Joseph's story, and its possible influence on the creed - What happens on 5.9.31? And who is Anna Valta, and what does she have to do with Sauli Niinistö? Harhaoppia is a show by religious education teacher Markus Finnilä (MTh) for doubting believers and believing doubters, offering unhealthy theology and wrong answers to life's biggest questions. Feedback, questions and comments can be sent to harhaoppia@gmail.com or via Facebook and Instagram. Also listen to the Harhaoppia podcast's spin-off, Uskonto on tylsää. - Harhaoppia on Instagram - Harhaoppia on Facebook Music by Dan Koch

Ingenios@s de Sistemas
Episode 152 - Artificial Intelligence III

Ingenios@s de Sistemas

Play Episode Listen Later Nov 15, 2022 14:04


How an AI of the Stable Diffusion type works. In essence, diffusion models are generative models. Specifically, in computer vision tasks, they work by first successively adding Gaussian noise to the training image data. Once the original data is fully saturated with noise, the model learns to completely reverse the noising process, which is called denoising. The goal of this denoising process is to iteratively recreate the coarse and fine features of the original image. Then, once training is complete, we can use the diffusion model to generate new image data simply by passing randomly sampled noise through the learned denoising process. Stable Diffusion: Stable Diffusion, the continuation of the same teams' earlier work on latent diffusion models, significantly improved on its predecessors in both image quality and the scope of its capabilities. It achieved this thanks to a more robust training dataset and significant changes to the design. The model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. The dataset used for training is laion2B-en, which consists of 2.32 billion English-language image-text pairs. After training, with its 860M-parameter UNet and 123M-parameter text encoder, the model is relatively lightweight and can run on a GPU with at least 10 GB of VRAM. It can also be optimized to run on GPUs with ~8 GB of VRAM by reducing the numerical precision to half precision (FP16). 
In practice, these changes allow Stable Diffusion to excel at a range of computer vision tasks, including: Semantic synthesis - generating images solely by conditioning on text prompts. Inpainting - accurately filling in missing parts of images, using deep learning to predict the features of the missing part of the image. Super-resolution - a class of techniques that enhance (increase) the resolution of an imaging system.
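The forward noising process described in the episode can be written in a few lines. This is a generic DDPM-style sketch, not Stable Diffusion's actual code; the linear beta schedule and toy array sizes are illustrative assumptions.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    # q(x_t | x_0): x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,
    # where abar_t is the cumulative product of (1 - beta) up to step t
    abar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # linear noise schedule (assumed)
x0 = rng.standard_normal((8, 8))        # a toy "image"
x_late = forward_diffuse(x0, 999, betas, rng)
# by the last step almost all signal is gone: abar_999 is tiny
print(np.cumprod(1.0 - betas)[999] < 1e-4)  # True
```

The denoising network (the UNet) is then trained to predict `eps` from `x_late` and `t`, which is exactly the process the model inverts at generation time.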

Inteligencia Artificial
Monitoring vegetation with semantic segmentation

Inteligencia Artificial

Play Episode Listen Later Jun 10, 2022


In this episode we talk about how green spaces can be monitored using neural networks with the UNet architecture for semantic segmentation.
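The core UNet idea behind such segmentation models (an encoder-decoder with skip connections that carry full-resolution detail across) can be illustrated with a toy NumPy sketch; a real segmentation UNet uses learned convolutions and many resolution levels rather than this fixed pooling.

```python
import numpy as np

def unet_like(x):
    # toy encoder-decoder: 2x average-pool down, nearest-neighbor up,
    # plus a skip connection that preserves full-resolution detail
    h, w = x.shape
    skip = x
    down = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # encoder
    up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)     # decoder
    return up + skip                                          # skip connection

x = np.arange(16, dtype=float).reshape(4, 4)
y = unet_like(x)
print(y.shape)  # (4, 4)
```

The skip connection is what lets UNet-style models produce pixel-accurate masks: the decoder's coarse context is combined with the encoder's fine spatial detail.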

UNIRESEPTI / Johannes Kajava
An essayist on nightmares

UNIRESEPTI / Johannes Kajava

Play Episode Listen Later Apr 22, 2022 7:02


Dreams, and their subgenre nightmares, have no doubt fascinated people for longer than the historical source material reaches. Could dreams be examined by the same criteria as works of fiction? In his essay collection Seitsemän iltaa (Seven Nights), the writer and essayist Jorge Luis Borges took up dreams. He ponders whether dreams have always been fabulation: invention, fabrication and storytelling. He considers it possible that we polish our dreams and continue polishing them …

Daily
ProMotion displays on the iPhone 14

Daily

Play Episode Listen Later Feb 24, 2022 12:44


Chapter 2117, in which I tell you about the latest rumors regarding the most anticipated feature for a non-Pro iPhone: the ProMotion display. Sponsored by https://riverside.fm, the best service for recording your podcast with remote guests. Record audio and also video, up to 4K, with https://riverside.fm from $9/month. Join the more than 70,000 riverside.fm users, including Spotify, the New York Times and Emilcar FM.

Daily
Spotify Car Thing

Daily

Play Episode Listen Later Feb 23, 2022 12:30


Chapter 2116, in which I tell you about the general-market launch in the United States of the Spotify Car Thing, a device that may seem out of step with the times but fits perfectly into Spotify's strategy of leaving no front unattended. Sponsored by https://riverside.fm, the best service for recording your podcast with remote guests. Record audio and also video, up to 4K, with https://riverside.fm from $9/month. Join the more than 70,000 riverside.fm users, including Spotify, the New York Times and Emilcar FM.

Daily
Impossible to contact Just Eat

Daily

Play Episode Listen Later Feb 22, 2022 14:20


Chapter 2115, in which I tell you how, after an incident with a Just Eat order (nothing out of the ordinary), I discovered how truly impossible it can be to contact the company to file a legitimate complaint. Sponsored by https://riverside.fm, the best service for recording your podcast with remote guests. Record audio and also video, up to 4K, with https://riverside.fm from $9/month. Join the more than 70,000 riverside.fm users, including Spotify, the New York Times and Emilcar FM.

Daily
Deploying Shellys in HomeKit

Daily

Play Episode Listen Later Feb 21, 2022 12:55


Chapter 2114, in which I tell you about the final phase of my move to HomeKit, consisting of deploying several Shellys to control the lights in various rooms. Sponsored by https://riverside.fm, the best service for recording your podcast with remote guests. Record audio and also video, up to 4K, with https://riverside.fm from $9/month. Join the more than 70,000 riverside.fm users, including Spotify, the New York Times and Emilcar FM.

Cityn Aamu Nyman & Jääskeläinen
Pirkka lager and hellishly bad dreams

Cityn Aamu Nyman & Jääskeläinen

Play Episode Listen Later Feb 18, 2022 43:57


On Radio City's Morning Show, the expert department Jere and Samuel follow the Finland - Slovakia thriller! The boys ponder why Iivo Niskanen's Olympic gold wasn't celebrated at the market square. And why did Samuel's grandma's shopping trip get out of hand? There are tense moments in the traditional Friday Porn or Soap contest, too. What is the secret of a normal Finnish man's physique? Plus talk about catalytic converters, Antti Aalto, insurance and Dacia drivers.

KRS:
UA 240122 The richness of bilingualism and faith, and life as a father, builder and preacher (Matias Gädda)

KRS:

Play Episode Listen Later Feb 14, 2022 49:05


The Uskon askeleita (Steps of Faith) programs consist of three segments, each featuring a conversation, an interview or a teaching. In this episode we hear three conversations that the program's host Mikko Matikainen had with Matias "Matti" Gädda. Matias is a Swedish-speaking Finn. During the program there is talk of language, backgrounds, God's call and guidance. Family, fatherhood and the role of prayer come up as parts of life, work and his ministry as a lay preacher. In the first segment, Matias Gädda recounts growing up in a home that belonged to the Laestadian movement Rauhan Sana. He sat in the pews of the prayer house and was quite shy. He began having dreams in which he stood in the prayer house pulpit. They felt like nightmares. God led him into house building and work as a structural engineer, out of which a meaningful job and life gradually opened up. Jesus was a builder's son and extremely practical. The Word should be absorbed into a person and turn into deeds. Mikko also opens up perspectives on media mission and tells where it currently stands. In the second segment, Matias Gädda recounts how he moved to the Helsinki metropolitan area and learned Finnish along the way. The Word of God opened up and spoke to him. The story seemed to unfold before his eyes like a film. The dreams returned; then Matias was invited to a gathering of preachers, where he was called to join their ranks. That was when he first told of the dreams he had seen already as a youth. Thus he became part of the Rauhan Sana group of preachers. Matias feels God speaking when he prepares his talks. Sometimes it feels as if God were speaking through his writing fingers. God wants to spend time with us. You don't always have to read a long passage from the Bible. It is good simply to be present with the Word, listening. In the third segment, host Mikko Matikainen talked with Matias Gädda about vocation. He said he had studied to become a structural engineer. Now he is responsible for the Finnish properties and retail premises of a large Norwegian clothing group. Matias said that when he prays for his work, the work usually goes better. When prayer is forgotten, things don't go smoothly and the days stretch long. Matias also talked about his four children and their courage in being Christians. As a family they express both their faith and their Swedish-speaking identity. And so one should. It is a real pleasure to get to know Matias Gädda and hear his way of living as a Christian. His enthusiasm is the contagious kind. In Uskon askeleita programs, prayers are offered for the matters that come up, and encouragement is given for everyday Christian life. At the end of the program, sentences and thoughts are offered that listeners can apply to their own lives. They can also be found on the Uskon askeleita Facebook wall. The host is Mikko Matikainen, traveling pastor of Kansan Raamattuseura. Jussi Pyysalo edits the interviews made by the KRS training team into the program. The Uskon askeleita program is produced jointly by Kristityt yhdessä ry and Kansan Raamattuseura.

Podcasty Radia Wnet / Warszawa 87,8 FM | Kraków 95,2 FM | Wrocław 96,8 FM / Białystok 103,9 FM
Belarusian Studio of Radio Wnet, ep. 1: the migration crisis, the Belarusian diaspora in Lublin, the history of the Radio Unet editorial team.

Oct 13, 2021 · 49:16


We invite you to listen to the first Belarusian Studio of Radio Wnet. In addition to the news bulletin, you can hear two interesting conversations. The first is with Dr Jakub Olchowski of the Institute of Central Europe, about the crisis on the Polish-Belarusian border, its possible solutions, and what Lithuania has done on this issue that Poland has not. The second is with Irina Lappo, a Belarusian living in Lublin and the organizer of many events strengthening the local Belarusian diaspora. Among other things, we talk about cultural events in Lublin. At the end, editor Olga Semashko presents the Belarusian editorial team of Radio Unet and the station's achievements so far. The program is hosted by Paweł Bobołowicz and Józef Skowroński --- Send in a voice message: https://anchor.fm/radiownet/message

Ihan mamina podcast
#5 The family bed ruins your sleep and sex life, myth busted!

May 31, 2021 · 40:03


In the fourth episode of the Ihan mamina podcast we unravel the mystery of the family bed. What is it like to sleep in the same bed with a toddler, a baby, and a husband? Does your sex life go flat? How do you teach the children to sleep in their own beds? In the studio, hunting for better nights' sleep with you, are Niina Mantere and Jenni Pykälä.

Äänirunopuro
Sydämen aarre (The Heart's Treasure)

Apr 23, 2021 · 1:30


Sydämen aarre (The Heart's Treasure). Hidden away is the heart's treasure, so beautiful and golden; just scratch the surface and mighty talents emerge, only your shell conceals it. As evening falls, dreams come out of hiding to tell what lies within you. Dreams will indeed tell; from them you receive splendid maps, as long as you trust your innermost self. The time has now come to listen to the heart's message. By the glow of a fireplace, or on the seashore, the heart tells the truth. Try it to the beat of a king cobra. Ulla-Maija Mantere

UNIRESEPTI / Johannes Kajava
Daytime naps and the drop in sleep pressure

Mar 12, 2021 · 5:49


Falling asleep is usually such a self-evident thing that one never even thinks about it. Yet it requires a well-timed state of tiredness along with a calm mind and body. Daytime naps can interfere with falling asleep later in the evening. The number of people suffering from insomnia has increased over recent decades, at least in the developed countries where sleep research is concentrated. Changes in our culture and living environment offer one perspective from which to examine the problem. I look at this … Continue reading "Päiväunet ja unipaineen lasku"

Futucast
Antti Revonsuo | Consciousness and dreams #149

Jan 11, 2021 · 90:27


This episode's guest is neuroscientist and philosopher of mind Antti Revonsuo, professor at the University of Turku. How many times have you wondered what consciousness is? What is this state of being awake in which you can see, feel, smell, taste, believe, sing, speak French, pet a cat? Where does it come from, and why does it exist? Why do we dream while we sleep, and what do dreams tell us about our consciousness? Antti ponders and studies these questions for a living. Join the conversation on Twitter: https://twitter.com/futucast Short clips on Instagram: https://www.instagram.com/futucast/ Episodes with video on YouTube: https://www.youtube.com/channel/UCQPojdjir3suCXQA_09P0ag Nice website: https://www.futucast.com

BCL Coast to Coast
Coast To Coast Podcast Episode 12: 2020 review, 2021 hopes, CJ Harris of Hapoel Unet-Credit Holon

Jan 2, 2021 · 58:07


Happy New Year and all the best for 2021 from the Basketball Champions League Coast To Coast podcast team. There were no games this past week, so we look back at 2020 and ahead to 2021. There is also an interview with CJ Harris of Hapoel Unet-Credit Holon. David Hein is joined by Diccon Lloyd-Smeath to talk about their three favorite moments from 2020 and to offer three New Year's resolutions for BCL teams. And of course, we preview the next week of games. Rundown of the show: 8:30 - 2020 moments; 18:20 - New Year's resolutions; 24:20 - CJ Harris of Hapoel Unet-Credit Holon; 51:30 - Next week's games. If you have any questions or ideas about what we should talk about on the podcast, please email us at info(@)championsleague.basketball. Follow the Basketball Champions League on Twitter and Instagram at @BasketballCL, like the Basketball Champions League on Facebook, and subscribe to our BCL YouTube channel. Also, download the BCL mobile app on iOS and Android and watch games on Livebasketball.tv.

Tyhjäntoimittajat
19. Jussi Omaheimo's dreams

Aug 16, 2020 · 51:23


In this episode of Tyhjäntoimittajat, Juhana talks with Jussi Omaheimo, librarian at the National Library of Finland and a long-time meditator, about dreams, dream practices, and their connections to meditation and dharma.

NRJ:n Aamupalat
Frog legs with wing sauce and MRI naps

Jun 22, 2020 · 59:16


NRJ's morning show wishes you strength for Tuesday! Jere's gift for sleep is clearly improving, as the man is already dozing off in the middle of doctor's appointments. Janne, meanwhile, planned to make frog legs more approachable! Also on our minds: a Helsinki host, Ranta-Aho's social media, and hometown hype!

Villapaitaseura
Episode 24: Dreams

Jun 13, 2020 · 59:17


In June's first episode we jump into the world of dreams. What do dreams tell us about ourselves and our current life situation? What does science say about dreaming? Juho digs out a classic guide to the meanings of dreams, and both his and Meri's latest dream adventures come under scrutiny. Sources: Leeni Peltonen: "Unien näkeminen kiehtoo ja hämmentää" ("Seeing dreams fascinates and confuses"), Uniuutiset (the member and newsletter magazine of Uniliitto ry) 4/2019. https://www.google.com/search?q=leeni+peltonen+uni&rlz=1C1NDCM_fiFI795FI796&oq=leeni+peltonen+uni&aqs=chrome..69i57.4260j0j7&sourceid=chrome&ie=UTF-8 // Yle Uutiset: "Miksi unessa ei koskaan pääse perille ja kännykkä on aina mykkä? Unien terveysvaikutukset yhä osin mysteeri myös tutkijoille" ("Why do you never arrive in a dream and why is your phone always mute? The health effects of dreams remain partly a mystery even to researchers"), 10 Jan 2020. https://yle.fi/uutiset/3-11146370 // Tamara Maunonen: Suuri Unikirja (The Great Dream Book; Gummerus, 2008). // Yle Areena's Uniraati: Klassikkounet – Viki ja Köpi, 14 Dec 2018: https://areena.yle.fi/audio/1-50025753

TRAVESÍA BIM
BIM in universities, the case of Venezuela - EP011

Mar 13, 2020 · 56:19


Today we talk about the arrival of the Building Information Modelling concept in academia. We focus especially on the case of Venezuela, while also reviewing common points that can be extrapolated to universities across our region. On this occasion I have the pleasure of being joined by architect José Rafael Ferrero, a graduate of the Universidad Nacional del Táchira UNET, professor in the Faculty of Engineering at the Universidad Metropolitana UNIMET, BIM modeler, self-described BIM rookie, and member of the BIM FORUM Venezuela chapter committee on the BIMLABS project. Some of the points we discussed: the launch of multiple BIM initiatives in Venezuela in recent months; how bureaucracy affects digital transformation within universities; whether the resistance to change, so present in our AEC industry, is greater or smaller in academia; the technological gaps, for economic and sociocultural reasons, particularly in the Venezuelan situation; and the generational gaps and how to approach paradigm shifts in teaching and learning. José Rafael shared some excellent closing reflections on this interesting topic, and passes along the following links: Twitter Universidad Metropolitana - https://twitter.com/Unimet Twitter BIM Forum Venezuela - https://twitter.com/BIMForumVE And his personal accounts: Linkedin - https://www.linkedin.com/in/jose-rafael-ferrero-alvarado-6943b9a0/ Twitter - https://twitter.com/jferreroa Instagram - https://www.instagram.com/jferreroarq/ As always, you are invited to subscribe, comment, and share, and to experience together the digital transformation of the architecture, engineering, and construction industry.
Vivamos BIM (Let's live BIM). Our links: Web -- formacionaec.com/travesiabim Telegram channel -- t.me/travesiabim Twitter -- @msanleon Instagram -- @msanleon_ Facebook -- facebook.com/msanleon LinkedIn -- linkedin.com/in/msanleon YouTube -- youtube.com/channel/UCHgZbYxXyu-2EJmtEXdPBWQ --- Send in a voice message: https://anchor.fm/travesiabim/message

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Secrets of a Kaggle Grandmaster with David Odaibo - #354

Mar 5, 2020 · 41:15


Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions. Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions. Having completed his degree last year, he is currently co-founder and CTO of Analytical AI, a company that grew out of one of his recent Kaggle successes. David has a background in deep learning and medical imaging–something he shares with his brother, Stephen Odaibo, who we interviewed last year about his work in Retinal Image Generation for Disease Discovery. Check out the full article and interview at twimlai.com/talk/354

Ihan Pihalla
The triangle-sandwich war, cheerleaders, fat-bike heroes, regression & daytime naps

Dec 22, 2019 · 73:52


It's meatloaf day! The mayhem continues this week as five heroes jabber over one another about the most varied of topics. Muchos grandes. Sorry, Väpä. Color rebellion. Märsti and märkyli. Ursäkta mig. Behavaritransistori. A cooking tip. Stop the stunt. Industrially wet. Smärmeli is a good word. Piihana. Club Sandwich. Bsalami. The backstory. Painting the story. A child bouncing around in the trailer. Grandma's Helkama. Why reinvent the wheel? Gore-Tex is showing interest. Have you remembered to study and go to work? Hang on, I'll make one phone call. Sleep comes if it's meant to come. Shoes on, on mom's bed. A shoestring budget. What is a hotel and how do I start? A long pasodoble. Guest impressions by TIS-Väpä, Jorrrrma Uotinen, and Helena Ahti-Hallberg. The episode was recorded on 7 December 2019. Send us feedback, encouragement, and perhaps things that you are clueless about! You can find us on social media at @ihanpihallapodcast and by email at ihanpihallapodcast@gmail.com.

Les Reportages de Ouest Track Radio
#UnEtéauHavre: Le Havre residents invited to build an ephemeral city

May 27, 2019 · 5:37


We went to the Port Center in Le Havre, HQ of the 2019 edition of Un Été Au Havre, where we met Marine and Hélène, the project's mediators. They tell us what Un Été Au Havre is, and about the participatory workshops offered to Le Havre residents as part of the construction of an ephemeral city on the Quai Southampton. Contact the MARC association's mediation team to register for the participatory workshops: associationmarc@gmail.com / 07 69 60 07 28. Un Été Au Havre website: https://www.uneteauhavre.fr/fr Instagram: uneteauhavre FB: https://www.facebook.com/UnEteAuHavre/ The MARC association? What is it? http://www.asso-marc.fr/

Pyhiä juutalaisia kirjoituksia
The Talmud's dream book: If I slept, I would have peace

Apr 14, 2019 · 29:43


Episode 42/59. "He who in a dream climbs onto a roof will gain honor and fame." Dreams and their interpretations continue. Sleep is a good sign for the sick. A reading series. Finnish translation by Riikka Tuori and Tapani Harviainen. In conversation: Simon Livson, Riikka Tuori, and Tapani Harviainen. Editor Juha-Pekka Hotinen. Reader Pekka Savolainen. Sound design: Timo Hintikka.

Valominussa's podcast
Episode #9: Holy dreams, their message, and writing them down

Dec 20, 2018 · 20:53


The series of Holy Dreams begins on Christmas Eve and continues for twelve nights. These dreams can be read by anyone as signposts for their own life in the year ahead; there is one for each coming calendar month. You don't need any exceptional abilities to record your own dreams. Listen and pick up the tips. In this episode you will hear: What are Holy Dreams and what use are they? What do dreams tell us? A few tips for getting your dreams down on paper. Comforting dreams (a better word than warning dreams). Dreams of joyful events, of life's gifts and opportunities, including dreams of spirit travel. An example of an unpleasant future event, and why dreaming of it in advance was meaningful. An example of how a dream revealed, a year in advance, the exact date of birth and the sex of two kittens. Could one conclude from this that even bigger things are written into the loving wisdom of the universe?

Meets.fm
Sever Seller

Dec 11, 2018 · 94:40


# Reference
---
- [Turing Complete FM](https://turingcomplete.fm/)
- [Rui Ueyama](https://twitter.com/rui314)
- [Linker](http://e-words.jp/w/%E3%83%AA%E3%83%B3%E3%82%AB.html)
- [Emulator](http://e-words.jp/w/%E3%82%A8%E3%83%9F%E3%83%A5%E3%83%AC%E3%83%BC%E3%82%BF.html)
- [Matz](https://ja.wikipedia.org/wiki/%E3%81%BE%E3%81%A4%E3%82%82%E3%81%A8%E3%82%86%E3%81%8D%E3%81%B2%E3%82%8D)
- [Rebuild](https://rebuild.fm/)
- [hak](https://twitter.com/hak)
- [backspace.fm](http://backspace.fm/)
- [misreading chat](https://misreading.chat/)
- [Generative Adversarial Networks (GAN)](http://ventureclef.com/blog2/?p=3423)
- [Bilingual News](https://bilingualnews.libsyn.com/)
- [Meetup](https://www.meetup.com/)
- [connpass](https://connpass.com/)
- [Session](https://wa3.i-3-i.info/word1791.html)
- [Cookie](https://saruwakakun.com/it/web/cookie-cache)
- [GPU](https://www.idcf.jp/words/gpu.html)
- [Nvidia GPU](https://wccftech.com/nvidia-geforce-11-series-gtx-1180-gtx-1170-gtx-1160-release-date-leak/)
- [GPU benchmarks](https://www.dospara.co.jp/5shopping/share.php?contents=vga_def_parts)
- [Distributed processing](https://www.idcf.jp/words/distributed-processing.html)
- [Kaggle](https://www.kaggle.com/)
- [Unet](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/)
- [AWS](https://aws.amazon.com/jp/)
- [K80](http://www.elsa-jp.co.jp/products/products-top/gpu_computing/tesla_server/tesla_k/tesla_k80/)
- [Tesla v100](https://www.nvidia.com/ja-jp/data-center/tesla-v100/)
- [Oryx Pro](https://system76.com/laptops/oryx)
- [ubuntu](https://www.ubuntulinux.jp/)
- [libreOffice](https://ja.libreoffice.org/)
- [Iteration](https://kotobank.jp/word/%E3%82%A4%E3%83%86%E3%83%AC%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3-674398)
- [JAVA](https://techacademy.jp/magazine/9250)
- [Numerical optimization](https://ja.wikipedia.org/wiki/%E6%9C%80%E9%81%A9%E5%8C%96%E5%95%8F%E9%A1%8C)
- [Actuary](https://ja.wikipedia.org/wiki/%E3%82%A2%E3%82%AF%E3%83%81%E3%83%A5%E3%82%A2%E3%83%AA%E3%83%BC)
- [Normal distribution](https://ja.wikipedia.org/wiki/%E6%AD%A3%E8%A6%8F%E5%88%86%E5%B8%83)

NHL-löylyt
3. The NHL is back and there goes our sleep – a look at the first week

Oct 11, 2018 · 40:46


In the third episode of the NHL-löylyt podcast we, among other things, put the first week's most convincing Finnish performances on a pedestal, get triggered by the rat antics of Tom Wilson and Brad Marchand, and launch the series' official weekly awards.

SNOCKAST - Unternehmer Podcast über Amazon FBA
#40 – Doubling your sales through Facebook ads & performance marketing. Interview with Unet Marketing

May 8, 2018 · 49:18


In the new SNOCKAST, Johannes talks with Germany's experts on Facebook ads: Alex and Marvin of Unet Marketing. What matters in Facebook advertising, and how you too can successfully promote your business with it, is what the two reveal in this conversation. Also check out their own podcast on the topic! Facebook group for advanced Amazon sellers "SNOCKSULTING": www.facebook.com/groups/272828473256567/ Unet Marketing: www.unet-marketing.de MAIL: team@unet-marketing.de Conversion Booster Podcast: https://apple.co/2I5JnxR Johannes: www.instagram.com/johannes.snocks MAIL: johannes@snocks-socks.com Felix: www.instagram.com/felix.snocks MAIL: felix@snocks-socks.com SNOCKS: www.snocks-socks.com www.instagram.com/snocks.socks AMALYZE: www.amalyze.com

Radio Suomesta poimittuja
Radio Suomesta poimittuja: Sananen – When even sleep gets cut

Feb 9, 2016 · 4:47


Financial cuts we can endure. But when they start cutting from our sleep, our patience, and our thinking, the trouble begins. When something gets really short, you can't get a proper grip on it. The bears at Korkeasaari zoo woke from their winter sleep, more than a month early. The experts don't know why. I do. We must, apparently, get used to everything being cut. More than a month was cut from the bears' winter sleep. That is an unreasonable one-off cut. Sleep is an immaterial good, a fundamental acquired benefit. People ought to know how life starts to resemble skating downhill when sleep is cut too short. A bear is not so stupid as to want to be awake in this weather. The bears woke up after listening long enough (through the concrete) to complaining humans. They don't understand the words, but a nasty intonation and aggressive harping will wake you even in an apartment block, even when you can't make out the words through the wall. When cuts are made, the worst part is not knowing who made them. Sleep is a good example. Even a person startled awake by the furious clatter of a garbage truck is at a loss for a long while. Still in a dark daze at the breakfast table, he wonders what the man whipping the truck on might have looked like and what his bleak life situation might have been. Someone cut two months from our winter. Just someone. And here, again, it is not a question of money. When the landscape is changed, when the freshness of frost is swapped for khaki-colored sludge, and when you constantly feel as if someone were pissing in your ear, a great deal has been done. No one comes to ask whether you would trade your holiday bonus to get back, one more time, the winters of your childhood, eternal happiness, and innocent carefreeness. Why don't they just cut the money? When the calculator jams, all the other cuts stand out. Human development seems to be regressing, and fast. At what point was our capacity for sustained, patient being cut? If I go out for lunch, I nearly choke on my noodles trying to be efficient. When I look out the window, I see nothing; my eyeballs turn into bingo balls.
If you accidentally managed to calm yourself, your gaze might focus on a person. You would soon start thinking about matters of society. Reading, too, gets harder year by year. Our ability to sit still and stay on the line has been cut. If the book is by a so-called intellectual, the struggle is real. The tendon stretches and even makes a sound. Our thinking has been cut as well. The more complex the thing to be thought, the more surely the thought breaks off. Take Europe's refugee crisis as an example. A sensible-looking person starts voicing an opinion aloud, and soon the tablemates sit red-faced with blazing second-hand embarrassment, numbly blowing on their lattes. Where did the logic snap? The thought set off so nicely. Parents' patience has been cut ever shorter, by someone. I expect, as a grandfather, to read novels by today's toddlers, with titles like: I Grew Up in the Hell of the Highly-Strung and Underfoot of Electric Parents, Through Taivalkoski and Thailand. And yet each of our lives is an uncut version. It is dull, beautiful, pointless, and unique. I watch the awakened bears of Korkeasaari. Apparently their resin plugs have come loose. How wise nature is; how a mess can be avoided. When a bear sleeps, beautifully and drowsily, a resin plug forms behind it. The curator reminds us human creatures that the resin plug is the bear's way of keeping droppings from soiling its fur, that is, its outward appearance. If evolution proceeds as I surveyed in my dream, humans too will get a resin plug. Only then will nature bring us to heel. The discussion forums will tidy up. Wise silence will appear as respect for the other. When not everything comes straight out, energy is absorbed into self-examination. Lay preacher Maasola

Stressivapaa johtaja | Näkökulmia henkilökohtaiseen kasvuun

Dreams are glimpses of sides of ourselves that we would not otherwise see. With their incorruptible logic, they stage patterns of our lives in the theatre of the mind. Through dreams we can safely live out the weaknesses of our own character type, and by learning from our dreams we can begin to correct them in a better, more productive, and more balanced direction. If you like this or other episodes, share the SJ podcast on social media; leave a review on iTunes; and get in touch. jp@stressivapaajohtaja.fi http://www.stressivapaajohtaja.fi © Zeno Integral Coaching® 2016

UNIRESEPTI / Johannes Kajava
Dreams as layers of noise



Why are the dreams we see often fragmented and confused, yet at other times clear and more easily connected to the things and events of our own lives? Possibly it is a matter of layers of dreams rather than a single dream, or perhaps it is not a matter of dreams at all. Still, we try to shape our dream into a story to tell others, but where does this habit come from? I briefly discuss the imprecision of experiences, brain imaging, and the concept of narrative … Continue reading "Unet kohinakerroksina"