Disclaimer: We recorded this episode ~1.5 months ago, timed for the FastHTML release. It then got bottlenecked by the Llama 3.1, Winds of AI Winter, and SAM2 episodes, so we're a little late. Since then FastHTML was released, swyx is building an app in it for AINews, and Anthropic has also released their prompt caching API.

Remember when Dylan Patel of SemiAnalysis coined the GPU Rich vs GPU Poor war? (If not, see our pod with him.) The idea was that if you're GPU poor you shouldn't waste your time trying to solve GPU rich problems (i.e. pre-training large models) and are better off working on fine-tuning, optimized inference, etc. Jeremy Howard (see our "End of Finetuning" episode to catch up on his background) and Eric Ries founded Answer.AI to do exactly that: "Practical AI R&D", which is very in line with GPU poor needs. For example, one of their first releases was a system based on FSDP + QLoRA that let anyone train a 70B model on two NVIDIA 4090s.

Since then, they have come out with a long list of super useful projects (in no particular order, and non-exhaustive):

* FSDP QDoRA: just as memory efficient and scalable as FSDP/QLoRA, and critically also as accurate for continued pre-training as full weight training.
* Cold Compress: a KV cache compression toolkit that lets you scale sequence length without impacting speed.
* colbert-small: a state-of-the-art retriever at only 33M params.
* JaColBERTv2.5: a new state-of-the-art retriever on all Japanese benchmarks.
* gpu.cpp: portable GPU compute for C++ with WebGPU.
* Claudette: a better Anthropic API SDK.

They also recently released FastHTML, a new way to create modern interactive web apps.
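The FSDP + QLoRA result mentioned above combines sharded training and quantized base weights with low-rank adapters; the adapter math itself is simple enough to sketch. Here is a minimal NumPy illustration of the LoRA update (dimensions and names are purely illustrative, not Answer.AI's actual code):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass through a frozen weight W plus a low-rank LoRA update.

    W: (d_out, d_in) frozen base weights (kept quantized in QLoRA).
    A: (r, d_in) and B: (d_out, r) are the only trainable parameters,
    so gradient and optimizer state scale with r, not with d_out * d_in.
    """
    scale = alpha / r
    return x @ W.T + scale * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 8
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))        # B starts at zero, so the adapter is a no-op at init
x = rng.normal(size=(4, d_in))

y = lora_forward(x, W, A, B)    # identical to the base model at initialization
```

The point of the zero-initialized `B` is that fine-tuning starts exactly at the pre-trained model and only drifts away as the adapter learns.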
Jeremy recently released a 1 hour "Getting started" tutorial on YouTube; while this isn't AI related per se, it's close to home for any AI Engineer looking to iterate quickly on new products. In this episode we broke down 1) how they recruit, 2) how they organize what to research, and 3) how the community comes together. At the end, Jeremy gave us a sneak peek at something new that he's working on that he calls dialogue engineering:

"So I've created a new approach. It's not called prompt engineering. I'm creating a system for doing dialogue engineering. It's currently called AI magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it."

He explains it a bit more ~44:53 into the pod, but we'll just have to wait for the public release to figure out exactly what he means.

Timestamps

* [00:00:00] Intro by Suno AI
* [00:03:02] Continuous Pre-Training is Here
* [00:06:07] Schedule-Free Optimizers and Learning Rate Schedules
* [00:07:08] Governance and Structural Issues within OpenAI and Other AI Labs
* [00:13:01] How Answer.ai works
* [00:23:40] How to Recruit Productive Researchers
* [00:27:45] Building a new BERT
* [00:31:57] FSDP, QLoRA, and QDoRA: Innovations in Fine-Tuning Large Models
* [00:36:36] Research and Development on Model Inference Optimization
* [00:39:49] FastHTML for Web Application Development
* [00:46:53] AI Magic & Dialogue Engineering
* [00:52:19] AI wishlist & predictions

Show Notes

* Jeremy Howard
* Previously on Latent Space: The End of Finetuning, NeurIPS Startups
* Answer.ai
* Fast.ai
* FastHTML
* answerai-colbert-small-v1
* gpu.cpp
* Eric Ries
* Aaron DeFazio
* Yi Tay
* Less Wright
* Benjamin Warner
* Benjamin Clavié
* Jono Whitaker
* Austin Huang
* Eric Gilliam
* Tim Dettmers
* Colin Raffel
* Sebastian Raschka
* Carson Gross
* Simon Willison
* Sepp Hochreiter
* Llama 3.1 episode
* Snowflake Arctic
* Ranger Optimizer
* Gemma.cpp
* HTMX
* UL2
* BERT
* DeBERTa
* Efficient finetuning of Llama 3 with FSDP QDoRA
* xLSTM

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: And today we're back with Jeremy Howard, I think your third appearance on Latent Space. Welcome.

Jeremy [00:00:19]: Wait, third? Second?

Swyx [00:00:21]: Well, I grabbed you at NeurIPS.

Jeremy [00:00:23]: I see.

Swyx [00:00:24]: Very fun, standing outside in the street episode.

Jeremy [00:00:27]: I never heard that, by the way. You've got to send me a link. I've got to hear what it sounded like.

Swyx [00:00:30]: Yeah. Yeah, it's a NeurIPS podcast.

Alessio [00:00:32]: I think the two episodes are six hours, so there's plenty to listen to, we'll make sure to send it over.

Swyx [00:00:37]: Yeah, we're trying this thing where at the major ML conferences, we, you know, do a little audio tour to give people a sense of what it's like. But the last time you were on, you declared the end of fine tuning. I know I sort of editorialized the title a little bit, and I know you were slightly uncomfortable with it, but you just own it anyway. I think you're very good at the hot takes. And we were just discussing in our pre-show that it's really happening, that the continued pre-training is really happening.

Jeremy [00:01:02]: Yeah, absolutely.
I think people are starting to understand that treating the three ULMFiT steps of, like, pre-training, you know, and then the kind of like what people now call instruction tuning, and then, I don't know if we've got a general term for this, the DPO/RLHF step, you know, or the task training, they're not actually as separate as we originally suggested they were in our paper, and when you treat it more as a continuum, and you make sure that you have, you know, more of kind of the original data set incorporated into the later stages, and, you know, we've also seen with Llama 3, this idea that those later stages can be done for a lot longer. These are all of the things I was kind of trying to describe there. It wasn't the end of fine tuning, but more that we should treat it as a continuum, and we should have much higher expectations of how much you can do with an already trained model. You can really add a lot of behavior to it, you can change its behavior, you can do a lot. So a lot of our research has been around trying to figure out how to modify the model by a larger amount rather than starting from random weights, because I get very offended at the idea of starting from random weights.

Swyx [00:02:14]: Yeah, I saw that at ICLR in Vienna, there was an outstanding paper about starting transformers from data-driven priors.
I don't know if you saw that one, they called it sort of never train from scratch, and I think it was kind of rebelling against, like, the sort of random initialization.

Jeremy [00:02:28]: Yeah, you know, that's been our kind of continuous message since we started Fast.ai: if you're training from random weights, you better have a really good reason, you know, because it seems so unlikely to me that nobody has ever trained on data that has any similarity whatsoever to the general class of data you're working with, and that's the only situation in which I think starting from random weights makes sense.

Swyx [00:02:51]: The other trend since our last pod that I would point people to is I'm seeing a rise in multi-phase pre-training. So Snowflake released a large model called Snowflake Arctic, where they detailed three phases of training with a different data mixture in each: there was something like 75% web in the first phase, and then they reduced the percentage of web text by 10% each time and increased the amount of code in each phase. And I feel like multi-phase is being called out in papers more. I feel like it's always been a thing, like changing data mix is not something new, but calling it a distinct phase is new, and I wonder if there's something that you're seeing on your end.

Jeremy [00:03:32]: Well, so they're getting there, right? So the point at which they're doing proper continued pre-training is the point at which that becomes a continuum rather than a phase. So the only difference with what I was describing last time is to say, like, oh, there's a function or whatever, which is happening every batch. It's not a huge difference. You know, I always used to get offended when people had learning rates that, like, jumped. And so one of the things I started doing early on in Fast.ai was to say to people, like, no, your learning rate schedule should be a function, not a list of numbers.
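That schedule-as-a-function idea, and its extension from learning rates to the training data mix, can be sketched like this. This is a toy illustration of the concept, not Fast.ai's actual API; the specific constants (cosine decay, a 75% to 45% web share) are assumptions for the example:

```python
import math

def lr_schedule(step, total_steps, lr_max=3e-4, warmup_frac=0.1):
    """Learning rate as a smooth function of training progress,
    rather than a hand-written list of numbers."""
    t = step / total_steps
    if t < warmup_frac:
        return lr_max * t / warmup_frac                 # linear warmup
    progress = (t - warmup_frac) / (1 - warmup_frac)
    return lr_max * 0.5 * (1 + math.cos(math.pi * progress))  # cosine decay to 0

def web_fraction(step, total_steps, start=0.75, end=0.45):
    """Data mix as a continuous function of progress: the web-text share
    drifts down every batch instead of jumping between discrete phases."""
    t = step / total_steps
    return start + (end - start) * t
```

Because both quantities are functions of the current step, the trainer can query them every batch, which is exactly what turns a phased recipe into a continuum.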
So now I'm trying to give the same idea about training mix.

Swyx [00:04:07]: There's been pretty public work from Meta on schedule-free optimizers. I don't know if you've been following Aaron DeFazio and what he's doing, just because you mentioned learning rate schedules, you know, what if you didn't have a schedule?

Jeremy [00:04:18]: I don't care very much, honestly. I don't think that schedule-free optimizer is that exciting. It's fine. We've had non-scheduled optimizers for ages, like Less Wright, who's now at Meta, who was part of the Fast.ai community there, created something called the Ranger optimizer. I actually like having more hyperparameters. You know, as soon as you say schedule-free, then, like, well, now I don't get to choose. And there isn't really a mathematically correct way of, like... I actually try to schedule more parameters rather than fewer. So, like, I like scheduling my epsilon in my Adam, for example. I schedule all the things. But then the other thing we always did with the Fast.ai library was make it so you don't have to set any schedules. So Fast.ai always supported, like, you didn't even have to pass a learning rate. Like, it would always just try to have good defaults and do the right thing. But to me, I like to have more parameters I can play with if I want to, but you don't have to.

Alessio [00:05:08]: And then on the less technical side, I guess, your issue with the market was some of the large research labs taking all this innovation kind of behind closed doors, and whether or not that's good, which it isn't. And now we could maybe make it more available to people. And then a month after we released the episode, there was the whole Sam Altman drama and, like, all the OpenAI governance issues. And maybe people started to think more, okay, what happens if some of these kind of labs, you know, start to break from within, so to speak? And the alignment of the humans is probably going to fall before the alignment of the models.
So I'm curious, like, if you have any new thoughts, and maybe we can also tie in some of the way that we've been building Answer as, like, a public benefit corp and some of those aspects.

Jeremy [00:05:51]: Sure. So, yeah, I mean, it was kind of uncomfortable because two days before Altman got fired, I did a small public video interview in which I said, I'm quite sure that OpenAI's current governance structure can't continue and that it was definitely going to fall apart. And then it fell apart two days later, and a bunch of people were like, what did you know, Jeremy?

Alessio [00:06:13]: What did Jeremy see?

Jeremy [00:06:15]: I didn't see anything. It's just obviously true. Yeah. So my friend Eric Ries and I spoke a lot before that about, you know... Eric is, I think probably most people would agree, the top expert in the world on startup and AI governance. And you know, we could both clearly see that it didn't make sense to have, like, a so-called non-profit where there are people working at a commercial company that's owned, or controlled nominally, by the non-profit, where the people in the company are being given the equivalent of stock options; like, everybody there was working there expecting to make money largely from their equity. So the idea that a board could exercise control by saying, like, oh, we're worried about safety issues and so we're going to do something that decreases the profit of the company, when every stakeholder in the company, their remuneration pretty much is tied to that profit, it obviously couldn't work. So I mean, that was a huge oversight there by someone. I guess part of the problem is that the kind of people who work at non-profits, and in this case the board, you know, are kind of academics and, you know, people who are kind of true believers. I think it's hard for them to realize that 99.999% of the world is driven very heavily by money, especially huge amounts of money.
So yeah, Eric and I had been talking for a long time before that about what could be done differently, because companies are sociopathic by design, and so the alignment problem as it relates to companies has not been solved. Like, companies become huge, they devour their founders, they devour their communities, and they do things where even the CEOs of big companies often tell me, like, I wish our company didn't do that thing. You know, I know that if I didn't do it, then I would just get fired and the board would put in somebody else, and the board knows that if they don't do it, then their shareholders can sue them because they're not maximizing profitability or whatever. So what Eric's spent a lot of time doing is trying to think about how do we make companies less sociopathic, or, you know, maybe a better way to think of it is, like, how do we make it so that the founders of companies can ensure that their companies continue to actually do the things they want them to do? You know, when we started a company, hey, we very explicitly decided we've got to start a company, not an academic lab, not a nonprofit. You know, we created a Delaware C corp, you know, the most company kind of company. But when we did so, we told everybody, you know, including our first investors, which was you, Alessio, that we are going to run this company on the basis of maximizing long-term value. And in fact, when we did our second round, which was an angel round, we had everybody invest through a long-term SPV, which we set up where everybody had to agree to vote in line with long-term value principles. Because, like, it's never enough just to say to people, okay, we're trying to create long-term value here for society as well as for ourselves, and everybody's like, oh, yeah, yeah, I totally agree with that.
But when it comes to, like, okay, well, here's a specific decision we have to make which will not maximize short-term value, people suddenly change their mind. So, you know, it has to be written into the legal documents of everybody, so that there's no question that that's the way the company has to be managed. So then you mentioned the PBC aspect, Public Benefit Corporation, which I never quite understood previously. And it turns out it's incredibly simple: like, it took, you know, like one paragraph added to our corporate documents to become a PBC. It was cheap, it was easy, but it's got this huge benefit, which is that if you're not a public benefit corporation, then somebody can come along and offer to buy you with a stated intention of, like, turning your company into the thing you most hate, right? And if they offer you more than the market value of your company and you don't accept it, then you are not necessarily meeting your fiduciary responsibilities. So the way, like, Eric always described it to me is, like, if Philip Morris came along and said, you've got great technology for marketing cigarettes to children, so we're going to pivot your company to do that entirely, and we're going to pay you 50% more than the market value, you're going to have to say yes. If you have a PBC, then you are more than welcome to say no, if that offer is not in line with your stated public benefit. So our stated public benefit is to maximize the benefit to society through using AI. So given that more children smoking doesn't do that, then we can say, like, no, we're not selling to you.

Alessio [00:11:01]: I was looking back at some of our emails. You sent me an email on November 13th about talking, and then on the 14th, I sent you an email with "working together to free AI" as the subject line. And then that was kind of the start of the seed round. And then two days later, someone got fired.
So, you know, you were having these thoughts even before we had, like, a public example of why some of the current structures didn't work. So yeah, you were very ahead of the curve, so to speak. You know, people can read your awesome Answer.AI introduction blog, and the idea of having an R&D lab versus an R lab here and then a D lab somewhere else. I think to me, the most interesting thing has been hiring and some of the awesome people that you've been bringing on that maybe don't fit the central casting of Silicon Valley, so to speak. Like, sometimes it's like playing baseball cards, you know, people are like, oh, what teams was this person on, where did they work, versus focusing on ability. So I would love for you to give a shout out to some of the awesome folks that you have on the team.

Jeremy [00:11:58]: So, you know, there's, like, a graphic going around describing the people at xAI, you know, the Elon Musk thing. And, like, they are all connected to, like, multiple of Stanford, Meta, DeepMind, OpenAI, Berkeley, Oxford. Look, these are all great institutions and they have good people. And I'm definitely not at all against that, but damn, there's so many other people. And one of the things I found really interesting is almost any time I see something which I think, like, this is really high quality work and it's something I don't think would have been built if that person hadn't built the thing right now, I nearly always reach out to them and ask to chat. And I tend to dig in to find out, like, okay, you know, why did you do that thing? Everybody else has done this other thing, your thing's much better, but it's not what other people are working on. And, like, 80% of the time, I find out the person has a really unusual background.
So, like, often they'll have, like, either come from poverty and didn't get an opportunity to go to a good school, or had dyslexia and, you know, got kicked out of school in year 11, or they had a health issue that meant they couldn't go to university, or something happened in their past and they ended up out of the mainstream. And then they kind of succeeded anyway. Those are the people that throughout my career I've tended to kind of accidentally hire more of, but it's not exactly accidental. It's like, when I see two people who have done extremely well, one of them did extremely well in exactly the normal way, from a background entirely pointing in that direction, and they cleared all the hurdles to get there. And, like, okay, that's quite impressive, you know. But another person who did just as well, despite lots of constraints, doing things in really unusual ways and coming up with different approaches, that's normally the person I'm likely to find useful to work with, because they're often, like, risk-takers, they're often creative, they're often extremely tenacious, they're often very open-minded. So that's the kind of folks I tend to find myself hiring. So now at Answer.ai, it's a group of people that are strong enough that nearly every one of them has independently come to me in the past few weeks and told me that they have imposter syndrome and they're not convinced that they're good enough to be here. And I'd heard it enough that I was like, okay, I don't think it's possible that all of you are so far behind your peers that you shouldn't get to be here. But I think part of the problem is, as an R&D lab, the great developers look at the great researchers and they're like, wow, these big-brained, crazy research people with all their math and s**t, they're too cool for me, oh my God.
And then the researchers look at the developers and they're like, oh, they're killing it, making all this stuff with all these people using it and talking on Twitter about how great it is. I think they're both a bit intimidated by each other, you know. And so I have to kind of remind them, like, okay, there are lots of things in this world where you suck compared to lots of other people in this company, but also vice versa, you know, for all things. And the reason you came here is because you wanted to learn about those other things from those other people and have an opportunity to, like, bring them all together into a single unit. You know, it's not reasonable to expect you're going to be better at everything than everybody else. I guess the other part of it is, for nearly all of the people in the company, to be honest, they have nearly always been better than everybody else at nearly everything they're doing, nearly everywhere they've been. So it's kind of weird to be in this situation now where it's like, gee, I can clearly see that I suck at this thing that I'm meant to be able to do compared to these other people, where I'm, like, the worst in the company at this thing for some things. So I think that's a healthy place to be, you know, as long as you keep reminding each other that that's actually why we're here. And, like, it's all a bit of an experiment; like, we don't have any managers. We don't have any hierarchy from that point of view. So, for example, I'm not a manager, which means I don't get to tell people what to do or how to do it or when to do it. Yeah, it's been a bit of an experiment to see how that would work out. And it's been great. So, for instance, Ben Clavié, who you might have come across, he's the author of RAGatouille, he's the author of rerankers, a super strong information retrieval guy. And a few weeks ago, you know, this additional channel appeared on our private Discord, called Bert24.
And these people started appearing in our collab sections (we have a collab section for, like, collaborating with outsiders), all these names that I recognize, in Bert24, and they're all talking about, like, the next generation of BERT. And I start following along. It's like, okay, Ben decided, I think quite rightly, that we need a new BERT. Because, like, so many people are still using BERT, and it's still the best at so many things, but it actually doesn't take advantage of lots of best practices. And so he just went out and found basically everybody who's created better BERTs in the last four or five years, brought them all together, and suddenly there's this huge collaboration going on. So yeah, I didn't tell him to do that. He didn't ask my permission to do that. And then, like, Benjamin Warner dived in, and he's like, oh, I created a whole transformers-from-scratch implementation designed to be maximally hackable. He originally did it largely as a teaching exercise to show other people, but he was like, I could, you know, use that to create a really hackable BERT implementation. In fact, he didn't say that. He said, I just did do that, you know, and I created a repo, and then everybody starts using it. They're like, oh my god, this is amazing. I can now implement all these other BERT things. And it's not just Answer.AI guys there, you know, there's lots of folks who have, like, contributed new data set mixes and blah, blah, blah. So, I mean, I can help in the same way that other people can help. So, like, then Ben Clavié reached out to me at one point and said, can you help me, like... what have you learned over time about how to manage intimidatingly capable and large groups of people who you're nominally meant to be leading? And so, you know, I like to try to help, but I don't direct.
Another great example was Kerem, who, after our FSDP QLoRA work, decided quite correctly that it didn't really make sense to use LoRA in today's world. You want to use the normalized version, which is called DoRA. Like, two or three weeks after we did FSDP QLoRA, he just popped up and said, okay, I've just converted the whole thing to DoRA, and I've also created these vLLM extensions, and I've got all these benchmarks, and, you know, now I've got training of quantized models with adapters that's as fast as LoRA and, weirdly, actually better than regular fine tuning. And I'm just like, okay, that's great, you know. And yeah, so the things we've done to try to help make these things happen as well is we don't have any required meetings, you know, but we do have a meeting for each pair of major time zones that everybody's invited to, and, you know, people see their colleagues doing stuff that looks really cool and say, like, oh, how can I help, you know, or how can I learn or whatever. So another example is Austin, who, you know, has an amazing background. He ran AI at Fidelity, he ran AI at Pfizer, he ran browsing and retrieval for Google's DeepMind stuff, created gemma.cpp, and he's been working on a new system to make it easier to do WebGPU programming, because, again, he quite correctly identified the need. So I said to him, like, okay, I want to learn about that. Not an area that I have much expertise in, so, you know, he's going to show me what he's working on and teach me a bit about it, and hopefully I can help contribute. I think one of the key things that's happened in all of these is everybody understands what Eric Gilliam, who wrote the second blog post in our series, the R&D historian, describes as a large yard with narrow fences. Everybody has total flexibility to do what they want.
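Stepping back to the DoRA work Jeremy describes above: the "normalized version" of LoRA re-decomposes the adapted weight into a learned per-column magnitude times a unit direction. A hedged NumPy sketch of that decomposition follows (shapes and initialization are illustrative, not the actual FSDP QDoRA code):

```python
import numpy as np

def dora_weight(W0, A, B, m):
    """DoRA-style weight: merge a low-rank update into the base weight,
    normalize each column to a unit direction, then rescale by a learned
    per-column magnitude vector m."""
    V = W0 + B @ A                             # LoRA-style directional update
    norms = np.linalg.norm(V, axis=0, keepdims=True)
    return m * (V / norms)                     # magnitude * unit direction

rng = np.random.default_rng(1)
d_out, d_in, r = 16, 24, 4
W0 = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))                       # zero update at init, as in LoRA
m = np.linalg.norm(W0, axis=0, keepdims=True)  # init magnitudes to W0's column norms

W = dora_weight(W0, A, B, m)                   # reproduces W0 exactly at init
```

The appeal is that direction and magnitude get trained separately, which is reported to track full fine-tuning behavior more closely than plain LoRA at the same adapter rank.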
We all understand kind of roughly why we're here, you know; we agree with the premises around, like, everything's too expensive, everything's too complicated, people are building too many vanity foundation models rather than taking better advantage of fine-tuning. Like, there's this kind of general sense of, we're all on the same wavelength about, you know, all the ways in which current research is fucked up, and, you know, all the ways in which we're worried about centralization. We all care a lot about not just research for the point of citations, but research that actually wouldn't have happened otherwise, and actually is going to lead to real-world outcomes. And so, yeah, with this kind of, like, shared vision, people understand. Like, you know, so when I say, like, oh, well, you know, tell me, Ben, about Bert24, what's that about? And he's like, you know, well, you can see it from an accessibility point of view, or you can see it from a kind of actual practical impact point of view; there's far too much focus on decoder-only models, and, you know, like, BERT's used in all of these different places in industry, and so I can see, like, in terms of our basic principles, what we're trying to achieve, this seems like something important. And so I think it's, like, really helpful that we have that kind of shared perspective, you know?

Alessio [00:21:14]: Yeah. And before we maybe talk about some of the specific research, when you're, like, reaching out to people, interviewing them, what are some of the traits, like, how do these things come out, you know, usually? Is it working on side projects that you're already familiar with? Is there anything, like, in the interview process that helps you screen for people that are less pragmatic and more research-driven, versus some of these folks that are just gonna do it, you know?
They're not waiting for, like, the perfect process.

Jeremy [00:21:40]: Everybody who comes through recruiting is interviewed by everybody in the company. You know, our goal is 12 people, so it's not an unreasonable amount. The other thing to say is, everybody so far who's come into the recruiting pipeline, everybody bar one, has been hired. Which is to say our original curation has been good. And that's actually pretty easy, because nearly everybody who's come in through the recruiting pipeline are people I know pretty well. So Jono Whitaker and I, you know, he worked on the stable diffusion course we did. He's outrageously creative and talented, and he's a super, like, enthusiastic tinkerer, just likes making things. Benjamin was one of the strongest parts of the Fast.ai community, which is now the alumni. It's, like, hundreds of thousands of people. And you know, again, like, they're not people who a normal interview process would pick up, right? So Benjamin doesn't have any qualifications in math or computer science. Jono was living in Zimbabwe, you know, he was working on, like, helping some African startups, you know, but not FAANG kind of credentials. But yeah, I mean, when you actually see people doing real work and they stand out above... you know, we've got lots of Stanford graduates and OpenAI people and whatever in our alumni community as well. You know, when you stand out above all of those people anyway, obviously you've got something going for you. You know, Austin, him and I worked together on the masks study we did in the Proceedings of the National Academy of Sciences. You know, we had worked together, and again, that was a group of, like, basically the 18 or 19 top experts in the world on public health and epidemiology and research design and so forth. And Austin was, you know, one of the strongest people in that collaboration.
So yeah, you know, like, I've been lucky enough to have had opportunities to work with some people who are great, and, you know, I'm a very open-minded person, so I kind of am always happy to try working with pretty much anybody, and some people stand out. You know, there have been some exceptions, people I haven't previously known, like Ben Clavié, actually; I didn't know him before. But, you know, with him, you just read his code, and I'm like, oh, that's really well-written code. And, like, it's not written exactly the same way as everybody else's code, and it's not written to do exactly the same thing as everybody else's code. So yeah, and then when I chatted to him, it's just like, I don't know, I felt like we'd known each other for years, like we just were on the same wavelength, but I could pretty much tell that was going to happen just by reading his code. I think you express a lot in the code you choose to write and how you choose to write it, I guess. You know, or another example, a guy named Vik, who was previously the CEO of Dataquest, and, like, in that case, you know, he's created a really successful startup. He won the first, basically, Kaggle NLP competition, which was automatic essay grading. He's got the current state-of-the-art OCR system, Surya. Again, he's just a guy who obviously just builds stuff, you know; he doesn't ask for permission, he doesn't need any, like, external resources. Actually, Kerem's another great example of this. I mean, I already knew Kerem very well because he was my best ever master's student, but it wasn't a surprise to me when he then went off to create the world's state-of-the-art language model in Turkish, on his own, in his spare time, with no budget, from scratch. This is not fine-tuning or whatever; he, like, went back to Common Crawl and did everything. Yeah, I don't know what I'd describe that process as, but it's not at all based on credentials.

Swyx [00:25:17]: Assemble based on talent, yeah.
We wanted to dive in a little bit more on, you know, turning from the people side of things into the technical bets that you're making. Just a little bit more on BERT. I was actually, we just did an interview with Yi Tay from Reka, I don't know if you're familiar with his work, but also another encoder-decoder bet, and one of his arguments was actually people kind of over-index on the decoder-only GPT-3 type paradigm. I wonder if you have thoughts there that is maybe non-consensus as well. Yeah, no, absolutely.Jeremy [00:25:45]: So I think it's a great example. So one of the people we're collaborating with a little bit with BERT24 is Colin Raffel, who is the guy behind, yeah, most of that stuff, you know, between that and UL2, there's a lot of really interesting work. And so one of the things I've been encouraging the BERT group to do, Colin has as well, is to consider using a T5 pre-trained encoder backbone as a thing you fine-tune, which I think would be really cool. You know, Colin was also saying actually just use encoder-decoder as your BERT, you know, why don't you like use that as a baseline, which I also think is a good idea. Yeah, look.Swyx [00:26:25]: What technical arguments are people under-weighting?Jeremy [00:26:27]: I mean, Colin would be able to describe this much better than I can, but I'll give my slightly non-expert attempt. Look, I mean, think about like diffusion models, right? Like in stable diffusion, like we use things like UNet. You have this kind of downward path and then in the upward path you have the cross connections, which, it's not attention, but it's like a similar idea, right? You're inputting the original encoding path into your decoding path. It's critical to make it work, right? Because otherwise in the decoding part, the model has to do so much kind of from scratch. So like if you're doing translation, like that's a classic kind of encoder-decoder example.
If it's decoder only, you never get the opportunity to find the right, you know, feature engineering, the right feature encoding for the original sentence. And it kind of means then on every token that you generate, you have to recreate the whole thing, you know? So if you have an encoder, it's basically saying like, okay, this is your opportunity, model, to create a really useful feature representation for your input information. So I think there's really strong arguments for encoder-decoder models anywhere that there is this kind of like context or source thing. And then why encoder only? Well, because so much of the time what we actually care about is a classification, you know? It's like an output. It's not generating an arbitrary-length sequence of tokens. So anytime you're not generating an arbitrary-length sequence of tokens, decoder models don't seem to make much sense. Now the interesting thing is, you see on like Kaggle competitions, that decoder models still are at least competitive with things like DeBERTa v3. They have to be way bigger to be competitive with things like DeBERTa v3. And the only reason they are competitive is because people have put a lot more time and money and effort into training the decoder-only ones, you know? There isn't a recent DeBERTa. There isn't a recent BERT. Yeah, it's a whole part of the world that people have slept on a little bit. And this is just what happens. This is how trends happen, rather than like, to me, everybody should be like, oh, let's look at the thing that has shown signs of being useful in the past, but nobody really followed up with properly. That's the more interesting path, you know, where people tend to be like, oh, I need to get citations. So what's everybody else doing? Can I make it 0.1% better, you know, or 0.1% faster? That's what everybody tends to do. Yeah.
So I think it's like, Yi Tay's work commercially now is interesting because here's a whole model that's been trained in a different way. So there's probably a whole lot of tasks it's probably better at than GPT and Gemini and Claude. So that should be a good commercial opportunity for them if they can figure out what those tasks are.Swyx [00:29:07]: Well, if rumors are to be believed, and he didn't comment on this, but, you know, Snowflake may figure out the commercialization for them. So we'll see.Jeremy [00:29:14]: Good.Alessio [00:29:16]: Let's talk about FSDP, QLoRA, QDoRA, and all of that awesome stuff. One of the things we talked about last time, some of these models are meant to run on systems that nobody can really own, no single person. And then you were like, well, what if you could fine-tune a 70B model on like a 4090? And I was like, no, that sounds great, Jeremy, but like, can we actually do it? And then obviously you all figured it out. Can you maybe tell us some of the war stories behind that, like the idea behind FSDP, which is kind of sharded data parallel computation, and then QLoRA, which is: do not touch all the weights, just quantize some of the model, and then within the quantized model only do certain layers instead of doing everything.Jeremy [00:29:57]: Well, do the adapters. Yeah.Alessio [00:29:59]: Yeah. Yeah. Do the adapters. Yeah. I will leave the floor to you. I think before you published it, nobody thought this was like a short-term thing that we're just going to have. And now it's like, oh, obviously you can do it, but it's not that easy.Jeremy [00:30:12]: Yeah. I mean, to be honest, it was extremely unpleasant work to do. It's like not at all enjoyable. I kind of did version 0.1 of it myself before we had launched the company, or at least the kind of like the pieces. They're all pieces that are difficult to work with, right?
So for the quantization, you know, I chatted to Tim Dettmers quite a bit and, you know, he very much encouraged me by saying like, yeah, it's possible. He actually thought it'd be easy. It probably would be easy for him, but I'm not Tim Dettmers. And, you know, so he wrote bitsandbytes, which is his quantization library. You know, he wrote that for a paper. He didn't write that to be production-like code. It's now like everybody's using it, at least the CUDA bits. So like, it's not particularly well structured. There's lots of code paths that never get used. There's multiple versions of the same thing. You have to try to figure it out. So trying to get my head around that was hard. And you know, because the interesting bits are all written in CUDA, it's hard to, like, step through it and see what's happening. And then, you know, FSDP is this very complicated library in PyTorch, which is not particularly well documented. So the only real way to understand it properly is, again, just read the code and step through the code. And then, like, bitsandbytes doesn't really work in practice unless it's used with PEFT, the HuggingFace library, and PEFT doesn't really work in practice unless you use it with other things. And there's a lot of coupling in the HuggingFace ecosystem where like none of it works separately. You have to use it all together, which I don't love. So yeah, trying to just get a minimal example that I can play with was really hard. And so I ended up having to rewrite a lot of it myself to kind of create this like minimal script. One thing that helped a lot was Meta had this llama-recipes repo that came out just a little bit before I started working on that. And like they had a kind of role model example of like, here's how to train FSDP LoRA (it didn't work with QLoRA) on Llama.
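The quantization idea at the heart of bitsandbytes can be caricatured in a few lines of pure Python. This is a toy absmax scheme for illustration only; the real library uses block-wise 4-bit formats (like NF4) with CUDA kernels:

```python
# Toy absmax quantization: scale floats into a small signed-integer range,
# then reconstruct them. Real 4-bit schemes are block-wise and far more
# sophisticated; this just shows the shape of the idea.

def quantize_absmax(weights, bits=4):
    levels = 2 ** (bits - 1) - 1                   # 7 levels each side for 4-bit
    scale = max(abs(w) for w in weights) / levels  # absmax determines the scale
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    return [q * scale for q in quants]

w = [0.21, -0.83, 0.05, 0.47]
q, scale = quantize_absmax(w)
w_hat = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

Rounding error is bounded by half the scale, which is also why outlier weights (which inflate the scale) hurt quantization quality.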
A lot of the stuff I discovered, the interesting stuff, would be put together by Les Wright, who was actually the guy in the fast.ai community I mentioned who created the Ranger optimizer. So he's doing a lot of great stuff at Meta now. So yeah, I kind of, that helped get some minimum stuff going and then it was great once Benjamin and Jono joined full time. And so we basically hacked at that together and then Karim joined like a month later or something. And it was like, gee, it was just a lot of like fiddly, detailed engineering on like barely documented bits of obscure internals. So my focus was to see if it kind of could work and I kind of got a bit of a proof of concept working and then the rest of the guys actually did all the work to make it work properly. And, you know, every time we thought we had something, you know, we needed to have good benchmarks, right? So we'd like, it's very easy to convince yourself you've done the work when you haven't, you know, so then we'd actually try lots of things and be like, oh, and in these like really important cases, the memory use is higher, you know, or it's actually slower. And we'd go in and we'd just find like all these things that were nothing to do with our library that just didn't work properly. And nobody had noticed they hadn't worked properly because nobody had really benchmarked it properly. So we ended up, you know, trying to fix a whole lot of different things. And even as we did so, new regressions were appearing in like transformers and stuff that Benjamin then had to go away and figure out, like, oh, how come flash attention doesn't work in this version of transformers anymore with this set of models and like, oh, it turns out they accidentally changed this thing, so it doesn't work. You know, there's just, there's not a lot of really good performance-type evals going on in the open source ecosystem.
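A minimal version of the "benchmark it properly" discipline described here, in pure Python: measure wall time and peak allocation together, since an "optimization" can easily win on one axis and regress the other. (Illustrative only; the actual FSDP/QLoRA work measured GPU memory and throughput, not host allocations.)

```python
import time
import tracemalloc

def bench(fn, *args):
    """Return (seconds, peak bytes allocated) for one call of fn."""
    tracemalloc.start()
    t0 = time.perf_counter()
    fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

def eager_sum(n):   # materializes the whole list before summing
    return sum([i * i for i in range(n)])

def lazy_sum(n):    # streams values through a generator
    return sum(i * i for i in range(n))

t_eager, m_eager = bench(eager_sum, 200_000)
t_lazy, m_lazy = bench(lazy_sum, 200_000)
```

The two functions return identical results, but only a harness that also tracks peak memory reveals the difference between them, which is exactly the kind of regression that goes unnoticed without real benchmarks.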
So there's an extraordinary amount of like things where people say like, oh, we built this thing and it has this result. And when you actually check it, so yeah, there's a shitload of war stories from getting that thing to work. And it did require a particularly like tenacious group of people and a group of people who don't mind doing a whole lot of kind of like really janitorial work, to be honest, to get the details right, to check them. Yeah.Alessio [00:34:09]: We had Tri Dao on the podcast and we talked about how a lot of it is like systems work to make some of these things work. It's not just like beautiful, pure math that you do on a blackboard. It's like, how do you get into the nitty-gritty?Jeremy [00:34:22]: I mean, flash attention is a great example of that. Like it's, it basically is just like, oh, let's just take the attention and just do the tiled version of it, which sounds simple enough, you know, but then implementing that is challenging at lots of levels.Alessio [00:34:36]: Yeah. What about inference? You know, obviously you've done all this amazing work on fine-tuning. Do you have any research you've been doing on the inference side, how to make local inference really fast on these models too?Jeremy [00:34:47]: We're doing quite a bit on that at the moment. We haven't released too much there yet. But one of the things I've been trying to do is also just to help other people. And one of the nice things that's happened is that a couple of folks at Meta, including Mark Saroufim, have done a nice job of creating this CUDA mode community of people working on like CUDA kernels or learning about that. And I tried to help get that going well as well and did some lessons to help people get into it. So there's a lot going on in both inference and fine-tuning performance. And a lot of it's actually happening kind of related to that. So the PyTorch team have created this torchao project on quantization.
And so there's a big overlap now between kind of the fast.ai and Answer.AI and CUDA mode communities of people working on stuff for both inference and fine-tuning. But we're getting close now. You know, our goal is that nobody should be merging models, nobody should be downloading merged models, everybody should be using basically quantized plus adapters for almost everything and just downloading the adapters. And that should be much faster. So that's kind of the place we're trying to get to. It's difficult, you know, because like Karim's been doing a lot of work with vLLM, for example. These inference engines are pretty complex bits of code. They have a whole lot of custom kernel stuff going on as well, as do the quantization libraries. So we've been working on, we're also collaborating quite a bit with the folks who do HQQ, which is a really great quantization library and works super well. So yeah, there's a lot of other people outside Answer.AI that we're working with a lot who are really helping on all this performance optimization stuff, open source.Swyx [00:36:27]: Just to follow up on merging models, I picked up there that you said nobody should be merging models. That's interesting because obviously a lot of people are experimenting with this and finding interesting results. I would say in defense of merging models, you can do it without data. That's probably the only thing that's going for it.Jeremy [00:36:45]: To explain, it's not that you shouldn't merge models. You shouldn't be distributing a merged model. You should distribute a merged adapter 99% of the time. And actually often one of the best things happening in the model merging world is actually that often merging adapters works better anyway. The point is, Sean, that once you've got your new model, if you distribute it as an adapter that sits on top of a quantized model that somebody's already downloaded, then it's a much smaller download for them.
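The download-size argument is easy to make concrete with back-of-envelope arithmetic (all numbers rough; a LoRA adapter's actual size depends on its rank and which layers it targets):

```python
# Rough download sizes for a 70B-parameter model, in GiB.
GIB = 1024 ** 3
params = 70e9

merged_fp16 = params * 2 / GIB   # 2 bytes/weight: what a merged model costs
base_4bit = params * 0.5 / GIB   # ~4 bits/weight: the quantized base, downloaded once
adapter = 0.2                    # a typical LoRA adapter: a fraction of a GiB

first_finetune = base_4bit + adapter  # pay for the quantized base once...
each_additional = adapter             # ...then every new fine-tune is tiny
```

So every fine-tune after the first costs a few hundred MiB instead of well over a hundred GiB, which is the point being made here.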
And also the inference should be much faster because you're not having to transfer FP16 weights from HBM memory at all or ever load them off disk. You know, all the main weights are quantized and the only floating point weights are in the adapters. So that should make both inference and fine-tuning faster. Okay, perfect.Swyx [00:37:33]: We're moving on a little bit to the rest of the fast universe. I would have thought that, you know, once you started Answer.ai, that the sort of fast universe would be kind of on hold. And then today you just dropped fastlite and it looks like, you know, there's more activity going on in sort of Fastland.Jeremy [00:37:49]: Yeah. So Fastland and Answerland are not really distinct things. Answerland is kind of like the Fastland grown up and funded. They both have the same mission, which is to maximize the societal benefit of AI broadly. We want to create thousands of commercially successful products at Answer.ai. And we want to do that with like 12 people. So that means we need a pretty efficient stack, you know, like quite a few orders of magnitude more efficient, not just for creation, but for deployment and maintenance than anything that currently exists. People often forget about the D part of our R&D firm. So we've got to be extremely good at creating, deploying and maintaining applications, not just models. Much to my horror, the story around creating web applications is much worse now than it was 10 or 15 years ago in terms of, if I say to a data scientist, here's how to create and deploy a web application, you know, either you have to learn JavaScript or TypeScript and about all the complex libraries like React and stuff, and all the complex like details around security and web protocol stuff around how you then talk to a backend and then all the details about creating the backend. You know, if that's your job and, you know, you have specialists who work in just one of those areas, it is possible for that to all work.
But compared to like, oh, write a PHP script and put it in the home directory that you get when you sign up to this shell provider, which is what it was like in the nineties, you know, here are those 25 lines of code and you're done and now you can pass that URL around to all your friends, or put this, you know, .pl file inside the CGI bin directory that you got when you signed up to this web host. So yeah, the thing I've been mainly working on the last few weeks is fixing all that. And I think I fixed it. I don't know if this is an announcement, but I tell you guys, so yeah, there's this thing called FastHTML, which basically lets you create a complete web application in a single Python file. Unlike excellent projects like Streamlit and Gradio, you're not working on top of a highly abstracted thing that's got nothing to do with web foundations. You're working with web foundations directly, but you're able to do it by using pure Python. There's no templating, there's no Jinja, there's no separate like CSS and JavaScript files. It looks and behaves like a modern SPA web application. And you can create components for like DaisyUI, or Bootstrap, or Shoelace, or whatever fancy JavaScript and/or CSS, Tailwind, etc. library you like, but you can write it all in Python. You can pip install somebody else's set of components and use them entirely from Python. You can develop and prototype it all in a Jupyter notebook if you want to. It all displays correctly, so you can like interactively do that. And then you mentioned fastlite, so specifically now if you're using SQLite in particular, it's like ridiculously easy to have that persistence, and all of your handlers will be passed database-ready objects automatically, that you can just call .delete, .update, .insert on. Yeah, you get sessions, you get security, you get all that. So again, like with most everything I do, it's very little code. It's mainly tying together really cool stuff that other people have written.
You don't have to use it, but a lot of the best stuff comes from its incorporation of HTMX, which to me is basically the thing that changes your browser to make it work the way it always should have. So it just does four small things, but those four small things are the things that are basically unnecessary constraints that HTML should never have had, so it removes the constraints. It sits on top of Starlette, which is a very nice kind of lower-level platform for building these kind of web applications. The actual interface matches as closely as possible to FastAPI, which is a really nice system for creating the kind of classic JSON API type applications. And Sebastian, who wrote FastAPI, has been kind enough to help me think through some of these design decisions, and so forth. I mean, everybody involved has been super helpful. Actually, I chatted to Carson, who created HTMX, you know, about it. Some of the folks involved in Django, like everybody in the community I've spoken to definitely realizes there's a big gap to be filled around, like, a highly scalable, web foundation-based, pure Python framework with a minimum of fuss. So yeah, I'm getting a lot of support and trying to make sure that FastHTML works well for people.Swyx [00:42:38]: I would say, when I heard about this, I texted Alessio. I think this is going to be pretty huge. People consider Streamlit and Gradio to be the state of the art, but I think there's so much to improve, and having what you call web foundations and web fundamentals at the core of it, I think, would be really helpful.Jeremy [00:42:54]: I mean, it's based on 25 years of thinking and work for me. So like, FastMail was built on a system much like this one, but that was Perl. And so I spent, you know, 10 years working on that. We had millions of people using that every day, really pushing it hard. And I really always enjoyed working in that. Yeah.
So, you know, and obviously lots of other people have done like great stuff, and particularly HTMX. So I've been thinking about like, yeah, how do I pull together the best of the web framework I created for FastMail with HTMX? There's also things like Pico CSS, which is the CSS system which, by default, FastHTML comes with. Although, as I say, you can pip install anything you want to, but it makes it like super easy to, you know, so we try to make it so that just out of the box, you don't have any choices to make. Yeah. You can make choices, but for most people, you just, you know, it's like the PHP-in-your-home-directory thing. You just start typing and just by default, you'll get something which looks and feels, you know, pretty okay. And if you want to then write a version of Gradio or Streamlit on top of that, you totally can. And then the nice thing is if you then write it in kind of the Gradio equivalent, which will be, you know, I imagine we'll create some kind of pip-installable thing for that. Once you've outgrown, or if you outgrow that, it's not like, okay, throw that all away and start again in this, like, whole separate language. It's this kind of smooth, gentle path that you can take step-by-step, because it's all just standard web foundations all the way, you know.Swyx [00:44:29]: Just to wrap up the sort of open source work that you're doing, you're aiming to create thousands of projects with a very, very small team. I haven't heard you mention once AI agents or AI developer tooling or AI code maintenance. I know you're very productive, but you know, what is the role of AI in your own work?Jeremy [00:44:47]: So I'm making something. I'm not sure how much I want to say just yet.Swyx [00:44:52]: Give us a nibble.Jeremy [00:44:53]: All right. I'll give you the key thing. So I've created a new approach. It's not called prompt engineering. It's called dialogue engineering. But I'm creating a system for doing dialogue engineering.
It's currently called AI Magic. I'm doing most of my work in this system and it's making me much more productive than I was before I used it. So I always just build stuff for myself and hope that it'll be useful for somebody else. Think about ChatGPT with Code Interpreter, right? The basic UX is the same as a 1970s teletype, right? So if you wrote APL on a teletype in the 1970s, you typed onto a thing, your words appeared at the bottom of a sheet of paper and you'd like hit enter and it would scroll up. And then the answer from APL would be printed out, scroll up, and then you would type the next thing. And like, which is also the way, for example, a shell works, like bash or zsh or whatever. It's not terrible, you know, like we all get a lot done in these like very, very basic teletype-style REPL environments, but I've never felt like it's optimal and everybody else has just copied ChatGPT. So it's also the way Bard and Gemini work. It's also the way the Claude web app works. And then you add Code Interpreter. And the most you can do is to like plead with ChatGPT to write the kind of code I want. It's pretty good for very, very, very beginner users who like can't code at all, like by default now the code's even hidden away, so you never even have to see it ever happened. But for somebody who's like wanting to learn to code or who already knows a bit of code or whatever, it seems really not ideal. So okay, that's one end of the spectrum. The other end of the spectrum, which is where Sean's work comes in, is, oh, you want to do more than ChatGPT? No worries. Here is Visual Studio Code. I run it. There's an empty screen with a flashing cursor. Okay, start coding, you know, and it's like, okay, you can use systems like Sean's or like Cursor or whatever to be like, okay, Cmd-K in Cursor, like, creates a form that blah, blah, blah.
But in the end, it's like a convenience over the top of this incredibly complicated system that full-time sophisticated software engineers have designed over the past few decades in a totally different environment as a way to build software, you know. And so we're trying to like shoehorn in AI into that. And it's not easy to do. And I think there are like much better ways of thinking about the craft of software development in a language model world to be much more interactive, you know. So the thing that I'm building is neither of those things. It's something between the two. And it's built around this idea of crafting a dialogue, you know, where the outcome of the dialogue is the artifacts that you want, whether it be a piece of analysis or whether it be a Python library or whether it be a technical blog post or whatever. So as part of building that, I've created something called Claudette, which is a library for Claude. I've created something called Cosette, which is a library for OpenAI. They're libraries which are designed to make those APIs much more usable, much easier to use, much more concise. And then I've written AI magic on top of those. And that's been an interesting exercise because I did Claudette first, and I was looking at what Simon Willison did with his fantastic LLM library. And his library is designed around like, let's make something that supports all the LLM inference engines and commercial providers. I thought, okay, what if I did something different, which is like make something that's as Claude friendly as possible and forget everything else. So that's what Claudette was. So for example, one of the really nice things in Claude is prefill. So by telling the assistant that this is what your response started with, there's a lot of powerful things you can take advantage of. So yeah, I created Claudette to be as Claude friendly as possible. 
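The prefill feature mentioned here is the Anthropic Messages API convention that a conversation ending with a partial assistant turn makes Claude continue from that exact text. A sketch of the request shape in plain dicts (claudette wraps this ergonomically; the prompt strings are made up):

```python
# Prefill sketch: end the message list with a partial *assistant* message.
# Claude then continues from that text, e.g. forcing output to start as JSON.
def with_prefill(user_prompt, prefill):
    return [
        {"role": "user", "content": user_prompt},
        {"role": "assistant", "content": prefill},  # the model completes this turn
    ]

messages = with_prefill("List three colors as a JSON array.", '["')
```

Because the model must continue from `["`, its reply is constrained to begin as a JSON array, which is the kind of "powerful thing" prefill enables.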
And then after I did that, and then particularly with GPT-4o coming out, I kind of thought, okay, now let's create something that's as OpenAI-friendly as possible. And then I tried to look to see, well, where are the similarities and where are the differences? And now can I make them compatible in places where it makes sense for them to be compatible without losing out on the things that make each one special for what they are. So yeah, those are some of the things I've been working on in that space. And I'm thinking we might launch AI Magic via a course called How to Solve It with Code. The name is based on the classic Pólya book, How to Solve It, which is, you know, one of the classic math books of all time, where we're basically going to try to show people how to solve challenging problems that they didn't think they could solve without doing a full computer science course, by taking advantage of a bit of AI and a bit of like practical skills, particularly for this like whole generation of people who are learning to code with and because of ChatGPT. Like I love it, I know a lot of people who didn't really know how to code, but they've created things because they use ChatGPT, but they don't really know how to maintain them or fix them or add things to them that ChatGPT can't do, because they don't really know how to code. And so this course will be designed to show you how you can like either become a developer who can like supercharge their capabilities by using language models, or become a language-model-first developer who can supercharge their capabilities by understanding a bit about process and fundamentals.Alessio [00:50:19]: Nice. That's a great spoiler. You know, I guess the fourth time you're going to be on Latent Space, we're going to talk about AI Magic. Jeremy, before we wrap, this was just a great run through everything.
What are the things that when you next come on the podcast in nine, 12 months, we're going to be like, man, Jeremy was like really ahead of it. Like, is there anything that you see in the space that maybe people are not talking about enough? You know, what's the next company that's going to fall, like have drama internally, anything in your mind?Jeremy [00:50:47]: You know, hopefully we'll be talking a lot about FastHTML and hopefully the international community that at that point has come up around that. And also about AI Magic and about dialogue engineering. Hopefully dialogue engineering catches on because I think it's the right way to think about a lot of this stuff. What else? Just trying to think about all on the research side. Yeah. I think, you know, I mean, we've talked about a lot of it. Like I think encoder-decoder architectures, encoder-only architectures, hopefully we'll be talking about like the whole re-interest in BERT that BERT24 stimulated.Swyx [00:51:17]: There's a state space model that came out today that might be interesting for this general discussion. One thing that stood out to me with Cartesia's blog post was that they were talking about real-time ingestion, billions and trillions of tokens, and keeping that context, obviously in the state space that they have.Jeremy [00:51:34]: Yeah.Swyx [00:51:35]: I'm wondering what your thoughts are because you've been entirely transformers the whole time.Jeremy [00:51:38]: Yeah. No. So obviously my background is RNNs and LSTMs. Of course. And I'm still a believer in the idea that state is something you can update, you know? So obviously Sepp Hochreiter came out with xLSTM recently. Oh my God. Okay. Another whole thing we haven't talked about, just somewhat related. I've been going crazy for like a long time about like, why can I not pay anybody to save my KV cache? I just ingested the Great Gatsby or the documentation for Starlette or whatever, you know, I'm sending it as my prompt context.
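One concrete instantiation of this "pay somebody to save my KV cache" wish is Anthropic's prompt caching API, where a large stable prefix is marked with a cache_control block so the provider can reuse its KV cache across requests instead of re-ingesting it every call. A sketch of the documented request shape (field contents are illustrative):

```python
# Prompt-caching sketch: flag a large, stable context block (e.g. a
# library's full docs) with cache_control so the provider can reuse the
# KV cache for that prefix across requests.
def cached_docs_request(docs_text, question):
    return {
        "system": [
            {
                "type": "text",
                "text": docs_text,
                "cache_control": {"type": "ephemeral"},  # cache this prefix
            }
        ],
        "messages": [{"role": "user", "content": question}],
    }

req = cached_docs_request("...full library docs would go here...", "How do I define a route?")
```

Subsequent calls that share the flagged prefix then only pay to process the new user question, which is exactly the "upload the docs once" workflow described here.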
Why are you redoing it every time? So Gemini is about to finally come out with KV caching, and this is something that Austin actually in Gemma.cpp had had on his roadmap for years, well not years, months, a long time. The idea is that the KV cache is like a third thing, right? So there's RAG, you know, there's in-context learning, you know, and prompt engineering, and there's KV cache creation. I think it creates like a whole new class almost of applications or techniques where, you know, for me, for example, I very often work with really new libraries or I've created my own library that I'm now writing with rather than on. So I want all the docs in my new library to be there all the time. So I want to upload them once, and then we have a whole discussion about building this application using FastHTML. Well, nobody's got FastHTML in their language model yet, I don't want to send all the FastHTML docs across every time. So one of the things I'm looking at doing in AI Magic actually is taking advantage of some of these ideas so that you can have the documentation of the libraries you're working on be kind of always available. Something people will be spending time thinking about over the next 12 months is, like, where to use RAG, where to use fine-tuning, where to use KV cache storage, you know. And how to use state, because in state space models and xLSTM, again, state is something you update. So how do we combine the best of all of these worlds?Alessio [00:53:46]: And Jeremy, I know before you talked about how some of the autoregressive models are not maybe a great fit for agents. Any other thoughts on like JEPA, diffusion for text, any interesting thing that you've seen pop up?Jeremy [00:53:58]: In the same way that we probably ought to have state that you can update, i.e.
xLSTM and state space models, in the same way that a lot of things probably should have an encoder, JEPA and diffusion both seem like the right conceptual mapping for a lot of things we probably want to do. So the idea of like, there should be a piece of the generative pipeline, which is like thinking about the answer and coming up with a sketch of what the answer looks like before you start outputting tokens. That's where it kind of feels like diffusion ought to fit, you know. And diffusion is, because it's not autoregressive, it's like, let's try to like gradually de-blur the picture of how to solve this. So this is also where dialogue engineering fits in, by the way. So with dialogue engineering, one of the reasons it's working so well for me is I use it to kind of like craft the thought process before I generate the code, you know. So yeah, there's a lot of different pieces here and I don't know how they'll all kind of exactly fit together. I don't know if JEPA is going to actually end up working in the text world. I don't know if diffusion will end up working in the text world, but they seem to be trying to solve a class of problem which is currently unsolved.Alessio [00:55:13]: Awesome, Jeremy. This was great, as usual. Thanks again for coming back on the pod and thank you all for listening. Yeah, that was fantastic.
Has admitted, frankly, that abortion is murder, the extermination of the powerless by the powerful. Liberals, for the most part, have shrunk from facing the ethical consequences of their embrace of abortion, which results in the annihilation of concrete individuals… and not just clumps of insensate tissue. We must acknowledge that people should be a little troubled by abortion. The procedure snuffs out a potential personality. Now, Polya has spoken on college campuses all over the United States.
In our recent podcast episode, we had the pleasure of hosting Jonathan Lind again. Jonathan is an experienced high school math teacher currently teaching at an American school in Qatar. He shared invaluable insights and practical strategies on how he transformed the assessment practices in his math class, leading to remarkable outcomes for his students. During our conversation, Jonathan delved into the following key ideas: shifting from labeling students to using assessment for growth; saving time on curriculum coverage with standards-based grading; promoting student success through growth and proficiency days; assessing Polya problem-solving techniques effectively; and designing a grading system to inspire student achievement. This is a Math Moment Maker Reflection episode, where we talk with a member of our fantastic community who is working hard to keep reflecting on and refining their practice to Make Math Moments with more students in their math classroom. You'll learn: how shifting from using assessments to label students to using assessment for growth can be a game changer in math class; how standards-based grading can save you time when trying to cover your curriculum; why growth and proficiency days and smaller but more frequent assessments can help more students achieve at higher levels in your math class; how the Polya problem-solving model should be used and assessed in a math class; and how to design your grading system to inspire students to achieve their fullest potential. Resources: Assessment For Growth [Course]; Make Math Moments Problem-Based Lessons. Learn more about Jon Lind at https://LindJonath.com and find him on Twitter @LindJonath. Are you a district mathematics leader interested in crafting a mathematics professional learning plan that will transform your district mathematics program forever?
Book a time to chat with our team! Learn our proven 3-part framework for building lessons that are easy to plan and fun to deliver, that kids will not only love but also learn from, regardless of their level of readiness. Register for our winter cohort now! https://makemathmoments.com/onlineworkshop If you're looking to explore other ways to build and strengthen your own wealth, then each week you'll hear tips, strategies, and options to increase your personal wealth. Listen and subscribe here. Get a customized math improvement plan for your district. Are you a district leader for mathematics? Take the 12-minute assessment and you'll get a free, customized improvement plan to shape and grow the 6 parts of any strong mathematics program. Take the assessment
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.06.511227v1?rss=1 Authors: Faulkner, G. J. Abstract: A recent study (Takahashi et al., Neuron, 2022) concluded LINE-1 (L1) retrotransposon activation drives cerebellar ataxia and neurodegeneration. This position was based on L1 upregulation in ataxia telangiectasia (AT) patient cerebellum samples, as measured by RNA-seq, and observation of ataxia and neurodegeneration in mice where cerebellar L1 expression was induced via dCas9-CRISPR. Here, a re-analysis of the RNA-seq data, which were obtained by rRNA depletion rather than polyA+ selection, revealed a high fraction (38.4%) of intronic reads. Significantly (p=0.034) more intronic reads were present in the AT data than the matched controls. This finding provides an alternative and robust explanation for a key result reported by Takahashi et al.: intronic L1 sequences are abundant in pre-mRNAs, and more pre-mRNAs were retained in the AT libraries. This apparent batch effect deserves further examination, as claims of L1-mediated pathogenesis could shape future efforts to treat AT by trying to attenuate L1 activity. Copyright belongs to the original authors. Visit the link for more info Podcast created by PaperPlayer
The learning we've done and the lies we were told. Sen. Rand Paul stands up. Control the opposition and you will win the war. Forfeiting freedoms year by year. The sharpest weapons are the non-violent ones. WWG1WGA is made for now. The plan was always you. Hailstone numbers, Benford's Law, the Polya conjecture, Python shapes and collapsing timelines. Math is life. Doctors speak out when the people give them strength. Are all our leaders actors? Mars is the past, Venus the future. Be ready for anything and always retain your situational awareness. Learn more about your ad choices. Visit megaphone.fm/adchoices
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.09.290411v1?rss=1 Authors: Jones, D. C., Ruzzo, W. L. Abstract: The analysis of mRNA transcript abundance with RNA-Seq is a central tool in molecular biology research, but analyses often fail to account for the uncertainty in these estimates, which can be significant, especially when trying to disentangle isoforms or duplicated genes. Preserving uncertainty necessitates a full probabilistic model of all the sequencing reads, which quickly becomes intractable, as experiments can consist of billions of reads. To overcome these limitations, we propose a new method of approximating the likelihood function of a sparse mixture model, using a technique we call the Polya tree transformation. We demonstrate that substituting this approximation for the real thing achieves most of the benefits with a fraction of the computational costs, leading to more accurate detection of differential transcript expression. Copyright belongs to the original authors. Visit the link for more info
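The Polya tree transformation named in the abstract reparameterizes the probability simplex via a binary tree. The paper's actual method also fits the tree topology to the reads and places distributions over the branch proportions, none of which is shown here; the sketch below is only the core reparameterization idea, and the function name and breadth-first ordering are my own choices:

```python
import numpy as np

def polya_tree_transform(branch_probs):
    """Turn per-node 'go left' proportions on a perfect binary tree into
    leaf probabilities: each leaf's probability is the product of the
    branch proportions along its root-to-leaf path. branch_probs is in
    breadth-first order; a tree with n leaves (n a power of two) needs
    n - 1 values."""
    probs = np.array([1.0])
    i = 0
    while len(probs) < len(branch_probs) + 1:
        level = []
        for p in probs:
            b = branch_probs[i]
            i += 1
            level.extend([p * b, p * (1 - b)])  # split mass left/right
        probs = np.array(level)
    return probs
```

Because every split conserves mass, the output always sums to one, and pushing a single branch proportion toward 0 or 1 zeroes out an entire subtree at once, which is what makes a tree parameterization congenial to sparse mixtures.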
Get all links mentioned in the episode here: bit.ly/185-geordiewilliamson
Skip through the episode:
00:28 - Welcome to Uncommon
01:09 - Guest introduction
01:26 - Getting into climbing
05:58 - Growing up differently
08:58 - Childhood aspirations & nostalgia
11:29 - Childhood principles
12:47 - Creativity in science and maths
16:56 - Why Galois Theory?
20:13 - Explaining the breadth of maths
23:52 - Proving The Lusztig Conjecture in the shower
30:36 - Polya & changing perspectives
35:52 - Inspirations
40:17 - Tackling maths communication
43:09 - Choosing to stay in Australia
48:07 - Collaboration in a pandemic
52:11 - A mathematician's COVID-19 forecast
57:14 - Maths & science: the communication issue
01:00:25 - Career highs & lows
01:03:41 - Go-to lockdown food
01:04:03 - Best purchase under $200
01:04:22 - Show recommendation
01:05:23 - Book recommendation
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.23.262055v1?rss=1 Authors: Alnasir, D. J. J., Shanahan, H. P. Abstract: Given the wide variability in the quality of NGS data submitted to public repositories, it is essential to identify methods that can perform quality control on these datasets when additional quality control data, such as mean tile data, is missing. This is particularly important because such datasets are routinely deposited in public archives that now store data at an unprecedented scale. In this paper, we show that correlating counts of reads corresponding to pairs of motifs separated by specific distances on individual exons tracks the mean tile data in the datasets we analysed, and can therefore be used when mean tile data is not available. As test datasets we use the H. sapiens IVT (in-vitro transcribed) dataset of Lahens et al., and a D. melanogaster dataset comprising wild and mutant types from Aerts et al. The intra-exon motif correlations as a function of both GC content parameters are much higher in the IVT-Plasmids mRNA-selection-free RNA-Seq sample (control) than in the other RNA-Seq samples that did undergo mRNA selection: both ribosomal depletion (IVT-Only) and polyA selection (IVT-polyA, wild-type, and mutant). There is considerable degradation of similar correlations in the mutant samples from the D. melanogaster dataset. This matches the available mean tile data that has been gathered for these datasets. We observe that extremely low correlations are indicative of bias of technical origin, such as flowcell errors. Copyright belongs to the original authors. Visit the link for more info
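The abstract's statistic — counts of motif pairs at fixed separations within exons, correlated between samples — is not specified in detail here, so the following is only a guessed-at stand-in: a naive co-occurrence counter plus Pearson correlation via `np.corrcoef`. The function names and the exact counting convention (motif B starting exactly `distance` bases after motif A starts) are assumptions:

```python
import numpy as np

def motif_pair_count(seq, motif_a, motif_b, distance):
    """Count positions where motif_a starts at i and motif_b starts
    exactly `distance` bases downstream, at i + distance."""
    count = 0
    last = len(seq) - distance - len(motif_b)
    for i in range(last + 1):
        if (seq[i:i + len(motif_a)] == motif_a
                and seq[i + distance:i + distance + len(motif_b)] == motif_b):
            count += 1
    return count

def exon_profile_correlation(exons_a, exons_b, motif_a, motif_b, distance):
    """Pearson correlation between the per-exon pair counts of two samples."""
    x = [motif_pair_count(e, motif_a, motif_b, distance) for e in exons_a]
    y = [motif_pair_count(e, motif_a, motif_b, distance) for e in exons_b]
    return np.corrcoef(x, y)[0, 1]
```

In this framing, a high-quality replicate pair should give correlations near 1, and a sharp drop flags technical bias, which mirrors the abstract's use of "extremely low correlations" as a quality signal.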
According to Polya (1978, p. 65), "solving problems is a practical skill, like swimming, skiing, or playing the piano: you can learn it through imitation and practice. (...) If you want to learn to swim you have to get into the water, and if you want to become a good problem solver, you have to solve problems." Pólya's method, built on the Cartesian method (of René Descartes), lays out four steps for solving problems. For computational problems, abstraction is a key mechanism, since it lets us simplify reality and represent the most relevant aspects of a problem and its solution. During the classes, specific techniques will be presented that make it possible to develop algorithmic solutions to a problem more efficiently and easily. Skills and competencies: by the end of the week, you should be able to understand the concepts behind problem-solving methods and the main techniques for constructing algorithms: decomposition, successive refinement, modularization, parallelism, generalization, pattern recognition, and recursion. Challenge: The Tower of Hanoi is a puzzle popularized by the French mathematician Édouard Lucas in 1883. Playing it helps develop logical reasoning and problem-solving skills. The goal of the game is to move all the discs from one peg to another in the fewest possible moves, following these rules: move every disc to the last peg with the help of the middle peg, never placing a larger disc on top of a smaller one during the transfer. The challenge is to compute the minimum number of moves needed to solve the Tower of Hanoi for any number of discs. To solve the challenge, students can use the open educational resource (REA) developed by Univesp, available at this link.
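The challenge above has a tidy recursive answer: to move n discs, move n-1 out of the way, move the largest, then move the n-1 back on top, so T(n) = 2*T(n-1) + 1 = 2**n - 1. A minimal Python sketch (the peg names A/B/C are arbitrary):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Recursively collect the moves that transfer n discs from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # park n-1 discs on the auxiliary peg
    moves.append((src, dst))             # move the largest disc
    hanoi(n - 1, aux, src, dst, moves)   # bring the n-1 discs back on top
    return moves

def min_moves(n):
    """Closed form of the recurrence T(n) = 2*T(n-1) + 1."""
    return 2 ** n - 1
```

For the classic 3-disc puzzle this produces the familiar 7-move solution, and the closed form answers the challenge for any disc count without enumerating moves.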
Lessons: In this video lesson we cover George Pólya's problem-solving method, problem solving according to Pozo, and the computational-thinking approach to problem solving, which highlights abstraction as an important mechanism in the solution process. In this video lesson the techniques for constructing algorithms are presented, among them: decomposition, successive refinement, modularization, parallelism, generalization, pattern recognition, and recursion. Supporting slides: Part 1, Part 2. Supporting texts: How to Solve It; Understanding computational thinking; Programming logic; Computing learning pathway; Recursion --- Send in a voice message: https://anchor.fm/aulas-univesp/message Support this podcast: https://anchor.fm/aulas-univesp/support
Yet another movie named "3" - will this one have some polyamory in it? Or will it be another cheating film? Joreth reviews the German film Drei, or 3, for polyamorous content.
3 couples struggle with the definitions of monogamy and fidelity, after some "insight" from Jason Alexander. Joreth reviews this film to see if any ethical non-monogamy could possibly come out of it at all.
There are so many movies called "Three"! Is this one that actually has polyamory in it? Is there really an FMF triad like on the cover? Are any of the characters polyamorous? Or is this just another cheating cautionary tale or threesome gone wrong story? Joreth reviews this particular "Three" to find out!
Can a movie with blockbuster names be a poly movie? Joreth reviews Bandits with Bruce Willis, Billy Bob Thornton, and Cate Blanchett to see if there is any polyamory in this star-studded film.
This is a Special Birthday episode where I want to give a shoutout and a big congrats to my good friend, student, partner and simply amazing woman Polya Rosin. Happy Birthday. Polya Rosin is the go-to relationship coach for parents, entrepreneurs and busy professionals. She is an expat, a mum of 3 kids and an entrepreneur. Polya has a bachelor's and a master's degree in science, teaches evidence-based decision making and is a Robbins-Madanes trained coach. Over the last 5 years, she has helped high-level executives, entrepreneurs, busy professionals and parents to build deep authentic connections, feel more appreciated in their relationships and have the confidence and freedom to express themselves. She uses strategic interventions, NLP, science and modern psychology. Polya coaches clients online as well as in person in Stockholm. You can connect with Polya on her social media: IG: https://www.instagram.com/polyarosin/ Facebook: https://www.facebook.com/PolyaRosinRelationshipsCoach/ And make sure to book a free consultation: Link for freebie - my calendar to book a free consultation, www.polyarosin.as.me ________________________________________ You know that I absolutely love visualisation, meditation and vision boards. This time I have created something very special for you future millionaires: a Million Dollar Vision roadmap. Download here for FREE and join my success tribe: http://bit.ly/milliondollarvision Last but not least, a message from your host Dijana Llugolli: If you loved this show, please follow, share and leave a comment; your support means the world to me. You can connect with me on Instagram, where I share behind-the-scenes stories about being a mom and serial entrepreneur.
I also invite you to check out my page Fearless & Successful Coaching and join me for some LIVE tips & tricks, and if you are a global kick-ass leader and female entrepreneur, I invite you to my Success Tribe on Facebook, Fearless and Successful Fempreneurs. For more inspiration and motivation you are welcome to sneak a peek at my Success Library and grab some of my favourite free and paid resources: Success Library. And if you also have a story to share and would love to make an impact, we would love to feature you too. You can apply here for the >>> Fearless & Successful Interview --- Send in a voice message: https://anchor.fm/fearlessandsuccessful/message
Autonomous Sensory Meridian Response (ASMR), sometimes Auto Sensory Meridian Response, is an experience characterized by a static-like or tingling sensation on the skin that typically begins on the scalp and moves down the back of the neck and upper spine. It has been compared with auditory-tactile synesthesia and may overlap with frisson. In this episode, Bobby & Jim let loose and sort of catch up on life. They also dive into many topics such as whispering, Hawaii weddings, drunk Indian food, beer, scooters, A-list gays and lisps, just to name a few. Bobby drops about 20 F-bombs, but sometimes that's how the cookie crumbles. Sit back and get ready to laugh. Cards Against Humanity phrase of the week: "I'm Miss Tennessee, and if I could make the world better by changing one thing, I would get rid of crying and shitting and eating spaghetti"
Opening
Charity
Inappropriate Catch Up
Riding scooters
Drunk Indian Restaurant
Whale Tales
Dick Pics
Florida Trash
Gay kickball
Confidence
A-list Gays
Taco Bell
Naps
Social Media Minute
Chelcie Lynn
Tammy
Funny Questions to answer
Whisper
ASMR (Autonomous Sensory Meridian Response)
Lesbian Jokes
We love Lesbians
Montreal Road Trip
Grant Vanderbilt
WEHO Mickeys Party Mondays
Talk about future episodes
Poly
A list wants to visit
Trans Episode coming
Coming out episode next week
Exit
TREVOR PROJECT DONATION https://fundly.com/she-s-not-doing-so-well-podcast-for-the-trevor-project
https://teespring.com/dont-call-me-honey?tsmac=store&tsmic=shes-not-doing-so-well&pid=2&cid=573
SOCIAL MEDIA SPOTLIGHT OF THE WEEK: Check out Chelcie Lynne @ https://www.instagram.com/chelcielynnn/
MERCH https://teespring.com/stores/shes-not-doing-so-well
OUR SOCIAL LINKS www.Shesnotdoingsowell.com Instagram @ shesnotdoingsowell
CALL US 669-207-4643
Can a movie set in the '60s and filmed in the '90s really feature a polyamorous quad? Joreth reviews The Blood Oranges for a little-seen poly structure to see if there is any polyamory in it at all. "Husband and wife Cyril and Fiona explore new ground and new relationships when they take a vacation in the tropics. While on holiday, the pair meets another couple, Hugh and Catherine, and their three children. Relationships become intertwined when Cyril and Fiona lose their inhibitions and seek sexual intimacy with Hugh and Catherine in this erotic drama." So Netflix says. It sounded pretty promising, and yeah, I think this fits under the "poly-ish" heading. Cyril and Fiona are clearly in an open marriage, with both of them openly supportive of each other's interests. Honestly, though, I was surprised to see that this movie was made in 1999. It just felt like another '60s sexual-revolution type of film, not least because of a slightly predatory personality from Fiona and a pseudo-sex-cult-leader attitude from Cyril, but also it just kind of looked like it - the cinematography and lack of a soundtrack, I think. Here's what I liked about the movie: an attempted quad instead of unicorn hunters looking for the hot bi babe; the newbie love interest struggling with deeply indoctrinated beliefs of fidelity & ownership; neither the polyamory nor the society around them being responsible for ending the relationships; how non-traditional parental relationships affect children old enough to have internalized society's messages about relationships; and a couple not letting their pre-existing relationship make the other relationships "secondary", doing what's best for the family instead of "protecting" their couplehood at all costs. Here's what I didn't like about the movie: the characters. I like serious dramas, but I'm really picky about them. I don't tend to like movies that I describe as "very French" - filled with unnecessary angst and smoking and existential ennui and desolation.
Unfortunately, in movies that explore alternative sexuality, if it's a drama and not a comedy or something uplifting, I too often find it's one of these types of dramas. Such was this movie for me. I didn't like the movie, but that's based solely on personal taste. One might say that I have no taste, since I'd rather be watching cheesy '80s sitcoms, so there you go. I'm extremely character-driven in my entertainment preferences and I just didn't like the characters. I found Cyril to be pompous, elitist, and blind to his own privilege, even if I happened to appreciate his understanding that possession should not be part of interpersonal relationships. I thought Fiona was selfish, predatory, and naively idealistic. Catherine, I just felt sorry for and wished she would grow a backbone. And Hugh! I have no idea why anyone liked Hugh. He was controlling, possessive, self-righteous, arrogant, dismissive, condescending, and filled with disgust. There is one scene in particular (that I won't describe so as to not give away spoilers) where he is such a hateful asshole that I immediately disliked every other character just because they overlooked Hugh's behaviour and attitudes. Even after he did something that I would have found unforgivable, it was everyone else's primary desire to make him feel better and keep him a part of the family. But they were trying to build a strong family, and for that, I have to give this movie credit ... or at least say that it's a poly-ish movie. Cyril and Fiona were not the typical movie couple, where the guy wants some hot chick & talks his wife into it. They both seemed equally enamored of the other couple & welcomed them and their children into their home. Cyril in particular tried very hard to reach out to the children and soothe the oldest, who noticed something going on and seemed resentful. 
Cyril and Fiona both did everything in their power to help Catherine during her own time of emotional crisis without putting their own relationship above everything else. So, I'd recommend this movie if dramas are your thing and you want to see a poly movie that doesn't end with polyamory destroying everyone's lives and, in fact, the polyamory is beneficial to providing an emotional support structure in difficult times. www.polyishmoviereviews.com
Special guest and dear friend Polya once sailed across the North Atlantic Ocean on a Viking-style ship. Now she's written a book about it based on her notes and drawings from on board. We talk about the voyage, and how the crew slept, ate, socialized, and went to the bathroom on an open wooden ship at sea. Plus a bit about sex ed and drug prevention in Russian schools. *11 days left* to pre-order Polya's book by contributing: https://www.indiegogo.com/projects/how-i-sailed-a-viking-ship-across-the-ocean#/
How well does this particular fan recommendation hold up to Joreth's poly critique? Sometimes I think that maybe I'm actually speaking a different language from everyone else, and maybe I have some kind of universal translator or babelfish so that I can't tell, but that the translator is buggy or slightly off in some ways. Because people don't seem to use words in the same way that I do. Even with a dictionary, people use words differently, and I find that I am constantly having semantics arguments because we can't discuss a topic until we are all on the same page about what the words we are using mean. One of those words is polyamory. I'm a pretty big proponent of using the definition of a word that the person who made up the word uses. In some cases, I think the Argument from Authority is a good one. If you invented or coined a term, then you get to decide what it means. This is even more important, to me, the younger the word is. And if the word was invented or coined within the same generation (i.e. roughly 30-ish years) and the coiners are still alive, then there shouldn't be any debate about "living languages" and so forth. So, to me, polyamory is about having or wanting multiple simultaneous romantic relationships in which all parties consent to the arrangement. That means that they all know about it and agree to it willingly, not grudgingly. If you don't say yes, it's not consent. If you are coerced, it's not consent. If someone uses their position of authority over you, it's not consent. If you are not aware of any other options, it's not consent. If you are not allowed the opportunity to back out, it's not consent. And so on. 
Polyamory is also, to me, more about building intentional families (even if some of those relatives are "extended" relatives) than about experiencing sexual encounters (also explicit in the definition - a word's definition is not necessarily limited solely to its literal translation; the intent and cultural context of a word is also taken into account). So when someone suggests a movie to me that they claim has polyamory in it, I am now highly dubious about that claim. I have been recommended all manner of cheating and swinging and other non-monogamous movies, but very rarely do I find actual polyamory in these films. Every so often, a cheating movie might make it into my Poly-ish Movie List because I believe from the context of the story that it would be polyamorous if not for the circumstances, like the era or culture, that prevent the characters from openly declaring their relationships that are, nonetheless, loving (like Same Time, Next Year) - I basically feel that the characters are poly but possibly trapped somewhen/somewhere that they can't express it properly. Many times, it's hard for me to really quantify why a particular borderline movie is poly and why this other one isn't. It usually boils down to tone, and a vague sense of "moralizing" that I may or may not get from the storytellers. This was the problem I had with The Unbearable Lightness of Being. I kept getting told that it was a poly movie, but there was just something wrong with its tone. Tomas is a philanderer who seems to be afraid of commitment and keeps his emotional entanglements to a minimum. Basically, he has sex with lots of women a few times and drops them when they start becoming "serious". Except for one woman, Sabina, who basically seems to have the same outlook as Tomas, in that she hightails it outta there as soon as a guy starts getting "serious" about her.
They appear to have a mutual respect in addition to their mutual attraction and mutual passion because of their shared interest in not letting anyone get close to them. Ironically, that barrier that they both erect to keep people out is what ties them together. Along comes Tereza, an innocent young girl who manages to, as far as I could tell, guilt her way into Tomas' life. She shows up on his doorstep with no place to stay, and so breaks his rule about kicking every girl out before morning. After a whole bunch of these mornings, he finally ends up marrying her. This is yet another case of a couple who doesn't seem to have anything in common and doesn't seem to like each other very much. At least, the director and/or screenwriter didn't establish their relationship very well. We know what Tomas likes in Tereza - she's female - but we don't really see what brings the two such different characters together. She's young, naive, innocent, apolitical, and extremely jealous and insecure. He's worldly, sophisticated, educated, misogynistic, contemptuous of most people, and a horndog. Other than the fact that their bits fit together, I couldn't understand their relationship at all. Tomas continues to cheat on Tereza throughout their relationship, and every time Tereza catches him at it, she throws a huge fit that borders on emotional blackmail. I think she's probably depressive to the point of suicidal. Not that I'm defending Tomas either - Tereza doesn't consent to an open relationship, so he's cheating. Period. She deserves better. There is only one scene that could even possibly be confused for a pro-poly scene. And I have to say that I didn't even interpret the scene this way until someone else suggested it. I still don't see the scene this way, but I can at least see how someone else might. Tereza suspects Tomas of having an affair with Sabina, who has been introduced to the new Mrs. Tomas as his friend & occasionally socializes with them. 
So Tereza, who is told to get into photographing naked women if she wants to be taken seriously as a professional photographer, approaches Sabina to be Tereza's first nude model. Sabina, a confident, sexually liberated woman in the '60s, is the only person Tereza knows who might even consider the proposal. So we have a scene where Tereza photographs Sabina, and eventually Sabina (who is also a photographer and artist) talks Tereza into posing nude for her in return. The two women, who have before been very awkward together, gain some sort of comfort and familiarity with each other through this mutual nude photography session. I didn't see how this was poly, really. The argument was made that it was basically two metamours who had finally reached out to each other and were able to get past the jealousy to see each other maybe as how their mutual partner could see them. The reason why I didn't interpret the scene this way is because Tereza had only suspected Sabina as being Tomas' lover (he never confirmed) and neither woman spoke of anything relationship-oriented at all. So maybe they did get past some of their jealousy and learned to see each other as people, and maybe this was a bonding, and even a learning moment for both of them. But it was still cheating and still a secret and Tereza still never approved of Tomas' philandering, and the two women never saw each other again on screen. This movie was not about a poly vee. This was a political commentary on the war in Europe and the Soviet invasion of Czechoslovakia, using the characters as vehicles for the commentary. The movie was brilliantly made, using real footage and photographs from the invasion itself, as chronicled by art students at the university at the time, and staging the characters on the sets to flip back and forth seamlessly between the real archival footage and the movie. This was the first and best comprehensive collection of the record of the invasion ever made.
This movie was based on the book by the same name, which is also widely touted as a brilliant piece of literature. It was critically acclaimed, although, like any book-based movie, many were disappointed with the conversion to film. So I recommend this movie if history and foreign films and high-brow media are your thing. I just didn't feel that it was particularly poly. ***SPOILERS*** Tomas and Tereza eventually settle down when Tereza convinces him to leave the city (and, hence, his ready supply of willing adulterers) and live in the country, and they seem to be happily monogamous for a time. So when a guy who can't remain sexually fidelitous is finally able to only by removing his access to other women, and when the couple is shown as finally happy when said other women are removed from the picture, I have a hard time accepting the badge of "polyamory" or even "poly-ish" that the movie has been given. It comes too close to "open relationships are a train-wreck and everyone is happier when they are monogamous" to me. Sabina does appear to have remained a close friend of Tomas, right up until the end, but even she was removed from his reach, and she had to love him from afar. She also proclaimed herself as "their closest friend", meaning a close friend to both Tomas and Tereza, but "close friend" from across the globe and not having seen or spoken to them in years is really tough for me to stretch into "poly". This is one of the few artsy-foreign films that I didn't dislike for being too artsy & foreign, and I'd like to read the book. I might have liked the movie better if I had just come across it on my own instead of having it recommended to me as a potential poly film, because I watched it through a filter of hopes and expectations of poly content. I will not be including this on the Poly-ish Movie List, but it was an interesting movie and I'm glad I saw it.
Can Spike Lee's inaugural film really be a poly movie as everyone claims? Joreth watches this groundbreaking movie to find out! There's something about student films and classic French movies that just do not work for me. Maybe it's the penchant for black and white even in a color era, or maybe it's the frequent complete lack of musical score or soundtrack, or maybe it's the excruciatingly slow pace and shitty acting, or maybe it's all those years I spent as a film student, forced to watch the painfully "artistic" films by my peers and dragged to pretentious indie art houses to see confusing avant garde movies. I don't know, whatever it is, they're just not my cuppa tea. And Spike Lee's debut movie fits squarely in the middle of that je ne sais quoi that makes my eyes glaze over. But you might have different tastes. She's Gotta Have It is another Netflix recommendation that I was expecting to be misleading at best. Plus, the black community, at least as it's portrayed in pop media, has never been sympathetic towards multiple partnerships, especially if it's the woman with the multiple partners. Nola is in love with 3 very different men. At first I thought it would be another cheating movie where the girl would eventually find The One (who, of course, was not one of the guys she was fucking, because sex is dirty, or something). But then I discovered that she was honest about her "friends", as she calls them, so I thought it was more like Cafe Au Lait, complete with detestable characters who didn't actually seem to like each other. It did feel a lot like a Brooklyn version of that movie - none of the guys liked each other, I didn't like any of them, and no one had any redeeming features to make me understand why she liked them or why they liked her. I kept waiting for her to get pregnant so they could have a Dysfunctionally Ever After ending. But then I noticed something. 
I noticed that the arguments the guys used to try and convince Nola to be monogamous were the exact same shit I got over the years from cowboys. When you're not monogamous living in a monogamous world, and you don't know anyone else like you to date and can only draw from the mono pool, this movie is exactly what you might get. I'm having trouble categorizing this one. On the one hand, she's honest about her multiple partners and claims to love them. On the other hand, they hate each other and are all competing to be "the winner" - the sole object for her affection. On yet another hand, this is very much what it feels like for some of us to be poly (or something not monogamous) without a community or support or understanding from anyone since no one else is like us. On the final hand, it was yet another movie with characters who didn't really like their dating partners. I think I want to include this on the Poly-ish Movie List because I think a lot of polys go through similar arguments before they find a community, and I think it's a valid part of the broader story of what it's like to be poly. But this was not a story of a poly relationship. If anything, it was the story of a poly-ish woman stuck in a mono world.
Digital Illusion provides just that - the bliss of the unreal, as our protagonists find out. Bahar strikes a business partnership, Polya witnesses the fragility of human beings, Sawako delivers her brand of justice and Kess confronts a vast conspiracy. The Gauntlet and everything related can be found at https://www.gauntlet-rpg.com/ The Veil and its supplements can be found at https://www.samjokopublishing.com/the-veil
Polyamory in the wild? Can a TV show that isn't about polyamory at all really have an episode with polyamorous characters in an open marriage and treat the subject well? Joreth reviews an episode of The Mentalist to find out!
Can a web series about a poly triad really be about polyamory? Yeah, it probably is. Joreth reviews the show Family, a creative endeavor by Teresa Greenan, a polyamorous filmmaker based out of Portland, OR.
Ah, French ... the culture of love! Where "alternative" relationship structures are not frowned upon and the people understand the power of passion! Or do they? Joreth reviews a movie filmed in the Swingin' '60s on recommendation from a listener, to see if there is any polyamory or ethical non-monogamy in this film made during a time of exploration and experimentation, or if it will just confirm monogamous tropes.
On today's episode, J.Y. speaks with 7Sager AccountsPlayable, David, who scored a 174 on his LSAT. David is currently a 1L at Harvard Law School, but gaining admission was not straightforward. He applied twice. They speak about what the process was like, among other things related to LSAT prep and law school admissions. Links to other content mentioned in the episode: • Blind Review method: 7sage.com/the-blind-review-how…rep-for-lsat-part-1/ Links to books mentioned in the episode: • Introduction to Logic by Harry Gensler: https://www.amazon.com/Introduction-Logic-Harry-J-Gensler/dp/0415996511 • Informal Logic: A Pragmatic Approach by Douglas Walton: https://www.amazon.com/Informal-Logic-Pragmatic-Douglas-Walton/dp/0521713803 • How to Solve It: A New Aspect of Mathematical Method by G. Polya: https://www.amazon.com/How-Solve-Aspect-Mathematical-Method/dp/0691023565 Links to other 7Sage LSAT content: • 7Sage LSAT course: 7sage.com/enroll/ • Free logic games explanation lessons: 7sage.com/logic-game-explanations/ • Free LSAT preptest scorer and analyzer: 7sage.com/score-lsat-test/ • Free LSAT proctors: 7sage.com/free-lsat-prep-tools/ • Free LSAT discussion forum: 7sage.com/discussion/ • Free video explanations for every question in the June 2007 PrepTest: 7sage.com/lesson/preptest-june…s-for-all-questions/ More information, show notes, and other 7Sage content: https://7sage.com/2-ama-w-7sager-accountsplayable-150s-to-174/
A priest and a rabbi walk into an airport ... to meet their childhood best friend, a tomboy who has grown up into a beautiful, intelligent, independent CEO. As she visits her hometown and her two best friends, the men struggle with their growing romantic feelings for the same woman. Could this really be a tale of polyamory, snuck into mainstream cinema? Joreth reviews this Ben Stiller film to see if a polyamorous MFM vee could really make it onto the silver screen. I think this is one of those movies that Netflix recommended to me based on adding some other "similar" movie. I wasn't even entirely sure, with a title like that, if the movie was on the list to review for polyamory or for my list of skeptical movies. But with the happy surprise of the last movie I reviewed (A Strange Affair), I was actually kind of hopeful about this one. It was the story of two young men who were best friends as kids, growing up to become a Jewish rabbi and a Catholic priest, and the tomboy who was also their best childhood friend coming back into town as a successful, beautiful, corporate CEO. Because it had big names in it, the movie was most likely to be not-poly, but the setup had some potential. Unfortunately, it flopped. Not that the movie wasn't good (that's debatable, based on whether you like romantic comedies and movies that involve secrets), but it wasn't poly at all and it should have been. These two men love this woman - she was perfect for them both. But because the rabbi is allowed to have sex (and because he is being pressured to find a wife before he becomes head of his temple, or whatever), he immediately acts on his crush when the priest does not because of his vows of celibacy. So the woman spends about half the movie developing a romantic relationship with the rabbi, but keeping the priest safely in a box labeled "do not touch".
And as anyone who spends any time in the world of the Monogamous Mindset knows, when a girl puts a guy in the Friend Box, he's stuck there for life, no matter how strong her feelings for him ... those feelings are just very strong "friend" feelings.* So, anyway, by the time the priest confesses his love and he has just about talked himself into leaving the priesthood for her, she is already thoroughly immersed in her relationship with the rabbi and totally oblivious to the priest's growing attraction to her. So the priest has to swallow his embarrassment and go back to thinking of her like a sister. Now, you might be able to put this movie in the poly analogues category, because the three of them remain a strong group throughout the whole movie. The priest somehow manages to only be angry at having their relationship hidden from him, but he doesn't seem to feel any major jealousy. Well, there is the one fight where he gets drunk and yells at the rabbi that the rabbi stole his girlfriend, but mostly the priest seems to recover from his one- or two-night bender and move right into compersion for his two best friends, only nursing the hurt feelings of being lied to (which, frankly, I can totally understand). ****SPOILER ALERT**** The movie ends happily ... for a monogamous movie ... with the rabbi and the woman back together and the priest happy for them both and everyone is one big happy (monogamous & platonic) family. So it might fall under the category of poly analogues, where the only difference between them and us is that the woman would be sleeping with the priest too if it was us. But the reason why I didn't like this movie is because I get upset at plots that put a convenient excuse in the way, basically cockblocking a poly relationship from happening. Usually, it's death, but in this case, it was vows of celibacy. See, in the world of the Monogamous Mindset, a person can only romantically love two people at the same time if one of them is dead.
It is only acceptable for a woman to say she loves two men if she is referring to her dead husband and her new husband, whom she met a safe time-distance after the death of her first husband of course. So most Monogamous Mindset movies conveniently kill someone off to allow the person torn in the middle the freedom to love them both and to force her to make a choice (*ahem*Pearl Harbor*ahem*). In this case, the priest's celibacy interfered with his ability to pursue a relationship with the love interest and his religious faith gave him something to hold onto after he was rejected and allowed him to remain in the picture. Whereas with most romcom love triangles, when the love interest rejects one guy for another, he just disappears somehow (maybe he's a bad guy & goes to jail, or maybe he's a good guy and walks away voluntarily, whatever). But because this is a Catholic priest, he is safe enough to keep in the picture and safe enough for both the rabbi and the woman to continue loving because his faith and his vows make him a non-threat. In any other movie where he isn't a priest, the "other love" has to disappear because you can't have the "other love" hanging around your new wife. Or something. This kind of thing can often be more tone than something specific. It's not very easy to quantify why some movies that end with a dyad still make it to the poly list but other movies don't. It's something in the way the actors and the director interpreted the lines that affect the tone of the movie. These movies never have a bit of dialog where someone says "Whew! It's a good thing my husband was killed in that war, so I can safely love you now without falling out of love with him or having to choose!" So, in Strange Affair, where one partner had a serious illness that sort of forced the characters into a position where a love triangle could happen, the tone of that movie didn't strike me as negative. 
It suggested, to me, that these are people who live in a world where nonmonogamy was Just Not Done, so they needed some kind of extraordinary circumstances to leave them open to the possibility, to give them the impetus to even consider something outside of the norm. But this movie just didn't have that same feeling. The way it was portrayed suggested more of a situation where three people happened to love each other in a world where they shouldn't, so they wrote the circumstances in such a way as to give them a monogamously acceptable way to do that. Basically, they had to neuter one of the characters in order to keep him in the picture, which isn't the same as killing him off, but it betrays a tone sprung from the same well. I would love to see this movie re-written, where the priest and the rabbi are forced to re-evaluate their religious faiths in light of their growing love and attraction for the same woman (of no particular faith); where the priest and the rabbi both decide that their mutual love for this woman is incompatible with what they have been taught about religion, which then makes them question everything else about religion, and which leads them to the realization that they have always been a happy threesome so there is no reason why they can't continue to be a happy threesome in a much fuller sense of the word. I'd love to see this movie where the woman does not put one of her best friends into the Friend Box, but allows her love for them both to flourish, and where she comes to the same realization - that they have always worked best as the Three Musketeers, and breaking off into a dyad + 1 would change the dynamic in an unacceptable way. Unfortunately, that was not the movie I watched. --------------------- *The Monogamous Mindset is a particular set of beliefs and viewpoints about monogamy that create the society in which I live.
It does not mean that everyone who happens to be monogamous has this mindset, nor does it imply that people who are non-monogamous are automatically free of this mindset. The Monogamous Mindset is a set of rules and mores that dictate how relationships ought to be, many of which are inherently contradictory, selfish, and harmful. One such set of contradictory Monogamous Mindset rules is the rule that you are supposed to marry your best friend, but you're not allowed to be involved with your friends because that would ruin the friendship. And that's the one I'm referencing here. There is this weird rule out there that people, women especially, can't get romantically involved with their appropriately-gendered friends because that would automatically (or could most likely) ruin the friendship. Men's magazine articles and lonely guys online like to lament about the dreaded F word - "friend". Being called a friend is like the worst thing a woman can do to a man who is interested in her, because it means he will never have a chance. Of course *I* know this doesn't always happen and that there are exceptions, which is why I speak so condescendingly of the Monogamous Mindset and of this rule in particular, so please don't leave a comment like "but I married my best friend and it's the best relationship I've ever had!" I know, that's what makes this rule so irritating. But it's out there, and it permeates our society, and is quite possibly responsible for a significant amount of unnecessary heartache.
Can a made-for-tv movie about a broken marriage have polyamorous content in it? Joreth reviews this Judith Light film to see if there is any polyamory in a low-budget, '80s flick. The Netflix summary reads: "Judith Light stars in this sexy made-for-TV drama about a married woman who discovers that her husband of 23 years has been unfaithful. Just as she finds passionate love in another man's arms and prepares to divorce her husband, he suddenly has a stroke and becomes physically incapacitated. Will she move back in with her husband and take care of him ... even though she may risk losing her new lover?" When a movie arrives in my mailbox, I don't always remember if I put it in my queue because it was on a poly list somewhere or because Netflix recommended it to me as "similar" to the poly movies I just added to my queue. Judging by the summary, I assumed this was one of the latter types of "poly" movies. I sat down with this movie with the lowest of expectations, prepared to hate it for yet another cheating drama that would probably end with some kind of choice being made, and possibly even a choice I would think was toxic or foolish. I couldn't have been more wrong. And I love it when I'm wrong about things like this. First of all, the Netflix summary gets the order of events wrong, which is partially why I had such low expectations. Lisa is married to Eric, a charismatic, charming film maker who hasn't made a film in 7 years and spends his time gambling with the money he steals from his wife and fucking his secretary. We are introduced to this plot by meeting a loan shark's thug who has come to intimidate Lisa at work in the very first scene. Eric is the kind of guy I loathe - an idealistic dreamer who has absolutely no connection to reality and thinks his charm entitles him to break the rules and treat everyone around him like shit. But he's charming, and a lot of women find themselves in love with charming users like this. 
And once you're in love, it becomes all too easy to overlook, to excuse, and to rationalize, until you are trapped - held hostage by your own emotions. But Lisa finds her spine and prepares to leave now that both of her children are out of the house and in college. Except that the day she actually gets the courage to leave, she gets a call from her daughter saying that her husband has had a stroke. So Lisa returns home to care for her husband. What I really like about how the writer treated this situation is that he made no secret of the resentment that Lisa feels at being trapped again, by her love and her responsibility to Eric. She moves back home to care for him, but she is also excruciatingly honest when she tells him that their marriage is over and she is only there because her conscience won't let her abandon a dying man who is also the father of her children. I found this to be a bold, courageous choice in storytelling because it is not socially acceptable to be "mean" to someone who is sick and/or dying. Being struck with a crippling illness doesn't erase that person's past as a jerk, and it doesn't necessarily change them, automatically, into a nice person either. It might be inconvenient timing, but leaving someone or disliking someone who has had a near-fatal incident doesn't necessarily make that person a bad person. And that's a really bitter pill for some people to swallow. The rest of the movie follows Lisa as she attempts to recover from the financial ruin her husband has put her into with his gambling while now being financially responsible for his medical care, and two people with a painful history learning to live together with a debilitating and life-threatening illness. Now for the poly stuff. Enter Art, the mechanic who takes pity on Lisa when her car breaks down and she tries to work out a payment plan because she can't afford to pay the bill. Art starts doing stuff around the house for her to make her life a little easier.
And in the process, he falls in love. I won't give away the ending or the details, but what transpires is a very touching story of a woman who learns to fall back in love with her husband while discovering love with someone new. And, even more touching is the story of a man who loves his wife but who is ultimately selfish and is then forced to re-evaluate his priorities and deal with the fact that she loves another man. This is also the very touching story of a man who falls in love with a married woman, who shows us what true love is - the desire to see another person happy and to facilitate that happiness, whatever it means. If she still loves her husband, then her husband must be kept around and must be honored as the man she loves. I think this is a good example of the kinds of situations that people can relate to - a bridge between the poly and mono worlds. It's not really a poly analogue because she flat out says that she is in love with two men. We see the tension between the metamours, we see the disapproval of the children and the neighbors, we see the resentment of being held back, and the loving amazement when poly works well. It's just a story told within the framework of a situation that non-polys might be able to sympathize with ... a setup that puts a monogamous person in a very difficult position where things are no longer black and white. What do you do when your husband & father of your children is an asshole but you still love him? What do you do when you are trapped in a marriage that is over but love finds your doorstep anyway? What do you do when you are financially strapped and alone and someone offers no-strings-attached help simply because he thinks you could use it? What do you do when you fall in love with someone you are not supposed to love? This was one of those poly-ish type movies - a situation that lives on the fuzzy borders of what is and is not polyamory. 
But the tone of the movie, the scenes between the metamours, the complexity of emotion, the selfless version of love, all make me feel that this movie fits quite squarely into the polyamory category in spite of any debate over which configurations really "count". I recommend this movie, both for the poly-ish movie list and to watch.
We dive deep into our first interview where we discuss polyamory, relationships in the transgender community, vegan cheese, and Whole Foods romance. Nathan and Solomon give us a glance into their day to day. Follow us on Twitter and Instagram @TheOralReport Email us at TheOralReportPodcast@gmail.com to have your sexy questions answered or just to say hello! Glossary: Polyamory: The philosophy or state of being where one is romantically involved with more than one person at a time. There is a movement/idea within the community to use the shorthand "PolyA" instead of "Poly" to avoid confusion with Polynesian people. Misgender (verb): Referring to a person using a word or pronoun that does not reflect their personal gender identity. "My teacher misgendered me today so I corrected them - I hope in the future they use my preferred pronouns." Hoping to do some research? Check out the book, "The Ethical Slut - A Guide to Infinite Sexual Possibilities" by Dossie Easton and Janet Hardy. --- Support this podcast: https://anchor.fm/the-oral-report/support
In this episode we home in on a historical example of non-monogamy, and learn a lot about the Oneida Community Silverware Company. It's a gas, and a special friend took a break from playing DnD to drop by and talk history with us. Starring: Sophie Lastnameredacted, Joe Alias. This episode was sponsored by: Exploring the Warm Castle: bit.ly/warmcastle Mechanic Shop Femme: bit.ly/carfemme Box Not Included Podcast: https://soundcloud.com/user-411861330 For further reading on the Oneida Community: https://en.wikipedia.org/wiki/Oneida_Community http://www.wbur.org/hereandnow/2016/05/20/oneida-silverware https://www.nytimes.com/2007/08/03/travel/escapes/03Oneida.html https://socialwelfare.library.vcu.edu/religious/the-oneida-community-1848-1880-a-utopian-community/ http://www.nyhistory.com/central/oneida.htm http://xroads.virginia.edu/~hyper/hns/cities/oneida.html
Can a mainstream movie about an "open marriage" really have some polyamory in it? Joreth reviews the movie Fling, starring Brandon Routh, Steve Sandvoss, and Courtney Ford, to answer that very question.
This episode was supposed to be about quality time, but the first half of the episode is about this super awesome play from the 1930s that we love. We're not nerds, you're a nerd. Shut up! This episode's sponsors: Exploring the Warm Castle: bit.ly/warmcastle Mechanic Shop Femme: bit.ly/carfemme Box Not Included Podcast: https://soundcloud.com/user-411861330 Read the play mentioned in the show: https://archive.org/stream/designforliving00cowa/designforliving00cowa_djvu.txt
This episode is pride-themed for pride month, and we get into a whole slew of things such as gate-keeping and pink capitalism. Happy pride; be gay do crime! This episode's sponsors: Exploring the Warm Castle: bit.ly/warmcastle Mechanic Shop Femme: bit.ly/carfemme Box Not Included Podcast: https://soundcloud.com/user-411861330 Articles referenced in the show: bit.ly/flouritepolya https://ohdionne.com/2018/06/24/i-am-a-queer-woman-i-will-not-be-attending-another-pride/ https://mprnews.org/story/2018/06/24/photos-twin-cities-pride-parade-2018 http://bit.ly/pinkcapitalism
Can a movie about a woman charged with bigamy on the day of her 6th wedding really be about polyamory? Joreth reviews this quirky Spanish film that challenges the standard narrative of a man and his harem, and questions everything a conservative judge ever thought he knew about love and relationships.
In this episode we talk about metamours, who make up the meat of most polya families and constellations. They're the partners of our partners, and love them or hate them, we have to coexist with them! Tune in to find out more. This episode is sponsored by Exploring the Warm Castle! Buy the book here: http://bit.ly/warmcastle The article featured in the Polya Media Highlight can be found here: https://bit.ly/2rIRvgQ The article featured near the middle of the episode can be found here: https://bit.ly/2IDrWrU
Rita, Sue, & Bob Too! was hailed as a landmark comedy in the '80s in Britain, and also passed around polyamorous online groups as a poly film. But is it? Is it both? One or the other? Neither? Joreth reviews this wildly acclaimed movie for any hint of polyamory, open relationships, or consensual and ethical non-monogamy to see if it lives up to the hype.
Could Alan Rickman possibly have starred in a poly movie?! Joreth reviews this unusual film to see if a happy polyamorous V or triad family can be found among the backstabbing, vicious world of competitive hair styling.
This movie occasionally gets mentioned in discussions of poly movies. But is it? Joreth reviews Sex Monster for traces of an open marriage, triads, or any polyamory.
This episode is all about parenting in polyamory, which can be a hairy topic. Thankfully Tikva Wolfe, the author and illustrator of Kimchi Cuddles, is an expert. Featuring: Sophie Lastnameredacted, Tikva Wolfe. For deaf folks who would prefer a transcript: https://polyamradiotranscripts.tumblr.com/post/171277941363/ep10-polyamorous-parenting-with-tikva-wolfe -Sophie. Special thanks to Ivy, who transcribed this podcast because I can't be trusted to be honest about the stupid things I say.
If you've ever wanted advice on dating as a survivor of sexual assault, or if you have ever wanted to be a better partner to someone who is a survivor of assault, this episode is for you! Trigger warnings for assault, of course, but I will say we kept the conversation accessible and it shouldn't be too hard for folks to listen to unless the whole topic is off limits for you. Featuring: Sophie Lastnameredacted, Mara Fakelastname. For deaf and HoH folks who would prefer a transcript: https://polyamradiotranscripts.tumblr.com/post/170759841073/ep-9-dating-as-a-survivor -Sophie. Special thanks to Ivy, who transcribed this podcast because I can't be trusted to be honest about the stupid things I say.
This episode is about an ugly topic we would all like to pretend doesn't exist. So uh... if y'all wanna make this one "the lost episode" that's fine, but I feel like there's some pretty good information in here, you know? Sometimes relationships end and it's good to know how to deal with it. Not your relationships of course, those will be around forever. But other people for sure. Is this too long? I feel like it's too long. This episode is sponsored by Funky Fool Design! Buy a sticker here: http://bit.ly/funkyfool Featuring: Sophie Lastnameredacted, Mara Fakelastname. For deaf and HoH folks who would prefer a transcript: https://polyamradiotranscripts.tumblr.com/post/170243930713/ep-8-breakups -Sophie. Special thanks to Mara, who transcribed this podcast because I can't be trusted to be honest about the stupid things I say.
New(ish) polyamory mockumentary, Lutine, gets Joreth's special coverage to see if it's really polyamorous! Does this French fictional documentary do poly justice, or does it stick with the same old, tired, "opening up" stories and open marriages?
Is the indie film Shortbus really a poly film? This movie claims to have it all - alternative sexuality, BDSM, polyamory, consensual & ethical non-monogamy, swinging, "free love", and more.
Can a movie that actually describes itself as a "cautionary tale" make the Polyish Movie List? Joreth watches Kiss Me Again to see if this couple can open up their marriage to a bisexual woman into a happy polyamorous triad, or if it is yet another case of bad Unicorn Hunting.
Rich Condit joins Nels and Vincent to explain how a vaccinia virus protein customizes ribosomes to favor the translation of viral mRNAs with a stretch of A residues in the 5'-untranslated region. Hosts: Nels Elde and Vincent Racaniello Become a patron of TWiEVO Rich Condit with Harry Noller (scroll down) Trans-Kingdom mimicry? (Nature) More on RACK1 (Nat Struct Mol Biol) Image credit Letters read on TWiEVO 21 This episode is brought to you by Blue Apron. Blue Apron is the #1 fresh ingredient and recipe delivery service in the country. See what’s on the menu this week and get your first 3 meals free with your first purchase – WITH FREE SHIPPING – by going to blueapron.com/twie. Science Picks Rich - Sniffing out significant “Pee values” Nels - Cuttlefish mimicking a hermit crab (evolutionary context) Vincent - Our first bioRxiv submission! Music on TWiEVO is performed by Trampled by Turtles Send your evolution questions and comments to twievo@microbe.tv
No problem being nice to Dickson in this episode, because he's absent for a discussion of a new giant virus that replicates in the cytoplasm yet transiently accesses the nucleus to bootstrap infection. Hosts: Vincent Racaniello, Alan Dove, Rich Condit, and Kathy Spindler Become a patron of TWiV! Links for this episode ASM Microbe Noumeavirus cytoplasmic replication depends on transient nuclear access (Nat Commun) Giants among viruses (TWiV 261) Image credit Letters read on TWiV 440 Weekly Science Picks Kathy - U-M Rubik's Cube story #1 video #2 video #3 video Alan - You're not going to believe this Rich - High school student builds robot to solve Rubik's Cube Vincent - NIH limits grant money and The abomination of a bill Listener Picks Ken - The Fab Lab with Crazy Aunt Lindsey Laurel - Sally Hoskin's CREATE program Maureen - Simple Science Experiments You Can Do With Eggs Before Breakfast Intro music is by Ronald Jenkees. Send your virology questions and comments to twiv@microbe.tv
Jeremy Kilpatrick from the University of Georgia discusses his career in mathematics education, including his work on curriculum and the history of the field as well as the landmark report Adding It Up. Jeremy's Professional Website Jeremy's interview with Polya Free access to Adding It Up See comments for references mentioned during the interview. Complete list of episodes
Hosts: Vincent Racaniello, Alan Dove, Rich Condit, and Kathy Spindler Vincent, Alan, Rich, and Kathy resume the virology 101 series with a discussion of RNA capping, splicing, and export. Links for this episode: Slides for this episode (pdf) Spliced Ad 2 late mRNAs (Cell) An amazing sequence arrangement (Cell) Adenovirus late mRNA undecanucleotide (Cell) Aaron Shatkin, 72 (virology blog) Schrödinger's cat (Wikipedia) Letters read on TWiV 216 Weekly Science Picks Rich - Linus Pauling's explanation of science (YouTube) Alan - Underwater experiments Kathy - I'm a virus (YouTube) Vincent - Stem cells (Bizarro Comics) Listener Pick of the Week Tom - The President's Analyst Danielle - Overly honest methods (HuffPost and ASBMB Today) Send your virology questions and comments (email or mp3 file) to twiv@twiv.tv
Vincent, Rich, and Alan continue Virology 101 with a discussion of transcription, the process of making mRNA from a DNA template.
Faculty of Mathematics, Computer Science and Statistics - Digital University Theses of the LMU - Part 01/02
Statistical relational learning analyzes the probabilistic constraints between entities, their attributes, and their relationships. It represents an area of growing interest in modern data mining, and many promising approaches have been proposed. However, there is no easily applicable recipe for turning a relational domain (e.g. a database) into a probabilistic model, for two main reasons. First, structural learning in relational models is even more complex than structural learning in (non-relational) Bayesian networks, due to the exponentially many attributes an attribute might depend on. Second, it can be difficult and expensive to obtain reliable prior knowledge for the domains of interest. To address these limitations, this thesis applies nonparametric Bayesian analysis to relational learning and proposes two models: Dirichlet enhanced relational learning and infinite hidden relational learning. Dirichlet enhanced relational learning (DERL) extends nonparametric hierarchical Bayesian modeling to relational data. In existing relational models, the model parameters are global, which means the conditional probability distributions are the same for each entity and the relationships are independent of each other. To overcome these limitations, we introduce the hierarchical Bayesian (HB) framework to relational learning, so that model parameters can be personalized, i.e. owned by entities or relationships, and are coupled via common prior distributions. Additional flexibility is introduced through nonparametric HB modeling, so that the learned knowledge can be truthfully represented. For inference, we develop an efficient variational method, which is motivated by the Polya urn representation of the Dirichlet process (DP). DERL is demonstrated in a medical domain where we form a nonparametric HB model for entities involving hospitals, patients, procedures, and diagnoses.
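The Polya urn representation of the DP mentioned above can be illustrated with a short simulation (a minimal sketch for intuition only, not the thesis's inference code; the function name and uniform base measure are hypothetical choices):

```python
import random


def polya_urn(alpha, n, base_draw, seed=0):
    """Draw n samples from a Dirichlet process via the Polya urn scheme.

    At step i, with probability alpha / (alpha + i) a fresh value is drawn
    from the base measure; otherwise an earlier draw is reused uniformly at
    random, which makes popular values increasingly likely to recur.
    """
    rng = random.Random(seed)
    draws = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            draws.append(base_draw(rng))      # new "ball colour" from the base measure
        else:
            draws.append(rng.choice(draws))   # reuse an existing draw
    return draws


# Example: base measure Uniform(0, 1); a small alpha concentrates the
# draws on few distinct values (clusters).
samples = polya_urn(alpha=1.0, n=100, base_draw=lambda r: r.random())
print(len(set(samples)))  # number of distinct values among 100 draws
```

The clustering behavior this scheme induces is exactly what lets the nonparametric HB model share statistical strength across entities without fixing the number of parameter groups in advance.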
The experiments show that the additional flexibility introduced by the nonparametric HB modeling results in a more accurate model of the dependencies between different types of relationships and gives significantly improved prediction performance on unknown relationships. In the infinite hidden relational model (IHRM), we apply nonparametric mixture modeling to relational data, extending the expressiveness of a relational model by introducing for each entity an infinite-dimensional hidden variable as part of a Dirichlet process (DP) mixture model. This has three main advantages. First, it reduces the extensive structural learning, which is particularly difficult in relational models due to the huge number of potential probabilistic parents. Second, information can propagate globally through the ground network defined by the relational structure. Third, the number of mixture components for each entity class can be optimized by the model itself based on the data. IHRM can be applied to entity clustering and relationship/attribute prediction, two important tasks in relational data mining. For inference in IHRM, we develop four algorithms: collapsed Gibbs sampling with the Chinese restaurant process, blocked Gibbs sampling with the truncated stick-breaking construction (SBC), mean-field inference with truncated SBC, and an empirical approximation. IHRM is evaluated in three different domains: a recommendation system based on the MovieLens data set, prediction of the functions of yeast genes/proteins on the data set of KDD Cup 2001, and medical data analysis. The experimental results show that IHRM gives significantly improved estimates of attributes/relationships and highly interpretable entity clusters in complex relational data.
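The truncated stick-breaking construction underlying the blocked Gibbs and mean-field algorithms can be sketched as follows (a simplified illustration under stated assumptions, not the thesis's implementation; the function name is hypothetical):

```python
import random


def stick_breaking(alpha, truncation, seed=0):
    """Truncated stick-breaking construction of DP mixture weights.

    A unit-length stick is broken repeatedly: each beta_k ~ Beta(1, alpha)
    takes a fraction of what remains. Truncation assigns all leftover mass
    to the final component, so the weights still sum to one.
    """
    rng = random.Random(seed)
    weights = []
    remaining = 1.0
    for _ in range(truncation - 1):
        beta_k = rng.betavariate(1.0, alpha)
        weights.append(beta_k * remaining)
        remaining *= 1.0 - beta_k
    weights.append(remaining)  # lump the remaining stick into the last component
    return weights


# Example: 20 components; larger alpha spreads mass over more components.
weights = stick_breaking(alpha=2.0, truncation=20)
print(sum(weights))  # ~1.0 by construction
```

Truncating the infinite stick at a finite number of components is what makes blocked Gibbs sampling and mean-field inference tractable, at the cost of a controllable approximation error in the tail.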