POPULARITY
SNWA researchers have been at the forefront of emerging water issues for the past 25 years, and their discoveries have helped protect Southern Nevada's drinking water supply. SNWA scientists have helped stop quagga mussel infestations from blocking water intake pipes and helped implement ozonation to the water treatment process. Eric Wert, Water Quality Research & Development Manager, talks about the emerging issues the lab currently is tracking and what might be the next generation of water treatment on this episode of the Water Smarts Podcast. Hosts: Bronson Mack and Crystal Zuelkehttps://www.snwa.com/ https://www.snwa.com/
Previously on WQ: nos mudamos, parimos, y corrimos para lograrlo todo. Bienvenidos a una nueva temporada de: Wait, qué? No olviden suscribirse en YouTube, seguirnos en TikTok y en Instagram @waitquepod, y comprarnos un café. Si este episodio les dio ganas de empezar terapia, recuerden usar este link para una primera consultoría en OpcionYo con 25% de descuento. Ingresen en nuestro link para más información. Links: Youtube: https://www.youtube.com/@waitquepod Instagram: https://www.instagram.com/waitquepod TikTok: https://www.tiktok.com/@waitquepod Buy me a coffee: https://www.buymeacoffee.com/waitque OpcionYo: https://opciónyonueva.trb.ai/wa/18zyR0x
A segundos de la mudanza de Rach, de que Andry trajera al mundo nuestra primera nueva integrante de WQ, y de que Fri asimile mentalmente que se mudó, discutimos este final de temporada para todas. Sí hubo llanto. No olviden suscribirse en YouTube, seguirnos en TikTok y en Instagram @waitquepod, y comprarnos un café. Si este episodio les dio ganas de empezar terapia, recuerden usar este link para una primera consultoría en OpcionYo con 25% de descuento. Ingresen en nuestro link para más información. Links:Youtube: https://www.youtube.com/@waitquepod Instagram: https://www.instagram.com/waitquepod TikTok: https://www.tiktok.com/@waitquepod Buy me a coffee: https://www.buymeacoffee.com/waitque OpcionYo: https://opciónyonueva.trb.ai/wa/18zyR0x
En una de las ediciones MÁS especiales de WQ, uniformamos a nuestras parejas y los obligamos a hacer nuestro trabajo y hostear el podcast por un episodio completo. Acompañen a Kenny, Iván y Moi a responder todas sus preguntas, a vencer los nervios, y a tomar venganza hacia nosotras. No olviden suscribirse en YouTube, seguirnos en TikTok y en Instagram @waitquepod, y comprarnos un café. Si este episodio les dio ganas de empezar terapia, recuerden usar este link para una primera consultoría en OpcionYo con 25% de descuento. Ingresen en nuestro link para más información.Links:Youtube: https://www.youtube.com/@waitquepod Instagram: https://www.instagram.com/waitquepod TikTok: https://www.tiktok.com/@waitquepod Buy me a coffee: https://www.buymeacoffee.com/waitqueOpcionYo: https://opciónyonueva.trb.ai/wa/18zyR0x
Carl Philipp Emanuel Bach (1714 - 1788) - Orchester-Sinfonien No. 1-2-3-4Sinfonie für Orchester in D-Dur (Orchester-Sinfonien No. 1), H. 663, Wq. 183/101. Allegro di molto (00:00)02. Largo03. Presto Sinfonie für Orchester in Es-Dur (Orchester-Sinfonien No. 2), H. 664, Wq. 183/201. Allegro di molto (11:12)02. Larghetto03. Allegretto Sinfonie für Orchester in F-Dur (Orchester-Sinfonien No. 3), H. 665, Wq. 183/301. Allegro di molto (22:08)02. Larghetto03. Presto Sinfonie für Orchester in G-Dur (Orchester-Sinfonien No. 4), H. 666, Wq. 183/401. Allegro assai (32:25)02. Poco andante03. Presto The English ConcertAndrew Manze, violin e conductor
The (March) madness continues here on the CoffeHouse with MORE La Folia variations for you! We hope you enjoy this rendition from the classical era this week. Be sure to like and share with a friend! Music: https://imslp.org/wiki/12_Variations_%C3%BCber_die_Folie_d%27Espagne,_Wq.118/9_(H.263)_(Bach,_Carl_Philipp_Emanuel) https://creativecommons.org/licenses/by-nc/3.0/legalcode
Due to overwhelming demand (>15x applications:slots), we are closing CFPs for AI Engineer Summit NYC today. Last call! Thanks, we'll be reaching out to all shortly!The world's top AI blogger and friend of every pod, Simon Willison, dropped a monster 2024 recap: Things we learned about LLMs in 2024. Brian of the excellent TechMeme Ride Home pinged us for a connection and a special crossover episode, our first in 2025. The target audience for this podcast is a tech-literate, but non-technical one. You can see Simon's notes for AI Engineers in his World's Fair Keynote.Timestamp* 00:00 Introduction and Guest Welcome* 01:06 State of AI in 2025* 01:43 Advancements in AI Models* 03:59 Cost Efficiency in AI* 06:16 Challenges and Competition in AI* 17:15 AI Agents and Their Limitations* 26:12 Multimodal AI and Future Prospects* 35:29 Exploring Video Avatar Companies* 36:24 AI Influencers and Their Future* 37:12 Simplifying Content Creation with AI* 38:30 The Importance of Credibility in AI* 41:36 The Future of LLM User Interfaces* 48:58 Local LLMs: A Growing Interest* 01:07:22 AI Wearables: The Next Big Thing* 01:10:16 Wrapping Up and Final ThoughtsTranscript[00:00:00] Introduction and Guest Welcome[00:00:00] Brian: Welcome to the first bonus episode of the Tech Meme Write Home for the year 2025. I'm your host as always, Brian McCullough. Listeners to the pod over the last year know that I have made a habit of quoting from Simon Willison when new stuff happens in AI from his blog. Simon has been, become a go to for many folks in terms of, you know, Analyzing things, criticizing things in the AI space.[00:00:33] Brian: I've wanted to talk to you for a long time, Simon. So thank you for coming on the show. No, it's a privilege to be here. And the person that made this connection happen is our friend Swyx, who has been on the show back, even going back to the, the Twitter Spaces days but also an AI guru in, in their own right Swyx, thanks for coming on the show also.[00:00:54] swyx (2): Thanks. I'm happy to be on and have been a regular listener, so just happy to [00:01:00] contribute as well.[00:01:00] Brian: And a good friend of the pod, as they say. Alright, let's go right into it.[00:01:06] State of AI in 2025[00:01:06] Brian: Simon, I'm going to do the most unfair, broad question first, so let's get it out of the way. The year 2025. Broadly, what is the state of AI as we begin this year?[00:01:20] Brian: Whatever you want to say, I don't want to lead the witness.[00:01:22] Simon: Wow. So many things, right? I mean, the big thing is everything's got really good and fast and cheap. Like, that was the trend throughout all of 2024. The good models got so much cheaper, they got so much faster, they got multimodal, right? The image stuff isn't even a surprise anymore.[00:01:39] Simon: They're growing video, all of that kind of stuff. So that's all really exciting.[00:01:43] Advancements in AI Models[00:01:43] Simon: At the same time, they didn't get massively better than GPT 4, which was a bit of a surprise. So that's sort of one of the open questions is, are we going to see huge, but I kind of feel like that's a bit of a distraction because GPT 4, but way cheaper, much larger context lengths, and it [00:02:00] can do multimodal.[00:02:01] Simon: is better, right? That's a better model, even if it's not.[00:02:05] Brian: What people were expecting or hoping, maybe not expecting is not the right word, but hoping that we would see another step change, right? Right. From like GPT 2 to 3 to 4, we were expecting or hoping that maybe we were going to see the next evolution in that sort of, yeah.[00:02:21] Brian: We[00:02:21] Simon: did see that, but not in the way we expected. We thought the model was just going to get smarter, and instead we got. Massive drops in, drops in price. We got all of these new capabilities. You can talk to the things now, right? They can do simulated audio input, all of that kind of stuff. And so it's kind of, it's interesting to me that the models improved in all of these ways we weren't necessarily expecting.[00:02:43] Simon: I didn't know it would be able to do an impersonation of Santa Claus, like a, you know, Talked to it through my phone and show it what I was seeing by the end of 2024. But yeah, we didn't get that GPT 5 step. And that's one of the big open questions is, is that actually just around the corner and we'll have a bunch of GPT 5 class models drop in the [00:03:00] next few months?[00:03:00] Simon: Or is there a limit?[00:03:03] Brian: If you were a betting man and wanted to put money on it, do you expect to see a phase change, step change in 2025?[00:03:11] Simon: I don't particularly for that, like, the models, but smarter. I think all of the trends we're seeing right now are going to keep on going, especially the inference time compute, right?[00:03:21] Simon: The trick that O1 and O3 are doing, which means that you can solve harder problems, but they cost more and it churns away for longer. I think that's going to happen because that's already proven to work. I don't know. I don't know. Maybe there will be a step change to a GPT 5 level, but honestly, I'd be completely happy if we got what we've got right now.[00:03:41] Simon: But cheaper and faster and more capabilities and longer contexts and so forth. That would be thrilling to me.[00:03:46] Brian: Digging into what you've just said one of the things that, by the way, I hope to link in the show notes to Simon's year end post about what, what things we learned about LLMs in 2024. Look for that in the show notes.[00:03:59] Cost Efficiency in AI[00:03:59] Brian: One of the things that you [00:04:00] did say that you alluded to even right there was that in the last year, you felt like the GPT 4 barrier was broken, like IE. Other models, even open source ones are now regularly matching sort of the state of the art.[00:04:13] Simon: Well, it's interesting, right? So the GPT 4 barrier was a year ago, the best available model was OpenAI's GPT 4 and nobody else had even come close to it.[00:04:22] Simon: And they'd been at the, in the lead for like nine months, right? That thing came out in what, February, March of, of 2023. And for the rest of 2023, nobody else came close. And so at the start of last year, like a year ago, the big question was, Why has nobody beaten them yet? Like, what do they know that the rest of the industry doesn't know?[00:04:40] Simon: And today, that I've counted 18 organizations other than GPT 4 who've put out a model which clearly beats that GPT 4 from a year ago thing. Like, maybe they're not better than GPT 4. 0, but that's, that, that, that barrier got completely smashed. And yeah, a few of those I've run on my laptop, which is wild to me.[00:04:59] Simon: Like, [00:05:00] it was very, very wild. It felt very clear to me a year ago that if you want GPT 4, you need a rack of 40, 000 GPUs just to run the thing. And that turned out not to be true. Like the, the, this is that big trend from last year of the models getting more efficient, cheaper to run, just as capable with smaller weights and so forth.[00:05:20] Simon: And I ran another GPT 4 model on my laptop this morning, right? Microsoft 5. 4 just came out. And that, if you look at the benchmarks, it's definitely, it's up there with GPT 4. 0. It's probably not as good when you actually get into the vibes of the thing, but it, it runs on my, it's a 14 gigabyte download and I can run it on a MacBook Pro.[00:05:38] Simon: Like who saw that coming? The most exciting, like the close of the year on Christmas day, just a few weeks ago, was when DeepSeek dropped their DeepSeek v3 model on Hugging Face without even a readme file. It was just like a giant binary blob that I can't run on my laptop. It's too big. But in all of the benchmarks, it's now by far the best available [00:06:00] open, open weights model.[00:06:01] Simon: Like it's, it's, it's beating the, the metalamas and so forth. And that was trained for five and a half million dollars, which is a tenth of the price that people thought it costs to train these things. So everything's trending smaller and faster and more efficient.[00:06:15] Brian: Well, okay.[00:06:16] Challenges and Competition in AI[00:06:16] Brian: I, I kind of was going to get to that later, but let's, let's combine this with what I was going to ask you next, which is, you know, you're talking, you know, Also in the piece about the LLM prices crashing, which I've even seen in projects that I'm working on, but explain Explain that to a general audience, because we hear all the time that LLMs are eye wateringly expensive to run, but what we're suggesting, and we'll come back to the cheap Chinese LLM, but first of all, for the end user, what you're suggesting is that we're starting to see the cost come down sort of in the traditional technology way of Of costs coming down over time,[00:06:49] Simon: yes, but very aggressively.[00:06:51] Simon: I mean, my favorite thing, the example here is if you look at GPT-3, so open AI's g, PT three, which was the best, a developed model in [00:07:00] 2022 and through most of 20 2023. That, the models that we have today, the OpenAI models are a hundred times cheaper. So there was a 100x drop in price for OpenAI from their best available model, like two and a half years ago to today.[00:07:13] Simon: And[00:07:14] Brian: just to be clear, not to train the model, but for the use of tokens and things. Exactly,[00:07:20] Simon: for running prompts through them. And then When you look at the, the really, the top tier model providers right now, I think, are OpenAI, Anthropic, Google, and Meta. And there are a bunch of others that I could list there as well.[00:07:32] Simon: Mistral are very good. The, the DeepSeq and Quen models have got great. There's a whole bunch of providers serving really good models. But even if you just look at the sort of big brand name providers, they all offer models now that are A fraction of the price of the, the, of the models we were using last year.[00:07:49] Simon: I think I've got some numbers that I threw into my blog entry here. Yeah. Like Gemini 1. 5 flash, that's Google's fast high quality model is [00:08:00] how much is that? It's 0. 075 dollars per million tokens. Like these numbers are getting, So we just do cents per million now,[00:08:09] swyx (2): cents per million,[00:08:10] Simon: cents per million makes, makes a lot more sense.[00:08:12] Simon: Yeah they have one model 1. 5 flash 8B, the absolute cheapest of the Google models, is 27 times cheaper than GPT 3. 5 turbo was a year ago. That's it. And GPT 3. 5 turbo, that was the cheap model, right? Now we've got something 27 times cheaper, and the Google, this Google one can do image recognition, it can do million token context, all of those tricks.[00:08:36] Simon: But it's, it's, it's very, it's, it really is startling how inexpensive some of this stuff has got.[00:08:41] Brian: Now, are we assuming that this, that happening is directly the result of competition? Because again, you know, OpenAI, and probably they're doing this for their own almost political reasons, strategic reasons, keeps saying, we're losing money on everything, even the 200.[00:08:56] Brian: So they probably wouldn't, the prices wouldn't be [00:09:00] coming down if there wasn't intense competition in this space.[00:09:04] Simon: The competition is absolutely part of it, but I have it on good authority from sources I trust that Google Gemini is not operating at a loss. Like, the amount of electricity to run a prompt is less than they charge you.[00:09:16] Simon: And the same thing for Amazon Nova. Like, somebody found an Amazon executive and got them to say, Yeah, we're not losing money on this. I don't know about Anthropic and OpenAI, but clearly that demonstrates it is possible to run these things at these ludicrously low prices and still not be running at a loss if you discount the Army of PhDs and the, the training costs and all of that kind of stuff.[00:09:36] Brian: One, one more for me before I let Swyx jump in here. To, to come back to DeepSeek and this idea that you could train, you know, a cutting edge model for 6 million. I, I was saying on the show, like six months ago, that if we are getting to the point where each new model It would cost a billion, ten billion, a hundred billion to train that.[00:09:54] Brian: At some point it would almost, only nation states would be able to train the new models. Do you [00:10:00] expect what DeepSeek and maybe others are proving to sort of blow that up? Or is there like some sort of a parallel track here that maybe I'm not technically, I don't have the mouse to understand the difference.[00:10:11] Brian: Is the model, are the models going to go, you know, Up to a hundred billion dollars or can we get them down? Sort of like DeepSeek has proven[00:10:18] Simon: so I'm the wrong person to answer that because I don't work in the lab training these models. So I can give you my completely uninformed opinion, which is, I felt like the DeepSeek thing.[00:10:27] Simon: That was a bomb shell. That was an absolute bombshell when they came out and said, Hey, look, we've trained. One of the best available models and it cost us six, five and a half million dollars to do it. I feel, and they, the reason, one of the reasons it's so efficient is that we put all of these export controls in to stop Chinese companies from giant buying GPUs.[00:10:44] Simon: So they've, were forced to be, go as efficient as possible. And yet the fact that they've demonstrated that that's possible to do. I think it does completely tear apart this, this, this mental model we had before that yeah, the training runs just keep on getting more and more expensive and the number of [00:11:00] organizations that can afford to run these training runs keeps on shrinking.[00:11:03] Simon: That, that's been blown out of the water. So yeah, that's, again, this was our Christmas gift. This was the thing they dropped on Christmas day. Yeah, it makes me really optimistic that we can, there are, It feels like there was so much low hanging fruit in terms of the efficiency of both inference and training and we spent a whole bunch of last year exploring that and getting results from it.[00:11:22] Simon: I think there's probably a lot left. I think there's probably, well, I would not be surprised to see even better models trained spending even less money over the next six months.[00:11:31] swyx (2): Yeah. So I, I think there's a unspoken angle here on what exactly the Chinese labs are trying to do because DeepSea made a lot of noise.[00:11:41] swyx (2): so much for joining us for around the fact that they train their model for six million dollars and nobody quite quite believes them. Like it's very, very rare for a lab to trumpet the fact that they're doing it for so cheap. They're not trying to get anyone to buy them. So why [00:12:00] are they doing this? They make it very, very obvious.[00:12:05] swyx (2): Deepseek is about 150 employees. It's an order of magnitude smaller than at least Anthropic and maybe, maybe more so for OpenAI. And so what's, what's the end game here? Are they, are they just trying to show that the Chinese are better than us?[00:12:21] Simon: So Deepseek, it's the arm of a hedge, it's a, it's a quant fund, right?[00:12:25] Simon: It's an algorithmic quant trading thing. So I, I, I would love to get more insight into how that organization works. My assumption from what I've seen is it looks like they're basically just flexing. They're like, hey, look at how utterly brilliant we are with this amazing thing that we've done. And it's, it's working, right?[00:12:43] Simon: They but, and so is that it? Are they, is this just their kind of like, this is, this is why our company is so amazing. Look at this thing that we've done, or? I don't know. I'd, I'd love to get Some insight from, from within that industry as to, as to how that's all playing out.[00:12:57] swyx (2): The, the prevailing theory among the Local Llama [00:13:00] crew and the Twitter crew that I indexed for my newsletter is that there is some amount of copying going on.[00:13:06] swyx (2): It's like Sam Altman you know, tweet, tweeting about how they're being copied. And then also there's this, there, there are other sort of opening eye employees that have said, Stuff that is similar that DeepSeek's rate of progress is how U. S. intelligence estimates the number of foreign spies embedded in top labs.[00:13:22] swyx (2): Because a lot of these ideas do spread around, but they surprisingly have a very high density of them in the DeepSeek v3 technical report. So it's, it's interesting. We don't know how much, how many, how much tokens. I think that, you know, people have run analysis on how often DeepSeek thinks it is cloud or thinks it is opening GPC 4.[00:13:40] swyx (2): Thanks for watching! And we don't, we don't know. We don't know. I think for me, like, yeah, we'll, we'll, we basically will never know as, as external commentators. I think what's interesting is how, where does this go? Is there a logical floor or bottom by my estimations for the same amount of ELO started last year to the end of last year cost went down by a thousand X for the [00:14:00] GPT, for, for GPT 4 intelligence.[00:14:02] swyx (2): Would, do they go down a thousand X this year?[00:14:04] Simon: That's a fascinating question. Yeah.[00:14:06] swyx (2): Is there a Moore's law going on, or did we just get a one off benefit last year for some weird reason?[00:14:14] Simon: My uninformed hunch is low hanging fruit. I feel like up until a year ago, people haven't been focusing on efficiency at all. You know, it was all about, what can we get these weird shaped things to do?[00:14:24] Simon: And now once we've sort of hit that, okay, we know that we can get them to do what GPT 4 can do, When thousands of researchers around the world all focus on, okay, how do we make this more efficient? What are the most important, like, how do we strip out all of the weights that have stuff in that doesn't really matter?[00:14:39] Simon: All of that kind of thing. So yeah, maybe that was it. Maybe 2024 was a freak year of all of the low hanging fruit coming out at once. And we'll actually see a reduction in the, in that rate of improvement in terms of efficiency. I wonder, I mean, I think we'll know for sure in about three months time if that trend's going to continue or not.[00:14:58] swyx (2): I agree. You know, I [00:15:00] think the other thing that you mentioned that DeepSeq v3 was the gift that was given from DeepSeq over Christmas, but I feel like the other thing that might be underrated was DeepSeq R1,[00:15:11] Speaker 4: which is[00:15:13] swyx (2): a reasoning model you can run on your laptop. And I think that's something that a lot of people are looking ahead to this year.[00:15:18] swyx (2): Oh, did they[00:15:18] Simon: release the weights for that one?[00:15:20] swyx (2): Yeah.[00:15:21] Simon: Oh my goodness, I missed that. I've been playing with the quen. So the other great, the other big Chinese AI app is Alibaba's quen. Actually, yeah, I, sorry, R1 is an API available. Yeah. Exactly. When that's really cool. So Alibaba's Quen have released two reasoning models that I've run on my laptop.[00:15:38] Simon: Now there was, the first one was Q, Q, WQ. And then the second one was QVQ because the second one's a vision model. So you can like give it vision puzzles and a prompt that these things, they are so much fun to run. Because they think out loud. It's like the OpenAR 01 sort of hides its thinking process. The Query ones don't.[00:15:59] Simon: They just, they [00:16:00] just churn away. And so you'll give it a problem and it will output literally dozens of paragraphs of text about how it's thinking. My favorite thing that happened with QWQ is I asked it to draw me a pelican on a bicycle in SVG. That's like my standard stupid prompt. And for some reason it thought in Chinese.[00:16:18] Simon: It spat out a whole bunch of like Chinese text onto my terminal on my laptop, and then at the end it gave me quite a good sort of artistic pelican on a bicycle. And I ran it all through Google Translate, and yeah, it was like, it was contemplating the nature of SVG files as a starting point. And the fact that my laptop can think in Chinese now is so delightful.[00:16:40] Simon: It's so much fun watching you do that.[00:16:43] swyx (2): Yeah, I think Andrej Karpathy was saying, you know, we, we know that we have achieved proper reasoning inside of these models when they stop thinking in English, and perhaps the best form of thought is in Chinese. But yeah, for listeners who don't know Simon's blog he always, whenever a new model comes out, you, I don't know how you do it, but [00:17:00] you're always the first to run Pelican Bench on these models.[00:17:02] swyx (2): I just did it for 5.[00:17:05] Simon: Yeah.[00:17:07] swyx (2): So I really appreciate that. You should check it out. These are not theoretical. Simon's blog actually shows them.[00:17:12] Brian: Let me put on the investor hat for a second.[00:17:15] AI Agents and Their Limitations[00:17:15] Brian: Because from the investor side of things, a lot of the, the VCs that I know are really hot on agents, and this is the year of agents, but last year was supposed to be the year of agents as well. Lots of money flowing towards, And Gentic startups.[00:17:32] Brian: But in in your piece that again, we're hopefully going to have linked in the show notes, you sort of suggest there's a fundamental flaw in AI agents as they exist right now. Let me let me quote you. And then I'd love to dive into this. You said, I remain skeptical as to their ability based once again, on the Challenge of gullibility.[00:17:49] Brian: LLMs believe anything you tell them, any systems that attempt to make meaningful decisions on your behalf, will run into the same roadblock. How good is a travel agent, or a digital assistant, or even a research tool, if it [00:18:00] can't distinguish truth from fiction? So, essentially, what you're suggesting is that the state of the art now that allows agents is still, it's still that sort of 90 percent problem, the edge problem, getting to the Or, or, or is there a deeper flaw?[00:18:14] Brian: What are you, what are you saying there?[00:18:16] Simon: So this is the fundamental challenge here and honestly my frustration with agents is mainly around definitions Like any if you ask anyone who says they're working on agents to define agents You will get a subtly different definition from each person But everyone always assumes that their definition is the one true one that everyone else understands So I feel like a lot of these agent conversations, people talking past each other because one person's talking about the, the sort of travel agent idea of something that books things on your behalf.[00:18:41] Simon: Somebody else is talking about LLMs with tools running in a loop with a cron job somewhere and all of these different things. You, you ask academics and they'll laugh at you because they've been debating what agents mean for over 30 years at this point. It's like this, this long running, almost sort of an in joke in that community.[00:18:57] Simon: But if we assume that for this purpose of this conversation, an [00:19:00] agent is something that, Which you can give a job and it goes off and it does that thing for you like, like booking travel or things like that. The fundamental challenge is, it's the reliability thing, which comes from this gullibility problem.[00:19:12] Simon: And a lot of my, my interest in this originally came from when I was thinking about prompt injections as a source of this form of attack against LLM systems where you deliberately lay traps out there for this LLM to stumble across,[00:19:24] Brian: and which I should say you have been banging this drum that no one's gotten any far, at least on solving this, that I'm aware of, right.[00:19:31] Brian: Like that's still an open problem. The two years.[00:19:33] Simon: Yeah. Right. We've been talking about this problem and like, a great illustration of this was Claude so Anthropic released Claude computer use a few months ago. Fantastic demo. You could fire up a Docker container and you could literally tell it to do something and watch it open a web browser and navigate to a webpage and click around and so forth.[00:19:51] Simon: Really, really, really interesting and fun to play with. And then, um. One of the first demos somebody tried was, what if you give it a web page that says download and run this [00:20:00] executable, and it did, and the executable was malware that added it to a botnet. So the, the very first most obvious dumb trick that you could play on this thing just worked, right?[00:20:10] Simon: So that's obviously a really big problem. If I'm going to send something out to book travel on my behalf, I mean, it's hard enough for me to figure out which airlines are trying to scam me and which ones aren't. Do I really trust a language model that believes the literal truth of anything that's presented to it to go out and do those things?[00:20:29] swyx (2): Yeah I definitely think there's, it's interesting to see Anthropic doing this because they used to be the safety arm of OpenAI that split out and said, you know, we're worried about letting this thing out in the wild and here they are enabling computer use for agents. Thanks. The, it feels like things have merged.[00:20:49] swyx (2): You know, I'm, I'm also fairly skeptical about, you know, this always being the, the year of Linux on the desktop. And this is the equivalent of this being the year of agents that people [00:21:00] are not predicting so much as wishfully thinking and hoping and praying for their companies and agents to work.[00:21:05] swyx (2): But I, I feel like things are. Coming along a little bit. It's to me, it's kind of like self driving. I remember in 2014 saying that self driving was just around the corner. And I mean, it kind of is, you know, like in, in, in the Bay area. You[00:21:17] Simon: get in a Waymo and you're like, Oh, this works. Yeah, but it's a slow[00:21:21] swyx (2): cook.[00:21:21] swyx (2): It's a slow cook over the next 10 years. We're going to hammer out these things and the cynical people can just point to all the flaws, but like, there are measurable or concrete progress steps that are being made by these builders.[00:21:33] Simon: There is one form of agent that I believe in. I believe, mostly believe in the research assistant form of agents.[00:21:39] Simon: The thing where you've got a difficult problem and, and I've got like, I'm, I'm on the beta for the, the Google Gemini 1. 5 pro with deep research. I think it's called like these names, these names. Right. But. I've been using that. It's good, right? You can give it a difficult problem and it tells you, okay, I'm going to look at 56 different websites [00:22:00] and it goes away and it dumps everything to its context and it comes up with a report for you.[00:22:04] Simon: And it's not, it won't work against adversarial websites, right? If there are websites with deliberate lies in them, it might well get caught out. Most things don't have that as a problem. And so I've had some answers from that which were genuinely really valuable to me. And that feels to me like, I can see how given existing LLM tech, especially with Google Gemini with its like million token contacts and Google with their crawl of the entire web and their, they've got like search, they've got search and cache, they've got a cache of every page and so forth.[00:22:35] Simon: That makes sense to me. And that what they've got right now, I don't think it's, it's not as good as it can be, obviously, but it's, it's, it's, it's a real useful thing, which they're going to start rolling out. So, you know, Perplexity have been building the same thing for a couple of years. That, that I believe in.[00:22:50] Simon: You know, if you tell me that you're going to have an agent that's a research assistant agent, great. The coding agents I mean, chat gpt code interpreter, Nearly two years [00:23:00] ago, that thing started writing Python code, executing the code, getting errors, rewriting it to fix the errors. That pattern obviously works.[00:23:07] Simon: That works really, really well. So, yeah, coding agents that do that sort of error message loop thing, those are proven to work. And they're going to keep on getting better, and that's going to be great. The research assistant agents are just beginning to get there. The things I'm critical of are the ones where you trust, you trust this thing to go out and act autonomously on your behalf, and make decisions on your behalf, especially involving spending money, like that.[00:23:31] Simon: I don't see that working for a very long time. That feels to me like an AGI level problem.[00:23:37] swyx (2): It's it's funny because I think Stripe actually released an agent toolkit which is one of the, the things I featured that is trying to enable these agents each to have a wallet that they can go and spend and have, basically, it's a virtual card.[00:23:49] swyx (2): It's not that, not that difficult with modern infrastructure. can[00:23:51] Simon: stick a 50 cap on it, then at least it's an honor. Can't lose more than 50.[00:23:56] Brian: You know I don't, I don't know if either of you know Rafat Ali [00:24:00] he runs Skift, which is a, a travel news vertical. And he, he, he constantly laughs at the fact that every agent thing is, we're gonna get rid of booking a, a plane flight for you, you know?[00:24:11] Brian: And, and I would point out that, like, historically, when the web started, the first thing everyone talked about is, You can go online and book a trip, right? So it's funny for each generation of like technological advance. The thing they always want to kill is the travel agent. And now they want to kill the webpage travel agent.[00:24:29] Simon: Like it's like I use Google flight search. It's great, right? If you gave me an agent to do that for me, it would save me, I mean, maybe 15 seconds of typing in my things, but I still want to see what my options are and go, yeah, I'm not flying on that airline, no matter how cheap they are.[00:24:44] swyx (2): Yeah. For listeners, go ahead.[00:24:47] swyx (2): For listeners, I think, you know, I think both of you are pretty positive on NotebookLM. And you know, we, we actually interviewed the NotebookLM creators, and there are actually two internal agents going on internally. The reason it takes so long is because they're running an agent loop [00:25:00] inside that is fairly autonomous, which is kind of interesting.[00:25:01] swyx (2): For one,[00:25:02] Simon: for a definition of agent loop, if you picked that particularly well. For one definition. And you're talking about the podcast side of this, right?[00:25:07] swyx (2): Yeah, the podcast side of things. They have a there's, there's going to be a new version coming out that, that we'll be featuring at our, at our conference.[00:25:14] Simon: That one's fascinating to me. Like NotebookLM, I think it's two products, right? On the one hand, it's actually a very good rag product, right? You dump a bunch of things in, you can run searches, that, that, it does a good job of. And then, and then they added the, the podcast thing. It's a bit of a, it's a total gimmick, right?[00:25:30] Simon: But that gimmick got them attention, because they had a great product that nobody paid any attention to at all. And then you add the unfeasibly good voice synthesis of the podcast. Like, it's just, it's, it's, it's the lesson.[00:25:43] Brian: It's the lesson of mid journey and stuff like that. If you can create something that people can post on socials, you don't have to lift a finger again to do any marketing for what you're doing.[00:25:53] Brian: Let me dig into Notebook LLM just for a second as a podcaster. As a [00:26:00] gimmick, it makes sense, and then obviously, you know, you dig into it, it sort of has problems around the edges. It's like, it does the thing that all sort of LLMs kind of do, where it's like, oh, we want to Wrap up with a conclusion.[00:26:12] Multimodal AI and Future Prospects[00:26:12] Brian: I always call that like the the eighth grade book report paper problem where it has to have an intro and then, you know But that's sort of a thing where because I think you spoke about this again in your piece at the year end About how things are going multimodal and how things are that you didn't expect like, you know vision and especially audio I think So that's another thing where, at least over the last year, there's been progress made that maybe you, you didn't think was coming as quick as it came.[00:26:43] Simon: I don't know. I mean, a year ago, we had one really good vision model. We had GPT 4 vision, was, was, was very impressive. And Google Gemini had just dropped Gemini 1. 0, which had vision, but nobody had really played with it yet. Like Google hadn't. People weren't taking Gemini [00:27:00] seriously at that point. I feel like it was 1.[00:27:02] Simon: 5 Pro when it became apparent that actually they were, they, they got over their hump and they were building really good models. And yeah, and they, to be honest, the video models are mostly still using the same trick. The thing where you divide the video up into one image per second and you dump that all into the context.[00:27:16] Simon: So maybe it shouldn't have been so surprising to us that long context models plus vision meant that the video was, was starting to be solved. Of course, it didn't. Not being, you, what you really want with videos, you want to be able to do the audio and the images at the same time. And I think the models are beginning to do that now.[00:27:33] Simon: Like, originally, Gemini 1. 5 Pro originally ignored the audio. It just did the, the, like, one frame per second video trick. As far as I can tell, the most recent ones are actually doing pure multimodal. But the things that opens up are just extraordinary. Like, the the ChatGPT iPhone app feature that they shipped as one of their 12 days of, of OpenAI, I really can be having a conversation and just turn on my video camera and go, Hey, what kind of tree is [00:28:00] this?[00:28:00] Simon: And so forth. And it works. And for all I know, that's just snapping a like picture once a second and feeding it into the model. The, the, the things that you can do with that as an end user are extraordinary. Like that, that to me, I don't think most people have cottoned onto the fact that you can now stream video directly into a model because it, it's only a few weeks old.[00:28:22] Simon: Wow. That's a, that's a, that's a, that's Big boost in terms of what kinds of things you can do with this stuff. Yeah. For[00:28:30] swyx (2): people who are not that close I think Gemini Flashes free tier allows you to do something like capture a photo, one photo every second or a minute and leave it on 24, seven, and you can prompt it to do whatever.[00:28:45] swyx (2): And so you can effectively have your own camera app or monitoring app that that you just prompt and it detects where it changes. It detects for, you know, alerts or anything like that, or describes your day. You know, and, and, and the fact that this is free I think [00:29:00] it's also leads into the previous point of it being the prices haven't come down a lot.[00:29:05] Simon: And even if you're paying for this stuff, like a thing that I put in my blog entry is I ran a calculation on what it would cost to process 68, 000 photographs in my photo collection, and for each one just generate a caption, and using Gemini 1. 5 Flash 8B, it would cost me 1. 68 to process 68, 000 images, which is, I mean, that, that doesn't make sense.[00:29:28] Simon: None of that makes sense. Like it's, it's a, for one four hundredth of a cent per image to generate captions now. So you can see why feeding in a day's worth of video just isn't even very expensive to process.[00:29:40] swyx (2): Yeah, I'll tell you what is expensive. It's the other direction. So we're here, we're talking about consuming video.[00:29:46] swyx (2): And this year, we also had a lot of progress, like probably one of the most excited, excited, anticipated launches of the year was Sora. We actually got Sora. And less exciting.[00:29:55] Simon: We did, and then VO2, Google's Sora, came out like three [00:30:00] days later and upstaged it. Like, Sora was exciting until VO2 landed, which was just better.[00:30:05] swyx (2): In general, I feel the media, or the social media, has been very unfair to Sora. Because what was released to the world, generally available, was Sora Lite. It's the distilled version of Sora, right? So you're, I did not[00:30:16] Simon: realize that you're absolutely comparing[00:30:18] swyx (2): the, the most cherry picked version of VO two, the one that they published on the marketing page to the, the most embarrassing version of the soa.[00:30:25] swyx (2): So of course it's gonna look bad, so, well, I got[00:30:27] Simon: access to the VO two I'm in the VO two beta and I've been poking around with it and. Getting it to generate pelicans on bicycles and stuff. I would absolutely[00:30:34] swyx (2): believe that[00:30:35] Simon: VL2 is actually better. Is Sora, so is full fat Sora coming soon? Do you know, when, when do we get to play with that one?[00:30:42] Simon: No one's[00:30:43] swyx (2): mentioned anything. I think basically the strategy is let people play around with Sora Lite and get info there. But the, the, keep developing Sora with the Hollywood studios. That's what they actually care about. Gotcha. Like the rest of us. Don't really know what to do with the video anyway. Right.[00:30:59] Simon: I mean, [00:31:00] that's my thing is I realized that for generative images and images and video like images We've had for a few years and I don't feel like they've broken out into the talented artist community yet Like lots of people are having fun with them and doing and producing stuff. That's kind of cool to look at but what I want you know that that movie everything everywhere all at once, right?[00:31:20] Simon: One, one ton of Oscars, utterly amazing film. The VFX team for that were five people, some of whom were watching YouTube videos to figure out what to do. My big question for, for Sora and and and Midjourney and stuff, what happens when a creative team like that starts using these tools? I want the creative geniuses behind everything, everywhere all at once.[00:31:40] Simon: What are they going to be able to do with this stuff in like a few years time? Because that's really exciting to me. That's where you take artists who are at the very peak of their game. Give them these new capabilities and see, see what they can do with them.[00:31:52] swyx (2): I should, I know a little bit here. So it should mention that, that team actually used RunwayML.[00:31:57] swyx (2): So there was, there was,[00:31:57] Simon: yeah.[00:31:59] swyx (2): I don't know how [00:32:00] much I don't. So, you know, it's possible to overstate this, but there are people integrating it. Generated video within their workflow, even pre SORA. Right, because[00:32:09] Brian: it's not, it's not the thing where it's like, okay, tomorrow we'll be able to do a full two hour movie that you prompt with three sentences.[00:32:15] Brian: It is like, for the very first part of, of, you know video effects in film, it's like, if you can get that three second clip, if you can get that 20 second thing that they did in the matrix that blew everyone's minds and took a million dollars or whatever to do, like, it's the, it's the little bits and pieces that they can fill in now that it's probably already there.[00:32:34] swyx (2): Yeah, it's like, I think actually having a layered view of what assets people need and letting AI fill in the low value assets. Right, like the background video, the background music and, you know, sometimes the sound effects. That, that maybe, maybe more palatable maybe also changes the, the way that you evaluate the stuff that's coming out.[00:32:57] swyx (2): Because people tend to, in social media, try to [00:33:00] emphasize foreground stuff, main character stuff. So you really care about consistency, and you, you really are bothered when, like, for example, Sorad. Botch's image generation of a gymnast doing flips, which is horrible. It's horrible. But for background crowds, like, who cares?[00:33:18] Brian: And by the way, again, I was, I was a film major way, way back in the day, like, that's how it started. Like things like Braveheart, where they filmed 10 people on a field, and then the computer could turn it into 1000 people on a field. Like, that's always been the way it's around the margins and in the background that first comes in.[00:33:36] Brian: The[00:33:36] Simon: Lord of the Rings movies were over 20 years ago. Although they have those giant battle sequences, which were very early, like, I mean, you could almost call it a generative AI approach, right? They were using very sophisticated, like, algorithms to model out those different battles and all of that kind of stuff.[00:33:52] Simon: Yeah, I know very little. I know basically nothing about film production, so I try not to commentate on it. But I am fascinated to [00:34:00] see what happens when, when these tools start being used by the real, the people at the top of their game.[00:34:05] swyx (2): I would say like there's a cultural war that is more that being fought here than a technology war.[00:34:11] swyx (2): Most of the Hollywood people are against any form of AI anyway, so they're busy Fighting that battle instead of thinking about how to adopt it and it's, it's very fringe. I participated here in San Francisco, one generative AI video creative hackathon where the AI positive artists actually met with technologists like myself and then we collaborated together to build short films and that was really nice and I think, you know, I'll be hosting some of those in my events going forward.[00:34:38] swyx (2): One thing that I think like I want to leave it. Give people a sense of it's like this is a recap of last year But then sometimes it's useful to walk away as well with like what can we expect in the future? I don't know if you got anything. I would also call out that the Chinese models here have made a lot of progress Hyde Law and Kling and God knows who like who else in the video arena [00:35:00] Also making a lot of progress like surprising him like I think maybe actually Chinese China is surprisingly ahead with regards to Open8 at least, but also just like specific forms of video generation.[00:35:12] Simon: Wouldn't it be interesting if a film industry sprung up in a country that we don't normally think of having a really strong film industry that was using these tools? Like, that would be a fascinating sort of angle on this. Mm hmm. Mm hmm.[00:35:25] swyx (2): Agreed. I, I, I Oh, sorry. Go ahead.[00:35:29] Exploring Video Avatar Companies[00:35:29] swyx (2): Just for people's Just to put it on people's radar as well, Hey Jen, there's like there's a category of video avatar companies that don't specifically, don't specialize in general video.[00:35:41] swyx (2): They only do talking heads, let's just say. And HeyGen sings very well.[00:35:45] Brian: Swyx, you know that that's what I've been using, right? Like, have, have I, yeah, right. So, if you see some of my recent YouTube videos and things like that, where, because the beauty part of the HeyGen thing is, I, I, I don't want to use the robot voice, so [00:36:00] I record the mp3 file for my computer, And then I put that into HeyGen with the avatar that I've trained it on, and all it does is the lip sync.[00:36:09] Brian: So it looks, it's not 100 percent uncanny valley beatable, but it's good enough that if you weren't looking for it, it's just me sitting there doing one of my clips from the show. And, yeah, so, by the way, HeyGen. Shout out to them.[00:36:24] AI Influencers and Their Future[00:36:24] swyx (2): So I would, you know, in terms of like the look ahead going, like, looking, reviewing 2024, looking at trends for 2025, I would, they basically call this out.[00:36:33] swyx (2): Meta tried to introduce AI influencers and failed horribly because they were just bad at it. But at some point that there will be more and more basically AI influencers Not in a way that Simon is but in a way that they are not human.[00:36:50] Simon: Like the few of those that have done well, I always feel like they're doing well because it's a gimmick, right?[00:36:54] Simon: It's a it's it's novel and fun to like Like that, the AI Seinfeld thing [00:37:00] from last year, the Twitch stream, you know, like those, if you're the only one or one of just a few doing that, you'll get, you'll attract an audience because it's an interesting new thing. But I just, I don't know if that's going to be sustainable longer term or not.[00:37:11] Simon: Like,[00:37:12] Simplifying Content Creation with AI[00:37:12] Brian: I'm going to tell you, Because I've had discussions, I can't name the companies or whatever, but, so think about the workflow for this, like, now we all know that on TikTok and Instagram, like, holding up a phone to your face, and doing like, in my car video, or walking, a walk and talk, you know, that's, that's very common, but also, if you want to do a professional sort of talking head video, you still have to sit in front of a camera, you still have to do the lighting, you still have to do the video editing, versus, if you can just record, what I'm saying right now, the last 30 seconds, If you clip that out as an mp3 and you have a good enough avatar, then you can put that avatar in front of Times Square, on a beach, or whatever.[00:37:50] Brian: So, like, again for creators, the reason I think Simon, we're on the verge of something, it, it just, it's not going to, I think it's not, oh, we're going to have [00:38:00] AI avatars take over, it'll be one of those things where it takes another piece of the workflow out and simplifies it. I'm all[00:38:07] Simon: for that. I, I always love this stuff.[00:38:08] Simon: I like tools. Tools that help human beings do more. Do more ambitious things. I'm always in favor of, like, that, that, that's what excites me about this entire field.[00:38:17] swyx (2): Yeah. We're, we're looking into basically creating one for my podcast. We have this guy Charlie, he's Australian. He's, he's not real, but he pre, he opens every show and we are gonna have him present all the shorts.[00:38:29] Simon: Yeah, go ahead.[00:38:30] The Importance of Credibility in AI[00:38:30] Simon: The thing that I keep coming back to is this idea of credibility like in a world that is full of like AI generated everything and so forth It becomes even more important that people find the sources of information that they trust and find people and find Sources that are credible and I feel like that's the one thing that LLMs and AI can never have is credibility, right?[00:38:49] Simon: ChatGPT can never stake its reputation on telling you something useful and interesting because That means nothing, right? It's a matrix multiplication. It depends on who prompted it and so forth. So [00:39:00] I'm always, and this is when I'm blogging as well, I'm always looking for, okay, who are the reliable people who will tell me useful, interesting information who aren't just going to tell me whatever somebody's paying them to tell, tell them, who aren't going to, like, type a one sentence prompt into an LLM and spit out an essay and stick it online.[00:39:16] Simon: And that, that to me, Like, earning that credibility is really important. That's why a lot of my ethics around the way that I publish are based on the idea that I want people to trust me. I want to do things that, that gain credibility in people's eyes so they will come to me for information as a trustworthy source.[00:39:32] Simon: And it's the same for the sources that I'm, I'm consulting as well. So that's something I've, I've been thinking a lot about that sort of credibility focus on this thing for a while now.[00:39:40] swyx (2): Yeah, you can layer or structure credibility or decompose it like so one thing I would put in front of you I'm not saying that you should Agree with this or accept this at all is that you can use AI to generate different Variations and then and you pick you as the final sort of last mile person that you pick The last output and [00:40:00] you put your stamp of credibility behind that like that everything's human reviewed instead of human origin[00:40:04] Simon: Yeah, if you publish something you need to be able to put it on the ground Publishing it.[00:40:08] Simon: You need to say, I will put my name to this. I will attach my credibility to this thing. And if you're willing to do that, then, then that's great.[00:40:16] swyx (2): For creators, this is huge because there's a fundamental asymmetry between starting with a blank slate versus choosing from five different variations.[00:40:23] Brian: Right.[00:40:24] Brian: And also the key thing that you just said is like, if everything that I do, if all of the words were generated by an LLM, if the voice is generated by an LLM. If the video is also generated by the LLM, then I haven't done anything, right? But if, if one or two of those, you take a shortcut, but it's still, I'm willing to sign off on it.[00:40:47] Brian: Like, I feel like that's where I feel like people are coming around to like, this is maybe acceptable, sort of.[00:40:53] Simon: This is where I've been pushing the definition. I love the term slop. Where I've been pushing the definition of slop as AI generated [00:41:00] content that is both unrequested and unreviewed and the unreviewed thing is really important like that's the thing that elevates something from slop to not slop is if A human being has reviewed it and said, you know what, this is actually worth other people's time.[00:41:12] Simon: And again, I'm willing to attach my credibility to it and say, hey, this is worthwhile.[00:41:16] Brian: It's, it's, it's the cura curational, curatorial and editorial part of it that no matter what the tools are to do shortcuts, to do, as, as Swyx is saying choose between different edits or different cuts, but in the end, if there's a curatorial mind, Or editorial mind behind it.[00:41:32] Brian: Let me I want to wedge this in before we start to close.[00:41:36] The Future of LLM User Interfaces[00:41:36] Brian: One of the things coming back to your year end piece that has been a something that I've been banging the drum about is when you're talking about LLMs. Getting harder to use. You said most users are thrown in at the deep end.[00:41:48] Brian: The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out. I mean, it's, it's literally going back to the command line. The command line was defeated [00:42:00] by the GUI interface. And this is what I've been banging the drum about is like, this cannot be.[00:42:05] Brian: The user interface, what we have now cannot be the end result. Do you see any hints or seeds of a GUI moment for LLM interfaces?[00:42:17] Simon: I mean, it has to happen. It absolutely has to happen. The the, the, the, the usability of these things is turning into a bit of a crisis. And we are at least seeing some really interesting innovation in little directions.[00:42:28] Simon: Just like OpenAI's chat GPT canvas thing that they just launched. That is at least. Going a little bit more interesting than just chat, chats and responses. You know, you can, they're exploring that space where you're collaborating with an LLM. You're both working in the, on the same document. That makes a lot of sense to me.[00:42:44] Simon: Like that, that feels really smart. The one of the best things is still who was it who did the, the UI where you could, they had a drawing UI where you draw an interface and click a button. TL draw would then make it real thing. That was spectacular, [00:43:00] absolutely spectacular, like, alternative vision of how you'd interact with these models.[00:43:05] Simon: Because yeah, the and that's, you know, so I feel like there is so much scope for innovation there and it is beginning to happen. Like, like, I, I feel like most people do understand that we need to do better in terms of interfaces that both help explain what's going on and give people better tools for working with models.[00:43:23] Simon: I was going to say, I want to[00:43:25] Brian: dig a little deeper into this because think of the conceptual idea behind the GUI, which is instead of typing into a command line open word. exe, it's, you, you click an icon, right? So that's abstracting away sort of the, again, the programming stuff that like, you know, it's, it's a, a, a child can tap on an iPad and, and make a program open, right?[00:43:47] Brian: The problem it seems to me right now with how we're interacting with LLMs is it's sort of like you know a dumb robot where it's like you poke it and it goes over here, but no, I want it, I want to go over here so you poke it this way and you can't get it exactly [00:44:00] right, like, what can we abstract away from the From the current, what's going on that, that makes it more fine tuned and easier to get more precise.[00:44:12] Brian: You see what I'm saying?[00:44:13] Simon: Yes. And the this is the other trend that I've been following from the last year, which I think is super interesting. It's the, the prompt driven UI development thing. Basically, this is the pattern where Claude Artifacts was the first thing to do this really well. You type in a prompt and it goes, Oh, I should answer that by writing a custom HTML and JavaScript application for you that does a certain thing.[00:44:35] Simon: And when you think about that take and since then it turns out This is easy, right? Every decent LLM can produce HTML and JavaScript that does something useful. So we've actually got this alternative way of interacting where they can respond to your prompt with an interactive custom interface that you can work with.[00:44:54] Simon: People haven't quite wired those back up again. Like, ideally, I'd want the LLM ask me a [00:45:00] question where it builds me a custom little UI, For that question, and then it gets to see how I interacted with that. I don't know why, but that's like just such a small step from where we are right now. But that feels like such an obvious next step.[00:45:12] Simon: Like an LLM, why should it, why should you just be communicating with, with text when it can build interfaces on the fly that let you select a point on a map or or move like sliders up and down. It's gonna create knobs and dials. I keep saying knobs and dials. right. We can do that. And the LLMs can build, and Claude artifacts will build you a knobs and dials interface.[00:45:34] Simon: But at the moment they haven't closed the loop. When you twiddle those knobs, Claude doesn't see what you were doing. They're going to close that loop. I'm, I'm shocked that they haven't done it yet. So yeah, I think there's so much scope for innovation and there's so much scope for doing interesting stuff with that model where the LLM, anything you can represent in SVG, which is almost everything, can now be part of that ongoing conversation.[00:45:59] swyx (2): Yeah, [00:46:00] I would say the best executed version of this I've seen so far is Bolt where you can literally type in, make a Spotify clone, make an Airbnb clone, and it actually just does that for you zero shot with a nice design.[00:46:14] Simon: There's a benchmark for that now. The LMRena people now have a benchmark that is zero shot app, app generation, because all of the models can do it.[00:46:22] Simon: Like it's, it's, I've started figuring out. I'm building my own version of this for my own project, because I think within six months. I think it'll just be an expected feature. Like if you have a web application, why don't you have a thing where, oh, look, the, you can add a custom, like, so for my dataset data exploration project, I want you to be able to do things like conjure up a dashboard, just via a prompt.[00:46:43] Simon: You say, oh, I need a pie chart and a bar chart and put them next to each other, and then have a form where submitting the form inserts a row into my database table. And this is all suddenly feasible. It's, it's, it's not even particularly difficult to do, which is great. Utterly bizarre that these things are now easy.[00:47:00][00:47:00] swyx (2): I think for a general audience, that is what I would highlight, that software creation is becoming easier and easier. Gemini is now available in Gmail and Google Sheets. I don't write my own Google Sheets formulas anymore, I just tell Gemini to do it. And so I think those are, I almost wanted to basically somewhat disagree with, with your assertion that LMS got harder to use.[00:47:22] swyx (2): Like, yes, we, we expose more capabilities, but they're, they're in minor forms, like using canvas, like web search in, in in chat GPT and like Gemini being in, in Excel sheets or in Google sheets, like, yeah, we're getting, no,[00:47:37] Simon: no, no, no. Those are the things that make it harder, because the problem is that for each of those features, they're amazing.[00:47:43] Simon: If you understand the edges of the feature, if you're like, okay, so in Google, Gemini, Excel formulas, I can get it to do a certain amount of things, but I can't get it to go and read a web. You probably can't get it to read a webpage, right? But you know, there are, there are things that it can do and things that it can't do, which are completely undocumented.[00:47:58] Simon: If you ask it what it [00:48:00] can and can't do, they're terrible at answering questions about that. So like my favorite example is Claude artifacts. You can't build a Claude artifact that can hit an API somewhere else. Because the cause headers on that iframe prevents accessing anything outside of CDNJS. So, good luck learning cause headers as an end user in order to understand why Like, I've seen people saying, oh, this is rubbish.[00:48:26] Simon: I tried building an artifact that would run a prompt and it couldn't because Claude didn't expose an API with cause headers that all of this stuff is so weird and complicated. And yeah, like that, that, the more that with the more tools we add, the more expertise you need to really, To understand the full scope of what you can do.[00:48:44] Simon: And so it's, it's, I wouldn't say it's, it's, it's, it's like, the question really comes down to what does it take to understand the full extent of what's possible? And honestly, that, that's just getting more and more involved over time.[00:48:58] Local LLMs: A Growing Interest[00:48:58] swyx (2): I have one more topic that I, I [00:49:00] think you, you're kind of a champion of and we've touched on it a little bit, which is local LLMs.[00:49:05] swyx (2): And running AI applications on your desktop, I feel like you are an early adopter of many, many things.[00:49:12] Simon: I had an interesting experience with that over the past year. Six months ago, I almost completely lost interest. And the reason is that six months ago, the best local models you could run, There was no point in using them at all, because the best hosted models were so much better.[00:49:26] Simon: Like, there was no point at which I'd choose to run a model on my laptop if I had API access to Cloud 3. 5 SONNET. They just, they weren't even comparable. And that changed, basically, in the past three months, as the local models had this step changing capability, where now I can run some of these local models, and they're not as good as Cloud 3.[00:49:45] Simon: 5 SONNET, but they're not so far away that It's not worth me even using them. The other, the, the, the, the continuing problem is I've only got 64 gigabytes of RAM, and if you run, like, LLAMA370B, it's not going to work. Most of my RAM is gone. So now I have to shut down my Firefox tabs [00:50:00] and, and my Chrome and my VS Code windows in order to run it.[00:50:03] Simon: But it's got me interested again. Like, like the, the efficiency improvements are such that now, if you were to like stick me on a desert island with my laptop, I'd be very productive using those local models. And that's, that's pretty exciting. And if those trends continue, and also, like, I think my next laptop, if when I buy one is going to have twice the amount of RAM, At which point, maybe I can run the, almost the top tier, like open weights models and still be able to use it as a computer as well.[00:50:32] Simon: NVIDIA just announced their 3, 000 128 gigabyte monstrosity. That's pretty good price. You know, that's that's, if you're going to buy it,[00:50:42] swyx (2): custom OS and all.[00:50:46] Simon: If I get a job, if I, if, if, if I have enough of an income that I can justify blowing $3,000 on it, then yes.[00:50:52] swyx (2): Okay, let's do a GoFundMe to get Simon one it.[00:50:54] swyx (2): Come on. You know, you can get a job anytime you want. Is this, this is just purely discretionary .[00:50:59] Simon: I want, [00:51:00] I want a job that pays me to do exactly what I'm doing already and doesn't tell me what else to do. That's, thats the challenge.[00:51:06] swyx (2): I think Ethan Molik does pretty well. Whatever, whatever it is he's doing.[00:51:11] swyx (2): But yeah, basically I was trying to bring in also, you know, not just local models, but Apple intelligence is on every Mac machine. You're, you're, you seem skeptical. It's rubbish.[00:51:21] Simon: Apple intelligence is so bad. It's like, it does one thing well.[00:51:25] swyx (2): Oh yeah, what's that? It summarizes notifications. And sometimes it's humorous.[00:51:29] Brian: Are you sure it does that well? And also, by the way, the other, again, from a sort of a normie point of view. There's no indication from Apple of when to use it. Like, everybody upgrades their thing and it's like, okay, now you have Apple Intelligence, and you never know when to use it ever again.[00:51:47] swyx (2): Oh, yeah, you consult the Apple docs, which is MKBHD.[00:51:49] swyx (2): The[00:51:51] Simon: one thing, the one thing I'll say about Apple Intelligence is, One of the reasons it's so disappointing is that the models are just weak, but now, like, Llama 3b [00:52:00] is Such a good model in a 2 gigabyte file I think give Apple six months and hopefully they'll catch up to the state of the art on the small models And then maybe it'll start being a lot more interesting.[00:52:10] swyx (2): Yeah. Anyway, I like This was year one And and you know just like our first year of iPhone maybe maybe not that much of a hit and then year three They had the App Store so Hey I would say give it some time, and you know, I think Chrome also shipping Gemini Nano I think this year in Chrome, which means that every app, every web app will have for free access to a local model that just ships in the browser, which is kind of interesting.[00:52:38] swyx (2): And then I, I think I also wanted to just open the floor for any, like, you know, any of us what are the apps that, you know, AI applications that we've adopted that have, that we really recommend because these are all, you know, apps that are running on our browser that like, or apps that are running locally that we should be, that, that other people should be trying.[00:52:55] swyx (2): Right? Like, I, I feel like that's, that's one always one thing that is helpful at the start of the [00:53:00] year.[00:53:00] Simon: Okay. So for running local models. My top picks, firstly, on the iPhone, there's this thing called MLC Chat, which works, and it's easy to install, and it runs Llama 3B, and it's so much fun. Like, it's not necessarily a capable enough novel that I use it for real things, but my party trick right now is I get my phone to write a Netflix Christmas movie plot outline where, like, a bunch of Jeweller falls in love with the King of Sweden or whatever.[00:53:25] Simon: And it does a good job and it comes up with pun names for the movies. And that's, that's deeply entertaining. On my laptop, most recently, I've been getting heavy into, into Olama because the Olama team are very, very good at finding the good models and patching them up and making them work well. It gives you an API.[00:53:42] Simon: My little LLM command line tool that has a plugin that talks to Olama, which works really well. So that's my, my Olama is. I think the easiest on ramp to to running models locally, if you want a nice user interface, LMStudio is, I think, the best user interface [00:54:00] thing at that. It's not open source. It's good.[00:54:02] Simon: It's worth playing with. The other one that I've been trying with recently, there's a thing called, what's it called? Open web UI or something. Yeah. The UI is fantastic. It, if you've got Olama running and you fire this thing up, it spots Olama and it gives you an interface onto your Olama models. And t
Volvimos a invitar a nuestra queridísima Karen Eskenazi, esta vez para explorar en muuucha profundidad su sabiduría sobre Wicked. Este ha sido hasta ahora el episodio más largo en la historia de WQ. No olviden seguirnos en TikTok y en Instagram @waitquepod, suscribirse en YouTube, y comprarnos un café. Si este episodio les dio ganas de empezar terapia, recuerden usar este link para una primera consultoría en OpcionYo con 25% de descuento. Ingresen en https://opciónyonueva.trb.ai/wa/18zyR0x para más información. Links: Youtube: https://www.youtube.com/@waitquepod Instagram: https://www.instagram.com/waitquepod TikTok: https://www.tiktok.com/@waitquepod Buy me a coffee: https://www.buymeacoffee.com/waitque OpcionYo: https://opciónyonueva.trb.ai/wa/18zyR0x --- Support this podcast: https://podcasters.spotify.com/pod/show/waitque/support
Donald Macleod showcases the life and music of Christoph Willibald Gluck Christoph Willibald Gluck (1714-1787) arguably did more to transform opera than any composer of his generation: thinking deeply about how text and music should work together, and trying to strip away fripperies to ensure it was urgent, powerful and arresting. His radical approaches made him one of the most influential composers in history - and yet today, he's known in the concert hall almost exclusively for one work: his masterpiece “Orpheus and Eurydice”. This week, Donald Macleod puts that right: showcasing Gluck's dazzling and enchanting music from across his life - whilst also showing off his most famous work.Music Featured: Dance of the Blessed Spirits (Orfeo ed Eurydice) Non hai cor per un'impresa (Ipermestra, Wq 7) Sperai vicino il lido (Demofoonte, Wq 3) Se in campo armato (La Sofonisba, Wq 5) Nobil onda (La Sofonisba, Wq 5) Orfeo ed Euridice (excerpts) M'opprime, m'affanna (La Sofonisba, Wq 5) Qual ira intempestiva … Oggi per me non sudi; Oggi per me sudi (La Contesa de'numi, Wq 14) Trio Sonata no I in C Major (1st mvt) Ciascun siegua il suo stile...Maggior follia non v'e (La Semiramide riconosciuta, Wq 13) Misera, dove son…; Ah! non son io (Ezio, Wq 15) Dance of the Furies (Orphee et Eurydice: Act 2, Scene 1) Tremo fra dubbi miei (La Clemenza di Tito, Wq 16) (Act 3) Son lungi e non mi brami (Le Cinesi, Wq 18) Berenice che fai (Antigono, Wq 21) Don Juan (selection) Divinités du Styx (Alceste, Wq 37) O Del Mio Dolce Ardor; Le Belle Immagini (Paride ed Elena, Wq 39) Vous essayez en vain - Par la crainte; Adieu, conservez dans votre âme (Iphigénie en Aulide, Wq 40) Gluck (arr Schubert) Rien de la nature (Echo et Narcisse) Armide (Act 5 opening) Iphigenie en Tauride, Wq 46 (excerpts) De Profundis Orphee et Eurydice (1774 Paris edition): Act 3 (finale)Presented by Donald Macleod Produced by Steven Rajam for BBC Audio Wales & WestFor full track listings, including artist and recording details, and to listen to the pieces featured in full (for 30 days after broadcast) head to the series page for Christoph Willibald Gluck (1714-1787) https://www.bbc.co.uk/programmes/m0022znr And you can delve into the A-Z of all the composers we've featured on Composer of the Week here: http://www.bbc.co.uk/programmes/articles/3cjHdZlXwL7W41XGB77X3S0/composers-a-to-z
A night of retro TV ads has Andrew and Vieves wondering why certain product categories seem to have disappeared from modern commercials, even though they're still sold in stores and online. Or are they? Whither the buggy whip ads? Here are links to the ads we talked about on this week's show: Super Glue (1977) https://www.youtube.com/watch?v=X3eqqShGvIw Gorilla Glue (2023) https://www.ispot.tv/ad/17oO/gorilla-glue-fence Isotoner gloves commercial (1984) https://www.youtube.com/watch?v=W3mUnA7O2OQ Isotoner Gloves (ft. Dan Marino) https://www.youtube.com/watch?v=bnsi472UqOM Citizen Watches (1980s) https://youtu.be/iE1J9zEu_IQ?si=d_5ZwNqcqIX12k7_ Seico (1986) https://www.youtube.com/watch?v=RSishtZKTQo Casio (1987) https://youtu.be/V2aERsHoKxQ?si=lznUaxZhYRpyuDJx Apple Watch - Sleep Goals (2023) https://www.ispot.tv/ad/5h4t/apple-watch-sleep-goals-song-by-audrey-nuna Samsung Galaxy Watch (2022) https://www.ispot.tv/ad/bfWZ/samsung-mobile-night-owls Citizen Watch - Beach Time (2023) https://www.ispot.tv/ad/1_Wq/citizen-watch-beach-time-song-by-miley-virus Citizen Watch - Holiday Gift / Powered by Light https://www.ispot.tv/ad/1M6f/citizen-watch-holidays-a-gift-to-the-world L'eggs Sheer Energy (1989) https://www.youtube.com/watch?v=gdniscIjx7E L'eggs Sheer Elegance (1980) https://www.youtube.com/watch?v=Axk_p5jAANE Ultrasense Panty Hose (1980s) https://www.youtube.com/watch?v=Pf05Swz8jvY Run Free Pantyhose https://www.ispot.tv/ad/wvSB/run-free-pantyhose-microknit-design-prevents-runs Sheertex TV Spot, 'Lab Testing' https://www.ispot.tv/ad/254B/sheertex-lab-testing Folgers (19080s) https://www.youtube.com/watch?v=nrBZasAtBX4 General Foods International Coffees (1980) https://youtu.be/0gmgp8pVviA?si=B-N9Inphr-Sa4lz_ General Foods International Coffees (1982) https://youtu.be/2PN8k3W4a_g?si=kcM_pKpl2X3UC1lV Folgers (July 2024) https://www.ispot.tv/ad/fLGC/folgers-reintroducing-folgers
This Day in Legal History: Act of Toleration EnactedOn May 24, 1689, the Parliament of England enacted the Act of Toleration, a pivotal law that granted religious freedom to English Protestants. This legislation marked a significant shift in England's religious landscape, as it allowed non-Anglican Protestants, such as Baptists and Congregationalists, to practice their faith without fear of persecution. However, this tolerance came with limitations: it excluded Roman Catholics and non-Trinitarian Protestants, leaving them outside the protection of the Act.The Act of Toleration emerged in the context of the Glorious Revolution, which saw William of Orange and his wife Mary ascend to the English throne. Their reign, beginning in 1688, was characterized by a move towards greater religious and political stability. The Act was a response to the religious strife that had plagued England for decades, providing a framework for more inclusive, albeit limited, religious coexistence.Despite its exclusions, the Act of Toleration laid the groundwork for future expansions of religious freedom. It required dissenting Protestants to pledge allegiance to the Crown and reject the authority of the Pope, thus maintaining a degree of control over the newly tolerated groups. This compromise allowed for religious diversity while ensuring loyalty to the monarchy.The Act's passage was a milestone in the evolution of religious liberty in England, reflecting the changing attitudes towards religious pluralism. While it did not end all religious discrimination, it represented a step towards a more tolerant society. Over time, the principles enshrined in the Act influenced broader movements for religious freedom and civil rights, both in England and beyond.The significance of the Act of Toleration lies not only in its immediate effects but also in its lasting impact on the development of religious tolerance as a fundamental value in democratic societies.A Democratic operative, Steve Kramer, faces state criminal charges and a federal fine for using AI to fake President Joe Biden's voice in robocalls aimed at discouraging Democratic voters in the New Hampshire primary. Kramer, working for Biden's primary challenger Dean Phillips, was charged with 13 felony counts of voter suppression and 13 misdemeanors for impersonating a candidate. The FCC proposed a $6 million fine for the robocalls, which spoofed a local political consultant's number.New Hampshire Attorney General John M. Formella emphasized that these actions aim to deter election interference using AI. The incident has heightened concerns about AI's potential misuse in elections. FCC Chairwoman Jessica Rosenworcel proposed a rule requiring political advertisers to disclose AI use in ads, while the FCC also proposed a $2 million fine against Lingo Telecom for transmitting the calls.The AI-generated robocall, circulated just before the primary, used Biden's catchphrase and urged voters to stay home. Despite this, Democratic leaders encouraged a write-in campaign for Biden, leading to high voter turnout in his favor.Faked Biden Robocall Results in Charges for Democratic OperativeThe US Supreme Court has made it more challenging for Black and minority voters to contest the use of race in legislative redistricting, according to civil rights advocates. In a 6-3 ruling, the conservative majority determined that South Carolina voters failed to prove that race, rather than partisanship, influenced Republican legislators when drawing district lines. This decision raises the bar for proving racial gerrymandering and could impact redistricting cases nationwide, not just in South Carolina's 1st Congressional District.Leah Aden of the NAACP Legal Defense Fund expressed concern that it is becoming increasingly difficult for plaintiffs to demonstrate racial discrimination. The ruling, which precedes the upcoming November election, could affect similar challenges in states like North Carolina and Tennessee.Justice Samuel Alito, writing for the majority, emphasized a presumption that legislatures act in good faith, making it harder to prove racial intent without blatant evidence. Critics argue this standard allows legislators to use partisan motives as a defense against claims of racial gerrymandering.The decision follows the Supreme Court's 2019 ruling that federal courts cannot oversee partisan gerrymandering claims, further complicating challenges to discriminatory redistricting. Justice Elena Kagan, in her dissent, criticized the majority for favoring state arguments and making it tougher for challengers to succeed. This case underscores the evolving legal landscape surrounding voting rights and redistricting in the US.Supreme Court Conservatives Add New Minority Voter RoadblocksA Jackson Walker partner alleged that former Texas bankruptcy judge David R. Jones requested the firm to file a potentially false disclosure about his relationship with attorney Elizabeth Freeman. This disclosure came amidst ongoing litigation involving Jones, Freeman, and Jackson Walker, who are accused of concealing their relationship. The scandal follows Jones' resignation after admitting to the romance.In late 2022, Jones wanted the relationship kept secret as Jackson Walker negotiated with Freeman regarding its disclosure. Despite Freeman's earlier claims that the relationship had ended, the firm discovered in February 2022 that it was ongoing. After confronting Freeman, she admitted the relationship had been rekindled.Jackson Walker's recent filings argue they shouldn't be held liable for Jones' misconduct and urge rejection of the US Trustee's efforts to reclaim $13 million in fees. Jones allegedly provided a misleading proposed disclosure that omitted the romantic aspect of his relationship with Freeman and insisted the firm use it in future cases. Jackson Walker refused and proceeded to separate from Freeman.The firm claims it acted reasonably and didn't breach any ethical rules, pointing out that the US Trustee hasn't penalized Jones or Freeman. The Justice Department's bankruptcy monitor seeks to recover fees from cases where Jackson Walker failed to disclose the relationship. The case highlights the complex ethical and legal issues surrounding judicial conduct and professional responsibilities.Jackson Walker Says Judge Tried to Mislead Court on Romance (2)The U.S. Justice Department, along with 30 states, has filed a lawsuit against Live Nation and its Ticketmaster unit, accusing them of monopolizing concert tickets and promotions. The case, filed in Manhattan federal court, aims to break up Live Nation. Leading the legal team is Jonathan Kanter, head of the DOJ's antitrust division, with Bonny Sweeney as the lead attorney. Sweeney, a veteran antitrust litigator, previously co-headed the antitrust group at Hausfeld and has extensive experience in high-profile cases against companies like Google, Apple, and major credit card firms.Live Nation and Ticketmaster are defended by teams from Latham & Watkins and Cravath, Swaine & Moore, which have deep experience in antitrust defense. The companies deny the allegations and plan to fight the lawsuit. Latham & Watkins, which has long defended Live Nation in private consumer lawsuits and was involved in the 2010 merger approval, has Daniel Wall, a seasoned antitrust defender, as their executive vice president for corporate and regulatory affairs. Cravath's team, led by Christine Varney, former head of the DOJ's antitrust division, also represents major clients like Epic Games in similar high-stakes litigation.US legal team in Live Nation lawsuit includes veteran plaintiffs' attorney | ReutersThis week's closing theme is by Carl Philipp Emanuel Bach. This week's closing theme takes us back to the 18th century, honoring a pivotal figure in the transition from the Baroque to the Classical era: Carl Philipp Emanuel Bach. Born in 1714, C.P.E. Bach was the second surviving son of prolific composer Johann Sebastian Bach. Despite his illustrious lineage, C.P.E. Bach carved out his own distinct legacy, becoming one of the most influential composers of his time in his own right.Today, we commemorate his contributions to classical music as we mark the anniversary of his death on May 24, 1788. Known for his expressive and innovative style, C.P.E. Bach's music bridges the complexity of Baroque counterpoint with the emerging Classical clarity and form. His works had a profound impact on later composers, including Haydn, Mozart, and Beethoven.One of his most celebrated pieces is the "Solfeggietto in C minor," H. 220, Wq. 117/2. This energetic and technically demanding keyboard composition remains a favorite among pianists and continues to captivate audiences with its vibrant character and virtuosic passages. The "Solfeggietto" exemplifies C.P.E. Bach's mastery of the empfindsamer Stil, or 'sensitive style,' characterized by its emotional expressiveness and dynamic contrasts.As we listen to the "Solfeggietto," let us reflect on the enduring legacy of Carl Philipp Emanuel Bach, whose music continues to inspire and delight over two centuries after his passing. Join us in celebrating his remarkable contributions as we close this week with the lively and spirited sounds of his timeless composition.Without further ado, “Solfeggietto in C minor” by Carl Philipp Emanuel Bach, enjoy. Get full access to Minimum Competence - Daily Legal News Podcast at www.minimumcomp.com/subscribe
In 1773, Carl Philipp Emanuel Bach sat down to record his life story. He'd been asked to write it down for a new book on German music and it made him one of the first composers to produce an autobiography. This week, Donald Macleod follows the composer's story, using Bach's own account as his guide. Bach's words provide fascinating insights into the things he considered most important but it's possible that what he chose to leave out is even more revealing.Music Featured: L'Aly Rupalich, Wq 117 No 27 Symphony for Strings and Continuo in G major, Wq 182 No 1 Fantasia for keyboard in C major, Wq 61 No 6 Trio Sonata in B minor, Wq 143 Keyboard Concerto in G major, Wq 3 Symphony in G major, Wq 173 (1st mvt) Trio Sonata in A Minor, Wq 148 Sonata in A minor, Wq 132 (1st mvt) Cello Concerto No 3 in A major, Wq 172 (2nd & 3rd mvts) Sonata in E minor, Wq 49 No 3 Magnificat in D, Wq 215 (1, Magnificat anima mea Dominum; 5, Fecit potentiam; 10. Sicut erat in principio) Keyboard Sonata in E flat major, Wq 52 No 1( 2nd & 3rd mvts) Sonata in C minor ‘Sanguineus and Melancholicus' Wq 161 No 1 Phyllis and Thirsis, Wq 232 (excerpt) Sinfonia in B-Flat Major, Wq 182 No 2 (3rd mvt) 30 Geistliche Gesänge mit Melodien, Book 2, Wq 198: (Nos 2 & 8) Die Israeliten in der Wüste, Wq 238 (extract from Part 1) Symphony in B minor, Wq 182 No 5 Rondo in E Major, Wq 58 No 3 Rondo in F Major, Wq 57 No 5 Sonata in D Minor, Wq 57 No 4 (2nd mvt) Quartet in G Major, Wq 95 (3rd mvt) Heilig, Wq 217 Keyboard Sonatina in D Major, Wq 109 Freye Fantasie in F sharp minor, Wq 80Presented by Donald Macleod Produced by Chris Taylor for BBC Audio Wales and WestFor full track listings, including artist and recording details, and to listen to the pieces featured in full (for 30 days after broadcast) head to the series page for CPE Bach (1714-1788) https://www.bbc.co.uk/programmes/m001yr0r And you can delve into the A-Z of all the composers we've featured on Composer of the Week here: http://www.bbc.co.uk/programmes/articles/3cjHdZlXwL7W41XGB77X3S0/composers-a-to-z
En este súper episodio, el WQ team le da la bienvenida a Susan Garzón, DPT (doctor of physical therapy) especializada en el piso pélvico, y amiga de la casa, para hablar de absolutamente todos sus temas favoritos, incluyendo cómo aprender a no hacerse pipí involuntariamente, a tener mejores orgasmos, y a hacer mejor pupú. Obviamente hubo infinito caos. No olviden seguirlas en Instagram @waitquepod, suscribirse en YouTube, y comprarles un café. Si este episodio les dio ganas de empezar terapia, recuerden usar este link para una primera consultoría en OpcionYo con 25% de descuento. Ingresen en www.opcionyo.com/waitque para más información. Pregunta del día patrocinada por Unspoken, la red social para iniciar conversación. Descárgala aquí. Links: Youtube: www.youtube.com/@waitquepod Instagram: www.instagram.com/waitquepod Buy me a coffee: www.buymeacoffee.com/waitque OpcionYo: www.opcionyo.com/waitque Unspoken App: https://apps.apple.com/app/apple-store/id6448198433?pt=120271073&ct=WaitQuePod&mt=8 Cuenta de Susan: https://www.instagram.com/shedandbloom/ Squatty potty: https://a.co/d/7LM23SJ --- Support this podcast: https://podcasters.spotify.com/pod/show/waitque/support
Show notes Last year, I hosted two training challenges within WS. The second one, dubbed Warrior Queen 2.0: Play to Win, concluded in November. This challenge revolved around mastering the game. Warriors selected their preferred level to play, each tier featuring specific tasks for completion. These tasks earned them points, and there were additional bonus points up for grabs. In today's episode, I'm joined by Warrior Meghan Meredith, making her second appearance on the podcast. Meghan delves into her journey during Warrior Queen 2.0: Play to Win, sharing the hurdles she encountered and how she conquered them. She highlights her wins over the 10-week challenge and discusses her anticipated ventures for 2024. Key Insights include: Meghan's interpretation of playing to win Her initial thoughts on the WQ challenge presentation The level she opted to engage in and her reasons behind it Details on the tasks involved in her chosen level The challenges faced throughout the 10-week journey and her strategies to overcome them Her overall achievements during the period Meghan's most significant lesson learned and achievement during these ten weeks Her enthusiastic outlook for the year 2024 Featured on the show Warrior School https://warriorschool.co/ Related podcast episodes Episode 175: Squatting 190lbs, deadlifting 210lbs and learning to respect her body with Meghan Meredith
Show notes This year, I ran two training challenges inside of WS. We just wrapped up our second one in November - Warrior Queen 2.0 Play to Win. This challenge was all about how well you play the game. The Warriors chose a level to play at. Each level had certain tasks they had to complete. They accumulated certain points for each task, and then had the option of collecting bonus points. In today's EP, I have Warrior Kelsea Sutton on the podcast (for the third time). Kels talks about her experience during Warrior Queen 2.0 Play to Win. She shares the challenges she faced during the challenge and how she overcame them. She talks about her wins over the 10-weeks and what she's excited about for 2024. What you'll discover What playing to win means to Kelsea How she felt about the WQ challenge when it was first presented What level she played at during the challenge and why What was involved with her level The challenges she faced over the 10-weeks and how she overcame them Her results over the 10-weeks Kelsea's biggest learning and win during the ten weeks What she is excited about for 2024 Featured on the show Conquer your first pull-up course https://warrior-school.circle.so/c/conquer-your-first-pull-up-course/ About Warrior School https://warriorschool.co Apply for Warrior School https://docs.google.com/forms/d/e/1FAIpQLSdS7OVobSu60FFc1pyIQVySKasuwjWvWnttTJOMRrgj6MLEPw/viewform Related podcast episodes Episode 111: Owning your training and building a capable body with Kelsea Sutton https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000559154961
En este día tan especial, Fri, Andry y Rach deciden celebrar los 100 episodios de WQ contando y recordando de dónde realmente nació la idea del podcast. Acompáñenlas a compartir las historias traumáticas que vivieron, normalizaron, reprimieron, convirtieron en comedia, eventualmente procesaron, y las hicieron decir: wait, qué? Trigger warning: acoso sexual Para celebrar el episodio número 100, no olviden seguirlas en Instagram @waitquepod, suscribirse en YouTube, comprarles un café. Y si este episodio les dio ganas de empezar terapia, recuerden usar este link para una primera consultoría en OpcionYo con 25% de descuento. Ingresen en www.opcionyo.com/waitque para más información. Links: Youtube: www.youtube.com/@waitquepod Instagram: www.instagram.com/waitquepod Buy me a coffee: www.buymeacoffee.com/waitque OpcionYo: www.opcionyo.com/waitque --- Support this podcast: https://podcasters.spotify.com/pod/show/waitque/support
Show notes This year, I ran two training challenges inside of WS. We just wrapped up our second one in November - Warrior Queen 2.0 Play to Win. This challenge was all about how well you play the game. The Warriors chose a level to play at. Each level had certain tasks they had to complete. They accumulated certain points for each task, and then had the option of collecting bonus points. In today's EP, I have Warrior Leah Rife on the podcast. Leah talks about her experience during Warrior Queen 2.0 Play to Win. She shares the challenges she faced during the challenge and how she overcame them. She talks about her wins over the 10-weeks and what she's excited about for 2024. What you'll discover What playing to win means to Leah (and how that has that changed for her over the last two years) How she felt about the WQ challenge when it was first presented How Leah overcomes fear and resistance What level she played at during the challenge and why What was involved with her level Her challenges with tracking her food and how we modified her approach Other challenges she faced over the 10-weeks and how she overcame them Her results over the 10-weeks Leah's biggest learning or win during the ten weeks What she is excited about for 2024 Featured on the show Leah Rife Wedding Photographer Website https://leahrifephoto.com/ Her work 'gram' https://www.instagram.com/leahrife/ Conquer your first pull-up course https://warrior-school.circle.so/c/conquer-your-first-pull-up-course/ About Warrior School https://warriorschool.co Apply for Warrior School https://docs.google.com/forms/d/e/1FAIpQLSdS7OVobSu60FFc1pyIQVySKasuwjWvWnttTJOMRrgj6MLEPw/viewform Related podcast episodes Episode 214: Playing to win Episode 174: Squatting and deadlifting 200lbs and have fun in her training with Leah Rife Episode 152: The journey of a Warrior with Leah Rife Episode 127: How consistency gave her more confidence in her life with Leah Rife
今天去晨跑了。跑步的时候是否可以听古典音乐?分享几首可能合适的作品,也想听听大家的意见。- 曲目 -Philip Glass - Opening (From Glassworks)Mozart - March No. 1 in D Major, K. 335 (320a)Beethoven - Symphonie No. 3 - E-sharp-Major, Op. 55 “Eroica” I. Allegro con brioBruckner - Symphony No. 4 in E-Flat Major, WAB 104 “Romantic” (1874 Version, Ed. L. Nowak) I. AllegroCPE Bach - Cello Concerto in A minor, Wq.170/H.439 I. Allegro assaiMendelssohn - String Quartet No. 5 in E flat major, Op.44/3 - 4. Molto Allegro con fuocoPhilip Glass - Knee Play No.4 (From Einstein on the Beach)- 聊天的人 -顾超(微博@天方乐谈超人,公众号“天方乐谈Intermezzo”)- 收听方式 -推荐您使用「苹果播客」、小宇宙或任意泛用型播客客户端订阅收听《天方乐谈》,也可通过喜马拉雅等app收听。- 互动方式 -节目微信公众号:天方乐谈Intermezzo听友群管理员微信号:guchaodemajia
After the huge success of his previous Deutsche Grammophon album, devoted to works by Mozart, oboist Albrecht Mayer turns his attention to the uniquely talented Bach family. For Bach Generations, he has chosen a selection of music by four members of the family: Johann Sebastian himself (1685-1750), Johann Christoph (1642-1703), Carl Philipp Emanuel (1714-1788) and Johann Christoph Friedrich (1732-1795). There is also a transcription of a work by Gottfried Heinrich Stölzel, previously attributed to J.S. Bach. Recorded with the Berliner Barock Solisten and Gottfried von der Goltz (solo violin/concertmaster), Bach Generations is set for release on CD and digitally on 4 August 2023. Track Listing:1 Bach, J S: Keyboard Concerto No. 4 in A major, BWV1055 / I. Allegro2 II. Larghetto3 III. Allegro ma non tanto4 Bach, J S: Orchestral Suite No. 3 in D major, BWV1068: Air ('Air on a G String')5 Bach, J S: Orchestral Suite No. 2 in B minor, BWV1067: Badinerie6 Bach, J C F: Keyboard Concerto in A major, YC91 / I. Allegro (Cadenza: Mayer)7 II. Andante ma non troppo (Cadenza: Mayer)8 III. Allegro (Cadenza: Mayer)9 Stölzel: Bist du bei mir10 Bach, J S: Easter Oratorio BWV249: Sanfte soll mein Todeskummer11 Bach, C P E: Keyboard Concerto in G major, Wq. 9 (H412) / I. Allegro (Cadenza: Mayer)12 II. Adagio (Cadenza: Mayer)13 III. Allegro assai (Cadenza: Mayer)14 Bach, J C'ph: Ach, daB ich Wassers genug hätteHelp support our show by purchasing this album at:Downloads (classicalmusicdiscoveries.store) Classical Music Discoveries is sponsored by Uber. @CMDHedgecock#ClassicalMusicDiscoveries #KeepClassicalMusicAlive#CMDGrandOperaCompanyofVenice #CMDParisPhilharmonicinOrléans#CMDGermanOperaCompanyofBerlin#CMDGrandOperaCompanyofBarcelonaSpain#ClassicalMusicLivesOn#Uber#AppleClassical Please consider supporting our show, thank you!Donate (classicalmusicdiscoveries.store) staff@classicalmusicdiscoveries.com This album is broadcasted with the permission of Crossover Media Music Promotion (Zachary Swanson and Amanda Bloom).
This week hosts Pete and Chris dive into the latest WQ and Store Champ lists that are topping the tournament charts and that means a lot of First Order talk! Also check out our friends at Planning Phase Syndicate, they were gracious enough to invite us on this week, give it a listen! X-Wing Chat begins at ~9:40
Shohei Ohtani Fanatics Link as talked about on Alex Garrett Podcasting https://fanatics.93n6tx.net/g1... Get Your Irish gear HERE: https://fanatics.93n6tx.net/Wq... Survey Junkie affiliate link: https://bmv.biz/?a=7280&c=... Link about Loudoun County latest as referenced on Alex Garrett Podcast Network: https://www.foxnews.com/us/for...
C. P. E. BACH: Concierto para órgano o clave y orquesta en Mi bemol mayor WQ 35 (18.22). M.-C. Alain (órg.), Orq. de Cámara. Dir.: J.-F. Paillard. BACH: Concierto para órgano nº 2 BWV 593 (arr. del Concierto para dos violines, cuerda y continuo en La menor, Op. 3 nº 8 RV 522 de Antonio Vivaldi) (11.25). M.-C. Alain (órg.). FRANCK: Pastorale, Op. 19 (8.39). M.-C. Alain (órg.). TELEMANN: La Vaillance (12 Marchas Heroicas para violín o instrumentos de viento y continuo) (transcripción de M.-C. Alain para tp. y órg.) (1.02). M. André (tp.), H. Bilgram (órg.).Escuchar audio
Show notes In today's episode, I am joined by Warrior Woman Nicole McGill. Nicole is an amazing hand balancer, but two years ago she fell and tore her labrum and bicep tendon in her right shoulder. Just over a year ago, Nicole had shoulder surgery to repair it. In this episode, she shares her rehabilitation journey over the last 12 months and how she went from not being able to hold a barbell on her back and in her hands to squatting and deadlifting 90kg! Nic also shares some HOT words of wisdom and advice for women who are struggling with an injury or their training. What you will discover Nicole's shoulder injury and rehabilitation journey How she went from not being able to lift her arm to returning to hand balancing Her biggest learnings and insights over the last 12 months after her surgery Her PRs (AKA the results she got in the twelve-week WQ challenge) How she rebuilt her confidence to train, lift heavy weights and push herself again after surgery Her three biggest pieces of advice to women who feel stuck with their training Featured on the show About Warrior School https://warriorschool.co Apply for Warrior School https://docs.google.com/forms/d/e/1FAIpQLSdS7OVobSu60FFc1pyIQVySKasuwjWvWnttTJOMRrgj6MLEPw/viewform Download my FREE bodyweight strength program https://amykatebowe.ck.page/9925443715 Related podcasts Episode 183: Overcoming back pain, getting strong and having fun with Meg Thomasson https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000612889553 Episode 181: Training around limitations, leg pressing 330lbs and feeling more confident with Shannon Dalby https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000611982440 Episode 178: Form zero strength training to training four days a week with Tina Albrecht https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000610726260 Episode 176: Meeting herself with compassion and learning to trust the process with Nadine Allaham https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000609651232 Episode 175: Squatting 190lbs and deadlifting 210lbs and learning to respect her body with Meg Meredith https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000609079807 Episode 174: Squatting and deadlifting 200lbs and having fun in her training with Leah Rife https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000608623092 Episode 130: How to move through pain and injury https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252?i=1000571020747
durée : 00:20:33 - Disques de légende du mardi 06 juin 2023 - Carl Philipp Emanuel Bach dirigé par la chef et violoniste Petra Müllejans avec l'Orchestre Baroque de Fribourg et le claveciniste Andrés Staier.
Show notes In today's episode, I talk to Warrior Meg Thomasson. Meg shares her biggest insights and learnings from the Warrior Queen Challenge and her training over the last 12 months. A year ago she had back pain and didn't like to train. In the WQ challenge, she squatted 75kg - a 35-40kg PR! And she deadlifted 80kg. She also got 7 assisted chin-ups on the TOPS of her feet. Meg also dishes up some HOT words of wisdom and advice for the women who are struggling with their training. WHAT YOU WILL DISCOVER Meg's biggest learnings over the last 12 months and from the WQ challenge Her PRs (AKA the results she got over the twelve weeks) How she built her confidence to train, lift heavy weights and push herself The importance of food preparation and being organised How she trained 4 times a week while travelling a lot for work The importance of tracking and the data Her three biggest pieces of advice to women who feel stuck with their training Featured on the show About Warrior School https://warriorschool.co Apply for Warrior School https://docs.google.com/forms/d/e/1FAIpQLSdS7OVobSu60FFc1pyIQVySKasuwjWvWnttTJOMRrgj6MLEPw/viewform Listen to the other Warriors' journeys and results - EP 174, 175, 176, 178 https://podcasts.apple.com/ca/podcast/warrior-school/id1470895252
Daniel Lozakovich's rich, romantic style of playing often sees him likened to the iconic violinists of the 20th century. On Spirits, his latest Deutsche Grammophon recording, he celebrates some of his forebears in the hope of passing on their style and repertoire to younger generations. “I've chosen a selection of very accessible miniatures, which I associate with different violinists,” he explains. “All these musicians had such strong, soulful spirits that it's impossible to forget their sound.” Partnered by pianist Stanislav Soloviev, Lozakovich performs his favorite encores by Elgar, Debussy, Falla, Gluck, Brahms, and Kreisler.Track Listing:1 ELGAR Salut d'amour, Op_ 122 ELGAR La Capricieuse Op_ 173 DEBUSSY Suite bergamasque, L_ 75 - III_ Clair de lune4 FALLA La vida breve - Danse espagnole5 GLUCK Melodie from ‘Orfeo ed Euridice', Wq_ 306 BRAHMS 21 Hungarian Dances, WoO 1 - No_ 2 in D Minor_ Allegro non assai7 BRAHMS 21 Hungarian Dances, WoO 1 - No_ 6 in D-Flat Major_ Vivace8 KREISLER 3 Old Viennese Dances - II_ LiebesleidHelp support our show by purchasing this album at:Downloads (classicalmusicdiscoveries.store) Classical Music Discoveries is sponsored by Uber and Apple Classical. @CMDHedgecock#ClassicalMusicDiscoveries #KeepClassicalMusicAlive#CMDGrandOperaCompanyofVenice #CMDParisPhilharmonicinOrléans#CMDGermanOperaCompanyofBerlin#CMDGrandOperaCompanyofBarcelonaSpain#ClassicalMusicLivesOn#Uber#AppleClassical Please consider supporting our show, thank you!Donate (classicalmusicdiscoveries.store) staff@classicalmusicdiscoveries.com This album is broadcasted with the permission of Crossover Media Music Promotion (Zachary Swanson and Amanda Bloom).
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Residual stream norms grow exponentially over the forward pass, published by Stefan Heimersheim on May 7, 2023 on The AI Alignment Forum. Summary: For a range of language models and a range of input prompts, the norm of each residual stream grows exponentially over the forward pass, with average per-layer growth rate of about 1.045 in GPT2-XL. We show a bunch of evidence for this. We discuss to what extent different weights and parts of the network are responsible. We find that some model weights increase exponentially as a function of layer number. We finally note our current favored explanation: Due to LayerNorm, it's hard to cancel out existing residual stream features, but easy to overshadow existing features by just making new features 4.5% larger. Thanks to Aryan Bhatt, Marius Hobbhahn, Neel Nanda, and Nicky Pochinkov for discussion. Plots showing exponential norm and variance growth Our results are reproducible in this Colab. Alex noticed exponential growth in the contents of GPT-2-XL's residual streams. He ran dozens of prompts through the model, plotted for each layer the distribution of residual stream norms in a histogram, and found exponential growth in the L2 norm of the residual streams: Here's the norm of each residual stream for a specific prompt: Stefan had previously noticed this phenomenon in GPT2-small, back in MATS 3.0: Basic Facts about Language Model Internals also finds a growth in the norms of the attention-out matrices WO and the norms of MLP out matrices Wout ("writing weights"), while they find stable norms for WQ, WK, and Win ("reading weights"): Comparison of various transformer models We started our investigation by computing these residual stream norms for a variety of models, recovering Stefan's results (rescaled by √dmodel=√768) and Alex's earlier numbers. We see a number of straight lines in these logarithmic plots, which shows phases of exponential growth. We are surprised by the decrease in Residual Stream norm in some of the EleutherAI models. We would have expected that, because the transformer blocks can only access the normalized activations, it's hard for the model to "cancel out" a direction in the residual stream. Therefore, the norm always grows. However, this isn't what we see above. One explanation is that the model is able to memorize or predict the LayerNorm scale. If the model does this well enough it can (partially) delete activations and reduce the norm by writing vectors that cancel out previous activations. The very small models (distillgpt2, gpt2-small) have superexponential norm growth, but most models show exponential growth throughout extended periods. For example, from layer 5 to 41 in GPT2-XL, we see an exponential increase in residual stream norm at a rate of ~1.045 per layer. We showed this trend as an orange line in the above plot, and below we demonstrate the growth for a specific example: BOS and padding tokens In our initial tests, we noticed some residual streams showed a irregular and surprising growth curve: As for the reason behind this shape, we expect that the residual stream (norm) is very predictable at BOS and padding positions. This is because these positions cannot attend to other positions and thus always have the same values (up to positional embedding). Thus it would be no problem for the model to cancel out activations, and our arguments about this being hard do not hold for BOS and padding positions. We don't know whether there is a particular meaning behind this shape. We suspect that is the source of the U-shape shown in Basic facts about language models during training: Theories for the source of the growth From now on we focus on the GPT2-XL case. Here is the residual stream growth curve again (orange dots), but also including the resid_mid hook between the two Attention and MLP sub...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Residual stream norms grow exponentially over the forward pass, published by StefanHex on May 7, 2023 on LessWrong. Summary: For a range of language models and a range of input prompts, the norm of each residual stream grows exponentially over the forward pass, with average per-layer growth rate of about 1.045 in GPT2-XL. We show a bunch of evidence for this. We discuss to what extent different weights and parts of the network are responsible. We find that some model weights increase exponentially as a function of layer number. We finally note our current favored explanation: Due to LayerNorm, it's hard to cancel out existing residual stream features, but easy to overshadow existing features by just making new features 4.5% larger. Thanks to Aryan Bhatt, Marius Hobbhahn, Neel Nanda, and Nicky Pochinkov for discussion. Plots showing exponential norm and variance growth Our results are reproducible in this Colab. Alex noticed exponential growth in the contents of GPT-2-XL's residual streams. He ran dozens of prompts through the model, plotted for each layer the distribution of residual stream norms in a histogram, and found exponential growth in the L2 norm of the residual streams: Here's the norm of each residual stream for a specific prompt: Stefan had previously noticed this phenomenon in GPT2-small, back in MATS 3.0: Basic Facts about Language Model Internals also finds a growth in the norms of the attention-out matrices WO and the norms of MLP out matrices Wout ("writing weights"), while they find stable norms for WQ, WK, and Win ("reading weights"): Comparison of various transformer models We started our investigation by computing these residual stream norms for a variety of models, recovering Stefan's results (rescaled by √dmodel=√768) and Alex's earlier numbers. We see a number of straight lines in these logarithmic plots, which shows phases of exponential growth. We are surprised by the decrease in Residual Stream norm in some of the EleutherAI models. We would have expected that, because the transformer blocks can only access the normalized activations, it's hard for the model to "cancel out" a direction in the residual stream. Therefore, the norm always grows. However, this isn't what we see above. One explanation is that the model is able to memorize or predict the LayerNorm scale. If the model does this well enough it can (partially) delete activations and reduce the norm by writing vectors that cancel out previous activations. The very small models (distillgpt2, gpt2-small) have superexponential norm growth, but most models show exponential growth throughout extended periods. For example, from layer 5 to 41 in GPT2-XL, we see an exponential increase in residual stream norm at a rate of ~1.045 per layer. We showed this trend as an orange line in the above plot, and below we demonstrate the growth for a specific example: BOS and padding tokens In our initial tests, we noticed some residual streams showed a irregular and surprising growth curve: As for the reason behind this shape, we expect that the residual stream (norm) is very predictable at BOS and padding positions. This is because these positions cannot attend to other positions and thus always have the same values (up to positional embedding). Thus it would be no problem for the model to cancel out activations, and our arguments about this being hard do not hold for BOS and padding positions. We don't know whether there is a particular meaning behind this shape. We suspect that is the source of the U-shape shown in Basic facts about language models during training: Theories for the source of the growth From now on we focus on the GPT2-XL case. Here is the residual stream growth curve again (orange dots), but also including the resid_mid hook between the two Attention and MLP sub-layers (blue dots). O...
Welcome to the brand new episode of “The Conscious Living Podcast.” Today I have the pleasure of interviewing an amazingly evolved soul, entrepreneur, and dear friend and colleague, Mr. Terri Britt. Former Miss USA, Terri Britt, is a Spiritual Coach, Intuitive Healer, TEDx Speaker, award-winning author, and the founder of Women Leaders of Love global community, as well as the co-owner of JumpinGoat Coffee Roasters with her hubby, Charlie Britt. Terri's been seen and heard on dozens of radio and television shows and has shared the stage with transformational leaders such as Dr. John Gray, Marianne Williamson, and Jack Canfield. Terri is a wife, mom, stepmom, nana, and the former tv host of Movietime, now known as the E! Channel. Here's what you will discover in the brand new episode: 1- What is the old family paradigm and why do you feel it isn't working? 2- How do you break this cycle? 3- Explain what your Worthiness Quotient is and why you feel it's the key to leading a life that you love. 4- How do you raise your WQ? 5- Why is this so important for the family? & so much more.
CLAIR DE LUNE, the album that gives continuity to SALUT D'AMOUR, presents a selection of short pieces with a characteristic expressive aspect. Yuriy Rakevich and Olga Kopylova demonstrate a unique inspiration and rhythm, which unify the miniature works, marking a concert in memory.Tracks1. 2 Canciones Mexicanas: II. Estrellita (Arr. for Violin and Piano by Jascha Heifetz) (02:50)2. Orfeo ed Euridice, Wq.30: Melodie (Arr. for Violin and Piano by Fritz Kreisler) (02:57)3. Dance of the Maidens, Op. 48 (Arr. for Violin and Piano by Fritz Kreisler) (02:33)4. Romance in D Major, Op. 3 (04:14)5. Suite Bergamasque, L. 75: III. Clair de Lune (Arr. for Violin and Piano by Alexandre Roelens) (04:16)6. Frasquita: Serenade (Arr. for Violin and Piano by Fritz Kreisler) (02:33)7. Danny Boy (Londonderry Air) [Arr. for Violin and Piano by Fritz Kreisler] (03:58)8. La plus que lente, L. 121 (Arr. for Violin and Piano by Léon Roques) (04:18)9. Marionettes No. 2: La poupée valsante “Dancing Doll” (Arr. for Violin and Piano by Fritz Kreisler) (02:33)10. Albumblatt, WWV 94 (Arr. for Violin and Piano by August Wilhelmj) (04:13)11. Six Pieces, Op. 51, TH 143: VI. Valse Sentimentale (01:48)12. Cantabile for Violin and Piano in D Major, Op. 17 (03:07)13. Poeme Op. 39 “At Twilight” (Arr. for Violin and Piano by Vilmos Tátrai) (01:57)14. 2 Nocturnes, Op. 5: No. 1 in F-Sharp Minor (Arr. for Violin and Piano by Alexander Mogilevsky) (03:20)15. Three Miniatures No. 3: Valse. Allegretto (Arr. for Violin and Piano by Galina Barinova) (03:47)Classical Music Discoveries is sponsored by Uber and Apple Classical. @CMDHedgecock#ClassicalMusicDiscoveries #KeepClassicalMusicAlive#LaMusicaFestival #CMDGrandOperaCompanyofVenice #CMDParisPhilharmonicinOrléans#CMDGermanOperaCompanyofBerlin#CMDGrandOperaCompanyofBarcelonaSpain#ClassicalMusicLivesOn#Uber Please consider supporting our show, thank you!Donate (classicalmusicdiscoveries.store) staff@classicalmusicdiscoveries.com This album is broadcast with the permission of Bárbara Leu from Azul Music.
En este episodio, Fri, Andry y Rach discuten uno de sus grandes amores: el cine. Cuáles géneros funcionan para diferentes tipos de personas? Qué estilo de películas te “deberían” gustar para considerarte fan del cine? Cuál es el problema de la humanidad con las películas tristes o de miedo? Qué significado tienen las películas para el WQ team? Todas estas respuestas, y más, después de una intensa ronda de “yo nunca nunca.” Antes de salir al cine, recuerden seguirlas en Instagram @waitquepod, suscribirse en YouTube, y comprarles un café. O unas cotufas. Este episodio es patrocinado por @raquel.viptravel --- Support this podcast: https://anchor.fm/waitque/support
Hoy WQ cumple un año!!! En este episodio, Rach, Fri y Andry discuten qué tanto de juzgar es naturaleza humana, y qué deberíamos controlar. Acompáñenlas a confesar todas las cosas que juzgan sobre las otras personas, cómo se sienten cuando son juzgadas, y todas las maneras en las que se juzgan a sí mismas, todo dentro del rabbit hole de gustos musicales, la polémica de body image, y el imposible challenge de letting people enjoy things. Celebren el primer aniversario de WQ el team siguiéndolas en Instagram @waitquepod, subscribiéndose en YouTube, y comprándoles un café. Este episodio es patrocinado por @raquel.viptravel. Para joyas y bisutería, recuerden visitar www.bissuterie.com y usar el código WAITQUE15 para 15% de descuento. --- Support this podcast: https://anchor.fm/waitque/support
En este episodio, Andry, Fri y Rach recuentan su historial de mascotas, desde el primer hámster de feria que sin duda no mataron (jaja), hasta sus hijos biológicos actuales. Conoce a las mascotas WQ, y acompáñalas a hablar de ellas con el nivel de amor más grande que han sentido, a reconocer momentos de self growth, y a no poder evitar historias de muertes trágicas, ni juzgar a la gente que no sabe tener mascotas, ni llorar. Mucho. No olvides seguirlas en Instagram @waitquepod, suscribirte en YouTube, y comprarles un café. Para sus próximos viajes, sigan a @raquel.viptravel Este episodio es patrocinado por @jaqui.levyhara --- Support this podcast: https://anchor.fm/waitque/support
Morgen komt op luchtbasis Rammstein een grote vertegenwoordiging van de NAVO bijeen om te praten over een nieuwe levering van wapens aan Oekraïne. President Zelensky vraagt al langer om veel zwaarder materieel, zoals tanks en langeafstandsraketten. Rutte liet deze week bij zijn bezoek aan Biden weten dat er vanuit Nederland in ieder geval steun wordt verleend aan het Patriotsproject van Duitsland en de Verenigde Staten, om de luchtafweercapaciteiten van Oekraïne te verbeteren. Gaan er inderdaad op grote schaal tanks richting Oekraïne? Of blijven bondgenoten zoals Duitsland dwarsliggen? Dat en véél meer bespreken we met kolonel Han Bouwmeester. Fragmenten uit aflevering: * Op 17 februari staan we in Tivoli met de VPRO! Hou onze socials in de gaten voor meer info. * Rutte bij Jake Tapper (https://twitter.com/theleadcnn/status/1615495311199174656?s=48&t=E0dMg9O1lJoVIl-HMRyYyQ) * Thierry komt achter de gróte geheimen van het WEF (https://twitter.com/thierrybaudet/status/1615678726246645763?s=46&t=B7uEcz6Zg-SyEejXto5LPQ) * Duitsland twijfelt over leveren van tanks (https://twitter.com/thierrybaudet/status/1615678726246645763?s=46&t=B7uEcz6Zg-SyEejXto5LPQ) * Reportage Nieuwsuur over mogelijk nieuw offensief vanuit Belarus (https://twitter.com/nieuwsuur/status/1614362900554223616?s=46&t=vbhaKmYp8hvzvIztq9DEPA) * Bach: Sonata for violin in C major, Wq. 73, H. 504 (https://www.youtube.com/watch?v=u3V7NPSnABI)
En esta edición triplemente especial de WQ, el team celebra su episodio #50, el primero de este año, e invitan a Eugenia Siso, su podcaster, creadora de contenido y comediante de confianza. Únete a este rollercoaster de emociones que cubre las veces que se han lanzado a hacer cosas fuera de su comfort zone, cómo hacen para empezar algo intimidante, un regaño de Eugenia para los que no creen en sí mismos, cómo enfrentar la jungla salvaje que es Twitter, unos buenos ataques de risa, y el peligro de la pasta seca. Empieza el 2023 con el pie derecho siguiéndolas en Instagram @waitquepod, subscribiéndose a su youtube, y comprándoles un café. Also, recuerden seguir a @jaqui.levyhara for all your energetic healing needs. --- Support this podcast: https://anchor.fm/waitque/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Induction heads - illustrated, published by TheMcDouglas on January 2, 2023 on LessWrong. TL;DR This is my illustrated walkthrough of induction heads. I created it in order to concisely capture all the information about how the circuit works. There are 2 versions of the walkthrough: Version 1 is the one included in this post. It's slightly shorter, and focuses more on the intuitions than the actual linear operations going on. Version 2 can be found at my personal website. It has all the same stuff as version 1, with a bit of added info about the mathematical details, and how you might go about reverse-engineering this circuit in a real model. The final image from version 1 is inline below, and depending on your level of familiarity with transformers, looking at this diagram might provide most of the value of this post. If it doesn't make sense to you, then read on for the full walkthrough, where I build up this diagram bit by bit. Introduction Induction heads are a well-studied and understood circuit in transformers. They allow a model to perform in-context learning, of a very specific form: if a sequence contains a repeated subsequence e.g. of the form A B ... A B (where A and B stand for generic tokens, e.g. the first and last name of a person who doesn't appear in any of the model's training data), then the second time this subsequence occurs the transformer will be able to predict that B follows A. Although this might seem like weirdly specific ability, it turns out that induction circuits are actually a pretty massive deal. They're present even in large models (despite being originally discovered in 2-layer models), they can be linked to macro effects like bumps in loss curves during training, and there is some evidence that induction heads might even constitute the mechanism for the actual majority of all in-context learning in large transformer models. I think induction heads can be pretty confusing unless you fully understand the internal mechanics, and it's easy to come away from them feeling like you get what's going on without actually being able to explain things down to the precise details. My hope is that these diagrams help people form a more precise understanding of what's actually going on. Prerequisites This post is aimed at people who already understand how a transformer is structured (I'd recommend Neel Nanda's tutorial for that), and the core ideas in the Mathematical Framework for Transformer Circuits paper. If you understand everything on this list, it will probably suffice: The central object in the transformer is the residual stream. Different heads in each layer can be thought of as operating independently of each other, reading and writing into the residual stream. Heads can compose to form circuits. For instance, K-composition is when the output of one head is used to generate the key vector in the attention calculations of a subsequent head. We can describe the weight matrices WQ, WK and WV as reading from (or projecting from) the residual stream, and WO as writing to (or embedding into) the residual stream. We can think of the combined operations WQ and WK in terms of a single ,ow-rank matrix WQK:=WQWTK, called the QK circuit. This matrix defines a bilinear form on the vectors in the residual stream: vTiWQKvj is the attention paid by the ith token to the jth token. Conceptually, this matrix tells us which tokens information is moved to & from in the residual stream. We can think of the combined operations WV and WO in terms of a single matrix WOV:=WVWO, called the OV circuit. This matrix defines a map from residual stream vectors to residual stream vectors: if vj is the residual stream vector at the source token, then vTjWOV is the vector that gets moved from token j to the destination token (if j is attended to). Conceptually, this matr...
Faltan 5 pa' las 12:00 y el WQ team discute los pros y cons de celebrar el Año Nuevo, y de crear New Year's resolutions. Acompáñenlas a recordar buenas fiestas, ansiedades, noches inesperadamente chéveres, y todo lo que han aprendido que en un 31 de diciembre no se hace más. Te proponemos de resolution que las sigas en Instagram @waitquepod, te subscribas en YouTube, y les compres un café. Para el ratón. --- Support this podcast: https://anchor.fm/waitque/support
海顿、莫扎特、贝多芬这三位“维也纳古典音乐三巨头”并不是横空产生的,在他们声名鹊起之前,古典音乐世界为他们的到来做了哪些准备呢?在这个维也纳古典时代的前夜,还有很多值得我们今天聊聊的内容。包含曲目:0:26- La serva padrona: "Stizzoso, mio stizzoso"(Sing Along Karaoke Version)4:21- Beethoven: Sonata for Cello and Piano No. 4 in C Major, Op. 102 No. 1 - I. Andante7:28- Sinfonia in D, Wq 183 No. 1 - C.P.E. Bach: Sinfonia in D, Wq 183 No. 1 - 1. Allegro di molto13:15- Symphony No. 63 in B-Flat Major - I. Allegro assai18:35- Iphigénie en Tauride - Act 1 - Scène 1. Introduction et choeur. "Grands Dieux ! soyez-nous secouables"
Brussel is in rep en roer vanwege misschien wel het grootste corruptieschandaal in de geschiedenis van de Europese Unie. Europarlementariërs zouden steekpenningen hebben aangenomen van onder andere Qatar, en inmiddels zouden zelfs tassen met cash geld gevonden zijn. Juist op het moment dat alle regeringsleiders bij elkaar komen om hernieuwde steun aan Oekraïne te bespreken, overschaduwt dit schandaal de gehele Europese top. Europacorrespondent Kysia Hekster is net terug uit een Oekraïne dat in grote problemen verkeert qua energievoorziening, en is nu alweer onderweg naar het rumoerige Brussel. Ze vertelt erover in een nieuwe aflevering van Europa Draait Door. Fragmenten en links uit aflevering: - Stem hier op het non-fictie boek van het jaar (https://www.nporadio1.nl/nieuws/cultuur-media/accd5ee4-d230-4482-b912-44de58827744/stem-humberto-beste-non-fictieboek-2022) - De tirade van Jonathan Pie over de Britse regering - De complimenteuze woorden voor Qatar van de inmiddels opgepakte Europarlementariër - Kati Piri in Bureau Buitenland over het corruptieschandaal- - De reportage van Kysia Hekster - Tim bij NPO Radio 4 - Georg Kallweit - Violin Sonata in C Major, Wq. 73, H. 504: II. Andante
BENDA: Sinfonía nº 1 en Re mayor (9.27). Stradivaria-Ensemble Baroque de Nantes. Dir.: D. Cuiller. C. P. E. BACH: Sonata para flauta y clave en Re mayor WQ 83 H 505 (15.23). F. Theuns (fl.), E. Demeyere (clv.). Sonata para teclado nº 12 en Do menor (14.42). S. Georgieva (clv.). GRAUN: “Sinfonía Federico” para cuerda y continuo en Sol mayor (Segundo y tercer movimientos: Andante y Presto) (3.41). Batzdorfer Hofkapelle. Escuchar audio
En esta edición más seria de WQ, Andry, Rach y Fri open up sobre sus propias experiencias con el antisemitismo, los miedos que han tenido a lo largo de sus vidas, las historias de sus familias, y los comentarios que han recibido que parecen inapropiados pero realmente terminan siendo simplemente antisemitas. Acompáñenlas a entender un poco más sobre este clima político con respecto al judaísmo, y qué cosas necesitan cambiar antes de que todo se ponga peor. Les dio tanto nervio grabar este episodio que se cayó el video y no quedó de otra que usar Zoom. Si quieren ser aliados del bien, síganlas en Instagram @waitquepod, subscríbanse en YouTube, y cómprenles un café. Shalom.
Esta es una edición doblemente especial de WQ, en la que Fri, Andry y Rach le dan la bienvenida a Saul Mauricio Mendoza para interrogarlo sobre su experiencia habiendo estado en una relación abierta, y compartir qué los ha hecho cambiar su perspectiva en el tema. Aprovechando que este invitado especial es actor, y además un bonche, hoy el WQ team finalmente inaugura el formato de video para episodios enteros, gracias al apoyo de tanta gente querida. Ahora, celebren con nosotras. Sígannos en Instagram @waitquepod, subscríbanse a YouTube para vernos llorar de la risa, y para seguir apoyándonos: buy us a coffee
WQ is together again, y aprovechan este reencuentro en persona para tratar de definir si aman u odian los métodos anticonceptivos. Acompañen a Andry, Fri y Rach a quejarse de lo malo, agradecer lo bueno, asignar responsabilidades, compartir experiencias e idealizar el futuro sobre birth control, todo sin poder controlar el gafo subido de estar in the same room again. El grupito de allá atrás se me separa. Asuman su responsabilidad de seguirnos en Instagram @waitquepod y suscribirse en YouTube. Mejor prevenir que amamantar.
En esta edición aún más personal de WQ, Fri, Andry y Rach invitan a sus oyentes a ayudarlas con las decisiones difíciles que cada una está currently enfrentando, tratando de entender por qué se sienten tan challenged con sus situaciones actuales, y cómo a cada una le sirve llevarla: avoidance at all costs. Obvio. No se olviden de seguirnos en Instagram @waitquepod y suscribirse en YouTube. Esa decisión está fácil.
Este es un episodio muy muy especial de WQ, en el que Rach, Andry y Fri se infiltran en el estudio de Escuela de Nada en CDMX para hablar con tres amigos queridísimos, Chris Andrade, Nacho Redondo y Lino Cáceres (Leo, te extrañamos). Acompáñenlas a tratar de mantener un hilo en este zaperoco de conversación que va de amistades entre el mismo género vs. con el sexo opuesto, rating people, la mamá de Nacho y nuestras tendencias de ser serial killers. Also, no se olviden de seguirnos en Instagram @waitquepod y en YouTube.
Ó Carlos, entre nesse avião com cuidado para nao pisar cocó mas não coma essas batatas! Full4156http://podcastmcr.iol.pt/rcomercial/WQ
En este episodio de WQ, Fri, Rach y Andry le dan la bienvenida a la primera invitada especial del podcast, para hablar de nada más y nada menos que la experiencia de ~traer una vida al mundo~. Get a glimpse del alboroto que es being our friend, y acompáñanos a oír en detalle sobre el embarazo y el parto de Dana, nuestros miedos muy no racionales, y un heated debate about pajamas.
Movie tie in games get fun. Do you have someone play the movie and they know everything that should be happening? Or do you have a movie tie in game that tries to go beyond the movie and risk making something that people don't like? And does it matter if it's a super successful everyone in the world has seen it movie vs more of a niche film? We're taking a stab at those questions and more when Adam Reck from Battle of the Atom stops by the show to talk Blade 2 for the PS2 and Xbox. Learn such things as: How do you have a vampire hunter movie after you kill all of the head vampires? Does it matter if you get the actors from the movie to do voices for the game? How does the Spider-Man animated series fit into this movie series? And so much more! You can find Adam on Twitter @arthurstacy, Instagram @adam.reck, Tumbler @adamreck, and of course Battle of the Atom over at Comics XF If you want to be a guest on the show please check out the Be a A Guest on the Show page and let me know what you're interested in. If you want to help support the show check out the Play Comics Patreon page or head over to the Support page if you want to go another route. You can also check out the Play Comics Merch Store. Use the coupon code “ireadshownotes” for 15% off your order. Play Comics is part of the Gonna Geek Network, which is a wonderful collection of geeky podcasts. Be sure to check out the other shows on Gonna Geek if you need more of a nerd fix. You can find Play Comics @playcomicscast on Twitter and in the Play Comics Podcast Fan Group on Facebook. A big thanks to Capes on the Couch as well as m WQ&A for the promos today. Intro/Outro Music by Best Day, who wants to date a vampire.
Bienvenidos a esta edición Explicit de WQ, donde nos pusimos incómodas solo para ustedes. En esta ocasión, las tres nos abrimos (jaja) sobre the M word. Cuándo fue que descubrimos este maravilloso mundo, lo taboo que es hablar sobre self-pleasure as women, y cuál es nuestro vibrador favorito. Sin ningún filtro, y sin miedo al éxito.
Court and Saint discuss everything we know about void 3.0 as we countdown to the WQ release! Includes discussion on early build ideas for each class and potential impacts to the meta.