Guests: Gabriella Fäldt, Carin Sollenberg, Jonathan Rollins. For 90 SEK/month you get 5 episodes a week: 4 regular AMK MORGON plus AMK FREDAG with Isak Wahlberg. Make sure to become a Patron via the web and not directly in the iPhone Patreon app, to avoid Apple's extra fees: open your browser instead and go to www.patreon.com/amkmorgon
Relevant links:
The hoodie: https://www.dropbox.com/scl/fi/vnt2hi3r1nvilhlg190mh/HOODIE.png?rlkey=p4dvctnwuk0vfd0n0cwclwu73&dl=0
Paul Newman: https://m.media-amazon.com/images/M/MV5BMTkwOTg5NzcyN15BMl5BanBnXkFtZTgwODI0NjU5MTE@._V1_.jpg
https://i0.wp.com/bamfstyle.com/wp-content/uploads/2020/01/tcom1-cl2-shrt3.jpg?ssl=1
https://i0.wp.com/bamfstyle.com/wp-content/uploads/2020/01/tcom1-cl2-shrt2-swtr.jpg?ssl=1
Stilover40: https://www.instagram.com/stilover40/
All in Hole: https://play.google.com/store/apps/details?id=com.homagames.studio.allinhole&hl=en&pli=1
Donut County: https://apps.apple.com/us/app/donut-county/id1292099839
Deepstash: https://deepstash.com/
The ANTM documentary: https://www.netflix.com/tudum/articles/reality-check-inside-americas-next-top-model-release-date-news
https://amandaczerniawski.wordpress.com/wp-content/uploads/2015/02/whitney-in-seventeen-magazine-whitney-thompson-1485350-1561-2084.jpg
Jelena from Molkom: https://www.svt.se/nyheter/lokalt/varmland/jelena-fran-molkom-fribloder-anvander-inte-mensskydd
The white guy with the guitar and the face: https://www.instagram.com/wellesmusic/reel/DTQVRstjWlh/
All songs are in AMK Morgon's playlist here: https://open.spotify.com/user/amk.morgon/playlist/6V9bgWnHJMh9c4iVHncF9j?si=so0WKn7sSpyufjg3olHYmg
In this week's episode of the Biz/Dev Podcast, the script gets flipped. David is in the hot seat, talking candidly about Teela and the real journey to a V1 launch. We unpack where Teela started, the messy operational problems that sparked it, and the decisions, missteps, and tradeoffs that shaped the product along the way. If you are a founder, operator, or builder navigating your own V1 moment, this episode offers real perspective on what it takes to move from idea to something people can actually use and trust.
LINKS:
Teela
David on LinkedIn
___________________________________
Submit Your Questions to: hello@thebigpixel.net OR comment on our YouTube videos! - Big Pixel, LLC - YouTube
Our Hosts
David Baxter - CEO of Big Pixel
Gary Voigt - Creative Director at Big Pixel
The Podcast
David Baxter has been designing, building, and advising startups and businesses for over ten years. His passion, knowledge, and brutal honesty have helped dozens of companies get their start. In Biz/Dev, David and award-winning Creative Director Gary Voigt talk about current events and how they affect the world of startups, entrepreneurship, software development, and culture.
Contact Us
hello@thebigpixel.net
919-275-0646
www.thebigpixel.net
FB | IG | LI | TW | TT: @bigpixelNC
Big Pixel
1772 Heritage Center Dr
Suite 201
Wake Forest, NC 27587
Music by: BLXRR
New URL for the Podcast: http://www.miniaturemodelspodcast.com
Podcast questions and inquiries? Email us: mattandmattoscaletrains@gmail.com
The Miniature Models Podcast Crew reviews the latest 2026 V1 catalog from Lionel. We give our unbiased and unfiltered look at what Lionel is bringing to the table in 2026.
Join Our Community Discord
Discord Server Link: https://discord.gg/5rpxw8F4DY
Please note that you will need to read the rules and click a box to verify that you understand them before you're able to join the server. We want this community to be a welcoming and respectful place.
The Miniature Models Podcast is part of the Trainz.com Partner Program. If you plan on buying from them, please use our Affiliate Link: https://www.trainz.com/MMOP
You can also use our unique Promo Code: MMOP for $10 off a single purchase on the Trainz.com website.
Show Notes and Links:
Miniature Models Facebook Page: https://www.facebook.com/MattandMattTrainsPodcast
Miniature Models YouTube Page: https://www.youtube.com/@miniaturemodelspodcast
Miniature Models Instagram Page: https://www.instagram.com/miniature_models_podcast/
Miniature Models Podcast Merchandise: https://www.redbubble.com/people/MandM-Podcast/shop
NPL Customs Store: etsy.com/shop/NPLCustoms
This Podcast is now available on Spotify and Amazon Music / Audible. We are also available from our usual sources like Apple Podcasts, Google Podcasts, and YouTube.
Spotify: https://open.spotify.com/show/0OOWgO2vvI38ZFOtF4BxkU?si=2a853e2b36a44f80
YouTube Music Podcasts: https://music.youtube.com/playlist?list=PLbs761BIEfXYancom0rY3kTQSjFbGvsnJ&si=cjCOiYwOQhxMPV4R
Apple Podcasts: https://podcasts.apple.com/us/podcast/miniature-models-podcast/id1527505788
Amazon Music: https://music.amazon.com/podcasts/f8fe369d-985c-4d14-af1f-dc4abf112b06/the-matt-and-matt-o-scale-trains-podcast
Where you can find the Hosts:
Matt R (WC Model Railroad)
YouTube: https://www.youtube.com/@wcmodelrailroad
Facebook: https://www.facebook.com/westchicagorailroad
Instagram: https://www.instagram.com/wc_model_railroad/
Matt Z (TrainLover9943)
YouTube: https://www.youtube.com/@trainlover9943
Facebook: https://www.facebook.com/trainlover9943
Instagram: https://www.instagram.com/matts.hobbies/
Johnny N (Audamus)
YouTube: https://www.youtube.com/@Audamus
Facebook: https://www.facebook.com/p/Audamus-Trains-100075521222163/
Instagram: https://www.instagram.com/audamus_trains/
John S (RetroMikado)
YouTube: https://www.youtube.com/@retromikado
Instagram: https://www.instagram.com/retromikado96/
Sid (Sid's Trains)
YouTube: https://www.youtube.com/@SidsTrains
Facebook: https://www.facebook.com/sidney.flumbaum
Instagram: https://www.instagram.com/sidneystrains/
Questions about upgrading your engine with Sid? Email him: sidstrains773@gmail.com
Music: Good Vibe by Twisterium from Pixabay
Please Note: The primary purpose of this Podcast is to educate. This podcast does not constitute advice or services. Please seek advice from an appropriate legal professional in your state, county, or city.
Your boys are back with a lead-in to Happy Nude Year, talking about a grimy ol' erotic thriller this week. But first, the guys discuss what's coming to theaters this February. Spoiler alert: Almost nothing! Then they discuss "Call Me" from 1988, an erotic thriller about a woman who witnesses a murder in the bathroom stall of a Polish bar in New York. She also gets horny when a sleazy anonymous pervert calls her on the phone. It stars Stephen McHattie and Steve Buscemi! All this plus the horrors of the world, voice mails, Kyle from Kentucky, Travis, Kevin Chat, new details on the Sarah Squirm vs. Parker feud and so much more! Direct Donloyd Here!! After the episode, send us a voice mail or join the Patreon perhaps?
In this episode, Travis and his producer get brutally honest about the offers, products, and business models they would never build again—and why that matters if you want to make more money with less stress. From overpriced courses to overbuilt software and Travis's hard "no" on ever starting a restaurant from scratch, this conversation focuses on pattern recognition: how to spot red flags earlier, avoid expensive mistakes, and build offers people actually want to buy.
On this episode we talk about:
Why Travis's early webinar-to-course funnel "worked" on paper but could never scale without a real backend offer.
How he would redesign that same funnel today: free or low-ticket course, then selling implementation (done-with-you and done-for-you).
The expensive lessons from building a software company before validating demand—and why you must build what the market wants, then deliver what it needs.
The hidden stress and complexity of certain business models (like restaurants) and why Travis would never launch one from scratch.
How ego, perfectionism, and "romanticizing your idea" can cost you time, money, and opportunity.
Top 3 Takeaways
Courses aren't dead—but information isn't enough. Use courses as lead magnets and make real money on implementation offers (coaching, consulting, done-for-you services).
Validate before you build big. Especially with software, ship the embarrassing V1, get feedback fast, and only scale what people are already using and asking for.
Choose business models that match your life. Some ideas (like restaurants) can be wildly profitable for the right person, but come with low margins, high stress, and operational headaches Travis doesn't want.
Notable Quotes
"You have to build what they want—and then give them what they need on the back end."
"'If you're not embarrassed by the first version of your product, you launched too late' really hit home for me—because I launched way too late."
"I would never start a restaurant from scratch. One successful store isn't enough reward for all the headache it takes to get there."
✖️✖️✖️✖️
Zvi Band is a developer, serial founder, and relationship-driven entrepreneur best known for building Contactually, the much-loved CRM he scaled to over $10 million in revenue before selling to real estate giant Compass in a deal valued north of $20 million. In addition to founding and exiting venture-backed companies, he's written a book, coached thousands of professionals, and now leads Relatable, a personal CRM designed to help people deepen trusted relationships instead of just "monetizing contacts." In this conversation, he unpacks how AI is blowing the doors off traditional software gatekeeping and what non-technical founders can realistically build in the next 30 days.
On this episode we talk about:
How AI has collapsed the barrier to building software—from needing a technical co-founder or expensive dev team to being able to spin up a working web app in a matter of hours.
What non-technical founders should actually learn first (hint: product thinking and clear specs) instead of trying to become full-stack engineers.
Which AI-powered tools can help you go from "idea in your head" to V1 MVP—covering product specs, code, hosting, and iteration.
How to think about UX/UI in an AI world, including using real-world visuals and brand cues to guide your app's look and feel.
Where AI is taking the software and career landscape next, from solo-built seven–eight figure products to massive retraining opportunities as lower-level jobs get automated.
Top 3 Takeaways
1. You no longer need a technical co-founder to ship a real product; if you can clearly describe what you want and think like a product manager, AI can handle most of the coding and infrastructure for a basic business app.
2. The real "execution risk" has shifted from writing clean code to building the right thing, matching real user journeys, and finding distribution in an increasingly noisy, AI-generated world.
3. AI will both automate low-level work and open up huge opportunities in enablement—helping industries adopt AI, retraining displaced workers, and giving more people a viable path into software and entrepreneurship.
Notable Quotes
"Even if the code is 'throwaway,' it costs you next to nothing now to have AI build a V1 while you sleep."
"Anyone can tell an AI to make a CRM; very few people can make a CRM informed by fifteen years of thinking deeply about relationships."
"As AI takes more tasks off your plate, the real question is whether you'll use that freed-up time to invest in relationships or just scroll more content."
Connect with Zvi Band:
Website: https://www.zviband.com
Relatable (personal CRM): https://relatable.one
✖️✖️✖️✖️
New Year's Eve, no guest, and somehow the cockpit still turns into a full-blown sitcom. Fig & RePete kick off with a takeoff that goes sideways at V1 when “rotate” gets called… and apparently translated into “stare blankly into the void.” From there, it's the perfect hangout episode: Top Gun continuity crimes (medals disappear, sunglasses teleport), a hard pivot into the Air India 787 post-rotation dual-engine power-loss mystery (and why one explanation feels disturbingly too plausible), and a buffet of leadership horror stories that'll make you grateful for every normal human you've ever flown with. Plus: quiet professionals, jumpseat survival tactics, and one legendary “turn the checklist 90 degrees” power move that ends exactly how it should. Funny, sharp, and just unhinged enough to feel like the crew room after midnight.
Happy New Year! You may have noticed that in 2025 we moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!
We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They went on to be one of the few AI Grant companies to raise a full seed round from Nat Friedman and Daniel Gross, and have now become the independent gold standard for AI benchmarking—trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.
We have chatted with both Clementine Fourrier of HuggingFace's OpenLLM Leaderboard and (the freshly valued at $1.7B) Anastasios Angelopoulos of LMArena on their approaches to LLM evals and trendspotting, but Artificial Analysis have staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.
George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is "open" really?
We discuss:
* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints
* How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. 
leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* The Omniscience Index and hallucination rate: scores models from -100 to +100 (penalizing incorrect answers, rewarding "I don't know"), and Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDPVal AA: their version of OpenAI's GDPval (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omniscience accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (GPT-5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDPVal AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (HumanEval-style coding is now trivial for small models)
Links to Artificial Analysis
* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith
Full Episode on YouTube
Timestamps
* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 28:01 New Benchmarks: Omniscience Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 23:01 GDPVal AA: Agentic Benchmark for Real Work Tasks
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for Intelligence
Transcript
Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.
swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. 
Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks, and how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...
George [00:01:09]: Yeah, but you can't pay us for better results.
swyx [00:01:12]: Yes, exactly.
George [00:01:13]: Very important.
Micah [00:01:14]: Start off with a spicy take.
swyx [00:01:18]: Okay, how do I pay you?
Micah [00:01:20]: Let's get right into that.
swyx [00:01:21]: How do you make money?
Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. We run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups. We want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.
swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?
George [00:02:53]: So we have a benchmarking and insights subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. So, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself. That's an example of the kind of decision that big enterprises face, and it's hard to reason through; this AI stuff is really new to everybody. And so with our reports and insights subscription, we try to help companies navigate that. We also do custom private benchmarking. That's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks and run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. 
Yeah. So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.
swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.
Micah [00:04:19]: George was in SF, but he's Australian, but he moved here already. Yeah.
swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll get to the private benchmarks. Yeah.
George [00:04:33]: Why don't we even go back a little bit to, like, why we thought that it was needed? Yeah.
Micah [00:04:40]: The story kind of begins in 2022, 2023. Both George and I had been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. It actually worked pretty well for its era, I would say. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. I had this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like, you're trying to think about accuracy, a bunch of other metrics, and performance and cost. And mostly, just no one was doing anything to independently evaluate all the models, and certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things, measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.
swyx [00:05:49]: Like, we didn't get together and say, hey, we're going to stop working on all this stuff, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.
Micah [00:05:58]: That's actually true. I don't even think we'd paused anything. George had a day job. I didn't quit working on my legal AI thing. It was genuinely a side project.
George [00:06:05]: We built it because we needed it as people building in the space and thought, oh, other people might find it useful too. So we bought a domain, linked it to the Vercel deployment that we had, and tweeted about it. But very quickly it started getting attention. Thank you, Swyx, for, I think, doing an initial retweet and spotlighting this project that we released. It was useful to others, but it became more useful very quickly as the number of models released accelerated. We had Mixtral 8x7B, and that was a key one. That's a fun one. Yeah. An open source model that really changed the landscape and opened up people's eyes to other serverless inference providers, and to thinking about speed, thinking about cost. And so it became more useful quite quickly. Yeah.
swyx [00:07:02]: What I love about talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more. 
When you started out, I would say the status quo at the time was every paper would come out and they would report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork. I think everyone has some version of an Excel sheet or a Google sheet where you just copy and paste the numbers from every paper and post it up there. And then sometimes they don't line up, because when they're independently run, your numbers are going to look better and your reproductions of other people's numbers are going to look worse, because you don't hold their models correctly or whatever the excuse is. I think then Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. If I were to start Artificial Analysis at the same time you guys started, I would have used EleutherAI's eval harness. Yup.
Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website, one of the reasons why we realized that we had to run the evals ourselves, and couldn't just take results from the labs, was that they would all prompt the models differently. And when you're competing over a few points, you can pretty easily get... You can put the answer into the model. Yeah. That, in the extreme. And you get crazy cases, like back when Google released Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and constructed, I think never published, chain-of-thought examples, 32 of them, in every topic in MMLU to run it, to get the score. There are so many things that you... They never shipped Ultra, right? That's the one that never made it out. Not widely. Yeah. I mean, I'm sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.
swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this, and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren't at the start.
Micah [00:09:36]: So, I mean, we were paying for it personally at the start. It wasn't a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was kind of fine. Yeah. These days that's gone up an enormous amount for a bunch of reasons that we can talk about. But yeah, it wasn't that bad, because you've got to remember that the number of models we were dealing with was hardly any, and the complexity of the stuff that we wanted to do to evaluate them was a lot less. We were just asking some Q&A type questions. And one specific thing was that for a lot of evals initially, we were just sampling an answer. 
You know, like, what's the answer for this? We'd just go to the answer directly without letting the models think. We weren't even doing chain-of-thought stuff initially. And that was the most useful way to get some results initially. Yeah.
swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right? Because the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned the wrong format, and they will get a zero for that unless you work it into your parser. And that involves more work. And so there's an open question whether you should give it points for not following your instructions on the format.
Micah [00:11:00]: It depends what you're looking at, right? Because if you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how it's answered. But these days, it's mostly less of a problem. If you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.
swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple choice question, sometimes there's a bias towards the first answer, so you have to randomize the responses. All these nuances. Once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.
Micah [00:11:47]: You've also got the different degrees of variance in different benchmarks, right? Yeah. So if you run a four-option multiple-choice eval on a modern reasoning model, at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run, especially if it has a small number of questions. So one of the things that we do is run an enormous number of all of our evals when we're developing new ones and doing upgrades to our Intelligence Index to bring in new things, so that we can dial in the right number of repeats, so that we can get to the 95% confidence intervals that we're comfortable with. When we pull that together, we can be confident in the Intelligence Index to at least as tight as, like, plus or minus one at 95% confidence. Yeah.
swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.
George [00:12:37]: So that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that's assuming one repeat in terms of how we report it, because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there because of the repeats.
swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking: you don't have any special deals with the labs. They don't discount it. You just pay out of pocket or out of your sort of customer funds. Oh, there is a mix. 
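To make the repeat-run methodology above concrete, here is a minimal sketch of an eval loop in the spirit of what George and Micah describe: sample each multiple-choice question several times, pull the letter answer out with a simple regex, and report accuracy with a 95% confidence interval. Everything here (the `ask_model` stub, the answer format, the repeat count) is an illustrative assumption, not Artificial Analysis's actual harness.

```python
import math
import re

ANSWER_RE = re.compile(r"Answer:\s*([A-D])\b", re.IGNORECASE)

def ask_model(prompt: str) -> str:
    """Stub: wire this up to whatever chat-completions API you use."""
    raise NotImplementedError

def extract_choice(response: str) -> str | None:
    """Pull the last 'Answer: X' out of a free-form response; None if absent."""
    matches = ANSWER_RE.findall(response)
    return matches[-1].upper() if matches else None

def accuracy_with_ci(questions: list[str], gold: list[str], repeats: int = 8):
    """Run each question `repeats` times; return mean accuracy and a
    normal-approximation 95% confidence interval over all samples."""
    scores = []
    for q, answer in zip(questions, gold):
        for _ in range(repeats):
            choice = extract_choice(ask_model(q))
            scores.append(1.0 if choice == answer else 0.0)  # unparseable = 0
    n = len(scores)
    mean = sum(scores) / n
    se = math.sqrt(mean * (1 - mean) / n)  # binomial standard error
    return mean, (mean - 1.96 * se, mean + 1.96 * se)
```

More repeats tighten the interval, but at a direct multiple of the eval bill, which is exactly the cost trade-off discussed above.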
So the issue is that sometimes they may give you a special endpoint, which is… Ah, 100%.
Micah [00:13:21]: Yeah, exactly. So we laser focus, on everything we do, on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true. For the one you bring up right here: if we're working with a lab and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy, and we're totally transparent with all the labs we work with about this: we will register accounts not on our own domain and run both intelligence evals and performance benchmarks… Yeah, that's the job. …without them being able to identify it. And no one's ever had a problem with that. Because a thing that turns out to actually be quite a good factor in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.
swyx [00:14:23]: That's true. I never thought about that. I was in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.
Micah [00:14:36]: I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than direct shenanigans. It's that the things that get measured become the things that get targeted by the labs, right? Exactly. That doesn't mean anything that we should really call shenanigans; I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, and you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing. Preferably those are going to be helpful for a wide range of how actual users want to use the thing that you're building, but they will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning to, like, how we might use modern coding agents and stuff, but it's clearly not one for one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that reflecting the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that, other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.
swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier. You've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join, and you moved here. What was it like? I think you were in, like, batch two? Batch four. Batch four. 
Okay.
Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great, and it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies, and they were extremely aligned with the mission of what we were trying to do. We're not quite typical of a lot of the other AI startups that they've invested in.
swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they give any advice that really affected you in some way, or were any of the events very impactful? That's an interesting question.
Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.
swyx [00:17:09]: Which is also, like, a crazy list. Yeah.
George [00:17:11]: Oh, totally. Yeah. There was something about speaking to Nat and Daniel about the challenges of working through a startup: working through the questions that don't have clear answers, how to work through those methodically, and just working through the hard decisions. They've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that other companies in the batch, and other companies in AI Grant, are pushing the capabilities of what AI can do at this time. And so being in contact with them, making sure that Artificial Analysis is useful to them, has been fantastic for supporting us in working out how we should build out Artificial Analysis to continue being useful to those building on AI.
swyx [00:17:59]: I think to some extent I'm of mixed opinion on that one, because to some extent, your target audience is not people in AI Grant, who are obviously at the frontier. Yeah. Do you disagree?
Micah [00:18:09]: To some extent. But then, a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypical power users of Artificial Analysis: some of the people with the strongest opinions about what we're doing well, what we're not doing well, and what they want to see next from us. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between different models for different parts of your application, to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours; we don't charge for all our data on the website. But they are absolutely some of our power users.
swyx [00:19:07]: So let's talk about just the evals as well. So you start out from the general, like, MMLU and GPQA stuff. What's next? How do you sort of build up to the overall index? What was in V1, and how did you evolve it? Okay.
Micah [00:19:22]: So first, just as background: we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric that we pull together currently from 10 different eval data sets, to give what we're pretty confident is the best single number to look at for how smart the models are. 
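As a toy illustration of what a synthesis metric like this can look like: normalize each eval to a common scale, then take a weighted average. The dataset names and weights below are made up for illustration; the actual Intelligence Index composition and weighting are Artificial Analysis's own recipe.

```python
# Hypothetical composition; the real Intelligence Index uses its own
# datasets and weights.
EVAL_WEIGHTS = {
    "mmlu_pro": 0.15,
    "gpqa_diamond": 0.15,
    "agentic_tool_use": 0.25,
    "long_context_reasoning": 0.25,
    "competition_math": 0.20,
}

def intelligence_index(scores: dict[str, float]) -> float:
    """Combine per-eval scores (each already normalized to 0-100)
    into a single 0-100 headline number."""
    assert set(scores) >= set(EVAL_WEIGHTS), "missing eval scores"
    total = sum(EVAL_WEIGHTS.values())
    return sum(w * scores[name] for name, w in EVAL_WEIGHTS.items()) / total

print(intelligence_index({
    "mmlu_pro": 84.0,
    "gpqa_diamond": 71.5,
    "agentic_tool_use": 62.0,
    "long_context_reasoning": 58.0,
    "competition_math": 95.0,
}))  # -> one headline score
```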
Obviously, it doesn't tell the whole story. That's why we publish the whole website of charts, to dive into every part of it and look at the trade-offs. But it's the best single number. So right now, it's got a bunch of Q&A type data sets that have been very important to the industry, like a couple that you just mentioned. It's also got a couple of agentic data sets, our own long context reasoning data set, and some other use case focused stuff. As time goes on, the things that we're most interested in, the capabilities that are becoming more important for AI and that developers care about, are first around agentic capabilities. So, surprise, surprise: we're all loving our coding agents, and how the models perform there, and then doing similar things for different types of work, is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got these things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.
swyx [00:20:46]: But I guess one thing I was driving at was the V1 versus the V2, and how that changed over time.
Micah [00:20:53]: Like how we've changed the index to where we are.
swyx [00:20:55]: And I think that reflects the change in the industry. Right. So that's a nice way to tell that story.
Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, how much progress has been made in the last two years. We obviously play the game constantly of today's version versus last week's version and the week before, and all of the small changes in the horse race between the current frontier and who has the best smaller-than-10B model right now this week. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out, a couple of years ago literally most of what we were doing to evaluate the models then would be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence. We can talk about that more in a bit. So across V1, V2, V3, we made things harder. We covered a wider range of use cases. And we tried to get closer to things developers care about, as opposed to just the Q&A type stuff that MMLU and GPQA represented. Yeah.
swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark, looking around, and asking questions about it. Yeah.
Micah [00:22:21]: Let's do it. Okay. This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.
George [00:22:26]: And a little bit about the direction that we want to take it. We want to push benchmarks. Currently, the Intelligence Index and evals focus a lot on raw intelligence, but we want to diversify how we think about intelligence. We can talk about it: new evals that we've built and partnered on focus on topics like hallucination, and there are a lot of topics that I think are not covered by the current eval set that should be. 
And so we want to bring that forth. But before we get into that...
swyx [00:23:01]: For listeners, just as a timestamp: right now, number one is Gemini 3 Pro High, followed by Claude Opus at 70, GPT-5.1 High. You don't have 5.2 yet. And Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. I mean, I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.
George [00:23:25]: Totally. A quick view of that is... okay, there's a lot. I love it. I love this chart. Yeah.
Micah [00:23:30]: This is such a favorite, right? Yeah. In almost every talk that George or I give at conferences, we put this one up first, just to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year. And, I mean, you would remember that time period well: there were very open questions about whether or not AI was going to be competitive, full stop; whether or not OpenAI would just run away with it; whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.
George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, like how crazy it's been.
swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.
George [00:25:01]: It's models that we're highlighting by default in our charts, in our Intelligence Index. Okay.
swyx [00:25:07]: You just have a manually curated list of stuff.
George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose which models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.
swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.
George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.
Micah [00:25:44]: Yeah. I agree. Yeah, well, a couple of weeks from now. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones. I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas, running the evals and getting back result by result on DeepSeek V3. So this was the first of their V3 architecture, the 671B MoE.
Micah [00:26:19]: And we were very, very impressed. 
That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of V3, and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with an extremely strong base model, completely open weights, that we had as the best open weights model. So, yeah, that's the thing that you really see in the chart. But I think they got a lot of attention from us on Boxing Day last year.
George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.
swyx [00:26:54]: I'm from Singapore. A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.
Micah [00:27:11]: I don't know, I'm not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.
George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, benchmarks at a kind of system level, and so we changed our throughput metric to what we now call output speed, because throughput makes sense at a system level, so we took that name.
swyx [00:27:32]: Take me through more charts. What should people know? Obviously, the way you look at the site is probably different than how a beginner might look at it.
Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into; we have lots and lots of evals and stuff, so maybe we can skip past some of it. The interesting ones to talk about today, that would be great to bring up, are a few of our recent things that probably not many people will be familiar with yet. So the first of those is our Omniscience Index. This one is a little bit different from most of the intelligence evals that we run. We built it specifically to look at the embedded knowledge in the models, and to test hallucination by looking at, when the model doesn't know the answer, so it's not able to get it correct, what's its probability of saying "I don't know" versus giving an incorrect answer. The metric that we use for Omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to the question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say "I don't know" instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models and the labs creating them to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything; there's no incentive to say "I don't know." So we did that for this one here.
swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah.
George [00:29:31]: On that: one reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are.
swyx [00:29:43]: I don't know. Maybe it might be, though. 
You put it in like a JSON field, say confidence, and maybe it spits out something. Yeah. You know, we have done a few evals podcasts over the years, and when we did one with Clementine of Hugging Face, who maintains the OpenLLM Leaderboard, this was one of her top requests: some kind of hallucination slash lack-of-confidence calibration thing. And so, hey, this is one of them.
Micah [00:30:05]: And like anything that we do, it's not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. One of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. We have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking down quite granularly by topic. We've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.
swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would it be the other way around in a normal capability environment? I don't know. What do you make of that?
George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability to, when they don't know something, say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over here against Gemini 2.5 Flash and 2.5 Pro. Let me add Pro quickly here.
swyx [00:32:07]: I bet Pro's really good. Uh, actually no, I meant the GPT Pros.
George [00:32:12]: Oh yeah.
swyx [00:32:13]: Because the GPT Pros are rumored, we don't know for a fact, to be like eight runs with an LLM judge on top. Yeah.
George [00:32:20]: So we saw a big jump in... this is accuracy, so this is just the percent that they get correct... and Gemini 3 Pro knew a lot more than the other models. So a big jump in accuracy, but relatively no change in hallucination rate between the Google Gemini models between releases. Exactly. And so it's likely just a different post-training recipe that's driven this for the Claude models. Yeah.
Micah [00:32:45]: You can partially blame us, and how we define intelligence, for having until now not defined hallucination as a negative in the way that we think about intelligence.
swyx [00:32:56]: And so that's what we're changing. 
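The scoring rule Micah describes is simple enough to state in a few lines. Here is a sketch of the arithmetic as described on the show: +1 for a correct answer, -1 for an incorrect one, 0 for an abstention, scaled to a -100..+100 index, with hallucination rate defined over the questions the model didn't get right. Grading each response into these three buckets is assumed to happen upstream.

```python
from enum import Enum

class Outcome(Enum):
    CORRECT = "correct"
    INCORRECT = "incorrect"
    ABSTAIN = "abstain"  # the model said "I don't know"

def omniscience_index(outcomes: list[Outcome]) -> float:
    """-100..+100: correct earns a point, incorrect loses a point,
    abstaining is neutral. Always answering wrong scores -100;
    always saying 'I don't know' scores 0."""
    points = sum((o is Outcome.CORRECT) - (o is Outcome.INCORRECT) for o in outcomes)
    return 100.0 * points / len(outcomes)

def hallucination_rate(outcomes: list[Outcome]) -> float:
    """Of the questions the model failed to answer correctly, the share
    where it gave a confident wrong answer instead of abstaining.
    (Assumes at least one missed question.)"""
    missed = [o for o in outcomes if o is not Outcome.CORRECT]
    return sum(o is Outcome.INCORRECT for o in missed) / len(missed)
```

Note how the index changes the incentive: under plain percent-correct, guessing is free; here a guess that misses actively costs a point, so a well-calibrated "I don't know" is the rational play.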
I know many smart people who are confidently incorrect.
George [00:33:02]: Look at that. That is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas. One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay.
swyx [00:33:32]: And is it sort of like a HumanEval type or something different, or like a FrontierMath type?
George [00:33:37]: It's not dissimilar to FrontierMath. These are research questions that academics in the physics world would be able to answer, but models really struggle to answer. So the top score here is now 9%.
swyx [00:33:51]: And the people that created this, like Minway, and actually Ofir, who was kind of behind SWE-bench. And what organization is this? Oh, it's Princeton.
George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high as they can, when they're trying to explore new ideas in physics with the models as a thought partner, just because they want the models to hallucinate. Yeah, sometimes it's something new. Yeah, exactly.
swyx [00:34:21]: So not right in every situation, but I think it makes sense to test hallucination in scenarios where it makes sense. Also, the obvious question is, this is one of many: every lab has a system card that shows some kind of hallucination number, and you've chosen not to endorse that and to make your own. And that's a choice. Totally. In some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun; you provide it as a service here. You have to fight the, well, who are we to do this? And your answer is that we have a lot of customers, and, you know... but, like, I guess, how do you convince the individual?
Micah [00:35:08]: I think for hallucinations specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. We've called this AA-Omniscience and hallucination rate, not trying to declare that, like, it's Humanity's Last Hallucination. You could have some interesting naming conventions and all this stuff. The bigger-picture answer, and something that I actually wanted to mention just as George was explaining Critical Point as well, is that as we go forward, we are building evals internally, and we're partnering with academia and with AI companies to build great evals. We have pretty strong views, in various ways for different parts of the AI stack, on where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We're not obsessed with the idea that everything we do, we have to do entirely within our own team. Critical Point is a cool example, where we were a launch partner for it, working with academia. We've got some partnerships coming up with a couple of leading companies. 
Those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure, we're completely comfortable with that. A lot of the labs have released great data sets in the past that we've used to great success independently. And so between all of those techniques, we're going to be releasing more stuff in the future. Cool.
swyx [00:36:26]: Let's cover the last couple, and then I want to talk about your trends analysis stuff, you know? Totally.
Micah [00:36:31]: So actually, I have one little factoid on Omniscience. If you go back up to accuracy on Omniscience: an interesting thing about this accuracy metric is that it tracks the total parameter count of models more closely than anything else that we measure. It makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric. We're not looking at the index and the hallucination rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.
swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. Rumors. I hear all sorts of numbers. I don't know what to trust.
Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, and we've got all the open weights models, you can squint and see that likely the leading frontier models right now are quite a lot bigger than the one trillion parameters that the open weights models we're looking at here cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, but that's not out yet. Take those together, have a look, and you might reasonably form a view that there's a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you would land if you have a look at it. Yeah.
swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it. Yeah, totally.
George [00:38:17]: They've also got different incentives in play compared to open weights models, who are thinking about supporting others in self-deployment. For the labs who are doing inference at scale, it's, I think, less about total parameters in many cases when thinking about inference costs, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.
Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously, if you're a developer or company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence. You should be looking at the cost to run the index, and the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's all that matters.
swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle; look, GPT-5 is this big circle. And then there used to be a thing for a while. 
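For readers who want to try the squint-at-the-chart exercise themselves, here is the back-of-envelope version: fit accuracy against log total parameters on the open weights models, then invert the fit for a frontier model's score. All the numbers below are made up for illustration; only the method mirrors what Micah describes.

```python
import math

# Hypothetical (params in billions, knowledge-accuracy %) points for
# open weights models; illustrative only, not Artificial Analysis data.
points = [(8, 12.0), (70, 22.0), (405, 33.0), (1000, 39.0)]

# Ordinary least squares for: accuracy ~ a + b * log10(params).
xs = [math.log10(p) for p, _ in points]
ys = [acc for _, acc in points]
n = len(points)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Invert the fit for a hypothetical frontier model's accuracy.
frontier_accuracy = 50.0
implied_params = 10 ** ((frontier_accuracy - a) / b)
print(f"Implied size: roughly {implied_params / 1000:.1f}T total parameters")
```

With these toy numbers the fit implies roughly 8T total parameters; the point is the shape of the inference, not the figure.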
Micah [00:39:07]: Yeah, but that is, on its own, actually a very interesting one, right? Chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total model size, especially with the upcoming hardware generations.
swyx [00:39:29]: Yes. So, taking off my shitposting hat for a minute: at the same time, I do feel, especially coming back from Europe, that people do feel like Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out, and therefore we need to start exploring at least a different path. GDPval, I think, is only like a month or so old. I was also very positive on it when it first came out. I actually talked to Tejal, who was the lead researcher on that. And you have your own version.
George [00:39:59]: It's a fantastic data set.
swyx [00:40:01]: And maybe we'll recap for people who are still out of it: it's 44 tasks, based on some kind of GDP cutoff, meant to represent broad white-collar work that is not just coding.
Micah [00:40:12]: Yeah. Each of the tasks has a whole bunch of detailed instructions, and input files for a lot of them. Within the 44 it's divided into 220, maybe 225, subtasks, which are the level we run through the agentic harness. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work. No eval is perfect; there are always going to be more things to look at. Largely, that's because, in order to define the tasks well enough that you can run them, they need to have only a handful of input files and very specific instructions. And so I think the easiest way to think about them is as quite hard take-home exam tasks that you might do in an interview process.
swyx [00:40:56]: Yeah, for listeners, it is no longer just a long prompt. It is like, well, here's a zip file with a spreadsheet or a PowerPoint deck or a PDF; go nuts and answer this question.
George [00:41:06]: OpenAI released a great data set, and they released a good paper which looks at performance across the different web chatbots on the data set. It's a great paper; I encourage people to read it. What we've done is take that data set and turn it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the data set, and then we developed an evaluator approach to compare outputs. It's AI-enabled, so it uses Gemini 3 Pro Preview to compare results, and we tested it pretty comprehensively to ensure that it's aligned with human preferences. One data point there: even with it as the evaluator, Gemini 3 Pro interestingly doesn't actually do that well. So that's kind of a good example of what we've done in GDPval AA.
swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM-as-judge is self-preference, where models usually prefer their own output, and in this case it was not so.
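(A minimal sketch of the criteria-anchored pairwise grading described here. The prompt wording and the ask_judge stub are illustrative assumptions, not Artificial Analysis's actual pipeline.)

```python
# Pairwise LLM-as-judge sketch: grade against task criteria in both orderings,
# and only keep verdicts where the judge agrees with itself.

JUDGE_PROMPT = """You are grading two candidate deliverables for the same task.

Task criteria:
{criteria}

Candidate A:
{a}

Candidate B:
{b}

Decide which candidate better meets the criteria, ignoring style.
Answer with exactly one character: A or B."""

def ask_judge(prompt: str) -> str:
    """Stand-in for a call to whatever grading model you use."""
    raise NotImplementedError("wire this to your judge model")

def pairwise_winner(criteria: str, a: str, b: str) -> str | None:
    first = ask_judge(JUDGE_PROMPT.format(criteria=criteria, a=a, b=b)).strip()[:1]
    swapped = ask_judge(JUDGE_PROMPT.format(criteria=criteria, a=b, b=a)).strip()[:1]
    # Swapping the order flips the labels, so 'B' on the second pass means 'A'.
    unswapped = {"A": "B", "B": "A"}.get(swapped)
    return first if first == unswapped else None  # None: judge contradicted itself
```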
Micah [00:42:08]: Totally. I think the places where it makes sense to use an LLM-as-judge approach now are quite different to some of the early LLM-as-judge stuff from a couple of years ago, because some of that (MT-Bench was a great project that was a good example of this a while ago) was about judging conversations and a lot of style-type stuff. Here, the task the grading model is doing is quite different to the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter and web search and the file system, to go through many, many turns to try to create the documents. Then on the other side, when we're grading, we're running the outputs through a pipeline to extract visual and text versions of the files so we can provide them to Gemini, and we're providing the criteria for the task and getting it to pick which of two potential outputs more effectively meets the criteria. It turns out it's just very, very good at getting that right; it matched human preference a lot of the time. I think that's because it's got the raw intelligence, but that's combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and the fact that we're comparing against criteria, not just zero-shot asking the model to pick which one is better.
swyx [00:43:26]: Got it. Why is this an ELO and not a percentage, like GDPval?
George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks.
swyx: It has to make a video?
George: Yeah, for some of the tasks.
swyx [00:43:43]: What task is that?
George [00:43:45]: I mean, it's in the data set.
swyx: Like, be a YouTuber?
George: It's a marketing video.
Micah [00:43:49]: Oh, wow. The model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It's pretty hard to do that with a code interpreter; the computer use stuff doesn't work quite well enough, and so on.
George [00:44:02]: And so there's no ground truth, necessarily, to compare against to work out a percentage correct. It's hard to come up with correct or incorrect there. So it's on a relative basis, and we use an ELO approach to compare outputs from each of the models on each task.
swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give them an ELO, so you have a human in there. I think what's helpful about GDPval, the OpenAI one, is that 50% is meant to be the normal human, and maybe the domain expert is higher than that, but 50% was the bar: if you've crossed 50, you are superhuman.
Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. That's one of the reasons presenting it as an ELO is quite helpful: it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about them as a human is quite different to how the models would go about them.
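(One minimal way to turn judged pairs into a relative score is a plain Elo update, sketched below. The K-factor and base rating are arbitrary illustrative choices; real leaderboards often use Bradley-Terry-style fits instead.)

```python
# Elo over pairwise judgments: each "A beat B" verdict nudges the ratings.

def expected_win(r_winner: float, r_loser: float) -> float:
    """Win probability implied by the current ratings."""
    return 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))

def record_result(ratings: dict[str, float], winner: str, loser: str, k: float = 32.0) -> None:
    gain = k * (1.0 - expected_win(ratings[winner], ratings[loser]))
    ratings[winner] += gain
    ratings[loser] -= gain

ratings = {"model-x": 1000.0, "model-y": 1000.0, "model-z": 1000.0}
judged_pairs = [("model-x", "model-y"), ("model-x", "model-z"), ("model-z", "model-y")]
for won, lost in judged_pairs:
    record_result(ratings, won, lost)
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```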
swyx [00:45:15]: Yeah. I also liked that you included Llama 4 Maverick in there. Is that just one last, like...
Micah [00:45:20]: Well, no, no, no, it is the best model released by Meta. And so it makes it into the homepage default set, still, for now.
George [00:45:31]: The other inclusion that's quite interesting is that we also ran it across the latest versions of the web chatbots. And so we have...
swyx [00:45:39]: Oh, that's right.
George [00:45:40]: Oh, sorry.
swyx [00:45:41]: Yeah, I completely missed that. Okay.
George [00:45:43]: No, not at all. So that's the one which has a checkered pattern.
swyx: So that is their harness, not yours, is what you're saying.
George: Exactly. And what's really interesting is that if you compare, for instance, Claude 4.5 Opus using the Claude web chatbot, it performs worse than the model in our agentic harness. And in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.
swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else.
Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, they have a cost goal; we let the models work as long as they want, basically.
swyx: Do you copy-paste manually into the chatbot?
Micah: Yeah. That was how we got the chatbot reference numbers. We're not going to be keeping those updated at quite the same scale as the hundreds of models.
swyx [00:46:38]: Well, talk to Browserbase, they'll automate it for you. You know, I have thought about how we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Right.
Micah [00:46:53]: Yes. And that's grown a huge amount over the last year, right? The tools that are available have actually diverged a fair bit, in my opinion, across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.
swyx [00:47:10]: What tools and what data connections come to mind when you say that? What's notable work that people have done?
Micah [00:47:15]: Oh, okay. So my favorite example on this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work. For me, that's Google Drive, OneDrive, and our Supabase databases if we need to do some analysis on some data. Preferably, the model can be plugged into all of those things and can go do some useful work based on them. The thing that I find most impressive currently, that I am somewhat surprised works really well in late 2025, is that I can have models use the Supabase MCP to query, read-only of course, and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and it can read my Gmail and my Notion.
swyx: Okay. You actually use that. That's good. Is that a Claude thing?
Micah: To various degrees, both ChatGPT and Claude right now. I would say that this stuff barely works right now, in fairness.
George [00:48:33]: Because people are actually going to try this after they hear it. If you get an email from Micah, odds are it wasn't written by a chatbot.
Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.
swyx [00:48:46]: And so you can feel it coming, right? And this time next year, we'll come back and see where it's going. Totally. Supabase shout-out, another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.
George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users, and we probably do some things more manually than we should. The Supabase support line has been super friendly. One extra point regarding GDPval AA: on the basis of the overperformance of the models compared to the chatbots, we realized that the reference harness we built actually works quite well on generalist agentic tasks; this proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else?
Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPval we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context, we had to give them a custom tool. But yeah, exactly. You can explain it.
George [00:50:21]: So it turned out that we had created a good generalist agentic harness, and we released it on GitHub yesterday. It's called Stirrup. So if people want to check it out, it's a great base for building a generalist agent for more specific tasks.
Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code, and the coding agents can work with it super well.
swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think maybe in other similar environments, the Terminal-Bench guys have done sort of the same with Harbor. It's a bundle of: we need our minimal harness, which for them is Terminus, and we also need the RL environments or Docker deployment thing to run independently. I don't know if you've looked at Harbor at all. Is that a standard that people want to adopt?
George [00:51:19]: Yeah, we've looked at it from an evals perspective; we love Terminal-Bench and host Terminal-Bench benchmarks on Artificial Analysis. We've looked at it from a coding agent perspective, but could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough, and have gotten better at using tools, that they perform better when just given a minimalist set of tools. Let the model control the agentic workflow, rather than using another framework that's a bit more built out and tries to dictate the flow.
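(A sketch of the minimalist-harness shape George describes: a loop where the model drives the workflow and the harness only executes tools. The tool implementations and call_model stub are placeholders, not the actual Stirrup code; for the real thing, see their GitHub repo.)

```python
# Minimal agent loop: give the model a few tools and let it decide the flow.
import json

def call_model(messages: list[dict]) -> dict:
    """Stand-in chat call; returns {'content': str, 'tool_call': dict | None}."""
    raise NotImplementedError("wire this to your model API")

TOOLS = {
    "web_search": lambda query: f"<results for {query!r}>",      # placeholder
    "run_code":   lambda source: "<stdout of a sandboxed run>",  # placeholder
}

def run_agent(task: str, max_turns: int = 50) -> str:
    messages = [
        {"role": "system", "content": "Use the tools as needed. Reply 'DONE: <answer>' when finished."},
        {"role": "user", "content": task},
    ]
    for _ in range(max_turns):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply["content"]})
        tool_call = reply.get("tool_call")
        if tool_call:  # the model, not the harness, chose the next step
            result = TOOLS[tool_call["name"]](**tool_call["args"])
            messages.append({"role": "tool", "content": json.dumps({"result": result})})
        elif reply["content"].startswith("DONE:"):
            return reply["content"][len("DONE:"):].strip()
    return "<gave up: max turns exceeded>"
```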
swyx [00:51:56]: Awesome. Let's cover the openness index, and then let's go into the report stuff. So that's the last of the proprietary numbers, I guess. I don't know how you classify all these.
Micah [00:52:07]: Call it the last of the three new things that we're talking about from the last few weeks. Because we do a mix of stuff: things where we're using open source, things that we open source ourselves, and proprietary stuff that we don't always open source. The long context reasoning data set last year, we did open source. And then, of all the work on performance benchmarks across the site, some of it we're looking to open source, but some of it we're constantly iterating on. So there's a huge mix across the site of stuff that is and isn't open source.
swyx [00:52:41]: That's AA-LCR, for people. But let's talk about openness.
Micah [00:52:42]: Let's talk about the openness index. This here is, call it, a new way to think about how open models are. We have, for a long time, tracked whether the models are open weights and what the licenses on them are. And that's pretty useful; it tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important, and that we haven't tracked until now: how much is disclosed about how the model was made. So, transparency about data, meaning pre-training data and post-training data, and whether you're allowed to use that data, plus transparency about methodology and training code. Basically, those are the components. We bring them together to score an openness index for models, so that in one place you can get this full picture of how open models are.
swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this matters. I don't know what the numbers mean, though. Is there a max number? Is this out of 20?
George [00:53:44]: It's out of 18 currently. We've got an openness index page, but essentially these are points: you get points for being more open across these different categories, and the maximum you can achieve is 18. So AI2, with their extremely open OLMo 3 32B Think model, is the leader, in a sense.
swyx [00:54:04]: What about Hugging Face?
George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to get the intelligence benchmarks right to get it on the site.
swyx [00:54:12]: You can't have an openness index and not include Hugging Face. We love Hugging Face.
George: We'll have that up very soon.
swyx: I mean, you know, RefinedWeb and all that stuff. It's amazing. Or is it called FineWeb? FineWeb. FineWeb.
Micah [00:54:23]: Yeah, totally. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the company is contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we run the intelligence index on, on the site.
And it's just an extra view to understand.
swyx [00:54:43]: Can you scroll down to this? The trade-offs chart. Yeah, that one. This really matters, right? Obviously, because you can b
2025 wasn't a failed bull market. It was the start of a structural bear. In this episode, we break down why Bitcoin holding the “blue zone” may signal maturity rather than weakness, and why that shift breaks many of the assumptions crypto has relied on for the last decade. Slower upside, collapsing speculative volume, and pressure on miners aren't anomalies — they're consequences. We revisit the biggest signals from this cycle: Trumpcoin, treasury-company leverage, crypto AI hype, and why on-chain activity quietly evaporated. Then we pivot into AI-generated content, dissecting a viral video that fooled millions and what it reveals about authenticity, persuasion, and trust in the AI era. From there, we look ahead to 2026: – Miner revenue compression and Bitcoin's security budget problem – Why “fees will fix it” isn't enough – Neobanking + stablecoins as the real onboarding wave – Regulation turning crypto into structured internet capital markets We close with the NAT thesis: Bitcoin's long-term sustainability depends on a second subsidy. NAT is explored as a non-arbitrary, miner-aligned solution with a clear catalyst timeline (V1, V2, adoption, flywheel). This isn't about hype. It's about whether crypto becomes infrastructure — or breaks under its own assumptions. Topics: First up, break down why Bitcoin holding the “blue zone” may signal maturity rather than weakness Next, revisit the biggest signals from this cycle: Trumpcoin, treasury-company leverage, crypto AI hype, and why on-chain activity quietly evaporated. Finally, Our prediction for 2026 Please like and subscribe on your favorite podcasting app! Sign up for a free newsletter: www.theblockrunner.com Follow us on: Youtube: https://bit.ly/TBlkRnnrYouTube Twitter: bit.ly/TBR-Twitter Telegram: bit.ly/TBR-Telegram Discord: bit.ly/TBR-Discord $NAT Telegram: https://t.me/dmt_nat
Mixing Music with Dee Kei | Audio Production, Technical Tips, & Mindset
In Episode 353, Dee Kei and Lu break down how to communicate timelines like a pro, even when you're booked out. They talk about setting clear expectations for V1 delivery, revisions, and approvals, why daily updates can change everything, and how "underpromise and overdeliver" builds trust long term. They also warn against a common trap in mixing: thinking more plugins, more automation, or more hours automatically equals more value, and why your pricing should not be tied to effort or session complexity.
SUBSCRIBE TO OUR PATREON FOR EXCLUSIVE CONTENT!
SUBSCRIBE TO YOUTUBE
Join the 'Mixing Music Podcast' Discord!
HIRE DEE KEI
HIRE LU
HIRE JAMES
Find Dee Kei and Lu on Social Media:
Instagram: @DeeKeiMixes @MasteredbyLu @JamesParrishMixes
Twitter: @DeeKeiMixes @MasteredbyLu
The Mixing Music Podcast is sponsored by iZotope, Antares (Auto-Tune), Sweetwater, Plugin Boutique, Lauten Audio, Filepass, & Canva.
The Mixing Music Podcast is a video and audio series on the art of music production and post-production. Dee Kei, Lu, and James are professionals in the Los Angeles music industry, having worked with names like Odetari, 6arelyhuman, Trey Songz, Keyshia Cole, Benny the Butcher, carolesdaughter, Crying City, Daphne Loves Derby, Natalie Jane, charlieonnafriday, bludnymph, Lay Bankz, Rico Nasty, Ayesha Erotica, ATEEZ, Dizzy Wright, Kanye West, Blackway, The Game, Dylan Espeseth, Tara Yummy, Asteria, Kets4eki, Shaquille O'Neal, Republic Records, Interscope Records, Arista Records, Position Music, Capitol Records, Mercury Records, Universal Music Group, APG, Hive Music, Sony Music, and many others.
This podcast is meant to be used for educational purposes only. This show is filmed and recorded at Dee Kei's private studio in North Hollywood, California. If you would like to sponsor the show, please email us at deekeimixes@gmail.com.
Support this podcast at — https://redcircle.com/mixing-music-music-production-audio-engineering-and-music/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Stop chasing shiny objects and start driving real business outcomes. Marathon Health CTO Venkat Chittoor joins the show to explain why AI is the ultimate enabler for digital transformation, but only when it is anchored by a rock-solid business strategy.
Essential Insights for Tech Leaders
AI is not a standalone strategy. It is a powerful tool to accelerate a pre-existing business North Star.
Success in digital transformation follows a specific maturity curve. Start with personal productivity, move to replacing mundane tasks, and eventually aim for cognitive automation.
Governance must come before experimentation. Establishing guardrails for data privacy is critical before launching any AI pilot.
Measure value through tangible efficiency gains. In healthcare, this means reducing administrative burden or "pajama time" so providers can focus on patient care.
Don't let marketing speak fool you. Always validate vendor claims against your specific industry use cases.
Timestamped Highlights
00:50 Defining advanced primary care and the mission of Marathon Health
02:44 Why AI strategy is useless without a defined business strategy
05:01 The three steps of AI adoption, from productivity to cognition
12:14 How to define success metrics for a pilot versus a scaled V1 solution
16:40 Real-world ROI, including call deflections and charting efficiency
21:43 Advice for leaders on data quality and avoiding vendor traps
A Perspective to Carry
"AI is actually enabling [efficiency], but without a solid business strategy, AI strategy is not useful."
Tactical Advice for the Field
When launching an AI initiative, focus heavily on the underlying data quality. Ensure your team accounts for data recency, accuracy, and potential biases, as these factors determine whether an experiment succeeds or fails. Start small with pilots to build muscle memory before attempting to scale complex systems.
Join the Conversation
If you found these insights helpful, subscribe to the podcast for more deep dives into the tech landscape. You can also connect with Venkat Chittoor on LinkedIn to follow his work in healthcare innovation.
Today Nemo buys all of the buffies and then ranks them (V1). He also explores the new update and goes over some of the highlights of the patch notes. He buys World Champion Max and collects some Brawlidays presents too! Make sure to leave a 5-star review on Spotify and leave a comment to be featured in a future episode.
Consider subscribing to the Patreon for exclusive content! patreon.com/eternalbrawl
YT - nemoBS
Email - eternalbrawlpodcast@gmail.com
Main Club - Eternal Legion | #2UGJVQJVV
2nd Club - Eternal Army | #R9YJCUVU
The V2 rocket, technical designation A4 (Aggregat 4), was a ballistic missile developed in Germany early in the Second World War, aimed specifically at Belgium and sites in the southeast of England. It was the world's first long-range combat ballistic missile and the first known human-made object to make a suborbital flight. It was the progenitor of all modern rockets, including those used by the space programs of the United States and the Soviet Union, which gained access to the German scientists and designs through Operation Paperclip and Operation Osoaviakhim, respectively. The German Wehrmacht launched around 3,000 military V2 rockets against Allied targets during the war, mainly London and later Antwerp, resulting in the deaths of an estimated 7,250 people, both civilian and military. Nazi propaganda presented the weapon as revenge for the bombing of German cities from 1942 until the end of the war. Designed by Wernher von Braun, many of these missiles were fired from the French coast at London to cause as much devastation as possible and to undermine enemy morale. The successor to the V1 (which was a cruise missile), this design did not see the light of day until very late in the war, so it had little real impact on its outcome. The V2 was one of the most significant advances in weapons technology achieved up to that time. However, it could not change the course of the war, which by 1944 had already taken a decisive turn toward Allied victory.
Expository preaching by Pastor Daniel Ransom on Philippians chapter 2, verses 1 to 4. Recorded at the Centro Evangelico Battista in Perugia on 7 December 2025.
Message title: "What should be the basis and the direction of our thoughts"
PHILIPPIANS 2, V1-4
1 Therefore if there is any consolation in Christ, if any comfort of love, if any fellowship of the Spirit, if any tenderness of affection and any compassion, 2 make my joy complete by being of the same mind, having the same love, being of one accord and of one sentiment. 3 Do nothing out of selfish ambition or vainglory, but in humility let each regard others as better than himself, 4 each of you looking not to your own interests, but also to those of others.
Guests: Emma-Lee Andersson, Clara Kristiansen, Viktor Elsnitz, Behrad Rouzbeh, Johannes Finnlaugsson
For 90 SEK/month you get 5 episodes a week: 4 regular AMK MORGON + AMK FREDAG with Isak Wahlberg. Make sure to become a Patron via the web and not directly in the iPhone Patreon app, to avoid Apple's extra fees: open your browser instead and go to www.patreon.com/amkmorgon
See Daddies With Issues at Kappa Bar in Uppsala on 12/12:
https://billetto.se/en/e/daddies-with-issues-kappa-bar-uppsala-biljetter-1670810
See Jofi's "Året är 2025":
https://underjord.nu/biljetter/aret-ar-2025/
See "Nära Vänner" with Marcus Thapper and Clara Kristiansen at Scalateatern on March 5:
https://billetto.se/e/nara-vanner-stockholm-biljetter-1763300?bref=eyJzIjoiYmlsbGV0dG8gYWR2ZXJ0aXNpbmciLCJtIjoiYmlsbGV0dG8iLCJjIjoiY2l0eSBndWlkZSIsImNvIjoibC0xNi1zYy0zMDkzLXNlIiwidCI6MTc2NDY2MjQwMH0%3D
Relevant links:
...Watain
https://upload.wikimedia.org/wikipedia/commons/4/49/Watain_live_hole_in_the_sky_festival_bergen_norway_28_august_2010.jpg
...the browser history
https://www.bbc.com/news/articles/c1dz0g2ykpeo
https://blog.mozilla.org/en/mozilla/leadership/usa-freedom-and-browsing-history/
...Iran vs. Egypt in Seattle
https://www.aftonbladet.se/sportbladet/fotboll/a/gkP4lJ/iran-vill-stoppa-pridematch-i-vm
...Washington's Dream
https://www.youtube.com/watch?v=JYqfVE-fykk
...Korossade Tomater
https://www.dropbox.com/scl/fi/u0h89wtzyereqdvnz4c2f/KOROSSADE_TOMATER.png?rlkey=bob0u0em4066wepmoo24t1bfb&dl=0
...Australia's social media ban
https://www.svt.se/nyheter/utrikes/australiens-unika-forbud-mot-sociala-medier-har-tratt-i-kraft
...summer/winter
https://www.visualcapitalist.com/wp-content/uploads/2022/01/Earths-Orbit.png
https://www.quora.com/Which-country-has-the-most-seasons
...Ovinter ("un-winter")
https://www.svt.se/vader/ovinter-den-nya-arstiden-som-ersatter-vintern
...Brittsommar/Indian summer
https://sv.wikipedia.org/wiki/Brittsommar
https://sv.wikipedia.org/wiki/Indiansommar
...Timothée Chalamet
https://www.tiktok.com/@calabasaswings/video/7582330033570614550?is_from_webapp=1&sender_device=pc
https://m.media-amazon.com/images/M/MV5BNTc0YmQxMjEtODI5MC00NjFiLTlkMWUtOGQ5NjFmYWUyZGJhXkEyXkFqcGc@._V1_.jpg
https://www.instagram.com/p/DSESywqjQLi/
The songs played were:
Monsoon - Tokio Hotel
Pay For Me - Whale
All songs are in AMK Morgon's playlist here:
https://open.spotify.com/user/amk.morgon/playlist/6V9bgWnHJMh9c4iVHncF9j?si=so0WKn7sSpyufjg3olHYmg
Every few years, the world of product management goes through a phase shift. When I started at Microsoft in the early 2000s, we shipped Office in boxes. Product cycles were long, engineering was expensive, and user research moved at the speed of snail mail. Fast forward a decade and the cloud era reset the speed at which we build, measure, and learn. Then mobile reshaped everything we thought we knew about attention, engagement, and distribution.

Now we are standing at the edge of another shift. Not a small shift, but a tectonic one. Artificial intelligence is rewriting the rules of product creation, product discovery, product expectations, and product careers.

To help make sense of this moment, I hosted a panel of world-class product leaders on the Fireside PM podcast:
• Rami Abu-Zahra, Amazon product leader across Kindle, Books, and Prime Video
• Todd Beaupre, Product Director at YouTube leading Home and Recommendations
• Joe Corkery, CEO and cofounder of Jaide Health
• Tom Leung (me), Partner at Palo Alto Foundry
• Lauren Nagel, VP Product at Mezmo
• David Nydegger, Chief Product Officer at Oviva

These are leaders running massive consumer platforms, high-stakes health tech, and fast-moving developer tools. The conversation was rich, honest, and filled with specific examples. This post summarizes the discussion, adds my own reflections, and offers a practical guide for early and mid-career PMs who want to stay relevant in a world where AI is redefining what great product management looks like.

Table of Contents
* What AI Cannot Do and Why PM Judgment Still Matters
* The New AI Literacy: What PMs Must Know by 2026
* Why Building AI Products Speeds Up Some Cycles and Slows Down Others
* Whether the PM, Eng, UX Trifecta Still Stands
* The Biggest Risks AI Introduces Into Product Development
* Actionable Advice for Early and Mid Career PMs
* My Takeaways and What Really Matters Going Forward
* Closing Thoughts and Coaching Practice

1. What AI Cannot Do and Why PM Judgment Still Matters

We opened the panel with a foundational question. As AI becomes more capable every quarter, what is left for humans to do? Where do PMs still add irreplaceable value? It is the question every PM secretly wonders about.

Todd put it simply: "At the end of the day, you have to make some judgment calls. We are not going to turn that over anytime soon."

This theme came up again and again. AI is phenomenal at synthesizing, drafting, exploring, and narrowing. But it does not have conviction. It does not have lived experience. It does not feel user pain. It does not carry responsibility.

Joe from Jaide Health captured it perfectly when he said: "AI cannot feel the pain your users have. It can help meet their goals, but it will not get you that deep understanding."

There is still no replacement for sitting with a frustrated healthcare customer who cannot get their clinical data into your system, or a creator on YouTube who feels the algorithm is punishing their art, or a DevOps engineer staring at an RCA output that feels 20 percent off.

Every PM knows this feeling: the moment when all signals point one way, but your gut tells you the data is incomplete or misleading. This is the craft that AI does not have.

Why judgment becomes even more important in an AI world

David, who runs product at a regulated health company, said something incredibly important: "Knowing what great looks like becomes more essential, not less. The PMs that thrive in AI are the ones with great product sense."

This is counterintuitive for many.
But when the operational work becomes automated, the differentiation shifts toward taste, intuition, sequencing, and prioritization.

Lauren asked the million-dollar question: "How are we going to train junior PMs if AI is doing the legwork? Who teaches them how to think?"

This is a profound point. If AI closes the gap between junior and senior PMs in execution tasks, the difference will emerge almost entirely in judgment. Knowing how to probe user problems. Knowing when a feature is good enough. Knowing which tradeoffs matter. Knowing which flaw is fatal and which is cosmetic.

AI is incredible at writing a PRD. AI is terrible at knowing whether the PRD is any good.

Which means the future PM becomes more strategic, more intuitive, more customer-obsessed, and more willing to make thoughtful bets under uncertainty.

2. The New AI Literacy: What PMs Must Know by 2026

I asked the panel what AI literacy actually means for PMs. Not the hype. Not the buzzwords. The real work.

Instead of giving gimmicky answers, the discussion converged on a clear set of skills that PMs must master.

Skill 1: Understanding context engineering

David laid this out clearly: "Knowing what LLMs are good at and what they are not good at, and knowing how to give them the right context, has become a foundational PM skill."

Most PMs think prompt engineering is about clever phrasing. In reality, the future is about context engineering. Feeding models the right data. Choosing the right constraints. Deciding what to ignore. Curating inputs that shape outputs in reliable ways.

Context engineering is to AI product development what Figma was to collaborative design. If you cannot do it, you are not going to be effective.

Skill 2: Evals, evals, evals

Rami said something that resonated with the entire panel: "Last year was all about prompts. This year is all about evals."

He is right.
• How do you build a golden dataset?
• How do you evaluate accuracy?
• How do you detect drift?
• How do you measure hallucination rates?
• How do you combine UX evals with model evals?
• How do you decide what good looks like?
• How do you define safe versus unsafe boundaries?

AI evaluation is now a core PM responsibility. Not exclusively. But PMs must understand what engineers are testing for, what failure modes exist, and how to design test sets that reflect the real world.

Lauren said her PMs write evals side by side with engineering. That is where the world is going.

Skill 3: Knowing when to trust AI output and when to override it

Todd noted: "It is one thing to get an answer that sounds good. It is another thing to know if it is actually good."

This is the heart of the role. AI can produce strategic recommendations that look polished, structured, and wise. But the real question is whether they are grounded in reality, aligned with your constraints, and consistent with your product vision.

A PM without the ability to tell real insight from confident nonsense will be replaced by someone who can.

Skill 4: Understanding the physics of model changes

This one surprised many people, but it was a recurring point.

Rami noted: "When you upgrade a model, the outputs can be totally different. The evals start failing. The experience shifts."

PMs must understand:
• Models get deprecated
• Models drift
• Model updates can break well-tuned prompts
• API pricing has real COGS implications
• Latency varies
• Context windows vary
• Some tasks need agents, some need RAG, some need a small finetuned model

This is product work now. The PM of 2026 must know these constraints as well as a PM of the cloud era understood database limits or API rate limits.
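To make Skills 2 and 4 concrete, here is a toy regression eval in Python, the kind of check a team might run before bumping a model version. The golden cases, the ask stub, and the pass threshold are all hypothetical.

```python
# Golden-set regression eval: re-run whenever the model, prompt, or context changes.

GOLDEN_CASES = [  # hypothetical examples for an imaginary support bot
    {"prompt": "What is the refund window for damaged goods?", "must_contain": "30 days"},
    {"prompt": "Do we ship to Canada?", "must_contain": "yes"},
]

def ask(prompt: str, model: str) -> str:
    """Stand-in for your model call."""
    raise NotImplementedError("wire this to your model API")

def run_golden_set(model: str, threshold: float = 0.95) -> bool:
    passed = sum(
        case["must_contain"].lower() in ask(case["prompt"], model).lower()
        for case in GOLDEN_CASES
    )
    score = passed / len(GOLDEN_CASES)
    print(f"{model}: {passed}/{len(GOLDEN_CASES)} passed ({score:.0%})")
    return score >= threshold  # gate the upgrade on the golden set

# e.g. only swap in the new version if run_golden_set("provider/model-v2") is True
```

Substring checks are the crudest possible grader; the point is the habit of a versioned, re-runnable test set, not this particular scoring.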
Skill 5: How to construct AI-powered prototypes in hours, not weeks

It now takes one afternoon to build something meaningful. Zero code required. Prompt, test, refine. Whether you use Replit, Cursor, Vercel, or sandboxed agents, the speed is shocking.

But this makes taste and problem selection even more important. The future PM must be able to quickly validate whether a concept is worth building beyond the demo stage.

3. Why Building AI Products Speeds Up Some Cycles and Slows Down Others

This part of the conversation was fascinating because people expected AI to accelerate everything. The panel had a very different view.

Fast: Prototyping and concept validation

Lauren described how her teams can build working versions of an AI-powered Root Cause Analysis feature in days, test it with customers, and get directional feedback immediately.

"You can think bigger because the cost of trying things is much lower," she said.

For founders, early PMs, and anyone validating hypotheses, this is liberating. You can test ten ideas in a week. That used to take a quarter.

Slow: Productionizing AI features

The surprising part is that shipping the V1 of an AI feature is slower than most expect.

Joe noted: "You can get prototypes instantly. But turning that into a real product that works reliably is still hard."

Why? Because:
• You need evals.
• You need monitoring.
• You need guardrails.
• You need safety reviews.
• You need deterministic parts of the workflow.
• You need to manage COGS.
• You need to design fallbacks.
• You need to handle unpredictable inputs.
• You need to think about hallucination risk.
• You need new UI surfaces for non-deterministic outputs.

Lauren said bluntly: "Vibe coding is fast. Moving that vibe code to production is still a four-month process."

This should be printed on a poster in every AI startup office.

Very Slow: Iterating on AI-powered features

Another counterintuitive point. Many teams ship a great V1 but struggle to improve it significantly afterward.

David said their nutrition AI feature launched well but: "We struggled really hard to make it better. Each iteration was easy to try but difficult to improve in a meaningful way."

Why is iteration so difficult?

Because model improvements may not translate directly into UX improvements. Users need consistency. Drift creates churn. Small changes in context or prompts can cause large changes in behavior.

Teams are learning a hard truth: AI-powered features do not behave like typical deterministic product flows. They require new iteration muscles that most orgs do not yet have.

4. The PM, Eng, UX Trifecta in the AI Era

I asked whether the classic PM, Eng, UX triad is still the right model. The audience was expecting disagreement. The panel was surprisingly aligned.

The trifecta is not going anywhere

Rami put it simply: "We still need experts in all three domains to raise the bar."

Joe added: "AI makes it possible for PMs to do more technical work. But it does not replace engineering. Same for design."

AI blurs the edges of the roles, but it does not collapse them.
In fact, each role becomes more valuable because the work becomes more abstract.
• PMs focus on judgment, sequencing, evaluation, and customer-centric problem framing
• Engineers focus on agents, systems, architecture, guardrails, latency, and reliability
• Designers focus on dynamic UX, non-deterministic UX patterns, and new affordances for AI outputs

What does change

AI makes the PM-Eng relationship more intense. The backbone of AI features is a combination of model orchestration, evaluation, prompting, and context curation. PMs must be tighter than ever with engineering to design these systems.

David noted that his teams focus more on individual talents. Some PMs are great at context engineering. Some designers excel at polishing AI-generated layouts. Some engineers are brilliant at prompt chaining. AI reveals strengths quickly.

The trifecta remains. The skill distribution within it evolves.

5. The Biggest Risks AI Introduces Into Product Development

When we asked what scares PMs most about AI, the conversation became blunt and honest.

Risk 1: Loss of user trust

Lauren warned: "If people keep shipping low-quality AI features, user trust in AI erodes. And then your good AI product suffers from the skepticism."

This is very real. Many early AI features across industries are low quality, gimmicky, or unreliable. Users quickly learn to distrust these experiences.

Which means PMs must resist the pressure to ship before the feature is ready.

Risk 2: Skill atrophy

Todd shared a story that hit home for many PMs: "Junior folks just want to plug in the prompt and take whatever the AI gives them. That is a recipe for having no job later."

PMs who outsource their thinking to AI will lose their judgment. Judgment cannot be regained easily.

This is the silent career killer.

Risk 3: Safety hazards in sensitive domains

David was direct: "If we have one unsafe output, we have to shut the feature off. We cannot afford even small mistakes."

In healthcare, finance, education, and legal industries, the tolerance for error is near zero. AI must be monitored relentlessly. Human-in-the-loop systems are mandatory. The cycles are slower but the stakes are higher.

Risk 4: The high bar for AI compared to humans

Joe said something I have thought about for years: "AI is held to a much higher standard than human decision making. Humans make mistakes constantly, but we forgive them. AI makes one mistake and it is unacceptable."

This slows adoption in certain industries and creates unrealistic expectations.

Risk 5: Model deprecation and instability

Rami described a real problem AI PMs face: "Models get deprecated faster than they get replaced. The next model is not always GA. Outputs change. Prompts break."

This creates product instability that PMs must anticipate and design around.

Risk 6: Differentiation becomes hard

I shared this perspective because I see so many early-stage startups struggle with it.

If your whole product is a wrapper around an LLM, competitors will copy you in a week. The real differentiation will not come from using AI. It will come from how deeply you understand the customer, how you integrate AI with proprietary data, and how you create durable workflows.

6. Actionable Advice for Early and Mid Career PMs

This was one of my favorite parts of the panel because the advice was humble, practical, and immediately useful.

A. Develop deep user empathy. This will become your biggest differentiator.

Lauren said it clearly: "Maintain your empathy. Understand the pain your user really has."

AI makes execution cheap. It makes insight valuable.
If you can articulate user pain precisely.
If you can differentiate surface friction from underlying need.
If you can see around corners.
If you can prototype solutions and test them in hours.
If you can connect the dots between what AI can do and what users need.
You will thrive.

Tactical steps:
• Sit in on customer support calls every week.
• Watch 10 user sessions for every feature you own.
• Talk to customers until patterns emerge.
• Ask "why" five times in every conversation.
• Maintain a user pain log and update it constantly.

B. Become great at context engineering

This will matter as much as SQL mattered ten years ago. (A small illustration follows at the end of this section.)

Action steps:
• Practice writing prompts with structured context blocks.
• Build a library of prompts that work for your product.
• Study how adding, removing, or reordering context changes output.
• Learn RAG patterns.
• Learn when structured data beats embeddings.
• Learn when smaller local models outperform big ones.

C. Learn eval frameworks

This is non-negotiable.

You need to know:
• Precision vs recall tradeoffs
• How to build golden datasets
• How to design scenario-based evals for UX
• How to test for hallucination
• How to monitor drift
• How to set quality thresholds
• How to build dashboards that reflect real-world input distributions

You do not need to write the code. You do need to define the eval strategy.

D. Strengthen your product sense

You cannot outsource product taste.

Todd said it best: "Imagine asking AI to generate 20 percent growth for you. It will not tell you what great looks like."

To strengthen your product sense:
• Review the best products weekly.
• Take screenshots of great UX patterns.
• Map user flows from apps you admire.
• Break products down into primitives.
• Ask yourself why a product decision works.
• Predict what great would look like before you design it.

The PMs who thrive will be the ones who can recognize magic when they see it.

E. Stay curious

Rami's closing advice was simple and perfect: "Stay curious. Keep learning. It never gets old."

AI changes monthly. The PM who is excited by new ideas will outperform the PM who clings to old patterns.

Practical habits:
• Read one AI research paper summary each week.
• Follow evaluation and model updates from major vendors.
• Build at least one small AI prototype a month.
• Join AI PM communities.
• Teach juniors what you learn. Nothing accelerates mastery faster.

F. Embrace velocity and side projects

Todd said that some of his biggest career breakthroughs came from solving problems on the side.

This is more true now than ever. If you have an idea, you can build an MVP over a weekend. If it solves a real problem, someone will notice.

G. Stay close to engineering

Not because you need to code, but because AI features require tighter PM-engineering collaboration.

Learn enough to be dangerous:
• How embeddings work
• How vector stores behave
• What latency tradeoffs exist
• How agents chain tasks
• How model versioning works
• How context limits shape UX
• Why some prompts blow up API costs

If you can speak this language, you will earn trust and accelerate cycles.

H. Understand the business deeply

Joe's advice was timeless: "Know who pays you and how much they pay. Solve real problems and know the business model."

PMs who understand unit economics, COGS, pricing, and funnel dynamics will stand out.
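As promised under point B, a small illustration of structured context blocks: labeled, ordered sections you can add, drop, and reorder deliberately. The field names and character budget are illustrative choices, not a standard.

```python
# Assemble labeled context sections, highest priority first, under a budget.

def build_context(task: str, sections: dict[str, str], char_budget: int = 12_000) -> str:
    blocks, used = [], 0
    for label, text in sections.items():  # dicts keep insertion order in Python 3.7+
        piece = f"<{label}>\n{text.strip()}\n</{label}>\n"
        if used + len(piece) > char_budget:
            break  # lower-priority sections are dropped first
        blocks.append(piece)
        used += len(piece)
    return "".join(blocks) + f"\nTask: {task}"

prompt = build_context(
    "Draft a reply to the customer email above.",
    {
        "style_guide": "Plain language. No jargon. Two short paragraphs.",
        "customer_email": "(retrieved per request)",
        "account_notes": "(retrieved per request)",
    },
)
```

Studying how the output changes as you add, remove, or reorder these sections is exactly the practice habit from the action steps above.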
7. Tom's Takeaways and What Really Matters Going Forward

I ended the recording by sharing what I personally believe after moderating this discussion and working closely with a variety of AI teams over the past two years.

Judgment becomes the most valuable PM skill

As AI gets better at analysis, synthesis, and execution, your value shifts to:
• Choosing the right problem
• Sequencing decisions
• Making 55-45 calls
• Understanding user pain
• Making tradeoffs
• Deciding when good is good enough
• Defining success
• Communicating vision
• Influencing the org

Agents can write specs. LLMs can produce strategies. But only humans can choose the right one and commit.

Learning speed becomes a competitive advantage

I said this on the panel and I believe it more every month. Because of AI, you now have:
• Infinite coaches
• Infinite mentors
• Infinite experts
• Infinite documentation
• Infinite learning loops

A PM who learns slowly will not survive the next decade.

Curiosity, empathy, and velocity will separate great from good

Many panelists said versions of this. The common pattern was:
• Understand users deeply
• Combine multiple tools creatively
• Move quickly
• Learn constantly

The future rewards generalists with taste, speed, and emotional intelligence.

Differentiation requires going beyond wrapper apps

This is one of my biggest concerns for early-stage founders. If your entire product is a wrapper around a model, you are vulnerable.

Durable value will come from:
• Proprietary data
• Proprietary workflows
• Deep domain insight
• Organizational trust
• Distribution advantage
• Safety and reliability
• Integration with existing systems

AI is a component, not a moat.

8. Closing Thoughts

Hosting this panel made me more optimistic about the future of product management. Not because AI will not change the job. It already has. But because the fundamental craft remains alive.

Product management has always been about understanding people, making decisions with incomplete information, telling compelling stories, guiding teams through ambiguity, and being right often.

AI accelerates the craft. It amplifies the best PMs and exposes the weak ones. It rewards curiosity, empathy, velocity, and judgment.

If you want tailored support on your PM career, leadership journey, or executive path, I offer 1-on-1 career, executive, and product coaching at tomleungcoaching.com.

OK team. Let's ship greatness.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit firesidepm.substack.com
The development of peritoneal dialysis is one of the strategic priorities of the French national health policy.1 Indeed, the Haute Autorité de Santé has shown that the most efficient care pathway requires focusing efforts on kidney transplantation and on the development of home peritoneal dialysis as a first-line treatment, whatever the age of the patient.2 Moreover, developing a PD program broadens the range of care options for stage V chronic kidney disease within the healthcare facility, provides access to all renal replacement techniques, and thus puts the patient at the center of the medical decision by respecting their choice.
However, in France, only 6% of dialysis patients are on peritoneal dialysis, and the trend has been downward for several years.3 We are therefore seeing a national decline in PD, but how can it be stopped?
Today, we have the privilege of welcoming Dr. Christian Verger, nephrologist, head of the Registre de Dialyse Péritonéale et Hémodialyse de Langue Française and of the Bulletin de la Dialyse à Domicile, who shares his expertise on the subject.
Guest: Dr. Christian Verger, nephrologist, head of the Registre de Dialyse Péritonéale et Hémodialyse de Langue Française and of the Bulletin de la Dialyse à Domicile. Dr. Verger declares no conflict of interest related to the topic discussed.
The team: Hosting: Pyramidale Communication. Production: Pyramidale Communication. Credits: Pyramidale Communication, Sonacom.
This podcast is intended for information purposes only. If you wish to contact Baxter for further information or to report an adverse event, please visit our website at: https://www.baxter.fr/fr/contact-us
References:
1. Dr B. Gondoin. Enjeux médico-économiques de la dialyse en France. Néphrologie et Thérapeutique. 2017
2. HAS, Évaluation médico-économique des stratégies de prise en charge de l'insuffisance rénale chronique terminale en France. 2014
3. Données REIN 2021
FR-RC00-240034 – V1.0
Photomission Photography Podcast: Road Testing the Canon PowerShot V1 in Dubai, EP 353
https://www.canon.com.au/cameras/powershot-v1
In this episode, we road test the Canon PowerShot V1 in Dubai. Join the conversation!
About the Host: Stephen Finkel discovered his passion for photography at the age of seven and has never looked back. He manages several photography-related businesses, including Photomission, and is currently a Canon Collective Community Manager. Check out Stephen's work on Instagram:
So There I Was dives into the UPS MD-11 crash, compressor stalls, and why some jets are "varsity airplanes." Fig kicks things off with a flaming T-45 compressor stall story, then we walk through what we know so far about the UPS MD-11 crash, V1 decision speed, startle factor, and why "no fast hands" can literally save your life. From tail tanks and induced drag to cargo-pilot zombie sleep schedules, you'll hear how big jets, night freight, and human factors all collide at high speed. Along the way we roast armchair investigators, explain jet engines and compressor stalls with a clever Taco Bell analogy from BadAss! We share some stories that will make every pilot nod and every non-aviator gasp. If you've ever wondered what really happens on the flight deck when everything goes sideways at rotation, this episode is your front-row seat.
The Second World War saw the development of many new weapons. Perhaps none was more terrifying than the development of long-range strategic rockets. Rockets had been used in combat for centuries, dating back to their development in ancient China; however, the rockets developed by Germany were a different matter altogether. They terrorized civilians in England and actually served as the starting point of the space race. Learn more about the V1 and V2 rockets and the Nazi rocket program on this episode of Everything Everywhere Daily.
Sponsors
Quince: Go to quince.com/daily for 365-day returns, plus free shipping on your order!
Mint Mobile: Get your 3-month Unlimited wireless plan for just 15 bucks a month at mintmobile.com/eed
Stash: Go to get.stash.com/EVERYTHING to see how you can receive $25 towards your first stock purchase.
Newspapers.com: Go to Newspapers.com to get a gift subscription for the family historian in your life!
Subscribe to the podcast! https://everything-everywhere.com/everything-everywhere-daily-podcast/
--------------------------------
Executive Producer: Charles Daniel
Associate Producers: Austin Oetken & Cameron Kieffer
Become a supporter on Patreon: https://www.patreon.com/everythingeverywhere
Discord Server: https://discord.gg/UkRUJFh
Instagram: https://www.instagram.com/everythingeverywhere/
Facebook Group: https://www.facebook.com/groups/everythingeverywheredaily
Twitter: https://twitter.com/everywheretrip
Website: https://everything-everywhere.com/
Disce aliquid novi cotidie
Learn more about your ad choices. Visit megaphone.fm/adchoices
This episode dives deep into excitatory neurons—the brain's primary "go" signal—and their outsized role in the autistic phenotype. We explore how pyramidal neurons, powered by glutamate through AMPA and NMDA receptors, drive lightning-fast information transmission, synaptic hyperplasticity via BDNF, and elevated gamma oscillations (30–80 Hz) in V1, S1, and A1. This overactive excitatory push, paired with reduced parvalbumin and somatostatin inhibition, creates the well-documented E:I imbalance that fuels sensory hypersensitivity, one-trial learning, rigid memory encoding, repetitive behaviors, and the classic distal-connection timing mismatch from early sensory cortices to prefrontal regions. The autistic brain gets to the first two stops blazingly fast yet struggles to reach the final destination typical brains arrive at effortlessly.
Inhibition episodes:
https://youtu.be/cjwbog7Rk4c?si=uSaLLNmS5EJLa_iH
https://youtu.be/Oee4L7Vsj4E?si=Y5F2eVudCLhkxNw1
https://youtu.be/PBHVssvoQkM?si=A6SPedQi-Dt-DVO_
E/I episodes:
https://youtu.be/ETChjRQ0SzQ?si=yIFNovzldwSZRMeT
https://youtu.be/jl0xwjnyXII?si=dmk49CMQo3Uf17ax
Daylight Computer Company, use "autism" for $50 off at https://buy.daylightcomputer.com/autism
Chroma Light Devices, use "autism" for 10% discount at https://getchroma.co/?ref=autism
Fig Tree Christian Golf Apparel & Accessories, use "autism" for 10% discount at https://figtreegolf.com/?ref=autism
Cognity AI for Autistic Social Skills, use "autism" for 10% discount at https://thecognity.com
00:00 Excitatory Neurons, Push-Pull System, Parvalbumin Deficiency
03:30 E:I Imbalance, Sensory Hypersensitivity, Repetitive Behaviors
07:00 Pyramidal Neurons, Glutamate, AMPA/NMDA Receptors
10:30 Brain Regions: DLPFC, Anterior Insula, V1 S1 A1
14:00 Amygdala Misnomer, Low Road vs High Road, Emotional Hub
18:30 Receptors: AMPA 1-5ms, NMDA 10-200ms, mGluR Modulatory
22:00 Gamma Oscillations, BDNF Hyperplasticity, Sensory Overload
25:30 Distal Connections, Point-A-to-Point-B Timing Mismatch
29:00 BDNF Critical Period, One-Trial Learning, Rigid Memory
32:30 TRN Dysfunction, Repetitive Behaviors, Corticostriatal Circuit
34:30 Go-Signal vs Stop-Signal, Push-Pull Bowling Bumpers
36:00 Rubenstein & Merzenich 2003, E:I Imbalance Foundation
37:08 Daylight Computer Company, use "autism" for $50 discount
39:32 Chroma Light Devices, use "autism" for 10% discount
42:35 Reviews/Ratings & Contact Info
X: https://x.com/rps47586
YT: https://www.youtube.com/channel/UCGxEzLKXkjppo3nqmpXpzuA
email: info.fromthespectrum@gmail.com
In this episode of Aviation News Talk, we begin with the developing details surrounding the crash of UPS Airlines Flight 2976, a McDonnell Douglas MD-11F cargo aircraft that crashed shortly after takeoff from Louisville, Kentucky. The aircraft, tail number N259UP, was a 34-year-old MD-11F powered by three General Electric CF6-80 engines. Bystander video shows the left engine separated from the wing, with the wing engulfed in flames as the aircraft lifted off. ADS-B data indicates the aircraft climbed less than 100 feet before beginning a descending, left-turning roll from which it did not recover. The crew had already passed V1, meaning they were committed to takeoff and did not have adequate runway remaining to stop. In situations like this, flight crews may have no survivable option, and this accident may represent one of those rare but tragic scenarios. We also compare aspects of this event to American Airlines Flight 191, the 1979 DC-10 crash at Chicago O'Hare. While both accidents involved the loss of the left engine on takeoff, the failure chain in AA191 involved slat retraction due to damaged hydraulic and control lines—failure modes later addressed in the MD-11 design. The MD-11's slats are hydraulically locked to prevent unintended retraction, meaning the probable cause of this accident must differ in critical ways. After the accident analysis, we shift to a practical, pilot-focused conversation about landings with returning guest Dr. Catherine Cavagnaro, columnist for AOPA and highly respected flight instructor and DPE. Drawing on more than a thousand check rides, Catherine explains that the most consistent problem she sees is pilots flying final approach too fast. While pilots often worry about being too slow, the data shows that excessive approach speed is far more common and contributes to long landing rolls, excessive float, bounced landings, and pilot-induced oscillations. Catherine and Max discuss how a correct approach speed provides the right amount of energy to land smoothly and in control. More power and speed make it harder to manage the flare and to touch down where intended. Pilots also frequently fail to align the aircraft longitudinal axis with the runway before touchdown, particularly in crosswinds, due to hesitation in applying sufficient rudder and aileron. Catherine explains that as the aircraft slows, flight controls become less effective, so pilots should expect to use more control input in the final seconds before touchdown—not less. The conversation also explores landing accuracy, noting that pilots should strive to touch down within 200–400 feet of a target point—not "somewhere down the runway." Even on long runways, building accuracy pays dividends when landing at shorter fields or during check rides. A useful data tool Catherine recommends is FlySto (flysto.net), which allows pilots with modern avionics to upload flight data and analyze approach speed, pitch attitude, touchdown point, crab angle, rollout direction, and braking forces. By reviewing objective data, pilots can identify habits and improve their consistency over time. Whether you're teaching new pilots, returning to flying after a break, or simply want your landings to be more stable and predictable, Catherine's techniques offer actionable steps: choose the correct approach speed, use proper crosswind controls, flare to a nose-high attitude, and maintain precision with touchdown point selection. 
Together, the accident analysis and the landing discussion reinforce a core theme of this show: aviation skills improve with deliberate practice, continuous learning, and a deep respect for the realities of risk, energy management, and aircraft control. If you're getting value from this show, please support the show via PayPal, Venmo, Zelle or Patreon. Support the show by buying a Lightspeed ANR headset. Max has been using only Lightspeed headsets for nearly 25 years! I love their trade-up program that lets you trade in an older Lightspeed headset for a newer model. Start with one of the links below, and Lightspeed will pay a referral fee to support Aviation News Talk. Lightspeed Delta Zulu Headset $1199 HOLIDAY SPECIAL NEW – Lightspeed Zulu 4 Headset $1099 Lightspeed Zulu 3 Headset $949 Lightspeed Sierra Headset $749 My Review on the Lightspeed Delta Zulu Send us your feedback or comments via email If you have a question you'd like answered on the show, let listeners hear you ask the question by recording your listener question using your phone.
News Stories UPS MD-11 crashed shortly after takeoff from Louisville airport FAA is set to start cutting flights to contend with delays and staffing shortages Archer Buys LA-Area Airport Jeppesen ForeFlight Unified Under Private Equity Ownership FAA acknowledges BasicMed form error Pilot injured when Piper hits fence Extreme turbulence bends Cessna 152 Blade to Launch Weekday Commuter Flights Between Manhattan and Westchester
Mentioned on the Show American Airlines Flight 191 Analysis by Jeff Guzzetti Fly California Passport Program Catherine Cavagnaro YouTube Channel Ace Aerobatic School Buy Max Trescott's G3000 Book Call 800-247-6553 Free Index to the first 282 episodes of Aviation News Talk So You Want To Learn to Fly or Buy a Cirrus seminars Online Version of the Seminar Coming Soon – Register for Notification Check out our recommended ADS-B receivers, and order one for yourself. Yes, we'll make a couple of dollars if you do. Get the Free Aviation News Talk app for iOS or Android. Check out Max's Online Courses: G1000 VFR, G1000 IFR, and Flying WAAS & GPS Approaches. Find them all at: https://www.pilotlearning.com/
Social Media Like Aviation News Talk podcast on Facebook Follow Max on Instagram Follow Max on Twitter Listen to all Aviation News Talk podcasts on YouTube or YouTube Premium "Go Around" song used by permission of Ken Dravis; you can buy his music at kendravis.com If you purchase a product through a link on our site, we may receive compensation.
In this episode of The ModGolf Podcast, host Colin Weston sits down with Alex Prasad, the CEO of V1 Sports, a company that has been a cornerstone of golf technology for over three decades. Alex shares the journey of V1 Sports from its origins in pioneering video analysis to its current position as an innovator leveraging AI and ground force sensors. We dive deep into the pivotal "come-to-Jesus" moment that forced the company to re-evaluate its path, the cultural shift to a data-driven mindset, and how they are now using their new AI tool, "V1CTOR," to solve the biggest pain points for golf instructors - time and student engagement. This is a masterclass in business transformation, product-market fit, and building a platform that creates a true win-win-win for the company, the coach, and the golfer.

During this episode you will discover these inspiring takeaways:

Embrace the "Why" Behind the Data - Learn how V1 Sports moved beyond assumptions by relentlessly asking "how do we know?" This shift to a data-validated culture gave them the courage to innovate confidently and build products based on proven user needs, not just gut feelings.

Solve the Real Problem, Not the Stated One - Discover how Alex and his team decoded the common complaint from pros that they "don't have enough time." They uncovered the deeper issue of seasonality and student re-engagement, leading to AI-driven solutions that automate relationship-building.

Trust is Built by Providing Value First - Hear how V1 Sports flipped the script on traditional coaching models. By using AI to facilitate a "Quick Fix"—a free, personalized piece of feedback—they create a gateway to trust between a golfer and a pro, proving value before a transaction ever occurs.

https://media24.fireside.fm/file/fireside-uploads-2024/images/1/1ea879c1-a4a2-4e10-bea4-e5d8368a3c7a/2parL16M.jpg

Episode Chapters:
(00:00) Introduction
(01:29) The Power of Invitation: Alex's First Golf Experience
(04:02) The Origin Story of V1 Sports
(06:55) The Innovator's Dilemma: The Inflection Point for V1
(12:18) The Hard Work of Product Validation: Talking to Customers
(16:11) The V1 Product Suite and the Introduction of "V1CTOR" AI
(22:14) Building Trust in a Digital World
(27:40) The V1 Business Model: A Two-Sided, Curated Marketplace
(31:51) The Hardware Game: Ground Pressure Sensors and the Unseeable Data
(34:33) Expanding to Other Sports? The Power of Focus
(38:44) The Entrepreneurial Marathon: Lessons from Endurance Sports
(43:30) How to Connect with Alex Prasad and V1 Sports

Book referenced during our conversation: The Innovator's Dilemma (https://www.amazon.ca/Innovators-Dilemma-New-Foreword-Technologies/dp/1647826764/ref=asc_df_1647826764) by Clayton M. Christensen

Quotable Moments from Alex:
On Challenging Assumptions: "Intuition can tell you where to look, but the data is going to validate or invalidate that's the right place to press."
On Understanding Customer Needs: "What you learn is, customers can tell you their pain points, they can tell you their aspirations, but what they can't tell you is the solution... It's not their job. That's ours."
On the Core Philosophy of V1's AI companion, V1CTOR: "The best way to build trust between a random person on the internet and a golf instructor is to show some value first."

Are you more of a watcher than a listener? Then enjoy our video with Alex on The ModGolf YouTube channel (https://youtu.be/YBxBsVuRti8). Click on this link (https://youtu.be/TDyhOE2DEuo) or the image below.
https://media24.fireside.fm/file/fireside-uploads-2024/images/1/1ea879c1-a4a2-4e10-bea4-e5d8368a3c7a/2DQnq5hM.jpg (https://youtu.be/TDyhOE2DEuo) Want to connect with Alex? Check out his bio page to make that happen! Alex Prasad's bio page >> https://modgolf.fireside.fm/guests/alex-prasad Visit the V1 Sports website (https://v1sports.com/) to learn more and to download the V1 Golf App in the Apple App Store or Google Play Store. https://media24.fireside.fm/file/fireside-uploads-2024/images/1/1ea879c1-a4a2-4e10-bea4-e5d8368a3c7a/fC_NoWtr.png (https://v1sports.com/) Join our mission to make golf more innovative, inclusive and fun... and WIN some awesome golf gear! As the creator and host of The ModGolf Podcast and YouTube channel I've been telling golf entrepreneurship and innovation stories since May 2017 and I love the community of ModGolfers that we are building. I'm excited to announce that I just launched our ModGolf Patreon page to bring together our close-knit community of golf-loving people! As my Patron you will get access to exclusive live monthly interactive shows where you can participate, ask-me-anything video events, bonus content, golf product discounts and entry in members-only ModGolf Giveaway contests. I'm offering two monthly membership tiers at $5 and $15 USD, but you can also join for free. Your subscription will ensure that The ModGolf Podcast continues to grow so that I can focus on creating unique and impactful stories that support and celebrate the future of golf. Click to join >> https://patreon.com/Modgolf I look forward to seeing you during an upcoming live show!... Colin https://files.fireside.fm/file/fireside-uploads/images/1/1ea879c1-a4a2-4e10-bea4-e5d8368a3c7a/q_IZwlpO.jpg (https://patreon.com/Modgolf) Special Guest: Alex Prasad - CEO at V1 Sports.
Today's episode explores the role of inhibitory neurons & the Sonic Hedgehog (SHH) gene in shaping the Autistic phenotype, focusing on the excitation-inhibition imbalance that drives sensory hypersensitivity and cognitive challenges. Through a neuroscience lens, the episode connects these mechanisms to heightened gamma activity.

Ben Ari Episode: https://youtu.be/jo-ffwF9u0Y
Parvalbumin Interneurons episode: https://youtu.be/PBHVssvoQkM?si=t8WYGlcHcv7WiE-T
Visual Thinking Part 1: https://youtu.be/XqQ8jCvWzYc?si=lffUEjGHjWj4mGOM
Neurulation Part 1: https://youtu.be/gZdg9bX3Nuw?si=xvwtlz-p1hPHI8FA

Daylight Computer Company, use "autism" for $50 off at https://buy.daylightcomputer.com/autism
Chroma Light Devices, use "autism" for 10% discount at https://getchroma.co/?ref=autism
Fig Tree Christian Golf Apparel & Accessories, use "autism" for 10% discount at https://figtreegolf.com/?ref=autism
Cognity AI for Autistic Social Skills, use "autism" for 10% discount at https://thecognity.com

00:00 - Autistic phenotype, excitation-inhibition imbalance, sensory hypersensitivity, cognitive deficits
03:40 - Inhibitory neurons, GABA receptors, GABA-A, GABA-B, GABA-C, tonic inhibition
07:24 - Tonic firing, burst firing, phasic firing, neural oscillations, sensory processing
08:31 - Sonic Hedgehog gene, neural development, GABAergic identity, thalamic reticular nucleus (TRN)
14:48 - Parvalbumin interneurons, fast-spiking, gamma oscillations, sensory gating, dorsolateral prefrontal cortex
18:22 - Parvalbumin dysfunction, sensory hypersensitivity, visual cortex (V1), E:I imbalance, brain-derived neurotrophic factor (BDNF)
22:02 - Somatostatin interneurons, feedback inhibition, dendritic modulation, sensory adaptation, hippocampus
25:43 - Vasoactive intestinal peptide (VIP) interneurons, disinhibition, pyramidal activity, attention, social processing
29:30 - Calbindin interneurons, calretinin interneurons, dendritic inhibition, sensory processing, anterior insula
33:15 - Purkinje cells, cerebellum, motor control, cognitive timing, cell loss in autism
36:00 - Evolutionary perspective, parvalbumin density, neural circuit stabilization, sensory-cognitive processing
39:25 - Gamma activity, visual processing, retina, lateral geniculate nucleus, attention to detail, autistic self
39:52 Daylight Computer Company, use "autism" for $50 discount
42:13 Chroma Light Devices, use "autism" for 10% discount
45:17 Reviews & Contact Info

X: https://x.com/rps47586
YT: https://www.youtube.com/channel/UCGxEzLKXkjppo3nqmpXpzuA
email: info.fromthespectrum@gmail.com
Expository preaching by Pastor Jonathan Whitman on Matthew chapter 9, verses 1 to 8. Recorded at the Centro Evangelico Battista di Perugia on October 19, 2025. Message title: "What kind of man is this? Five aspects of the forgiveness of sins"

MATTHEW 9 V1-8
1 Jesus got into a boat, crossed to the other side, and came to his own city. 2 And behold, they brought him a paralytic lying on a bed. Jesus, seeing their faith, said to the paralytic: «Take courage, son; your sins are forgiven». 3 And behold, some of the scribes thought within themselves: «This man blasphemes». 4 But Jesus, knowing their thoughts, said: «Why do you think evil in your hearts? 5 For which is easier, to say, "Your sins are forgiven," or to say, "Get up and walk"? 6 But, so that you may know that the Son of Man has authority on earth to forgive sins, get up», he then said to the paralytic, «take up your bed and go to your house». 7 And he got up and went to his house. 8 When the crowd saw it, they were filled with awe and glorified God, who had given such authority to men.
Paul Kromidas didn't just pivot a startup—he changed the vehicle mid-race and still pulled ahead. Summer began as an asset-heavy "own an STR without the risk" model: Summer found the house, bought it with their capital, operated it for two years, and sold it back with a book of business. It worked—until the capital stack and rate environment made venture-scale returns incompatible with real estate velocity. So Paul did the brave thing founders talk about but rarely do: he sold the homes, kept the brains, and rebuilt Summer around the software that had quietly powered V1. That software—Summer OS, now supercharged by Sunny AI—acts like a true asset-management layer for short-term rentals. It stitches market underwriting to unit-level P&L, pipes into your PMS, flags issues before they become reviews, and guides both pros and serious first-timers from "where should I buy?" to "how do I out-operate the comp set?" It's not a wrapper around generic answers; it's a working analyst that shows its work. Today on the show, I'm joined by Paul Kromidas—founder of Summer—on building tools that help operators decide, buy, and perform.

In this episode, we:
- Explore why venture returns and deed-on-title don't rhyme—and how an honest boardroom conversation led to selling the portfolio and doubling down on software.
- Discuss what an STR "asset management system" really is—linking market selection, underwriting, expense modeling, and live ops into one pane of glass.
- Explore how Sunny AI turns fuzzy intent into investable action—guiding you through clarifying questions, surfacing the right comps, and recommending markets you didn't have on your radar.
- Discuss the difference between high-level market data and operator-grade decisions—and why posting performance back to the model is where comp-set truth lives.
- Explore who it's for today (multi-market PMs and serious operators) and how the roadmap invites the rising class of under-20-door owners without dumbing anything down.
- Discuss the next frontier: using predictions to fix tomorrow's dip today—so hosting feels less like firefighting and more like running a dialed business.

If you're building a portfolio—or rebuilding your ops stack—this one will sharpen how you underwrite, staff, and scale.
Expository preaching by Pastor Daniel Ransom on Ephesians chapter 4, verses 1 to 3. Recorded at the Centro Evangelico Battista di Perugia on October 5, 2025. Message title: "4 elements of a life worthy of our calling"

EPHESIANS 4 V1-3
1 I therefore, the prisoner of the Lord, urge you to walk in a manner worthy of the calling with which you have been called, 2 with all humility and gentleness, with patience, bearing with one another in love, 3 striving to preserve the unity of the Spirit in the bond of peace.
In this episode, the American Timelines gang explores historical and artistic topics, including baseball history, World War II recruitment posters, and the development of the V1 flying bomb. Specifically:

Joe Nuxhall's Record-Breaking MLB Debut - Joe shares the story of Joe Nuxhall, who at 15 years and 10 months became the youngest person to pitch in a Major League Baseball game, for the Cincinnati Reds in 1944.

Vojtěch Preissig's Life and Art - Artstar shares the life story of Czech artist Vojtěch Preissig, who was born in 1872 and worked as a topographer, illustrator, and designer. Artstar also discusses how the Nazis categorized artists into "degenerate" and "Nazi-approved" categories, with the former including modern artists like Otto Dix, Paul Klee, and Picasso, while the latter favored realistic or neoclassical art.

Nazi V1 Flying Bomb Overview - Hunter presents an in-depth overview of the V1 flying bomb, also known as the Buzz Bomb or Doodlebug, developed by Nazi Germany during World War II.

The Ern Malley Hoax - Steve shares the story of the Ern Malley poetry hoax, which was created to mock modernist poetry but ended up being embraced by some as legitimate art.
We put so much pressure on ourselves to make every new service launch flawless: perfect pricing, perfect workflows, perfect offer. But the reality is, you have to let yourself launch 'messily' if you want to be able to refine it and turn it into an offer that brings in substantially more money and clients. In today's episode, I'm walking you through how to create offers that you and your clients LOVE, and that transform the financial future of your business. Because having a business that brings in consistent high income isn't about having perfectly successful launches — it's about having the courage to launch something new and messy and feel confident WHILE doing it.

Inside this episode, we're covering:
- Why your V1 doesn't need to be perfect (it just needs to exist)
- The mindset shifts that make launching feel less daunting
- How to navigate through a messy launch when it starts to feel overwhelming
- The permission you need to take messy action and start launching the offers that have been on your heart

I can't wait for you to tune in, and I'd love to hear your takeaways! Send me a DM @heykristamarie and let's connect. Loved the episode? Have a topic or guest host request? Send me a text message!

Ways we can work together: Create a brand so strong that clients are sold on working with you before they even reach out! Is it time to elevate your business with new brand photos? I'D LOVE TO CONNECT WITH YOU! Say hello on Instagram
Expository preaching by Pastor Daniel Ransom on Ephesians 4-6, tracing the ways in which the believer should walk. Recorded at the Centro Evangelico Battista di Perugia on September 14, 2025. Message title: "Spiritual check-up: 5 important checks to run on your life"

EPHESIANS 4 V1-3
1 I therefore, the prisoner of the Lord, urge you to walk in a manner worthy of the calling with which you have been called, 2 with all humility and gentleness, with patience, bearing with one another in love, 3 striving to preserve the unity of the Spirit in the bond of peace.
Expository preaching by Pastor Emeritus Fred Whitman on Acts chapter 26, verses 1 to 29. Recorded at the Centro Evangelico Battista di Perugia on September 7, 2025. Message title: "The testimony of faith of Paul the Apostle before King Agrippa and Festus as an example for us today"

ACTS 26 V1-29
1 Agrippa said to Paul: «You are permitted to speak for yourself». Then Paul stretched out his hand and made his defense: 2 «King Agrippa, I consider myself fortunate that today I may defend myself before you against all the things of which I am accused by the Jews, 3 above all because you are acquainted with all the customs and questions that exist among the Jews; therefore I beg you to listen to me patiently. 4 What my life has been from my youth, which I spent in Jerusalem among my own people, is known to all the Jews, 5 for they have known me from the beginning and can testify, if they are willing, that according to the strictest sect of our religion I lived as a Pharisee. 6 And now I stand on trial for the hope in the promise made by God to our fathers; 7 the promise whose fulfillment our twelve tribes, serving God fervently night and day, hope to see. For this hope, O king, I am accused by the Jews! 8 Why is it judged incredible among you that God raises the dead? 9 As for me, indeed, I thought I ought to work actively against the name of Jesus the Nazarene. 10 And this I did in Jerusalem; having received authority from the chief priests, I shut up many of the saints in prison, and when they were put to death I cast my vote against them. 11 And often, in all the synagogues, I punished them and tried to force them to blaspheme; and, furious beyond measure against them, I persecuted them even to foreign cities. 12 While I was engaged in these things and traveling to Damascus with the authority and commission of the chief priests, 13 at midday, O king, I saw on the way a light from heaven, brighter than the sun, blazing around me and my traveling companions. 14 We all fell to the ground, and I heard a voice saying to me in the Hebrew language: "Saul, Saul, why do you persecute me? It is hard for you to kick against the goads." 15 I said: "Who are you, Lord?" And the Lord replied: "I am Jesus, whom you persecute. 16 But rise and stand on your feet, for I have appeared to you for this purpose: to appoint you minister and witness of the things you have seen, and of those for which I will yet appear to you, 17 delivering you from this people and from the nations, to whom I send you 18 to open their eyes, so that they may turn from darkness to light and from the power of Satan to God, and receive, through faith in me, the forgiveness of sins and their portion of inheritance among those who are sanctified." 19 Therefore, King Agrippa, I was not disobedient to the heavenly vision; 20 but first to those in Damascus, then in Jerusalem and throughout the land of Judea and among the nations, I preached that they should repent and turn to God, doing works worthy of repentance. 21 For this reason the Jews, having seized me in the temple, tried to kill me. 22 But with the help that comes from God I have endured to this day, bearing witness to small and great, saying nothing beyond what the prophets and Moses said would come to pass: 23 that the Christ would suffer and that he, the first to rise from the dead, would proclaim light to the people and to the nations».
Welcome to the Productividad Máxima podcast. Today I bring you a productivity strategy: the High-Impact Pomodoro, which turns hours into deliverables.

Key idea: you don't need more time, you need cycles that end in visible results. The Pomodoro, used well, is not a clock; it is a machine for converting intention into delivery.

Let me tell you a story. Sergio is a marketing consultant. He lived putting out fires: emails, WhatsApp, meetings that run long. Lots of motion, little progress. I proposed a 3-day experiment: four pomodoros a day, each with an outcome goal, not an activity goal. For example: "send 3 signable proposals," not "work on proposals"; "publish version 1 of the landing page," not "improve the landing page."

Day one, he prepared his "mise en place" as in a kitchen: everything he needed on the table before lighting the stove. Timer at 25 minutes, phone in another room, minimal tabs, and a parking list for any idea that tried to distract him. He finished the first sprint with two proposals sent and a third nearly done. In the second, he published the landing page V1. In the afternoon he followed up on leads. At the end of the day: less fatigue and two replies in the inbox. Day three, a proposal accepted. Same Sergio, same workday, but with sprints that end in delivery.

How do you apply the High-Impact Pomodoro?

1) Define the outcome, not the task
- Bad: "work on the blog".
- Good: "publish 1 short post with a CTA".
Write the sentence: "When these 25 minutes are up I will have..." and complete it with a delivery verb: send, publish, close, decide, schedule.

2) Prep like a chef (2 to 3 minutes)
- Open only the tool you need.
- Have data, copy, and templates at hand.
- Close everything you won't use. Less friction, more progress.

3) Protect the sprint
- Do-not-disturb mode on phone and computer.
- Minimal tabs.
- One sheet to park ideas: if something comes up, write it down and keep going. Your brain calms down because nothing will be lost.

4) Run the Pomodoro: 25 minutes of focus
- Don't polish. Ship the version that fulfills the purpose.
- If you get stuck, shrink the micro-objective: "write only the headline and the CTA".

5) Short break: 5 minutes of real recovery
- Stand up, water, stretches, breathing.
- No opening social networks or metrics. Your attention is like a candle: if you expose it to the wind of notifications, it goes out.

6) Complete a cycle: 4 pomodoros and a long break
- After four sprints, take 15 to 20 minutes to review, decide the next step, and capture pending items.
- That is your "cool down": consolidating what you achieved so it doesn't get lost.

7) Mix in Pareto
- Before each sprint, ask yourself: which 20% of actions will give me 80% of the result?
- Start there. If time runs out, at least you already did what matters most.

8) Closing ritual
- Deliver, publish, or send what you made.
- Write one "status" line: what got done and what the next step is. Tomorrow you will start without friction.

Think of this as interval training. Athletes don't run flat out for two hours straight. They alternate high intensity with recovery to perform more and get injured less. Your brain works the same way. And as in cooking: first you prepare the ingredients (mise en place), then you cook on high heat, and at the end you plate and clean. If you mix everything at once, it burns or doesn't come out.

60-second start script (a minimal timer sketch in code follows at the end of this entry):
- Write: "Outcome goal: send 2 signable proposals".
- Open only the CRM and the template.
- Phone out of the room.
- Timer set to 25 minutes.
- Start with what moves the needle most.
- When it rings: 5-minute break, send, and note the next step.

Ideas for high-impact pomodoros for entrepreneurs:
- Sales: send 3 proposals or do 5 follow-ups with a personalized message.
- Product: publish V1 of a landing page, or record a 60-second onboarding video.
- Content: write and schedule 1 post with a CTA.
- Finance: review 1 key metric and make 1 concrete decision.
- Operations: document, in 10 lines, a process you repeat.

Today's challenge: choose one single big rock and complete it with two 25-minute pomodoros plus one more 25 to finish and publish. If it doesn't end in a deliverable, it doesn't count.

And now, if you want to make better decisions while you execute with focus, I recommend the Club de Emprendedores Triunfers. Stop making bad decisions in your business. It is a private club where entrepreneurs help each other resolve doubts and problems so as to decide better.

Because one bad decision can sink your business. It also makes you lose time and money. And it brings frustration, anxiety, and even the risk of closing down and abandoning your dream of entrepreneurship with freedom.

Sound familiar? Choosing a bad business idea and discovering that nobody values it, while others sell something worse and you don't understand why. Choosing a bad platform for your website, paying five times more, getting poor support, and ending up with an unprofessional site. Hiring the wrong freelancer, who knows less than you thought and delivers badly. Partnering with the wrong person, who doesn't do what they should and doesn't prioritize the project because it isn't theirs. Investing badly in advertising, losing all the money, and concluding that online advertising doesn't work after trying everything with no results. Making a mistake you aren't even aware of and, however hard you work, not seeing the results you deserve. Choosing the wrong venue or the wrong client, spending hours trying to convince them... only for them to end up buying from the competition.

Stop making bad decisions. Before doing anything important, ask the experts in the club. In Triunfers you have judgment, support, and people who have already been there. You will save yourself time, money, and many stumbles. Join at Triunfers.com

That's it for today's episode. Thank you for sharing it with the person who might need it. I'll see you tomorrow in the next episode. A big hug.

Become a follower of this podcast: https://www.spreaker.com/podcast/productividad-maxima--5279700/support.
Newsletter Marketing Radical: https://marketingradical.substack.com/welcome
Newsletter Negocios con IA: https://negociosconia.substack.com/welcome
My Books: https://borjagiron.com/libros
Systeme Free: https://borjagiron.com/systeme
Systeme 30% off: https://borjagiron.com/systeme30
Manychat Free: https://borjagiron.com/manychat
Metricool 30 days Free Premium Plan (use coupon BORJA30): https://borjagiron.com/metricool
Social Media News: https://redessocialeshoy.com
AI News: https://inteligenciaartificialhoy.com
Club: https://triunfers.com
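Here is the timer sketch promised above: a minimal command-line High-Impact Pomodoro (a sketch, not something from the episode) that forces an outcome goal per sprint and a one-line status at close, mirroring steps 1, 4, 5, and 8.

import time

WORK_MIN = 25
BREAK_MIN = 5

def countdown(minutes: int, label: str) -> None:
    """Block for the given number of minutes, printing a minute-by-minute tick."""
    for remaining in range(minutes, 0, -1):
        print(f"{label}: {remaining} min remaining")
        time.sleep(60)

def sprint() -> None:
    goal = input('Outcome goal ("When these 25 minutes are up I will have..."): ')
    countdown(WORK_MIN, f"FOCUS: {goal}")
    print("Sprint over. Deliver, publish, or send what you made.")
    status = input("Status line (what got done / next step): ")
    print(f"Logged: {status}")
    countdown(BREAK_MIN, "BREAK (stand up, water, breathe)")

if __name__ == "__main__":
    for _ in range(4):  # one full cycle: 4 sprints, then take the 15-20 minute long break
        sprint()

Swap the sleep for a notification hook if you prefer it in the background; the point is the forced outcome goal, not the clock.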
Expository preaching by Pastor Emeritus Fred Whitman on John chapter 4, verses 1 to 42. Recorded at the Centro Evangelico Battista di Perugia on August 31, 2025. Message title: "Jesus and the Samaritan woman"

JOHN 4 V1-42
1 When therefore Jesus learned that the Pharisees had heard that he was making and baptizing more disciples than John 2 (although it was not Jesus himself who baptized, but his disciples), 3 he left Judea and went back to Galilee. 4 Now he had to pass through Samaria. 5 So he came to a city of Samaria called Sychar, near the plot of land that Jacob had given to his son Joseph; 6 and Jacob's well was there. Jesus, tired from the journey, was sitting by the well. It was about the sixth hour. 7 A woman of Samaria came to draw water. Jesus said to her: «Give me a drink». 8 (For his disciples had gone into the city to buy food.) 9 The Samaritan woman then said to him: «How is it that you, a Jew, ask a drink of me, a Samaritan woman?» For Jews have no dealings with Samaritans. 10 Jesus answered her: «If you knew the gift of God and who it is that says to you, "Give me a drink," you yourself would have asked him, and he would have given you living water». 11 The woman said to him: «Sir, you have nothing to draw with, and the well is deep; where then would you get this living water? 12 Are you greater than Jacob, our father, who gave us this well and drank from it himself, with his sons and his livestock?» 13 Jesus answered her: «Whoever drinks of this water will thirst again; 14 but whoever drinks of the water that I will give him will never thirst; indeed, the water that I will give him will become in him a spring of water welling up into eternal life». 15 The woman said to him: «Sir, give me this water, so that I may never thirst and no longer come all the way here to draw». 16 He said to her: «Go call your husband and come here». 17 The woman answered him: «I have no husband». And Jesus: «You have said well, "I have no husband," 18 for you have had five husbands, and the one you have now is not your husband; what you have said is true». 19 The woman said to him: «Sir, I see that you are a prophet. 20 Our fathers worshiped on this mountain, but you say that Jerusalem is the place where one must worship». 21 Jesus said to her: «Woman, believe me; the hour is coming when neither on this mountain nor in Jerusalem will you worship the Father. 22 You worship what you do not know; we worship what we know, for salvation comes from the Jews. 23 But the hour is coming, indeed it has already come, when the true worshipers will worship the Father in spirit and truth; for the Father seeks such worshipers. 24 God is Spirit, and those who worship him must worship in spirit and truth». 25 The woman said to him: «I know that the Messiah (who is called Christ) is coming; when he has come, he will tell us all things». 26 Jesus said to her: «I am he, I who speak to you!»
This week's episode is all about Reading. We will go through the entire process, from the moment light hits the retina (50-100ms) to formulating speech (around 600ms). That is, either speaking out loud or silently speaking while reading, a phenomenon called subvocalization, which we do when reading to ourselves. Either way, we speak while reading.

We will compare so-called normal readers, the Autistic phenotype, and dyslexia, and at times the odd contrasts of the Autistic phenotype AND dyslexia. Lots of neurobiology, measurement instruments, and brain waves (oscillations, frequencies); however, I will hopefully provide easy-to-understand analogies. The entire reading process is covered.

Daylight Computer Company, use "autism" for $50 off at https://buy.daylightcomputer.com/autism
Chroma Light Devices, use "autism" for 10% discount at https://getchroma.co/?ref=autism
Cognity AI for Autistic Social Skills, use "autism" for 10% discount at https://thecognity.com

00:00 - Overview of reading process and neurobiology
03:28 - Visual processing in V1 (primary visual cortex), V2-V4 (secondary visual cortex)
04:42 - Neuroplasticity of the Blind using V1-V4 for Braille
07:17 - Neural oscillations (Delta, Theta, Alpha, Beta, Gamma)
10:07 - Visual word form area (VWFA) recognizes patterns, begins sequencing letters & recognizes the word. Example: "d-o-g" & 'd' not 'b', 'o' not 'c', 'g' not 'p'.
13:01 - Phonological processing in temporal-parietal cortex
15:54 - Fractional anisotropy (FA) & Diffusion Tensor Imaging (DTI) and the arcuate fasciculus; Myelination, Water Flow, Garden Hose example
18:06 - Detailed discussion of orthographic processing begins (VWFA's role in recognizing visual word forms)
21:26 - Detailed discussion of cerebellum's role in eye movements begins (Purkinje cells and saccades)
24:07 - Detailed discussion of spelling difficulties begins (orthographic processing challenges in autism/dyslexia)
27:41 - Detailed discussion of semantic integration begins (delays in dyslexia, inferior frontal gyrus)
30:55 - Detailed discussion of orthographic confusion begins (e.g., "except" vs. "expert")
33:30 - Detailed discussion of phonological processing begins (temporal-parietal cortex mapping words to sounds)
34:18 - Cerebellum mentioned regarding tongue movements (Purkinje cells refine timing for speech)
36:10 - Subvocalization in silent reading
37:07 - Oscillations in VWFA for autistic phenotype; Comprehension lags in Autism due to delayed N400
39:19 Daylight Computer Company (and Daylight Kids!), use "autism" for $50 discount
41:40 Chroma Light Devices, use "autism" for 10% discount
44:52 Reviews/Ratings, Contact Info

X: https://x.com/rps47586
YT: https://www.youtube.com/channel/UCGxEzLKXkjppo3nqmpXpzuA
email: info.fromthespectrum@gmail.com
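To see how stage-by-stage timing adds up, here is a small sketch. The per-stage windows are assumptions anchored loosely to figures quoted in the episode (50-100 ms for retina-to-V1, roughly 600 ms to speech); the stage names follow the episode, the numbers in between do not.

# (stage, earliest ms, latest ms) per stage, not cumulative; windows are assumed
PIPELINE = [
    ("retina -> V1",             50, 100),
    ("V1-V4 feature extraction", 50, 100),
    ("VWFA word form",           70, 120),
    ("phonological mapping",     80, 150),
    ("semantic integration",    100, 180),
]

def cumulative(pipeline):
    """Yield running earliest/latest totals after each stage."""
    lo = hi = 0
    for name, a, b in pipeline:
        lo, hi = lo + a, hi + b
        yield name, lo, hi

for name, lo, hi in cumulative(PIPELINE):
    print(f"after {name:26s}: {lo:4d}-{hi:4d} ms")

With these assumed windows the final total spans roughly 350-650 ms, so the ~600 ms speech-onset figure falls inside it; a delayed N400 (the semantic stage) pushes the whole tail later, which matches the comprehension lag the episode describes for the autistic phenotype.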
Medboard

Europe
Too much Incomplete Tech File - Let's explain to you how to do it: https://www.team-nb.org/wp-content/uploads/2025/09/Team-NB-PositionPaper-BPG-IVDR-V2-20250903.pdf
2025/1920 on Master UDI-DI - Not only lenses but also Spectacle frames and Ready-to-wear: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202501920
Borderline manual Update - New products included: https://health.ec.europa.eu/document/download/71a87df8-5ca1-4555-b453-b65bdf8de909_en?filename=md_borderline_manual_en.pdf
- red blood cell additive solutions containing adenine
- dual action cream with menthol and capsaicin
- lactose tablets for vaginal use
- microabrasion dental stain removers
- medical examination table covers
- mobile sterile air system
EU asks your feedback on EU MDR and IVDR - Enjoy reading some 100 feedbacks: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14808-Medical-devices-and-in-vitro-diagnostics-targeted-revision-of-EU-rules_en

Switzerland
Swissdamed Technical Documentation - XML upload: https://www.swissmedic.ch/swissmedic/en/home/medical-devices/medizinprodukte-datenbank/swissdamed-informationen/swissdamed-technical-documents.html
Business Rules Swissdamed: https://www.swissmedic.ch/dam/swissmedic/en/dokumente/medizinprodukte/mep_urr/bw630_40_002e_pu_swissdamed_business_rules.pdf.download.pdf/BW630_40_002e_PU_swissdamed_Business_Rules.pdf

UK
UK PMS guidance for Report - Template available: https://www.gov.uk/government/publications/medical-devices-post-market-surveillance-requirements/requirements-of-the-manufacturers-pms-system
PMSR Template: https://www.gov.uk/government/publications/medical-devices-standardised-format-for-the-post-market-surveillance-report

Magazine
Issue 1: Sept/Oct 2025 - Next one will come November 2025: https://easymedicaldevice.com/emd-mag/

Events
Medtech Conf events - Be listed on the MAP: https://medtechconf.com/events-map-2/

EasyIFU
Free trial for eIFU with EasyIFU - Compliant with EU 2025/1234: https://easyifu.com

ROW
US FDA Computer System Assurance - SOP offered in the show notes: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/computer-software-assurance-production-and-quality-system-software-0
Malaysia affiliate member of MDSAP - What does it change?: https://portal.mda.gov.my/index.php/announcement/1636-malaysia-is-now-mdsap-medical-device-single-audit-program-mdsap-member
Australia Essential Principles Checklist Update - Update the templates to V1.2: https://www.tga.gov.au/resources/resource/checklists/essential-principles-checklist
Egypt guidance to import your devices - Medical Devices, Accessories, IVD:
All types of devices: https://edaegypt.gov.eg/media/lafopofx/1-regulatory-guideline-of-issuance-of-import-approvals-of-all-types-of-medical-devices_gd.pdf
Medical Equipment and Accessories: https://edaegypt.gov.eg/media/fltnd1qc/4-regulatory-guideline-of-issuing-import-approvals-for-medical-equipment-and-their-accessories_gd.pdf
IVD: https://edaegypt.gov.eg/media/e2rf4qg5/2-regulatory-guideline-of-the-procedures-and-rules-of-obtaining-import-approvals-for-iaboratory-and-diagnostic-equipment-gd.pdf

Podcast
Episode 353: Cybersecurity in Medical Devices: What QA/RA must do Today: https://podcast.easymedicaldevice.com/353-2/
Episode 354: From Surgeon to CEO: Building Neurogyn AG: https://podcast.easymedicaldevice.com/354-2/
Episode 355: Postmarket Surveillance for SaMD and AI: https://podcast.easymedicaldevice.com/355-2/

Easy Medical Device Service
Support for Consulting (QA RA projects)
Support for
Authorized Representative and Market Access
Integration to an eQMS

Social Media to follow
Monir El Azzouzi
Linkedin: https://linkedin.com/in/melazzouzi
Twitter: https://twitter.com/elazzouzim
Pinterest: https://www.pinterest.com/easymedicaldevice
Instagram: https://www.instagram.com/easymedicaldevice
Guests: Emma-Lee Andersson, August Mether, Jack Moy, Christer Svensson, Behrad Rouzbeh, Viktor Elsnitz

For 90 SEK/month you get 5 episodes a week: 4 regular AMK MORGON + AMK FREDAG with Isak Wahlberg. Make sure to become a Patron via the web and not directly in the iPhone Patreon app, to avoid Apple's extra fees: open your browser instead and go to www.patreon.com/amkmorgon

Relevant links:
...I'm from Barcelona
https://lh3.googleusercontent.com/CwR_liEnqvXf7ygnfk2kV3Df9QGbTvJDa-nhiKUg2gKTKA9vgive7573_BGn15UkRPyZSSCcifzDQPha=w2880-h1200-p-l90-rj
https://en.wikipedia.org/wiki/I%27m_from_Barcelona
...the n-word from Degerfors
https://m.media-amazon.com/images/M/MV5BY2FmMDNkN2EtZmRkYi00YTI2LTljYjQtYzA5MWE3MjE2ZWI5XkEyXkFqcGc@._V1_.jpg
...Golden Dawn (Gyllene Gryning)
https://sv.wikipedia.org/wiki/Gyllene_gryning
...left-wing populism
https://sv.wikipedia.org/wiki/Populism
...Trump's autism reveal
https://www.dn.se/varlden/trump-havdar-koppling-mellan-paracetamol-och-autism/
https://www.youtube.com/shorts/HDCEXPVp4h4
https://www.svt.se/nyheter/utrikes/usas-regering-kopplar-anvandande-av-paracetamol-till-autism
...the budget
https://www.dn.se/sverige/skattesankningar-i-fokus-i-regeringens-hostbudget/
...tuna
https://www.matspar.se/produkt/msc-tonfisk-i-vatten-200-g-abba
https://fiskkungen.se/userfiles/image/iStock-612244606.jpg
...the conscription dodgers
https://www.svt.se/nyheter/lokalt/vast/blev-knivhuggen-for-att-slippa-lumpen-i-skovde-doms
...the church election
https://omni.se/har-du-svart-att-valja-testa-valkompasserna/a/63xnOe
https://www.svt.se/nyheter/granskning/ug/100-miljoner-satsades-pa-kyrkan-saldes-for-fem

The songs played were:
We're from Barcelona - I'm from Barcelona
Pervers politiker - Ebba Grön
Brain Damage - Pink Floyd

All songs are in AMK Morgon's playlist here: https://open.spotify.com/user/amk.morgon/playlist/6V9bgWnHJMh9c4iVHncF9j?si=so0WKn7sSpyufjg3olHYmg
Expository preaching by Pastor Jonathan Whitman on Matthew chapter 8, verses 1 to 4. Recorded at the Centro Evangelico Battista di Perugia on August 10, 2025. Message title: "Three demonstrations that Jesus is King over our infirmities"

MATTHEW 8 V1-4
1 When he came down from the mountain, a great crowd followed him. 2 And behold, a leper came and bowed down before him, saying: «Lord, if you are willing, you can make me clean». 3 Jesus stretched out his hand and touched him, saying: «I am willing; be cleansed». And in that instant he was cleansed of his leprosy. 4 Jesus said to him: «See that you tell no one, but go, show yourself to the priest, and make the offering that Moses prescribed, as a testimony to them».
Lesson Seven: The Message of Salvation

Intro: God inspired the New Testament to reveal and explain His marvelous and wonderful plan of salvation. The doctrine of soteriology. Paul called it the glorious gospel. God gave us these truths to inspire us and motivate us to tell the world about His salvation. Ref. Psalm 51:12. Time does not permit a thorough discussion, so we will look at the famous salvation formula as found in Romans chapter 10.

1. The doctrine of salvation. Rom. 10:1-4
• Salvation begins by someone having a burden for souls. V1 = the missionary. Paul's heart's desire was for souls to be saved.
• God Himself began the work of missions. Ref. John 3:16; 1 John 4:10
• Salvation comes by knowing truth, not religious zeal. Rom. 10:2
• Salvation is having God's righteousness, not self-righteousness. Rom. 10:3; Phil. 3:9
• Jesus is the only way to attain God's righteousness = salvation. V4; 2 Cor. 5:21; 1 John 2:2

2. The plan of salvation. Rom. 10:8-13
• Salvation is by grace through faith. Rom. 10:8; Ref. Eph. 2:8-9
• Salvation comes by confessing the Lord Jesus. Rom. 10:9a; 1 John 4:2, 15; 1 Cor. 12:3
• Salvation comes by believing the whole gospel. Rom. 10:9b-10 = Jesus died, was buried, and rose from the grave! Rom. 5:8-9; 1 Cor. 15:1-4
• Salvation comes by calling on the name of the Lord. Rom. 10:13; Acts 4:12

3. The preaching of salvation = the perpetuity of the gospel. Rom. 10:14-17
• The purpose of missions is to tell lost souls of salvation. Rom. 10:14
– How can they call if they don't believe?
– How can they believe if they have never heard?
– How can they hear unless someone tells them?
• The purpose of the church is to send missionaries. Rom. 10:15a
• The duty of the Christian is to go tell. Rom. 10:15b

Conclusion: There is something beautiful about those who tell = their feet.
Every society on earth has always had groups of people in the social margins; people who are relegated to the edges of the larger community. It's a tragic symptom of living in this broken world: the ease with which we dehumanize others by categorizing them as unwanted, undesirable, or useless. Which is why Jesus' ministry is so arresting, simply because the majority of his messianic work was done with and for those who were designated as the outcasts of his time. When the Kingdom of Heaven began its invasion of this world, it wasn't focused on the elites and powerful of Rome or even Israel. It was laser-focused on the most vulnerable among us – revealing the heart of God and the nature of His healing work in this world.

This Sunday we'll be reading Matthew 8:1-17 in our ongoing study of this Gospel. Chapters 8 and 9 of Matthew are arranged around two sets of three miracles, bridged by sayings of Jesus. This framework is meant to put the authority of Jesus on display after he revealed his authority to teach in the Sermon on the Mount.

V1-4 is the account of Jesus healing a leper. The Torah had very specific instructions on identifying skin diseases, and what to do if one was diagnosed on a person. It's clear that a person's life would be miserable with that affliction, especially on a social level – they would be mandatorily outcast. Does the leper demand a healing from Jesus? Why do you think he phrased his inquiry the way he did? What is the first thing Matthew describes Jesus doing, even before declaring him healed? What might a human touch have meant to someone who had been labeled as "untouchable"? What do we learn about the nature of our mission, as Jesus' representatives, from that?

V5-13 tells us about a request from a Roman officer. Rome was the occupying force in Israel – they were seen as the enemy, the oppressors of the Jewish people. I can't think of someone who would be more likely to be ostracized by the larger community than a man who represented the Roman army. How resistant did Jesus seem to answering this man's request? What might have been the thoughts of the people around Jesus when this Gentile soldier made this request? The officer gives Jesus a way out of coming into his house, and Jesus commends his faith. Faith in what, do you suppose?

The last part of this section details Jesus' healing of Peter's mother-in-law from a fever. Women rarely took center stage in recorded events in the ancient world. This is highlighted in the Gospels for a reason. What was her response when she was healed, what did she do (hint: the words "a meal" are not in the Greek – she got up and diakoneō him)?

I'm really excited to get into this text together – I hope you can join us this Sunday at 10 AM!

Click here for a pdf of the teaching slideshow.
Parker and Sean are back this week with some wild ass shit. First up, Hulk Hogan is dead! He died. Then Sean adds some things to his watchlist. Parker goes to a horror convention and sees Fantastic Four. And then! The August movie preview with a surprising amount of interesting things. Then, the guys review Rene Cardona Jr's "Cyclone" from 1978. Hugo Stiglitz and an all-star cast get caught in bad Mexican weather and are adrift at sea in the aftermath. Hunted by sharks, low on water and getting an itch for cannibalism -- who will survive? We'll spoil it so watch it first if you want. All this plus so much more. Direct Donloyd And don't forget to join the Patreon! There's gonna be some new (and old) stuff up there very soon.
Mixing Music with Dee Kei | Audio Production, Technical Tips, & Mindset
In this episode, Dee Kei and Lu unpack the common mistake of oversharing your process with clients — and why it can damage trust, reduce perceived value, and kill your confidence. This isn't about keeping secrets or playing power games. It's about curating a better experience, protecting your workflow, and communicating like a professional. From discussing mic placement and plugin chains to giving away discounts and doubting your own V1 mix, this episode covers the subtle ways engineers unintentionally sabotage themselves.

They also share personal stories (including working with Jazze Pha and multi-platinum engineers), and draw comparisons to chefs, barbers, doctors, and even electricians — all to hammer home the truth: clients pay you for results and confidence, not explanations.
Ben and Taylor review the iRacing Spa 24, including Split 22, SimCast Racing's shortened event, and preview Le Mans Ultimate's V1.0 Release next week.
Send us a text

In today's episode, Paul will explore how he scales hardware teams, builds for manufacturability, navigates supply chain complexity, mentors engineers, and embraces community-driven innovation. Get ready for insights on leadership, prototyping, and bringing hardware to life from idea to market.

Main Topics:
- Proteus Motion's V1 and V2 machine development
- Engineering career progression
- Hardware product design and manufacturing
- Consulting and entrepreneurship
- New York Hardware Meetup community building

About the guest: Paul Vizzio is a seasoned mechanical engineer and hardware leader with a diverse background spanning consumer electronics, cleantech, and defense. Starting as a product management intern at SolidWorks, he later managed undersea vehicle projects at the Naval Undersea Warfare Center. As the first mechanical engineer at goTenna, he developed both consumer and military-spec products from concept to production in under a year.

In 2017, he founded Vizeng, providing end-to-end mechanical and supply-chain consulting to NYC hardware startups. He also led product development for RoadPower's regenerative road systems.

Since 2019, Paul has led hardware efforts at Proteus Motion, overseeing team growth, R&D, and supply chain. His work includes redesigning the V1 system and launching the V2 within a year—contributing to Proteus's adoption by 400+ pro sports teams and clinics. He also co-organizes the NY Hardware Meetup and founded the D2C pet brand RemieDog, reflecting his passion for innovation and community-building.

Links:
Paul Vizzio - LinkedIn
Vizeng Website

Aaron Moncur, host

Click here to learn more about simulation solutions from Simutech Group.
CardioNerds (Dr. Claire Cambron and Dr. Rawan Amir) join Dr. Ayan Purkayastha, Dr. David Song, and Dr. Justin Wang from NewYork-Presbyterian Queens for an afternoon of hot pot in downtown Flushing. They discuss a case of congenital heart disease presenting in adulthood. Expert commentary is provided by Dr. Su Yuan, and audio editing for this episode was performed by CardioNerds Intern, Julia Marques Fernandes.

A 53-year-old woman with a past medical history of hypertension visiting from Guyana presented with 2 days of chest pain. EKG showed a dominant R wave in V1 with precordial T wave inversions. Troponin levels were normal; however, she was started on therapeutic heparin with a plan for left heart catheterization. Her chest X-ray revealed dextrocardia, and her echocardiogram was suspicious for the systemic ventricle being the morphologic right ventricle with reduced systolic function and the pulmonic ventricle being the morphologic left ventricle. The patient underwent coronary CT angiography, which confirmed the diagnosis of congenitally corrected transposition of the great arteries (CCTGA) as well as minimal non-obstructive coronary artery disease. Her chest pain spontaneously improved and catheterization was deferred. The patient opted to follow with a congenital specialist back in her home country upon discharge.

US Cardiology Review is now the official journal of CardioNerds! Submit your manuscript here.
CardioNerds Case Reports Page
CardioNerds Episode Page
CardioNerds Academy
CardioNerds Healy Honor Roll
CardioNerds Journal Club
Subscribe to The Heartbeat Newsletter!
Check out CardioNerds SWAG!
Become a CardioNerds Patron!

Pearls - A Case of Congenital Heart Disease Presenting in Adulthood
- Congenitally Corrected Transposition of the Great Arteries (CCTGA) is a rare and unique structural heart disease which presents as an isolated combination of atrioventricular and ventriculoarterial discordance resulting in physiologically corrected blood flow.
- CCTGA occurs due to L-looping of the embryologic heart tube. As a result, the morphologic right ventricle outflows into the systemic circulation, and the morphologic left ventricle outflows into the pulmonary circulation.
- CCTGA is frequently associated with ventricular septal defects, pulmonic stenosis, tricuspid valve abnormalities and dextrocardia.
- CCTGA is often asymptomatic in childhood and can present later in adulthood with symptoms of morphologic right ventricular failure, tricuspid regurgitation, or cardiac arrhythmias.
- Systemic atrioventricular valve (SAVV) intervention can be a valuable option for treating right ventricular failure and degeneration of the morphologic tricuspid valve.

Notes - A Case of Congenital Heart Disease Presenting in Adulthood
Notes were drafted by Ayan Purkayastha.

What is the pathogenesis of Congenitally Corrected Transposition of the Great Arteries?
- Occurs due to disorders in the development of the primary cardiac tube.
- The bulboventricular part of the primary heart forms a left-sided loop instead of a right-sided loop, leading to the normally located atria being connected to morphologically incompatible ventricles.
- This is accompanied by abnormal torsion of the aortopulmonary septum (transposition of the great vessels).
- As a result, there is 'physiologic correction' of blood flow. Non-oxygenated blood flows into the right atrium and through the mitral valve into the morphologic left ventricle, which pumps blood into the pulmonary artery.
Oxygenated blood from the pulmonary veins flows into the left atrium and through the tricuspid valve to the morphologic right ventricle, which pumps blood to the aorta. Compared with standard anatomy, the flow of blood is appropriate, but it passes through the incorrect ventricle on both sides.
- Frequent conditions associated with CCTGA include VSD, pulmonic stenosis and dextrocardia.
Riri comes up against some “Bad Magic” as she comes to understand “Karma’s a Glitch,” but “The Past is the Past.” Matt and Pete discuss episodes 4-6.

Thanks as always to everyone who supports the podcast by visiting Patreon.com/PhantasticGeek.

Share your feedback by emailing PhantasticGeek@gmail.com, commenting at PhantasticGeek.com, or tweeting @PhantasticGeek.

MP3
Our 213th episode with a summary and discussion of last week's big AI news! Recorded on 06/21/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
- Midjourney launches its first AI video generation model, moving from text-to-image to video with a subscription model offering up to 21-second clips, highlighting the affordability and growing capabilities in AI video generation.
- Google's Gemini AI family updates include high-efficiency models for cost-effective workloads, and new enhancements in Google's search function now allow for voice interactions.
- The introduction of two new benchmarks, LiveCodeBench Pro and AbstentionBench, aiming to test and improve the problem-solving and abstention capabilities of reasoning models, revealing current limitations.
- OpenAI wins a $200 million US defense contract to support various aspects of the Department of Defense, reflecting growing collaborations between tech companies and government for AI applications.

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:32) News Preview
Tools & Apps
(00:02:12) Midjourney launches its first AI video generation model, V1
(00:05:52) Google's Gemini AI family updated with stable 2.5 Pro, super-efficient 2.5 Flash-Lite
(00:07:59) Google's AI Mode can now have back-and-forth voice conversations
(00:10:13) YouTube to Add Google's Veo 3 to Shorts in Move That Could Turbocharge AI on the Video Platform
Applications & Business
(00:11:10) The 'OpenAI Files' will help you understand how Sam Altman's company works
(00:12:29) OpenAI drops Scale AI as a data provider following Meta deal
(00:13:28) Amazon's Zoox opens its first major robotaxi production facility
Projects & Open Source
(00:15:20) LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming?
(00:19:45) AbstentionBench: Reasoning LLMs Fail on Unanswerable Questions
(00:22:49) MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention
Research & Advancements
(00:24:33) Scaling Laws of Motion Forecasting and Planning -- A Technical Report
Policy & Safety
(00:28:07) Universal Jailbreak Suffixes Are Strong Attention Hijackers
(00:30:52) OpenAI found features in AI models that correspond to different 'personas'
(00:33:25) OpenAI wins $200 million U.S. defense contract
Mixing Music with Dee Kei | Audio Production, Technical Tips, & Mindset
In this deep and reflective episode, Dee Kei and Lu explore one of the most relatable challenges for mixers and creatives alike: the paradox of improvement and insecurity. Why does getting better at your craft sometimes make you feel worse? How can self-doubt either fuel your growth or paralyze your process?

With insights from Zen philosophy, mixing experience, and personal anecdotes, they unpack the emotional rollercoaster that comes with V1 mixes, fear of feedback, and chasing perfection. The episode touches on the difference between insecurity and curiosity, the danger of comparison, and why the best mixers embrace ambiguity.

Whether you're a beginner, intermediate, or seasoned pro, this conversation will help you reframe your mindset and reconnect with the joy of mixing.

Topics Include:
- The illusion of the "perfect mix"
- How to handle feedback without ego
- Flow state and finding alignment with your craft
- Zen quotes and how they apply to creative work
- Why obsessing over a snare for 6 hours won't matter to your client
- Letting go of validation and trusting your taste

Perfect for mixers, engineers, and any creative chasing mastery without losing their soul.

SUBSCRIBE TO OUR PATREON FOR EXCLUSIVE CONTENT!
SUBSCRIBE TO YOUTUBE
Join the 'Mixing Music Podcast' Discord!
HIRE DEE KEI
HIRE LU

Find Dee Kei and Lu on Social Media:
Instagram: @DeeKeiMixes @MasteredbyLu
Twitter: @DeeKeiMixes @MasteredbyLu

The Mixing Music Podcast is sponsored by Izotope, Antares (Auto Tune), Sweetwater, Plugin Boutique, Lauten Audio, Filepass, & Canva

The Mixing Music Podcast is a video and audio series on the art of music production and post-production. Dee Kei, Lu, and James are professionals in the Los Angeles music industry having worked with names like Odetari, 6arelyhuman, Trey Songz, Keyshia Cole, Benny the Butcher, carolesdaughter, Crying City, Daphne Loves Derby, Natalie Jane, charlieonnafriday, bludnymph, Lay Bankz, Rico Nasty, Ayesha Erotica, ATEEZ, Dizzy Wright, Kanye West, Blackway, The Game, Dylan Espeseth, Tara Yummy, Asteria, Kets4eki, Shaquille O'Neal, Republic Records, Interscope Records, Arista Records, Position Music, Capital Records, Mercury Records, Universal Music Group, apg, Hive Music, Sony Music, and many others.

This podcast is meant to be used for educational purposes only. This show is filmed and recorded at Dee Kei's private studio in North Hollywood, California. If you would like to sponsor the show, please email us at deekeimixes@gmail.com.

Support this podcast at — https://redcircle.com/mixing-music-music-production-audio-engineering-and-music/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
In the final year of the Second World War, Adolf Hitler turned Nazi Germany's attention to the so-called "vengeance weapons": the V1 flying bomb and the unstoppable V2 rocket. With them, the work of the bombers was no longer necessary, and yet it was a bomber that dropped from its bays the most destructive weapon in history on the Japanese cities of Hiroshima and Nagasaki: the atomic bomb. In the years that followed, the combination of the rocket and the atomic bomb would produce the nuclear missile, the weapon that would dethrone the bomber at the pinnacle of wartime armament.
Your JFS boys are back and things are getting mighty sleazy around here! First up, the guys talk about Mission: Impossible and complain about it -- a lot. Then they discuss The Rehearsal and praise it -- a lot. Then it's time for the June Movie Preview, and there may or may not even be any good movies coming up this month. Finally, it's time to review "Who Saw Her Die," a 1972 giallo directed by Aldo Lado. Starring George Lazenby, the film explores grief in Venice, Italy, all against the backdrop of a creepy murder spree. All this plus Voice Mails, instant regret, vacation chat, Italian slander, and more! Donloyd Here! After the episode, go to the Patreon and sign up for some more bonus episodes!!
The Vulcanair V1 training aircraft will be built in a new US manufacturing facility and offered as an affordable option for flight schools. In the news: air traffic control problems at Newark and government actions, the impacts of tariffs on commercial aviation, a call for in-cockpit video recorders, the timely availability of weather forecasts for aviation, Real ID going live, and wildlife at airports.

The V1 trainer, courtesy Vulcanair.

Guest
Stephen Pope is the Director of Communications for Vulcanair Aircraft North America. Vulcanair is establishing a manufacturing facility in the US and plans to make the Vulcanair V1 trainer aircraft affordable for flight schools. Steve describes the history of the company and how it optimized the V1 model piston airplane for the US flight training market.

The V1 is similar to the Cessna 172, but costs less and is easier to maintain. To address the problem of very old training aircraft at flight schools that are expensive to replace, Vulcanair has formed a leasing company that will offer the V1 to schools for $79 per hour. Vulcanair plans to cover the cost of engine and propeller overhauls.

Vulcanair is building a factory in Elizabethtown, North Carolina, with a planned opening date of September 2025. It is sized to produce up to 100 aircraft per year, and the workforce will come from area military veterans. The facility will serve as the main parts hub in the US. After the opening, Vulcanair will build five aircraft for production certification, which the company hopes to receive in 1Q2026.

Vulcanair Aircraft was established in 1996 with private capital to become a worldwide general aviation manufacturer. Between 1996 and 1998, Vulcanair purchased all the assets, type designs, trademarks, and rights of Partenavia, as well as the SF600 Series Program, including type certificates, tooling, and rights from SIAI-Marchetti. Vulcanair Aircraft introduced modern tools, a modern organization, and a world-class engineering team to enable aircraft design upgrades and improvements. Vulcanair Aircraft North America is the corporate identity of Ameravia Inc., founded in 2015 to serve as the U.S. distributor for Vulcanair aircraft. The company has expanded its operations by offering the P68 line of twin piston- and turbine-engine aircraft, and the V1 single-engine training aircraft.

Before joining Vulcanair Aircraft North America, Steve was an Aircraft Sales Counselor with LifeStyle Aviation and a sales and marketing executive with Spectro | Jet-Care. He was Editor in Chief at Flying Magazine, as well as Editor at Business Jet Traveler.

Aviation News

House Panel Approves $12.5 Billion Boost in ATC Funding
The House Transportation and Infrastructure Committee added $12.5 billion for air traffic control modernization and controller funding. At the same time, the Committee dropped grants for sustainable aviation fuel, hydrogen, and other low-emission technology projects. A provision that would have prohibited the use of funds to privatize or sell portions of the ATC system was voted down. See: House Panel To Consider $15B ATC Boost, SAF Grant Cuts and The FAA wants to hire more air traffic controllers, but that won't happen overnight.

United removes 35 round-trip flights per day from Newark Airport schedule as travel woes continue
Some air traffic controllers walked off the job after systems went down. Runway construction and a lack of controllers contributed to the flight cancellations. United CEO Scott Kirby said, “This isn't just about schedules or pay. It's about a system on the brink of collapse.” See: Chaos grips Newark Airport as controllers walk out, exposing FAA crisis

Major airlines deliver dire warning to Trump administration as grim new twist emerges in tariff drama
Air France and Lufthansa reported that transatlantic bookings from Europe to the US are down in the first quarter of the year. The Financial Times reported that the total numbe...