When Rose Hamilton, Founder of Compass Rose Ventures, first tried Aplós, she knew there was something different: sophisticated, calming, and beautifully intentional. In this episode, Rose sits down with David Fudge, Co-founder and CEO of Aplós, to unpack how he's not just building a brand—he's reimagining what it means to unwind. From Bonobos to beverage, David's journey is a masterclass in blending creative vision with operational strategy. We dive deep into how Aplós is breaking category codes while anchoring to timeless consumer desires. David shares how his fashion-forward, design-driven mindset helped shape a non-alcoholic spirit brand that's as emotionally resonant as it is operationally sound. Whether you're a founder, marketer, or brand strategist, this episode is full of insights that will have you rethinking what brand-building really means today.

Here are a few key moments to listen for:
* How David leveraged a non-beverage background to challenge industry norms—and why not knowing the rules can be an asset.
* The concept of "disciplined disruption": choosing which category codes to break and which to honor.
* Why bartenders—not just consumers—are key to Aplós's advocacy and growth strategy.
* Building a go-to-market strategy rooted in both aspiration and data: from DTC learnings to luxury retail partnerships.
* David's powerful mantra: "Be convicted in vision, malleable in strategy," and what it means for modern founders navigating fast-changing landscapes.

Join us in listening to the episode to discover how David is crafting more than a product. He's creating a cultural shift in how we gather, relax, and connect. This is a story of vision, values, and the bold art of brand building.

For more on Aplós, visit: https://www.aplos.world/

If you enjoyed this episode, please leave The Story of a Brand Show a rating and review. Plus, don't forget to follow us on Apple and Spotify. Your support helps us bring you more content like this!

Today's Sponsors:

Compass Rose Ventures - Advisor for CPG Brands: https://compassroseventures.com/contact/
Compass Rose Ventures can help your CPG brand increase customer lifetime value, expand into the US market, create an omnipresent omnichannel footprint, optimize customer journeys, build brand communities, and more. Visit the link above to learn more.

Workspace6 - Private Community for 7, 8, 9-figure Brands: https://www.workspace6.io/
Workspace6 is a private community where over 950 seven-, eight-, and nine-figure brand operators trade insights, solve problems, and shortcut growth. It's the anti-fluff operator's room, and for your first 30 days, it's just $1. Get real answers and skip the trial and error.
Listen to ASCO's JCO Oncology Practice, Art of Oncology Practice article, "An Oncologist's Guide to Ensuring Your First Medical Grand Rounds Will Be Your Last" by Dr. David Johnson, who is a clinical oncologist at the University of Texas Southwestern Medical School. The article is followed by an interview with Johnson and host Dr. Mikkael Sekeres. Through humor and irony, Johnson critiques how overspecialization and poor presentation practices have eroded what was once internal medicine's premier educational forum.

Transcript

Narrator: An Oncologist's Guide to Ensuring Your First Medical Grand Rounds Will Be Your Last, by David H. Johnson, MD, MACP, FASCO

Over the past five decades, I have attended hundreds of medical conferences—some insightful and illuminating, others tedious and forgettable. Among these countless gatherings, Medical Grand Rounds (MGRs) has always held a special place. Originally conceived as a forum for discussing complex clinical cases, emerging research, and best practices in patient care, MGRs served as a unifying platform for clinicians across all specialties, along with medical students, residents, and other health care professionals. Expert speakers—whether esteemed faculty or distinguished guests—would discuss challenging cases, using them as a springboard to explore the latest advances in diagnosis and treatment.

During my early years as a medical student, resident, and junior faculty member, Grand Rounds consistently attracted large, engaged audiences. However, as medicine became increasingly subspecialized, attendance began to wane. Lectures grew more technically intricate, often straying from broad clinical relevance. The patient-centered discussions that once brought together diverse medical professionals gradually gave way to hyperspecialized presentations. Subspecialists, once eager to share their insights with the wider medical community, increasingly withdrew to their own specialty-specific conferences, further fragmenting the exchange of knowledge across disciplines. As a former Chair of Internal Medicine and a veteran of numerous MGRs, I observed firsthand how these sessions shifted from dynamic educational exchanges to highly specialized, often impenetrable discussions.

One of the most striking trends in recent years has been the decline in presentation quality at MGR—even among local and visiting world-renowned experts. While these speakers are often brilliant clinicians and investigators, they can also be remarkably poor lecturers, delivering some of the most uninspiring talks I have encountered. Their presentations are so consistently lackluster that one might suspect an underlying strategy at play—an unspoken method to ensure that they are never invited back.

Having observed this pattern repeatedly, I am convinced that these speakers must be adhering to a set of unwritten rules to avoid future MGR presentations. To assist those unfamiliar with this apparent strategy, I have distilled the key principles that, when followed correctly, all but guarantee that a presenter will not be asked to give another MGR lecture—thus sparing them the burden of preparing one in the future. Drawing on my experience as an oncologist, I illustrate these principles using an oncology-based example, although I suspect similar rules apply across other subspecialties. It will be up to my colleagues in cardiology, endocrinology, rheumatology, and beyond to identify and document their own versions—tasks for which I claim no expertise.
What follows are the seven "Rules for Presenting a Bad Medical Oncology Medical Grand Rounds."

1. Microscopic Mayhem: Always begin with an excruciatingly detailed breakdown of the tumor's histology and molecular markers, emphasizing how these have evolved over the years (eg, PAP v prostate-specific antigen)—except, of course, when they have not (eg, estrogen receptor, progesterone receptor, etc). These nuances, while of limited relevance to general internists or most subspecialists (aside from oncologists), are guaranteed to induce eye-glazing boredom and quiet despair among your audience.

2. TNM Torture: Next, cover every nuance of the newest staging system … this is always a real crowd pleaser. For illustrative purposes, show a TNM chart in the smallest possible font. It is particularly helpful if you provide a lengthy review of previous versions of the staging system and painstakingly cover each and every change in the system. Importantly, this activity will allow you to disavow the relevance of all previous literature studies to which you will subsequently refer during the course of your presentation … to wit—"these data are based on the OLD staging system and therefore may not pertain …" This phrase is pure gold—use it often if you can. NB: You will know you have "captured" your audience if you observe audience members "shifting in their seats" … it occurs almost every time … but if you have failed to "move" the audience … by all means, continue reading … there is more!

3. Mechanism of Action Meltdown: Discuss in detail every drug ever used to treat the cancer under discussion; this works best if you also give a detailed description of each drug's mechanism of action (MOA). General internists and subspecialists just LOVE hearing a detailed discussion of the drug's MOA … especially if it is not at all relevant to the objectives of your talk. At this point, if you observe a wave of slack-jawed faces slowly slumping toward their desktops, you will know you are on your way to successfully crushing your audience's collective spirit. Keep going—you are almost there.

4. Dosage Deadlock: One must discuss "dose response" … there is absolutely nothing like a dose response presentation to a group of internists to induce cries of anguish. A wonderful example of how one might weave this into a lecture to generalists or a mixed audience of subspecialists is to discuss details that ONLY an oncologist would care about—such as the need to dose escalate imatinib in GIST patients with exon 9 mutations as compared with those with exon 11 mutations. This is a definite winner!

5. Criteria Catatonia: Do not forget to discuss the newest computed tomography or positron emission tomography criteria for determining response … especially if you plan to discuss an obscure malignancy that even oncologists rarely encounter (eg, esthesioneuroblastoma). Should you plan to discuss a common disease, you can ensure ennui only if you spend extra time discussing RECIST criteria. Now if you do this well, some audience members may begin fashioning their breakfast burritos into projectiles—each one aimed squarely at YOU. Be brave … soldier on!

6. Kaplan-Meier Killer: Make sure to discuss the arcane details of multiple negative phase II and III trials pertaining to the cancer under discussion. It is best to show several inconsequential and hard-to-read Kaplan-Meier plots.
To make sure that you do a bad job, divide this portion of your presentation into two sections … one focused on adjuvant treatment; the second part should consist of a long boring soliloquy on the management of metastatic disease. Provide detailed information of little interest even to the most ardent fan of the disease you are discussing. This alone will almost certainly ensure that you will never, ever be asked to give Medicine Grand Rounds again.

7. Lymph Node Lobotomy: For the coup de grâce, be sure to include an exhaustive discussion of the latest surgical techniques, down to the precise number of lymph nodes required for an "adequate dissection." To be fair, such details can be invaluable in specialized settings like a tumor board, where they send subspecialists into rapturous delight. But in the context of MGR—where the audience spans multiple disciplines—it will almost certainly induce a stultifying torpor. If dullness were an art, this would be its masterpiece—capable of lulling even the most caffeinated minds into a stupor.

If you have carefully followed the above set of rules, at this point, some members of the audience should be banging their heads against the nearest hard surface. If you then hear a loud THUD … and you're still standing … you will know you have succeeded in giving the world's worst Medical Grand Rounds!

Final Thoughts

I hope that these rules shed light on what makes for a truly dreadful oncology MGR presentation—which, by inverse reasoning, might just serve as a blueprint for an excellent one. At its best, an outstanding lecture defies expectations. One of the most memorable MGRs I have attended, for instance, was on prostaglandin function—not a subject typically associated with edge-of-your-seat suspense. Given by a biochemist and physician from another subspecialty, it could have easily devolved into a labyrinth of enzymatic pathways and chemical structures. Instead, the speaker took a different approach: rather than focusing on biochemical minutiae, he illustrated how prostaglandins influence nearly every major physiologic system—modulating inflammation, regulating cardiovascular function, protecting the gut, aiding reproduction, supporting renal function, and even influencing the nervous system—without a single slide depicting the prostaglandin structure. The result? A room full of clinicians—not biochemists—walked away with a far richer understanding of how prostaglandins affect their daily practice. What is even more remarkable is that the talk's clarity did not just inform—it sparked new collaborations that shaped years of NIH-funded research. Now that was an MGR masterpiece.

At its core, effective scientific communication boils down to three deceptively simple principles: understanding your audience, focusing on relevance, and making complex information accessible.2 The best MGRs do not drown the audience in details, but rather illuminate why those details matter. A great lecture is not about showing how much you know, but about ensuring your audience leaves knowing something they didn't before. For those who prefer the structured wisdom of a written guide over the ramblings of a curmudgeon, an excellent review of these principles—complete with a handy checklist—is available.2 But fair warning: if you follow these principles, you may find yourself invited back to present another stellar MGR. Perish the thought!
Dr. Mikkael Sekeres: Hello and welcome to JCO's Cancer Stories: The Art of Oncology, which features essays and personal reflections from authors exploring their experience in the oncology field. I'm your host, Mikkael Sekeres. I'm Professor of Medicine and Chief of the Division of Hematology at the Sylvester Comprehensive Cancer Center, University of Miami. What a pleasure it is today to be joined by Dr. David Johnson, clinical oncologist at the University of Texas Southwestern Medical School. In this episode, we will be discussing his Art of Oncology Practice article, "An Oncologist's Guide to Ensuring Your First Medical Grand Rounds Will Be Your Last." Our guest's disclosures will be linked in the transcript. David, welcome to our podcast and thanks so much for joining us.

Dr. David Johnson: Great to be here, Mikkael. Thanks for inviting me.

Dr. Mikkael Sekeres: I was wondering if we could start with just- give us a sense about you. Can you tell us about yourself? Where are you from? And walk us through your career.

Dr. David Johnson: Sure. I grew up in a small rural community in Northwest Georgia about 30 miles south of Chattanooga, Tennessee, in the Appalachian Mountains. I met my wife in kindergarten.

Dr. Mikkael Sekeres: Oh my.

Dr. David Johnson: There are laws in Georgia. We didn't get married till the third grade. But we dated in high school and got married after college. And so we've literally been with one another my entire life, our entire lives.

Dr. Mikkael Sekeres: My word.

Dr. David Johnson: I went to medical school in Georgia. I did my training in multiple sites, including my oncology training at Vanderbilt, where I completed my training. I spent the next 30 years there, where I had a wonderful career. Got an opportunity to be a Division Chief and a Deputy Director of, and the founder of, a cancer center there. And in 2010, I was recruited to UT Southwestern as the Chairman of Medicine. Not a position I had particularly aspired to, but I was interested in taking on that challenge, and it proved to be quite a challenge for me. I had to relearn internal medicine, and really all the subspecialties of medicine became quite challenging to me. So my career has spanned sort of the entire spectrum, I suppose, as a clinical investigator, as an administrator, and now as a near end-of-my-career guy who writes ridiculous articles about grand rounds.

Dr. Mikkael Sekeres: Not ridiculous at all. It was terrific. What was that like, having to retool? And this is a theme you cover a little bit in your essay, also, from something that's super specialized. I mean, you have had this storied career with the focus on lung cancer, and then having to expand not only to all of hematology oncology, but all of medicine.

Dr. David Johnson: It was a challenge, but it was also incredibly fun. My first few days in the chair's office, I met with a number of individuals, but perhaps the most important individuals I met with were the incoming chief residents who were, and are, brilliant men and women. And we made a pact. I promised to teach them as much as I could about oncology if they would teach me as much as they could about internal medicine. And so I spent that first year literally trying to relearn medicine. And I had great teachers. Several of those chiefs are now on the faculty here or elsewhere. And that continued on for the next several years. Every group of chief residents imparted their wisdom to me, and I gave them what little bit I could provide back to them in the oncology world. It was a lot of fun.
And I have to say, I don't necessarily recommend everybody go into administration. It's not necessarily the most fun thing in the world to do. But the opportunity to deal one-on-one closely with really brilliant men and women like the chief residents was probably the highlight of my time as Chair of Medicine.

Dr. Mikkael Sekeres: That sounds incredible. I can imagine, just reflecting over the two decades that I've been in hematology oncology and thinking about the changes in how we diagnose and care for people over that time period, I can only imagine what the changes had been in internal medicine since I was last immersed in that, which would be my residency.

Dr. David Johnson: Well, I trained in the 70s in internal medicine, and what transpired in the 70s was kind of 'monkey see, monkey do'. We didn't really have a lot of understanding of pathophysiology except at the most basic level. Things have changed enormously, as you well know, certainly in the field of oncology and hematology, but in all the other fields as well. And so I came in with what I thought was a pretty good foundation of knowledge, and I realized it was completely worthless, what I had learned as an intern and resident. And when I say I had to relearn medicine, I mean, I had to relearn medicine. It was like being an intern. Actually, it was like being a medical student all over again.

Dr. Mikkael Sekeres: Oh, wow.

Dr. David Johnson: So it's quite challenging.

Dr. Mikkael Sekeres: Well, and it's just so interesting. You're so deliberate in your writing and thinking through something like grand rounds. It's not a surprise, David, that you were also deliberate in how you were going to approach relearning medicine. So I wonder if we could pivot to talking about grand rounds, because part of being a Chair of Medicine, of course, is having Department of Medicine grand rounds. And whether those are in a cancer center or a department of medicine, it's an honor to be invited to give a grand rounds talk. How do you think grand rounds have changed over the past few decades? Can you give an example of what grand rounds looked like in the 1990s compared to what they look like now?

Dr. David Johnson: Well, I should go back to the 70s and talk about grand rounds in the 70s. And I referenced an article in my essay written by Dr. Ingelfinger, whom many people remember for the Ingelfinger Rule, which the New England Journal used to apply. You couldn't publish in the New England Journal if you had published or publicly presented your data prior to its presentation in the New England Journal. Anyway, Dr. Ingelfinger wrote an article which, as I say, I referenced in my essay, about the graying of grand rounds, when he talked about what grand rounds used to be like. It was a very almost sacred event where patients were presented, and then experts in the field would discuss the case and impart to the audience their wisdom and knowledge garnered over years of caring for patients with that particular problem, might- a disease like AML, or lung cancer, or adrenal insufficiency, and talk about it not just from a pathophysiologic standpoint, but from a clinician standpoint. How do these patients present? What do you do? How do you go about diagnosing and what can you do to take care of those kinds of patients? It was very patient-centric. And oftentimes the patient, him or herself, was presented at the grand rounds.
And then experts sitting in the front row would often query the speaker and put him or her under a lot of stress to answer very specific questions about the case or about the disease itself. Over time, that evolved, and some would say devolved, but evolved into more specialized and nuanced presentations, generally without a patient present, or maybe even not even referred to, but very specifically about the molecular biology of disease, which is marvelous and wonderful to talk about, but not necessarily in a grand rounds setting where you've got cardiologists sitting next to endocrinologists, seated next to nephrologists, seated next to primary care physicians and, you know, an MS1 and an MS2 and et cetera. So it was very evident to me that what I had witnessed in my early years in medicine had really become more and more subspecialized. As a result, grand rounds, which used to be packed and standing room only, became echo chambers. It was like a C-SPAN presentation, you know, where a local representative got up and gave a talk and the chambers were completely empty. And so we had to do things like force people to attend grand rounds like a Soviet Union-style rally or something, you know. You have to pay them to go. But it was really that observation that got me to thinking about it. And by the way, I love oncology and I think there's so much exciting progress being made that I want the presentations to be exciting to everybody, not just to the oncologist or the hematologist, for example. And what I was witnessing was kind of a formula, almost like a pancake formula, that everybody followed the same rules. You know, "This disease is the third most common cancer and it presents in this way and that way." And it was very, very formulaic. It wasn't energizing and exciting as it had been when we were discussing individual patients. So, you know, it just is what it is. I mean, progress is progress and you can't stop it. And I'm not trying to make America great again, you know, by going back to the 70s, but I do think sometimes we overthink what medical grand rounds ought to be as compared to a presentation at ASH or ASCO where you're talking to subspecialists who understand the nuances and you don't have to explain the abbreviations, you know, that type of thing.

Dr. Mikkael Sekeres: So I wonder, you talk about the echo chamber of the grand rounds nowadays, right? It's not as well attended. It used to be a packed event, and it used to be almost a who's who of who's in the department. You'd see some very famous people who would attend every grand rounds and some up-and-comers, and it was a chance for the chief residents to shine as well. How do you think COVID and the use of Zoom have changed the personality and energy of grand rounds? Is it better because, frankly, more people attend—they just attend virtually. Last time I attended, I mean, I attend our Department of Medicine grand rounds weekly, and I'll often see 150, 200 people on the Zoom. Or is it worse because the interaction's limited?

Dr. David Johnson: Yeah, I don't want to be one of those old curmudgeons that says, you know, the way it used to be is always better. But there's no question that the convenience of Zoom or similar media, virtual events, is remarkable. I do like being able to sit in my office where I am right now and watch a conference across campus that I don't have to walk 30 minutes to get to. I like that, although I need the exercise.
But at the same time, I think one of the most important aspects of coming together is lost with virtual meetings, and that's the casual conversation that takes place. I mentioned in my essay an example of the grand rounds that I attended given by someone in a different specialty who was both a physician and a PhD in biochemistry, and he was talking about prostaglandin metabolism. And talk about a yawner of a title; you almost have to prop your eyelids open with toothpicks. But it turned out to be one of the most fascinating, engaging conversations I've ever encountered. And moreover, it completely opened my eyes to an area of research that I had not been exposed to at all. And it became immediately obvious to me that it was relevant to the area of my interest, which was lung cancer. This individual happened to be just studying colon cancer. He's not an oncologist, but he was studying colon cancer. But it was really interesting what he was talking about. And he made it very relevant to every subspecialist and generalist in the audience because he talked about how prostaglandin has made a difference in various aspects of human physiology.

The other grand rounds that always sticks in my mind was presented by a long-standing program director at my former institution of Vanderbilt. He passed away many years ago, but he gave a fascinating grand rounds where he presented the case of a homeless person. I can't remember the title of his grand rounds exactly, but I think it was "Care of the Homeless" or something like that. So again, not something that necessarily had people rushing to the audience. What he did is he presented this case as a mysterious case, you know, "what is it?" And he slowly built up the presentation of this individual who repeatedly came to the emergency department for various and sundry complaints. And to make a long story short, he presented a case that turned out to be lead poisoning. Everybody was on the edge of their seat trying to figure out what it was. And he was challenging members of the audience and senior members of the audience, including the Chair, and saying, "What do you think?" And it turned out that the patient became intoxicated not by eating paint chips or drinking lead-infused liquids. He was burning car batteries to stay alive and inhaling lead fumes, which itself was fascinating, you know, so it was a fabulous grand rounds. And I mean, everybody learned something about the disease that they might otherwise have ignored, you know. If it had been titled "Lead Poisoning", I'm not sure a lot of people would have shown up.

Dr. Mikkael Sekeres: That story, David, reminds me of Tracy Kidder, who's a master of the nonfiction narrative, who will choose a subject and kind of just go into great depth about it, and that subject could be a person. And he wrote a book called Rough Sleepers about Jim O'Connell - and Jim O'Connell was one of my attendings when I did my residency at Mass General - and about his life and what he learned about the homeless. And it's this same kind of engaging, "Wow, I never thought about that." And it takes you in a different direction. And you know, in your essay, you make a really interesting comment. You reflect that subspecialists, once eager to share their insight with the wider medical community, increasingly withdraw to their own specialty-specific conferences, further fragmenting the exchange of knowledge across disciplines.
How do you think this affects their ability to gain new insights into their research when they hear from a broader audience and get questions that they usually don't face, as opposed to being sucked into the groupthink of other subspecialists who are similarly isolated?

Dr. David Johnson: That's one of the reasons I chose to illustrate that prostaglandin presentation, because again, that was not something that I specifically knew much about. And as I said, I went to the grand rounds more out of a sense of obligation than a sense of engagement. Moreover, our Chair at that institution forced us to go, so I was there not by choice, but I'm so glad I was, because like you say, I got insight into an area that I had not really thought about, and that cross-pollination and fertilization is really a critical benefit that I think you can gain at a broad conference like Medical Grand Rounds, as opposed to a niche conference where you're talking about APL. You know, everybody's an APL expert, but they never thought about diabetes and how that might impact on their research. So it's not like there's an 'aha' moment at every Grand Rounds, but I do think that those kinds of broad-based audiences can sometimes bring a different perspective that even the speaker, him or herself, had not thought of.

Dr. Mikkael Sekeres: I think that's a great place to end and to thank David Johnson, who's a clinical oncologist at the University of Texas Southwestern Medical School and just penned the essay in JCO Art of Oncology Practice entitled "An Oncologist's Guide to Ensuring Your First Medical Grand Rounds Will Be Your Last." Until next time, thank you for listening to JCO's Cancer Stories: The Art of Oncology. Don't forget to give us a rating or review, and be sure to subscribe so you never miss an episode. You can find all of ASCO's shows at asco.org/podcasts. David, once again, I want to thank you for joining me today.

Dr. David Johnson: Thank you very much for having me.

The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement.

Show notes: Like, share and subscribe so you never miss an episode and leave a rating or review.

Guest Bio: Dr. David Johnson is a clinical oncologist at the University of Texas Southwestern Medical School.
rWotD Episode 2929: USS Mercer (APL-39)

Welcome to Random Wiki of the Day, your journey through Wikipedia's vast and varied content, one random article at a time. The random article for Sunday, 11 May 2025, is USS Mercer (APL-39).

The second USS Mercer (APB-39/IX-502/APL-39) is a Benewah-class barracks ship of the United States Navy. Originally classified as Barracks Craft APL 39, the ship was reclassified as Self-Propelled Barracks Ship APB 39 on 7 August 1944. Laid down on 24 August 1944 by Boston Navy Yard, and launched on 17 November 1944 as APB 39, sponsored by Mrs. Lillian Gaudette, the ship was named Mercer, after counties in eight states, on 14 March 1945, and commissioned on 19 September 1945.

This recording reflects the Wikipedia text as of 06:00 UTC on Sunday, 11 May 2025. For the full current version of the article, see USS Mercer (APL-39) on Wikipedia.

This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License. Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes. Follow us on Mastodon at @wikioftheday@masto.ai. Also check out Curmudgeon's Corner, a current events podcast.

Until next time, I'm generative Ayanda.
We break down the superpower set in motion by the copywriter who changed the world. Apply it from now on.

But first, remember: if you haven't signed up yet, you can join the Press Start mailing list right now and receive a new sales tip every day.
Box2Box, with Rob Gilbert & Michael Edgley!

Liverpool's 20th English League title has been apparent for some time, but that did nothing to dull the celebrations as the title was sealed with four games to spare. Former Red Stephen Warnock joins Rob & Derek Dyson just hours after witnessing the scenes at Anfield first hand.

Auckland FC also lifted silverware this week, the A-League Men's premiership at the first time of asking. Daniel Garb casts his expert eye over their achievement - and less gratifying stories, like Macarthur's legal case and the APL's belt-tightening.

Also on the agenda: Crystal Palace charge into the FA Cup final, Socceroos & Matildas Central & plenty more!

Follow us on X: https://twitter.com/Box2BoxNTS
Like us on Facebook: https://www.facebook.com/profile.php?id=100028871306243
Enjoy our written content: https://www.box2boxnts.com.au/
… & Join us for Stoppage Time on Wednesday!

See omnystudio.com/listener for privacy information.
The People's Liberation Army (PLA, known in French as the APL), armed with fighter jets, a modernized navy, and a vastly superior strike force, has intensified its pressure on Taiwan and its 23 million inhabitants. Facing this Goliath, the small island is trying to use every asset it has to make Xi Jinping's dream of "reunification" too costly. By Nicolas Rocca, special correspondent in Taiwan, and Igor Gauquelin in Paris.

The Mirage 2000s take off and land in a constant ballet at Hsinchu air base, tasked with protecting the capital, Taipei, 80 km to the north. This west-coast city is also home to the headquarters of TSMC, the company whose cutting-edge semiconductors are vital to keeping the global economy running. A few days earlier, "immediate response" exercises had been launched, mobilizing every branch of the Taiwanese military to answer Chinese pressure.

"Most of the time we prepare the aircraft in a few minutes, but if we're in a real hurry, we can go faster," explains Lieutenant Colonel Wu Meng-che, standing beside one of the 54 fighters still operational out of the 60 delivered by France in the late 1990s.

Growing pressure

If these fighters with their tired airframes are still in service, it is largely because Taiwan faces a unique challenge: no one except the United States will now sell it weapons or military equipment, for fear of angering its Chinese neighbor. Yet last year more than 3,000 aircraft of the PLA [editor's note: APL, Armée populaire de libération, the French name of the Chinese military] were identified in Taiwan's ADIZ (air defense identification zone), up from 972 in 2021. "Most of the time we already have aircraft in the air that carry out the necessary checks, but sometimes we're ordered to scramble," says the 39-year-old lieutenant colonel. "Our command center tells the Chinese aircraft: 'Our limit is here, you cannot cross it,' but they answer: 'This is our territory, our airspace.'" Intimidation made possible by the imbalance of forces. Despite a recent delivery of 66 new American F-16s, its aging Mirages and its production of indigenous aircraft, Taiwan has just under 400 fighter jets. China has more than 1,500, a number that keeps growing.

This imbalance is flagrant in every sector. Thinned out by a plummeting birth rate, the ranks of the Taiwanese military keep shrinking. In addition to compulsory military service, extended from four months to one year for those born after 2004, which feeds its 1.6 million reservists, the military relies on its professional soldiers: more than 152,000 in 2024. Limited numbers against the PLA's 2 million career troops.

So in the island's cities, posters urge young people to enlist. "I'd be happy to make a career in the navy; my father tells me it's a good idea and the pay is good," says a 17-year-old brought by his high school to the port of Keelung to visit two frigates and a supply ship put on show by the navy. The same question to another teenager draws the opposite answer: "You learn nothing in a year of military service. And if we go to war, our military doesn't have the capacity to resist. What am I supposed to do? Fight? Flee?" Reactions that testify to the persistent uncertainty over Taiwanese resilience in the event of a conflict.
"This question of the spirit of defense in Taiwan is not clear," summarizes Mathieu Duchatel, director of the Asia program at the Institut Montaigne. "On Beijing's side, they can see that Russia made a terrible misjudgment of Ukraine's determination to resist. One could even argue that this uncertainty about how Taiwanese society would react is itself a form of deterrence against China."

Also listen: Taiwan shaken by Chinese infiltration

"Porcupine"

That word sums up the mindset of the island's military, symbolized by its bet on asymmetric defense, the so-called "porcupine" strategy, in the words of former president Tsai Ing-wen. Like the animal, the goal is to use limited means to make the Taiwanese prey too hard for the Chinese predator to swallow. "The military is in transition, but it is heir to the KMT (Kuomintang) army that fled China in 1949, with heavy platforms, tanks, big ships," explains Tanguy Le Pesant, associate researcher at the French Centre for Research on Contemporary China. "Now it wants to equip itself with smaller, cheaper weapons: anti-ship missiles and aerial, surface, and underwater drones."

That transformation is already well under way, with a dynamic local industry producing missiles and drones in large quantities. But tradition persists. "For a long time there was a cultural inertia within the Taiwanese military in favor of big platforms, which are also easy targets," summarizes Marc Julienne, director of Ifri's Center for Asian Studies. An inertia far from gone, as shown by the much-criticized and costly Hai Kun project, the first indigenous submarine, whose final trials are supposed to take place in April 2025. But against the Chinese fleet and the sixty or so submarines it would have to face in a shallow strait, its usefulness is hotly debated. "The other element for Taiwan is to use the island's geography to its advantage," explains Tanguy Le Pesant. "There are about a hundred peaks where the Taiwanese military can hide and fire salvos of missiles, and the coasts, steep and very hard to access, also work in its favor." Enough to make an amphibious landing extremely complex, despite the imposing barges recently developed by the PLA.

While Taiwan has no shortage of assets to discourage China from invading, "our security also depends on the credibility of the American military in the region," acknowledges François Wu, the island's deputy minister of foreign affairs. And nothing guarantees Washington's continued support like preserving Taiwan's central place in the world economy: 68% of semiconductors are produced by Taiwanese companies, and 90% of the most advanced chips by TSMC, which has just invested 100 billion dollars in the United States. This industry, nicknamed the "silicon shield", looks like an even more crucial life-insurance policy than the military for the island of 23 million inhabitants.
For the fourth year in a row, NASA is the proud recipient of the prestigious Collier Trophy.
Welcome back to our podcast, Orange Army!

This episode takes you inside the Sunrisers Hyderabad's wild IPL 2025 ride. We're talking explosive batting, yes. But the bowling? That's where the real story is.

We dissect the powerplay puzzle. Is Abhishek Sharma's role a defensive move, or an attacking gamble gone wrong? Then, Abhinav Manohar, "Bagheera": is his form sustainable? And Shami's fitness... a major worry. Plus, Pat Cummins' captaincy under the microscope.

To give us a unique angle, we have a special guest: a talent scout and strategy analyst from the Andhra Premier League's Uttarandhra Lions, Venkatesh (X handle: @Venky_DK). He brings a local perspective by analyzing SRH's performance. He'll also discuss the rise of regional talent and the impact of the APL.

Get expert analysis, in-depth discussions, and a fresh perspective. And, a detailed preview of their crucial clash against KKR.
In this episode I share the most transformative practice, the one that marked a before and after in my energy, my focus, and my life. Get ready to integrate something that could change yours too.

Also, in my latest post I shared the 3 actions you would never guess also helped me reach this goal. Apply them all and you'll see how the change compounds exponentially.

Follow me: @isabelhuerta.qcx
We hit the ground running with a CBS poll dissecting Trump's latest speech, and DJ Daniel stirring the pot in the media. As Democrats falter in facing non-victim narratives, Trump's fiery social media takes and Don Jr.'s insights on the State of the Union stoke the flames.

Tensions rise with Cory Booker's bizarre influencer interview antics and Byron Donalds' relentless grilling of sanctuary city spending. Discussions heat up over immigration policies, culminating in Rep. Mace's scorching critique of Chicago.

The episode peaks with Trump's bold declaration on gender, Charlie Kirk's controversial study, and Megyn Kelly's fierce takedown of John Fetterman. Wrapping up, Trump sends a stark message to Hamas, and the clash between Andrew Tate and DeSantis ignites debate. Tune in for an unmissable, fiery discourse on today's most pressing political issues.

Visit https://readywise.com/ code CHICKS10 for 10% off your entire purchase. Prepare when times are good, so you are ready when they are bad.

Lose weight the smarter way. Visit https://TakeLean.com and use code Chicks20 for 20% off your first order.

Never run out of MEAT: go to https://omahasteaks.com/CHICKS, subscribe, and get 12 FREE burgers, FREE shipping, and an EXTRA 10% OFF. Minimum purchase may apply.

Maximize your rest as Daylight Savings Time begins! Visit https://HealthyCell.com/Chicks code CHICKS to get REM Sleep and 20% off your first order.
Today, Lindy and Meagan are recording this episode in their most natural state: lying down.

That's right–these two sleepy gals are broadcasting from a very fancy Big Fig mattress at On Air Fest in Brooklyn, NYC aka DA BIG APPLE. And they're not alone! They're in bed with the biggest grifter in the biz, a legendary podcast king and professional menace… Ronald Young Jr. Big Ron gives us the scoop on New York's hottest restaurants (a sexy little local joint, ~Aplé-beis~), gives us many a glad tiding, and tells us Watch-him Watchin! Are you watching Lady Matlock??? Tell us your thoughts!!!

BFF Party Line: (703) 829-0003.

If you'd like to keep Meagan licensed to practice law in New York, DONATE $400 TO OUR PATREON: patreon.com/textmebackpod

Do we really have to watch Beast Games, Suits, or Succession?
Box2Box with Rob Gilbert and Derek Dyson!

Few clubs have banged the drum for a national second division as long and loudly as South Melbourne, and their dream came true - in part - with last week's announcement of the Australian Championship. Chairman Bill Papastergiadis returns to discuss how the news has been taken at Lakeside.

Abroad, Liverpool's weekend win over Manchester City has opened up a provisional eleven-point gap on Arsenal atop the Premier League table. Is the title race as good as over? The Athletic's James Pearce returns with excitement nearing fever pitch on Merseyside.

Also on the agenda: the Young Socceroos seal the deal in China, Nick Garcia departs the APL, Sydney FC advance in Asia & more…

Follow us on X: https://twitter.com/Box2BoxNTS
Like us on Facebook: https://www.facebook.com/profile.php?id=100028871306243
Enjoy our written content: https://www.box2boxnts.com.au/
… & Join us for Stoppage Time on Wednesday!

See omnystudio.com/listener for privacy information.
If you want to boost your confidence, reduce anxiety, and multiply your chances of success, you need a structured study plan. In this video, I share 7 productivity laws designed specifically for opositores (candidates preparing for Spain's competitive public exams), based on evidence and on strategies proven by thousands of people who have already won their post. Less effort, more results. Apply them starting today!

➡️ Sign up free for the daily Consejo Educativo and receive it every day at 3 p.m. to become a better teacher: https://preparadoredufis.com/consejo-educativo-diario/

════════════════

Sections of our channel by category ➜ Find them here: https://www.youtube.com/c/OposicionesdeEducaci%C3%B3n/playlists

════════════════

⚡️ Is YouTube not enough and you want to go further? Follow us on other social networks!
Instagram: https://www.instagram.com/diegofuentes.oposiciones
TikTok: https://www.tiktok.com/@diegofuentes.oposiciones
My website: https://preparadoredufis.com/

════════════════

VIDEO INDEX
0:00 Introduction
0:52 Work smart, not hard
2:05 Use pressure to your advantage (Yerkes-Dodson law)
3:12 Flow state and progressive study sessions
4:30 Leave tasks unfinished to boost memory (Zeigarnik effect)
5:35 Use deadlines to stop procrastinating (Parkinson's law)
6:20 Focus on what really matters (Pareto principle)
7:10 Tackle the hardest task first (Laborit's law)
8:00 How to apply these laws to your study routine

Subscribe to the channel and give it a like for more strategies to bring you closer to your dream post!
It was looking like a pretty familiar story, but a second-half change made all the difference as the Vuck took the points against Wellington Phoenix while the Wuck professionally took care of Central Coast.

Follow us on Twitter, Instagram & Facebook
Support us on Patreon
Listen to our interview with John Stensholt regarding Melbourne Victory's finances HERE

MON THE VUCK
While Apple Music, TIDAL and Amazon Music are all pushing spatial audio and immersive sound as the next big thing, the current reality is that, to get the best, most immersive effect, you need speakers: lots and lots of speakers. Of course, you *can* listen to immersive sound formats like Dolby Atmos and DTS:X on headphones, but the current processes to render a multi-channel recording into a 2-channel binaural signal, suitable for playback on headphones, leave room for improvement.

APL (Advanced Psychoacoustics Lab) is dedicated to fixing this problem. The company's Virtuoso software can convert any 2-channel or multi-channel recording to a standard binaural headphone mix that maintains all the sonic cues of a real three-dimensional mix or space. Join eCoustics CEO Brian Mitchell on this episode of the eCoustics podcast as he chats with APL founder Professor Hyunkook Lee to discuss the current state of immersive sound over headphones and what can be done to make it better.

Learn more at: https://apl-hud.com/

Keep up with the latest audio and video news at https://www.ecoustics.com

Thank you to our sponsors SVS & Q Acoustics! For more information on these stellar brands, please click the links below:
https://www.svsound.com
https://www.qacoustics.com

Credits:
• Original intro music by The Arc of All. sourceoflightandpower.bandcamp.com
• Voice Over Provided by Todd Harrell of SSP Unlimited. https://sspunlimited.com
• Production by Mitch Anderson, Black Circle Studios. https://blackcircleradio.com

#dolbyatmosheadphones #binauralsound #immersivesound #dolbyatmos #spatialaudio #virtuososoftware #aplaudio #aesnews #aes #audioengineers
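For listeners curious what that rendering step involves mechanically: the core of any binaural downmix is convolving each speaker feed with a pair of head-related impulse responses (HRIRs) measured for that speaker's direction, then summing the results per ear. The sketch below is a minimal, static illustration in Python; the function name, array shapes, and normalization are illustrative assumptions for this example, not APL's actual Virtuoso algorithm, which layers on far more (head tracking, room modeling, per-object processing).

# Minimal sketch of static binaural rendering, assuming you already have
# HRIRs for each source direction. Shapes and names are hypothetical
# placeholders, not any product's real API.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(channels: np.ndarray, hrirs: np.ndarray) -> np.ndarray:
    """Fold an N-channel mix down to 2-channel binaural audio.

    channels: shape (n_channels, n_samples), one row per speaker feed.
    hrirs:    shape (n_channels, 2, hrir_len), a left/right HRIR pair
              per speaker direction.
    Returns:  shape (2, n_samples + hrir_len - 1).
    """
    n_ch, n_samp = channels.shape
    out = np.zeros((2, n_samp + hrirs.shape[2] - 1))
    for ch in range(n_ch):
        for ear in (0, 1):
            # Convolving a feed with that direction's HRIR imprints the
            # interaural time/level differences and spectral cues the
            # brain uses to localize a source in 3D space.
            out[ear] += fftconvolve(channels[ch], hrirs[ch, ear])
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out  # normalize to avoid clipping

The gap Professor Lee's research targets is everything this toy version leaves out: HRIR interpolation for arbitrary source positions, room reflections, and personalization to the listener's own ears.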
Happy Friday, Wholigans! On today's episode of Who's There, our weekly call-in show, we celebrate Conclave's Oscar nominations (#ThisIsNotAConclavePodcast) before taking your comments about Christine Quinn's hatred for oligarchs and the horrible Who that the two Entrepreneurial Jessicas have in common. Moving on, it's time for questions about Bowen Yang and Rachel Sennott's latest gig, what Apl.de.ap and Taboo are up to (along with the iconic name for Black Eyed Peas stans), Daniel Powter's international success, whether or not the Superman Curse will affect David Corenswet, and more!

As always, call in at 619.WHO.THEM to leave questions, comments & concerns for a future episode of Who's There?. Get a ton of bonus content over on Patreon.com/WhoWeekly

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy

Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
He says 2024 was a good year for him: he published a book, his son got into a gymnasium (selective secondary school), and he traveled a lot. Ivan Tesař became a surprise debut author at fifty-two when the Brno publisher Větrné mlýny released his collection of short stories and poetic texts titled Na kusy. A former journalist and media analyst who sat on the Czech Radio Council from 2011 to 2017, he has been deputy chair of the Czech Television Council since the end of 2023. It is an interesting and dramatic period: the public broadcaster's financing is unresolved (an increase in license fees awaits parliament), and a debate is under way in parallel about its very purpose. We talked about that in the podcast, and also about writing, aging, and the world.

As usual, here is an article written from the transcript by an AI, Claude 3.5 Sonnet:

Ivan Tesař: He published his first book after fifty, is planning to cross the Alps with his son, and may write a novel about the Czech soul

At a time when most people rate the past year as exceptionally difficult, a man sits in the studio who cheerfully claims the opposite. Ivan Tesař, writer, analyst, and deputy chair of the Czech Television Council, is a mix of apparent opposites: a man who can predict geopolitical crises yet writes poetry; a professional who analyzes hard data yet believes in the power of personal stories.

His career path resembles a film about the transformations of the 1990s: from stagehand at the legendary Braník Theatre, through a meteoric rise in the media, to running a news agency's economic desk at just twenty-two. "It was funny when a twenty-year-old kid came to Vienna for advice, and the man in his position there was a sixty-year-old editor-in-chief," he recalls with a smile.

Today Tesař makes his living analyzing global events for private clients. His team predicted the Russian invasion of Ukraine six months before it began. "If you can dig through hundreds of pieces of information every day and have the erudition to interpret them, you reach a forecasting accuracy of around 85 percent," he explains.

Alongside his analytical work, he devotes himself to literature. His recently published book, combining short stories and poetry with photographs by the acclaimed Karel Cudlín, is a bold experiment. "Publishers usually hate this mix of genres," he admits, "but Větrné mlýny took the risk."

As a member of the Czech Television Council, Tesař wrestles with the existential questions facing public media. He points to the critical state of ČT's finances, with reserves having fallen dramatically from 3.8 billion to a mere 350 million crowns. He also offers a radical proposal: limit directors to a maximum of two terms. "After a certain time you start running on autopilot and create no pressure for innovation," he argues.

His new literary project promises to be ambitious: a generational novel about a ninety-year-old man whose life tracks the history of Czechoslovakia and the Czech Republic. "It is the incredible story of a man who lived through everything from the 1930s, the Protectorate, 1948, the sixties, and normalization up to the present," he says.

His literary models include Jan Zábrana, Jan Balabán, and W. G. Sebald, authors who masterfully capture personal stories against the backdrop of great historical events. That inspiration shows in his own writing, where he tries to weave together the personal and historical planes of a story.

Although he moves in the world of high politics and media, he keeps hold of the personal side of life. For the summer of 2025 he is planning a demanding crossing of the Alps with his fifteen-year-old son: 670 kilometers and 31,000 meters of climbing in 32 days.
"Vnímám to jako poslední velkou společnou akci se synem, než se vydá vlastní cestou," říká.Tesařův příběh ukazuje, že i v době specializace je možné propojovat zdánlivě neslučitelné světy - analytickou přesnost s uměleckou citlivostí, profesionální odstup s osobní angažovaností. Jeho optimistický pohled na svět přitom není naivní - je to optimismus člověka, který vidí do temných zákoutí světové politiky, ale přesto věří v lepší budoucnost.
The Vuck cough up two goals in injury time and we are all left asking: where to from here? Winless in five and seemingly no clarity on whether Diles is the man going forward. This is a podcast full of questions.

Follow us on Twitter, Instagram & Facebook
Support us on Patreon
Listen to our interview with John Stensholt regarding Melbourne Victory's finances HERE

MON THE VUCK
In this week's episode we'll learn about the role of iron in myelodysplastic syndromes, or MDS. After that: long-term treatment outcomes in immune thrombocytopenia from the STOPAGO study. Finally, new insights into APL treatment outcomes and prognostic factors from the large-scale HARMONY APL project, which used ATRA-arsenic combination therapy.

Featured Articles:
Genetic iron overload aggravates, and pharmacological iron restriction improves, MDS pathophysiology in a preclinical study
Long-term follow-up of the STOPAGO study
Acute promyelocytic leukemia: long-term outcomes from the HARMONY project
#668. This way of training will not only wipe out the excuses you comfortably leaned on to avoid moving; with just a few seconds of your day, it will also completely change how you have understood training, strength, and muscle mass until now.

• Notes for this episode: https://podcast.pau.ninja/668
• Community + exclusive episodes: https://sociedad.ninja/

(00:00) Introduction
(2:26) Use common sense in your training plan
(5:11) What the Grease the Groove method is
(8:00) How to follow the method
(9:20) Why GtG works
(10:09) Origins of the method
(12:04) How to do the Grease the Groove method from scratch
(13:53) Apply it
(16:28) Periodize
(19:27) Progress
(23:54) Will you gain hypertrophy with Grease the Groove?
(28:23) What results has Greasing the Groove produced?
(30:42) Can I combine GtG with other training?
(31:33) Is it only for pull-ups, push-ups, or calisthenics?
In this episode of the Startup CPG podcast, Daniel Scharff sits down with David Fudge, co-founder of Aplós, to discuss the journey of building a premium non-alcoholic spirits brand. David shares insights from his time leading brand at Bonobos, where he helped pioneer the direct-to-consumer model, and how those experiences informed his approach to creating Aplós, a brand redefining the cocktail experience without compromise.

They explore the growing non-alcoholic beverage market, the importance of thoughtful branding, and how Aplós balances innovation and tradition with the help of renowned mixologist Lynette Marrero. David provides actionable strategies for scaling an e-commerce business, including influencer partnerships, creative optimization, and leveraging AI tools to stay ahead in a competitive landscape.

David also reflects on lessons learned from both successes and missteps, offering actionable advice for aspiring founders looking to carve out a space in competitive markets. Whether you're interested in building a brand, excelling in e-commerce, or creating a product that stands the test of time, this episode is packed with valuable takeaways.

Don't miss this engaging conversation with one of the most thoughtful minds in CPG. Tune in now!

Listen in as they share about:
The Vision and Product Development of Aplós
The Evolution of the Non-Alcoholic Beverage Market
Branding and Marketing Strategy
E-commerce Strategies
Notable Challenges and Low ROI Learnings
Tools and Recommendations
Advice for Entrepreneurs

Episode Links:
Website: https://www.aplos.world/
LinkedIn: https://www.linkedin.com/in/dwfudge/

Don't forget to leave a five-star review on Apple Podcasts or Spotify if you enjoyed this episode. For potential sponsorship opportunities or to join the Startup CPG community, visit http://www.startupcpg.com.

Show Links:
Transcripts of each episode are available on the Transistor platform that hosts our podcast here (click on the episode and toggle to "Transcript" at the top)
Join the Startup CPG Slack community (20K+ members and growing!)
Follow @startupcpg
Visit host Daniel's LinkedIn
Questions or comments about the episode? Email Daniel at podcast@startupcpg.com

Episode music by Super Fantastics
This shadowy scene is the night side of Pluto, a dim and distant world. In this breathtaking view from space, the Sun is 4.9 billion kilometers (about 4.5 light-hours) away. The picture was taken by the far-flung New Horizons spacecraft in July 2015, when it was 21,000 kilometers from Pluto, about 19 minutes after its closest approach. This member of the Kuiper Belt cuts a dramatic silhouette, and the image reveals that Pluto's hazy atmosphere is in fact thin and remarkably complex. In the crescent twilight scene at the top of the image are the southern nitrogen ice plains, now known as Sputnik Planitia, and the rugged water-ice mountains of Norgay Montes.

———

This is the Taigi (Taiwanese) podcast of NASA's Astronomy Picture of the Day.
Original: https://apod.nasa.gov/
Taiwanese version: https://apod.tw/
Today's article: https://apod.tw/daily/20241116/
Image: NASA, Johns Hopkins Univ./APL, Southwest Research Institute
Music: P!SCO - 鼎鼎
Voice: 阿錕
Translation: An-Li Tsai (TARA)
Original article: https://apod.nasa.gov/apod/ap241116.html
Powered by Firstory Hosting
In this episode, Cody Askins sits down with Alison Sosa to dive into a game-changing service that every insurance agency owner needs: APL, the best solution for commission accounting!
The Vuck chalk up a well-earned win against Macarthur, with Reno Piscopo announcing himself to the Vuck faithful with a Sunday special. The Wuck also opened their season with a gutsy win, sealed by an equally brilliant winner from Ava Briedis.

Follow us on Twitter, Instagram & Facebook
Support us on Patreon

MON THE VUCK
Catch L'Actu c'est Vous, your live show, on Le Média's 24/7 channel and channel 165 of the Freebox, every week, Monday through Thursday at 1 p.m., presented by Cyril Lemba. If you receive the RSA, APL housing benefits, or the AAH (the allowance for disabled adults), or if you string together short-term contracts or temp assignments, know that this can look suspicious to the CAF. France's family benefits agency, via its screening software, targets the most precarious recipients first and assigns them risk scores. Some fifteen organizations have filed an appeal with the Conseil d'État to have this algorithm for scoring CAF recipients banned. We discuss it right away with Bastien Le Querrec of La Quadrature du Net. Two witnesses, whose first names have been changed, tell us from their respective vantage points how these checks play out more broadly.
In this episode of Transform, the Samis are spilling all the tea about Sami B's joint bachelor/bachelorette weekend! From why they chose a joint celebration to why Austin was the perfect destination, they're diving into all the behind-the-scenes details. Tune in as they chat about everything from the pre-trip prep to the wild itinerary (and yes, the moment Sami Clarke's phone took a dive into Lake Austin). This episode is packed with giggles, stories, and all the bach weekend tea!

Transform Instagram - click here!
Sami Spalter Instagram - click here!
Sami Clarke Instagram - click here!
FORM Shop - click here!
FORM Website - click here!
The house that we rented in Austin on the lake for Sami's bach - click here!
Frankies Bikinis - click here!
Kopari gold sunscreen - click here!
Lands' End big tote bags - click here!
Kat the Label lingerie - click here!
FORM lounge sets - click here!
FORM espresso sets - click here!
FORM thunderstorm set - click here!
APL slides - click here!
Funboy lake float - click here!

Code TRANSFORM for 20% off an annual membership. Please note that this episode may contain paid endorsements and advertisements for products and services. Individuals on the show may have a direct or indirect financial interest in products or services referred to in this episode.

Sponsors:
Go to Hungryroot.com/TRANSFORM to get 40% off your first delivery and get your free veggies.
Seed.com/transform and use code 25TRANSFORM to get 25% off your first month.
Get 25% off your first month at ritual.com/TRANSFORM.
Taylor Farms Chopped Salad Kits are available at all major grocery stores.
Visit weliveconscious.com and use code TRANSFORM at checkout for 15% off your first purchase.
Make switching seasons a breeze with Quince's high-quality closet essentials. Go to Quince.com/transform for free shipping on your order and 365-day returns.

Produced by Dear Media. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Tonight we look at a possible time traveler and some of the things he said before heading back.

-=Links=-
If you would like to join in on the conversation, join me on Discord.
Discord: https://discord.gg/a6UJEb5Dj3
Twitter: https://twitter.com/magicsenshi
Rumble: https://rumble.com/c/c-5613161
Fringe Radio: https://fringeradionetwork.com/live
Spirit Force: https://faithbucks.com

If you would like to be a guest on the show or have a topic that you want explored, please email me with the subject "Guest".
Email: captainepoch79@proton.me

If you want to support this podcast:
https://paypal.me/Magicslayer/
Cashapp: $CaptainEpoch

Music by UDIO
The 20th season of the A-League is upon us and we are pumped! In this season preview we put forward our best XI, discuss the key players, review the fan forum, share our season predictions and preview the season opener against Central Coast. Mon!
Follow us on Twitter, Instagram & Facebook
Support us on http://www.patreon.com/ForVucksSake
Listen to our interview with journalist Paul Brown about 777 Partners here
In this podcast Jon Westfall and I discuss:
GNU APL
J901 for iPad: inspired by APL (APL & J were both created by Ken Iverson)
Google is developing tools to let you run Debian in a VM on Android
Finger mouse vs. vertical mouse vs. conventional mouse
Apple Watch Vitals is detecting illnesses before they appear; the Oura ring has done this for a number of years now. Is it useful to know that you're about to get sick?
Heart rate monitors
Blood pressure needs to be taken properly
Android OS 15
Mac mini M4, iPad mini 7?
We are back! The new season is nearly upon us. It's a new era for Melbourne Victory under Patrick Kisnorbo, and this week we wrap up all the goings-on in the off season. FVS also celebrates its 10th season and we will let you in on what we have planned to celebrate. MON!
Follow us on Twitter, Instagram & Facebook
Support us on http://www.patreon.com/ForVucksSake
Listen to our interview with journalist Paul Brown about 777 Partners here
In this week's episode we'll discuss how a novel tripartite fusion drives treatment resistance in acute promyelocytic leukemia (APL). In some patients with atypical APL, these novel retinoic acid receptor gene fusions truncate the ligand-binding domain of the retinoic acid receptor protein, rendering the disease non-responsive to treatment with all-trans retinoic acid. After that: managing immune thrombotic thrombocytopenic purpura (iTTP) without therapeutic plasma exchange (TPE). Finally, hope for motherhood after allogeneic HCT.

Featured Articles:
Critical role of tripartite fusion and LBD truncation in certain RARA- and all RARG-related atypical APL
Management of immune thrombotic thrombocytopenic purpura without therapeutic plasma exchange
Hope for motherhood: pregnancy after allogeneic hematopoietic cell transplantation (a national multicenter study)
In El Efecto Leopi, we explore a common mistake in relationships: assuming that both partners share the same idea of what it means to be boyfriend and girlfriend, or a couple. Each person and each culture may define it differently.
In this episode, Conor and Bryce follow up on a conversation from 2.5 years ago.
Link to Episode 200 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Twitter
ADSP: The Podcast
Conor Hoekstra
Bryce Adelstein Lelbach
Show Notes
Date Recorded: 2024-08-26 & 2022-03-27 & 2024-09-18
Date Released: 2024-09-20
ADSP Episode 71: APL, COBOL, BASIC & More
ADSP Episode 72: C++ Algorithm Family Feud!
NDC TechTown
Bayesian Credibility
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
By popular demand, our next series that we are excited to share with you is on MDS/AML! As we prepare for the release of the first episode next week, let's throw it back to Episode 019 in our Heme/Onc Emergencies series and talk about APL!

Episode contents:
- How do we diagnose APL?
- What are characteristic findings of APL?
- What is the acute management of this disease?

Have some time and want to make some extra money? Get paid to participate in market research surveys: https://affiliatepanel.members-only.online/FOC_24?utm_campaign=FOC&utm_source=email&utm_medium=email

Want to review the show notes for this episode and others? Check out our website: https://www.thefellowoncall.com/our-episodes

Love what you hear? Tell a friend and leave a review on our podcast streaming platforms!
Twitter: @TheFellowOnCall
Instagram: @TheFellowOnCall
Listen in on: Apple Podcast, Spotify, and Google Podcast
About our Guest:
Judith Donath
https://cyber.harvard.edu/people/jdonath

Key Discussion Points:
Understanding Signaling Theory:
* The foundation of signaling theory in communication.
* The balance between honest and deceptive signals.
Evolutionary Biology and Communication:
* Darwin's insights on animal communication.
* Zahavi's Handicap Principle and its role in ensuring signal honesty.
* Maynard Smith's Index Signals and their reliability without cost.
AI and the Evolution of Communication:
* The impact of AI on the reliability of communication signals.
* Challenges posed by deepfakes in video and audio.
* The arms race between deception technologies and verification methods.
Cultural and Institutional Roles:
* How culture and institutions uphold the reliability of signals.
* The interplay between technological advancements and societal norms.
Future of Communication in the Digital Age:
* Strategies for developing secure communication channels.
* Balancing privacy with the need for verification.
* The role of trusted sources in maintaining signal integrity.

Papers and Books Mentioned:
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433
Zahavi, A. (1975). Mate selection—a selection for a handicap. Journal of Theoretical Biology, 53(1), 205-214. https://doi.org/10.1016/0022-5193(75)90111-3
Veblen, T. (1899). The Theory of the Leisure Class. New York: Macmillan. https://moglen.law.columbia.edu/LCS/theoryleisureclass.pdf https://dn720401.ca.archive.org/0/items/theoryofleisurec01vebl/theoryofleisurec01vebl.pdf
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45. https://doi.org/10.1145/365153.365168
Donath, J. S. (2002). Identity and deception in the virtual community. In Communities in Cyberspace (pp. 37-68). Routledge. https://vivatropolis.com/papers/Donath/IdentityDeception/IdentityDeception.pdf
Current progress on the forthcoming book, Signals, Truth & Design: https://vivatropolis.com/judith/signalsTruthDesign.html
Donath, J. (2014). The Social Machine: Designs for Living Online. MIT Press. https://direct.mit.edu/books/monograph/4037/The-Social-MachineDesigns-for-Living-Online

Other:
The story about the Ferrari executive deepfake attempt: https://www.carscoops.com/2024/07/ferrari-ceo-impersonator-uncovered-by-colleague-in-deepfake-call/
We geeked out for a moment on programming languages. Learn about them here:
The C language: https://en.wikipedia.org/wiki/C_(programming_language)
Introduction to C: https://www.w3schools.com/c/c_intro.php
APL: https://en.wikipedia.org/wiki/APL_(programming_language)
Learn APL: https://xpqz.github.io/learnapl/intro.html
Try APL: https://tryapl.org
Lisp: https://en.wikipedia.org/wiki/Lisp_(programming_language)
Learn Lisp: https://www.geeksforgeeks.org/introduction-to-lisp/
David Kostiner, CEO & Co-Founder of Lumenary, and Robbie Schneider, Inventor and CTO.

David Kostiner and Robbie Schneider are the driving forces behind Lumenary, a company at the forefront of cannabis technology and innovation.

David Kostiner is a multifaceted entrepreneur and legal expert with a rich background in both the music and legal industries. He began his career as a professional drummer and member of the DreamWorks artist Creeper Lagoon. Driven by his passion for law, David transitioned to studying and practicing entertainment law, becoming the managing partner of Counsel LLP since 2009, and sharing his expertise as a law professor at UC San Francisco School of Law. A serial entrepreneur, David has co-founded several startups, including the Independent Online Distribution Alliance, and has collaborated with industry figures like Black Eyed Peas member apl.de.ap, an investor in Lumenary. His dedication to representing artists and his passion for cannabis innovation led him to co-found Lumenary, where he supports the development of cutting-edge technology like the Beam laser vaporizer.

Robbie Schneider, Lumenary's Inventor and CTO, is an artist and musician with a deep passion for technology. He has spent the majority of his life immersed in the cannabis industry, mastering the art of growing, vending, and extracting. With the help of a talented team of engineers, Robbie has dedicated the past decade to creating the finest cannabis device they could envision. The Beam, a product of this dream, is a testament to Robbie's commitment to blending artistry with technological innovation, making a significant impact in the cannabis industry.

Together, David Kostiner and Robbie Schneider have combined their unique talents and experiences to push the boundaries of cannabis technology, leading Lumenary in its mission to revolutionize the industry with the Beam laser vaporizer.

David Kostiner LinkedIn
thebeamlaser.com
Instagram
In this episode, Conor and Aaron Hsu record from the Eagle Pub in Cambridge, UK and chat about the importance of algorithms and tersity in programming languages.
Link to Episode 197 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Twitter
ADSP: The Podcast
Conor Hoekstra
About the Guest
Aaron Hsu is the implementor of Co-dfns and an advocate for a terse and minimal array programming style. Hsu has a background in academic functional programming, and was primarily a Scheme programmer for ten years before learning APL. He was introduced to APL by Morten Kromberg while working on a GPU-hosted compiler, and switched to Dyalog APL for the project, which is now Co-dfns.
Show Notes
Date Recorded: 2024-08-21
Date Released: 2024-08-30
ArrayCast Episode 19: Aaron Hsu
Co-dfns
The Eagle Pub, Cambridge
Living The Loopless Life: Techniques For Removing Explicit Loops And Recursion by Aaron Hsu
The Nano-parsing Architecture: Sane And Portable Parsing For Perverse Environments by Aaron Hsu
Algorithms as a Tool of Thought // Conor Hoekstra // APL Seeds '21
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
In this episode, Anthony and Bernie discuss the recently reported APOLLO trial in high-risk APL and debate whether this should represent the new standard of care in this population. We also review key practical considerations for the treatment of APL patients, including side effect management and treatment of APL in unique clinical scenarios. APOLLO: https://library.ehaweb.org/eha/2024/eha2024-congress/422206/uwe.platzbecker.first.results.of.the.apollo.trial.a.randomized.phase.iii.study.html
In this episode, Conor and Aaron Hsu record from the Eagle Pub in Cambridge, UK and chat about algorithms in APL and algorithm implementations.
Link to Episode 196 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Twitter
ADSP: The Podcast
Conor Hoekstra
About the Guest
Aaron Hsu is the implementor of Co-dfns and an advocate for a terse and minimal array programming style. Hsu has a background in academic functional programming, and was primarily a Scheme programmer for ten years before learning APL. He was introduced to APL by Morten Kromberg while working on a GPU-hosted compiler, and switched to Dyalog APL for the project, which is now Co-dfns.
Show Notes
Date Recorded: 2024-08-21
Date Released: 2024-08-23
ArrayCast Episode 19: Aaron Hsu
Co-dfns
The Eagle Pub, Cambridge
Iverson College
ArrayCast Episode 63: Uiua, a Stack based Array language
ArrayCast Episode 77: Kai Schmidt and the Evolving Uiua Programming Language
Uiua Language
Scheme Language
Stepanov's "Notes on Higher Order Programming in Scheme"
C++98 std::inner_product
C++98 std::adjacent_difference
C++11 std::iota
C++17 std::reduce
Dyalog APL ∨ (GCD)
Dyalog APL ∧ (LCM)
C++ Containers
RAII
C++ Core Guidelines
Dyalog APL ⍳ (iota)
Dyalog APL ⍳ (dyadic iota)
Dyadic APL Possible Implementation in C++ (Godbolt)
Dyadic APL Possible Implementation in BQN
C++20 std::ranges::binary_search
NVIDIA cuCollections (cuco)
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
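For readers who don't know APL, here is a rough Python sketch (our illustration, not code from the episode) of what a few of the primitives and algorithms listed above correspond to:

```python
# Hypothetical sketch: rough Python analogues of the APL primitives and
# C++ algorithms named in the show notes above.
from math import gcd, lcm          # lcm requires Python 3.9+
from functools import reduce

xs = [12, 18, 24]

# Dyalog APL reductions ∨/xs and ∧/xs: GCD and LCM folded over a vector,
# much like std::reduce with a custom binary operation.
print(reduce(gcd, xs))   # 6
print(reduce(lcm, xs))   # 72

# Monadic ⍳ (iota): index generation, like C++11 std::iota.
print(list(range(6)))    # [0, 1, 2, 3, 4, 5]

# Dyadic ⍳ (index-of): position of each y in x; items not found map to
# len(x), matching APL's convention (with index origin 0).
x = [10, 20, 30]
print([x.index(y) if y in x else len(x) for y in [20, 99]])  # [1, 3]
```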
Disclaimer: We recorded this episode ~1.5 months ago, timed for the FastHTML release. It then got bottlenecked by the Llama 3.1, Winds of AI Winter, and SAM2 episodes, so we're a little late. Since then FastHTML has been released, swyx is building an app in it for AINews, and Anthropic has also released their prompt caching API.

Remember when Dylan Patel of SemiAnalysis coined the GPU Rich vs GPU Poor war? (If not, see our pod with him.) The idea was that if you're GPU poor you shouldn't waste your time trying to solve GPU rich problems (i.e. pre-training large models) and are better off working on fine-tuning, optimized inference, etc. Jeremy Howard (see our "End of Finetuning" episode to catch up on his background) and Eric Ries founded Answer.AI to do exactly that: "Practical AI R&D", which is very much in line with GPU-poor needs. For example, one of their first releases was a system based on FSDP + QLoRA that let anyone train a 70B model on two NVIDIA 4090s. Since then, they have come out with a long list of super useful projects (in no particular order, and non-exhaustive):
* FSDP QDoRA: this is just as memory efficient and scalable as FSDP/QLoRA, and critically is also as accurate for continued pre-training as full weight training.
* Cold Compress: a KV cache compression toolkit that lets you scale sequence length without impacting speed.
* colbert-small: a state-of-the-art retriever at only 33M params.
* JaColBERTv2.5: a new state-of-the-art retriever on all Japanese benchmarks.
* gpu.cpp: portable GPU compute for C++ with WebGPU.
* Claudette: a better Anthropic API SDK.

They also recently released FastHTML, a new way to create modern interactive web apps. Jeremy recently released a 1-hour "Getting started" tutorial on YouTube; while this isn't AI related per se, it's close to home for any AI Engineer looking to iterate quickly on new products.

In this episode we broke down 1) how they recruit, 2) how they organize what to research, and 3) how the community comes together. At the end, Jeremy gave us a sneak peek at something new that he's working on that he calls dialogue engineering:

So I've created a new approach. It's not called prompt engineering. I'm creating a system for doing dialogue engineering. It's currently called AI magic.
I'm doing most of my work in this system and it's making me much more productive than I was before I used it.

He explains it a bit more at ~44:53 in the pod, but we'll just have to wait for the public release to figure out exactly what he means.

Timestamps
* [00:00:00] Intro by Suno AI
* [00:03:02] Continuous Pre-Training is Here
* [00:06:07] Schedule-Free Optimizers and Learning Rate Schedules
* [00:07:08] Governance and Structural Issues within OpenAI and Other AI Labs
* [00:13:01] How Answer.ai works
* [00:23:40] How to Recruit Productive Researchers
* [00:27:45] Building a new BERT
* [00:31:57] FSDP, QLoRA, and QDoRA: Innovations in Fine-Tuning Large Models
* [00:36:36] Research and Development on Model Inference Optimization
* [00:39:49] FastHTML for Web Application Development
* [00:46:53] AI Magic & Dialogue Engineering
* [00:52:19] AI wishlist & predictions

Show Notes
* Jeremy Howard
* Previously on Latent Space: The End of Finetuning, NeurIPS Startups
* Answer.ai
* Fast.ai
* FastHTML
* answerai-colbert-small-v1
* gpu.cpp
* Eric Ries
* Aaron DeFazio
* Yi Tay
* Less Wright
* Benjamin Warner
* Benjamin Clavié
* Jono Whitaker
* Austin Huang
* Eric Gilliam
* Tim Dettmers
* Colin Raffel
* Sebastian Raschka
* Carson Gross
* Simon Willison
* Sepp Hochreiter
* Llama3.1 episode
* Snowflake Arctic
* Ranger Optimizer
* Gemma.cpp
* HTMX
* UL2
* BERT
* DeBERTa
* Efficient finetuning of Llama 3 with FSDP QDoRA
* xLSTM

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:14]: And today we're back with Jeremy Howard, I think your third appearance on Latent Space. Welcome.

Jeremy [00:00:19]: Wait, third? Second?

Swyx [00:00:21]: Well, I grabbed you at NeurIPS.

Jeremy [00:00:23]: I see.

Swyx [00:00:24]: Very fun, standing outside street episode.

Jeremy [00:00:27]: I never heard that, by the way. You've got to send me a link. I've got to hear what it sounded like.

Swyx [00:00:30]: Yeah. Yeah, it's a NeurIPS podcast.

Alessio [00:00:32]: I think the two episodes are six hours, so there's plenty to listen to, we'll make sure to send it over.

Swyx [00:00:37]: Yeah, we're trying this thing where at the major ML conferences, we, you know, do a little audio tour of, give people a sense of what it's like. But the last time you were on, you declared the end of fine tuning. I admit that I sort of editorialized the title a little bit, and I know you were slightly uncomfortable with it, but you just own it anyway. I think you're very good at the hot takes. And we were just discussing in our pre-show that it's really happening, that the continued pre-training is really happening.

Jeremy [00:01:02]: Yeah, absolutely. I think people are starting to understand that treating the three ULMFiT steps of like pre-training, you know, and then the kind of like what people now call instruction tuning, and then, I don't know if we've got a general term for this, the DPO, RLHF step, you know, or the task training, they're not actually as separate as we originally suggested they were in our paper, and when you treat it more as a continuum, and that you make sure that you have, you know, more of kind of the original data set incorporated into the later stages, and that, you know, we've also seen with Llama 3, this idea that those later stages can be done for a lot longer. These are all of the things I was kind of trying to describe there.
It wasn't the end of fine tuning, but more that we should treat it as a continuum, and we should have much higher expectations of how much you can do with an already trained model. You can really add a lot of behavior to it, you can change its behavior, you can do a lot. So a lot of our research has been around trying to figure out how to modify the model by a larger amount rather than starting from random weights, because I get very offended at the idea of starting from random weights.

Swyx [00:02:14]: Yeah, I saw that at ICLR in Vienna, there was an outstanding paper about starting transformers from data-driven priors. I don't know if you saw that one, they called it sort of never trained from scratch, and I think it was kind of rebelling against like the sort of random initialization.

Jeremy [00:02:28]: Yeah, I've, you know, that's been our kind of continuous message since we started Fast.ai, is if you're training from random weights, you better have a really good reason, you know, because it seems so unlikely to me that nobody has ever trained on data that has any similarity whatsoever to the general class of data you're working with, and that's the only situation in which I think starting from random weights makes sense.

Swyx [00:02:51]: The other trend since our last pod that I would point people to is I'm seeing a rise in multi-phase pre-training. So Snowflake released a large model called Snowflake Arctic, where they detailed three phases of training where they had like a different mixture of like, there was like 75% web in the first instance, and then they reduced the percentage of the web text by 10% each time and increased the amount of code in each phase. And I feel like multi-phase is being called out in papers more. I feel like it's always been a thing, like changing data mix is not something new, but calling it a distinct phase is new, and I wonder if there's something that you're seeing on your end.

Jeremy [00:03:32]: Well, so they're getting there, right? So the point at which they're doing proper continued pre-training is the point at which that becomes a continuum rather than a phase. So the only difference with what I was describing last time is to say like, oh, there's a function or whatever, which is happening every batch. It's not a huge difference. You know, I always used to get offended when people had learning rates that like jumped. And so one of the things I started doing early on in Fast.ai was to say to people like, no, your learning rate schedule should be a function, not a list of numbers. So now I'm trying to give the same idea about training mix.

Swyx [00:04:07]: There's been pretty public work from Meta on schedule-free optimizers. I don't know if you've been following Aaron DeFazio and what he's doing, just because you mentioned learning rate schedules, you know, what if you didn't have a schedule?

Jeremy [00:04:18]: I don't care very much, honestly. I don't think that the schedule-free optimizer is that exciting. It's fine. We've had non-scheduled optimizers for ages, like Less Wright, who's now at Meta, who was part of the Fast.ai community there, created something called the Ranger optimizer. I actually like having more hyperparameters. You know, as soon as you say schedule-free, then like, well, now I don't get to choose. And there isn't really a mathematically correct way of, like, I actually try to schedule more parameters rather than less. So like, I like scheduling my epsilon in my Adam, for example. I schedule all the things.
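(As a rough illustration of the schedule-as-a-function idea, here is a sketch of our own, not code from the episode; the warmup fraction and cosine shape are arbitrary choices, and the data-mix function is loosely inspired by the Snowflake Arctic numbers mentioned above.)

```python
# Hypothetical sketch: a learning-rate schedule expressed as a function of
# training progress (0.0 to 1.0), rather than a hard-coded list of values.
import math

def lr_schedule(progress: float, lr_max: float = 3e-4) -> float:
    """Linear warmup followed by smooth cosine decay; no jumps anywhere."""
    warmup = 0.1
    if progress < warmup:
        return lr_max * progress / warmup           # linear warmup
    t = (progress - warmup) / (1.0 - warmup)        # fraction of decay phase
    return lr_max * 0.5 * (1.0 + math.cos(math.pi * t))

# The same idea applied to the training data mix: a continuous function of
# progress instead of a handful of discrete phases.
def web_fraction(progress: float) -> float:
    return 0.75 - 0.20 * progress  # drift smoothly from 75% web text downward

print(lr_schedule(0.05), lr_schedule(0.5), web_fraction(0.5))
```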
But then the other thing we always did with the Fast.ai library was make it so you don't have to set any schedules. So Fast.ai always supported, like, you didn't even have to pass a learning rate. Like, it would always just try to have good defaults and do the right thing. But to me, I like to have more parameters I can play with if I want to, but you don't have to.

Alessio [00:05:08]: And then, on the less technical side, I guess, of your issue with the market was some of the large research labs taking all this innovation kind of behind closed doors and whether or not that's good, which it isn't. And now we could maybe make it more available to people. And then a month after we released the episode, there was the whole Sam Altman drama and like all the OpenAI governance issues. And maybe people started to think more, okay, what happens if some of these kind of labs, you know, start to break from within, so to speak? And the alignment of the humans is probably going to fall before the alignment of the models. So I'm curious, like, if you have any new thoughts, and maybe we can also tie in some of the way that we've been building Answer as like a public benefit corp and some of those aspects.

Jeremy [00:05:51]: Sure. So, yeah, I mean, it was kind of uncomfortable because two days before Altman got fired, I did a small public video interview in which I said, I'm quite sure that OpenAI's current governance structure can't continue and that it was definitely going to fall apart. And then it fell apart two days later and a bunch of people were like, what did you know, Jeremy?

Alessio [00:06:13]: What did Jeremy see?

Jeremy [00:06:15]: I didn't see anything. It's just obviously true. Yeah. So my friend Eric Ries and I spoke a lot before that about, you know, Eric's, I think probably most people would agree, the top expert in the world on startup and AI governance. And you know, we could both clearly see that it didn't make sense to have like a so-called non-profit where then there are people working at a company, a commercial company that's owned by or controlled nominally by the non-profit, where the people in the company are being given the equivalent of stock options, like everybody there was working there expecting to make money largely from their equity. So the idea that then a board could exercise control by saying like, oh, we're worried about safety issues and so we're going to do something that decreases the profit of the company, when every stakeholder in the company, their remuneration pretty much is tied to their profit, it obviously couldn't work. So I mean, that was a huge oversight there by someone. I guess part of the problem is that the kind of people who work at non-profits and in this case the board, you know, who are kind of academics and, you know, people who are kind of true believers. I think it's hard for them to realize that 99.999% of the world is driven very heavily by money, especially huge amounts of money. So yeah, Eric and I had been talking for a long time before that about what could be done differently, because also companies are sociopathic by design and so the alignment problem as it relates to companies has not been solved. Like, companies become huge, they devour their founders, they devour their communities and they do things where even the CEOs, you know, often of big companies tell me like, I wish our company didn't do that thing.
You know, I know that if I didn't do it, then I would just get fired and the board would put in somebody else and the board knows if they don't do it, then their shareholders can sue them because they're not maximizing profitability or whatever. So what Eric's spent a lot of time doing is trying to think about how do we make companies less sociopathic, or, you know, maybe a better way to think of it is like, how do we make it so that the founders of companies can ensure that their companies continue to actually do the things they want them to do? You know, when we started a company, hey, we very explicitly decided we got to start a company, not an academic lab, not a nonprofit, you know, we created a Delaware C-corp, you know, the most company kind of company. But when we did so, we told everybody, you know, including our first investors, which was you, Alessio: this sounds great, we are going to run this company on the basis of maximizing long-term value. And in fact, so when we did our second round, which was an angel round, we had everybody invest through a long-term SPV, which we set up where everybody had to agree to vote in line with long-term value principles. So like it's never enough just to say to people, okay, we're trying to create long-term value here for society as well as for ourselves and everybody's like, oh, yeah, yeah, I totally agree with that. But when it comes to like, okay, well, here's a specific decision we have to make, which will not maximize short-term value, people suddenly change their mind. So you know, it has to be written into the legal documents of everybody so that there's no question that that's the way the company has to be managed. So then you mentioned the PBC aspect, Public Benefit Corporation, which I never quite understood previously. And it turns out it's incredibly simple, like it took, you know, like one paragraph added to our corporate documents to become a PBC. It was cheap, it was easy, but it's got this huge benefit, which is if you're not a public benefit corporation, then somebody can come along and offer to buy you with a stated description of like turning your company into the thing you most hate, right? And if they offer you more than the market value of your company and you don't accept it, then you are not necessarily meeting your fiduciary responsibilities. So the way like Eric always described it to me is like, if Philip Morris came along and said that you've got great technology for marketing cigarettes to children, so we're going to pivot your company to do that entirely, and we're going to pay you 50% more than the market value, you're going to have to say yes. If you have a PBC, then you are more than welcome to say no, if that offer is not in line with your stated public benefit. So our stated public benefit is to maximize the benefit to society through using AI. So given that more children smoking doesn't do that, then we can say like, no, we're not selling to you.

Alessio [00:11:01]: I was looking back at some of our emails. You sent me an email on November 13th about talking, and then on the 14th, I sent you an email where "working together to free AI" was the subject line. And then that was kind of the start of the seed round. And then two days later, someone got fired. So you know, you were having these thoughts even before we had like a public example of like why some of the current structures didn't work. So yeah, you were very ahead of the curve, so to speak.
You know, people can read your awesome introduction blog on Answer and the idea of having an R&D lab versus an R lab and then a D lab somewhere else. I think to me, the most interesting thing has been hiring and some of the awesome people that you've been bringing on that maybe don't fit the central casting of Silicon Valley, so to speak. Like sometimes it's like playing baseball cards, you know, people are like, oh, what teams was this person on, where did they work, versus focusing on ability. So I would love for you to give a shout out to some of the awesome folks that you have on the team.

Jeremy [00:11:58]: So, you know, there's like a graphic going around describing like the people at xAI, you know, the Elon Musk thing. And like they are all connected to like multiple of Stanford, Meta, DeepMind, OpenAI, Berkeley, Oxford. Look, these are all great institutions and they have good people. And I'm definitely not at all against that, but damn, there's so many other people. And one of the things I found really interesting is almost any time I see something which I think like this is really high quality work and it's something I don't think would have been built if that person hadn't built the thing right now, I nearly always reach out to them and ask to chat. And I tend to dig in to find out like, okay, you know, why did you do that thing? Everybody else has done this other thing, your thing's much better, but it's not what other people are working on. And like 80% of the time, I find out the person has a really unusual background. So like often they'll have like, either they like came from poverty and didn't get an opportunity to go to a good school or had dyslexia and, you know, got kicked out of school in year 11, or they had a health issue that meant they couldn't go to university or something happened in their past and they ended up out of the mainstream. And then they kind of succeeded anyway. Those are the people that throughout my career, I've tended to kind of accidentally hire more of, but it's not exactly accidental. It's like when I see two people who have done extremely well: one of them did extremely well in exactly the normal way, from a background entirely pointing in that direction, and they cleared all the hurdles to get there. And like, okay, that's quite impressive, you know, but another person who did just as well, despite lots of constraints and doing things in really unusual ways and came up with different approaches. That's normally the person I'm likely to find useful to work with, because they're often like risk-takers, they're often creative, they're often extremely tenacious, they're often very open-minded. So that's the kind of folks I tend to find myself hiring. So now at Answer.ai, it's a group of people that are strong enough that nearly every one of them has independently come to me in the past few weeks and told me that they have imposter syndrome and they're not convinced that they're good enough to be here. And I've heard it enough at this point that I was like, okay, I don't think it's possible that all of you are so far behind your peers that you shouldn't get to be here. But I think part of the problem is as an R&D lab, the great developers look at the great researchers and they're like, wow, these big-brained, crazy research people with all their math and s**t, they're too cool for me, oh my God.
And then the researchers look at the developers and they're like, oh, they're killing it, making all this stuff with all these people using it and talking on Twitter about how great it is. I think they're both a bit intimidated by each other, you know. And so I have to kind of remind them like, okay, there are lots of things in this world where you suck compared to lots of other people in this company, but also vice versa, you know, for all things. And the reason you came here is because you wanted to learn about those other things from those other people and have an opportunity to like bring them all together into a single unit. You know, it's not reasonable to expect you're going to be better at everything than everybody else. I guess the other part of it is for nearly all of the people in the company, to be honest, they have nearly always been better than everybody else at nearly everything they're doing nearly everywhere they've been. So it's kind of weird to be in this situation now where it's like, gee, I can clearly see that I suck at this thing that I'm meant to be able to do compared to these other people, where I'm like the worst in the company at this thing for some things. So I think that's a healthy place to be, you know, as long as you keep reminding each other about that's actually why we're here. And like, it's all a bit of an experiment, like we don't have any managers. We don't have any hierarchy from that point of view. So for example, I'm not a manager, which means I don't get to tell people what to do or how to do it or when to do it. Yeah, it's been a bit of an experiment to see how that would work out. And it's been great. So for instance, Ben Clavié, who you might have come across, he's the author of RAGatouille, he's the author of rerankers, super strong information retrieval guy. And a few weeks ago, you know, this additional channel appeared on Discord, on our private Discord, called Bert24. And these people started appearing, as in our collab sections, we have a collab section for like collaborating with outsiders. And these people started appearing, there are all these names that I recognize, like Bert24, and they're all talking about like the next generation of BERT. And I start following along, it's like, okay, Ben decided, I think quite rightly, we need a new BERT. Because everybody, like so many people are still using BERT, and it's still the best at so many things, but it actually doesn't take advantage of lots of best practices. And so he just went out and found basically everybody who's created better BERTs in the last four or five years, brought them all together, suddenly there's this huge collaboration going on. So yeah, I didn't tell him to do that. He didn't ask my permission to do that. And then, like, Benjamin Warner dived in, and he's like, oh, I created a whole transformers from scratch implementation designed to be maximally hackable. He originally did it largely as a teaching exercise to show other people, but he was like, I could, you know, use that to create a really hackable BERT implementation. In fact, he didn't say that. He said, I just did do that, you know, and I created a repo, and then everybody's like starts using it. They're like, oh my god, this is amazing. I can now implement all these other BERT things. And it's not just Answer.AI guys there, you know, there's lots of folks, you know, who have like contributed new data set mixes and blah, blah, blah. So, I mean, I can help in the same way that other people can help.
So like, then Ben Clavié reached out to me at one point and said, can you help me, like, what have you learned over time about how to manage intimidatingly capable and large groups of people who you're nominally meant to be leading? And so, you know, I like to try to help, but I don't direct. Another great example was Kerem, who, after our FSDP QLoRA work, decided quite correctly that it didn't really make sense to use LoRA in today's world. You want to use the normalized version, which is called DoRA. Like two or three weeks after we did FSDP QLoRA, he just popped up and said, okay, I've just converted the whole thing to DoRA, and I've also created these vLLM extensions, and I've got all these benchmarks, and, you know, now I've got training of quantized models with adapters that are as fast as LoRA, and actually better than, weirdly, fine tuning. Just like, okay, that's great, you know. And yeah, so the things we've done to try to help make these things happen as well is we don't have any required meetings, you know, but we do have a meeting for each pair of major time zones that everybody's invited to, and, you know, people see their colleagues doing stuff that looks really cool and say, like, oh, how can I help, you know, or how can I learn or whatever. So another example is Austin, who, you know, amazing background. He ran AI at Fidelity, he ran AI at Pfizer, he ran browsing and retrieval for Google's DeepMind stuff, created Gemma.cpp, and he's been working on a new system to make it easier to do WebGPU programming, because, again, he quite correctly identified, yeah, so I said to him, like, okay, I want to learn about that. Not an area that I have much expertise in, so, you know, he's going to show me what he's working on and teach me a bit about it, and hopefully I can help contribute. I think one of the key things that's happened in all of these is everybody understands what Eric Gilliam, who wrote the second blog post in our series, the R&D historian, describes as a large yard with narrow fences. Everybody has total flexibility to do what they want. We all understand kind of roughly why we're here, you know, we agree with the premises around, like, everything's too expensive, everything's too complicated, people are building too many vanity foundation models rather than taking better advantage of fine-tuning, like, there's this kind of general, like, sense of we're all on the same wavelength about, you know, all the ways in which current research is fucked up, and, you know, all the ways in which we're worried about centralization. We all care a lot about not just research for the point of citations, but research that actually wouldn't have happened otherwise, and actually is going to lead to real-world outcomes. And so, yeah, with this kind of, like, shared vision, people understand, like, you know, so when I say, like, oh, well, you know, tell me, Ben, about Bert24, what's that about? And he's like, you know, like, oh, well, you know, you can see from an accessibility point of view, or you can see from a kind of actual practical impact point of view, there's far too much focus on decoder-only models, and, you know, like, BERT's used in all of these different places in industry, and so I can see, like, in terms of our basic principles, what we're trying to achieve, this seems like something important. And so I think that's, like, really helpful, that we have that kind of shared perspective, you know?

Alessio [00:21:14]: Yeah.
And before we maybe talk about some of the specific research, when you're, like, reaching out to people, interviewing them, what are some of the traits, like, how do these things come out, you know, usually? Is it working on side projects that you, you know, you're already familiar with? Is there anything, like, in the interview process that, like, helps you screen for people that are less pragmatic and more research-driven versus some of these folks that are just gonna do it, you know? They're not waiting for, like, the perfect process.

Jeremy [00:21:40]: Everybody who comes through the recruiting is interviewed by everybody in the company. You know, our goal is 12 people, so it's not an unreasonable amount. So the other thing to say is everybody so far who's come into the recruiting pipeline, everybody bar one, has been hired. Which is to say our original curation has been good. And that's actually pretty easy, because nearly everybody who's come in through the recruiting pipeline are people I know pretty well. So Jono Whitaker and I, you know, he worked on the stable diffusion course we did. He's outrageously creative and talented, and he's a super, like, enthusiastic tinkerer, just likes making things. Benjamin was one of the strongest parts of the fast.ai community, which is now the alumni. It's, like, hundreds of thousands of people. And you know, again, like, they're not people who a normal interview process would pick up, right? So Benjamin doesn't have any qualifications in math or computer science. Jono was living in Zimbabwe, you know, he was working on, like, helping some African startups, you know, but not FAANG kind of credentials. But yeah, I mean, when you actually see people doing real work and they stand out above, you know, we've got lots of Stanford graduates and OpenAI people and whatever in our alumni community as well. You know, when you stand out above all of those people anyway, obviously you've got something going for you. You know, Austin, him and I worked together on the masks study we did in the Proceedings of the National Academy of Sciences. You know, we had worked together, and again, that was a group of, like, basically the 18 or 19 top experts in the world on public health and epidemiology and research design and so forth. And Austin, you know, was one of the strongest people in that collaboration. So yeah, you know, like, I've been lucky enough to have had opportunities to work with some people who are great and, you know, I'm a very open-minded person, so I kind of am always happy to try working with pretty much anybody, and some people stand out. You know, there have been some exceptions, people I haven't previously known, like Ben Clavié, actually, I didn't know before. But you know, with him, you just read his code, and I'm like, oh, that's really well-written code. And like, it's not written exactly the same way as everybody else's code, and it's not written to do exactly the same thing as everybody else's code. So yeah, and then when I chatted to him, it's just like, I don't know, I felt like we'd known each other for years, like we just were on the same wavelength, but I could pretty much tell that was going to happen just by reading his code. I think you express a lot in the code you choose to write and how you choose to write it, I guess. You know, or another example, a guy named Vik, who was previously the CEO of Dataquest, and like, in that case, you know, he's created a really successful startup.
He won the first, basically, Kaggle NLP competition, which was automatic essay grading. He's got the current state-of-the-art OCR system, Surya. Again, he's just a guy who obviously just builds stuff, you know, he doesn't ask for permission, he doesn't need any, like, external resources. Actually, Kerem's another great example of this, I mean, I already knew Kerem very well because he was my best ever master's student, but it wasn't a surprise to me then when he then went off to create the world's state-of-the-art language model in Turkish on his own, in his spare time, with no budget, from scratch. This is not fine-tuning or whatever, he, like, went back to Common Crawl and did everything. Yeah, it's kind of, I don't know what I'd describe that process as, but it's not at all based on credentials.

Swyx [00:25:17]: Assemble based on talent, yeah. We wanted to dive in a little bit more on, you know, turning from the people side of things into the technical bets that you're making. Just a little bit more on BERT. I was actually, we just did an interview with Yi Tay from Reka, I don't know if you're familiar with his work, but also another encoder-decoder bet, and one of his arguments was actually people kind of over-index on the decoder-only GPT-3 type paradigm. I wonder if you have thoughts there that are maybe non-consensus as well.

Jeremy [00:25:45]: Yeah, no, absolutely. So I think it's a great example. So one of the people we're collaborating with a little bit with Bert24 is Colin Raffel, who is the guy behind, yeah, most of that stuff, you know, between that and UL2, there's a lot of really interesting work. And so one of the things I've been encouraging the BERT group to do, Colin has as well, is to consider using a T5 pre-trained encoder backbone as a thing you fine-tune, which I think would be really cool. You know, Colin was also saying actually just use encoder-decoder as your BERT, you know, why don't you like use that as a baseline, which I also think is a good idea. Yeah, look.

Swyx [00:26:25]: What technical arguments are people under-weighting?

Jeremy [00:26:27]: I mean, Colin would be able to describe this much better than I can, but I'll give my slightly non-expert attempt. Look, I mean, think about like diffusion models, right? Like in stable diffusion, like we use things like UNet. You have this kind of downward path and then in the upward path you have the cross connections, which, it's not attention, but it's like a similar idea, right? You're inputting the original encoding path into your decoding path. It's critical to make it work, right? Because otherwise in the decoding part, the model has to do so much kind of from scratch. So like if you're doing translation, like that's a classic kind of encoder-decoder example. If it's decoder only, you never get the opportunity to find the right, you know, feature engineering, the right feature encoding for the original sentence. And it kind of means then on every token that you generate, you have to recreate the whole thing, you know? So if you have an encoder, it's basically saying like, okay, this is your opportunity, model, to create a really useful feature representation for your input information. So I think there's really strong arguments for encoder-decoder models anywhere that there is this kind of like context or source thing. And then why encoder only? Well, because so much of the time what we actually care about is a classification, you know? It's like an output. It's not generating an arbitrary length sequence of tokens.
So anytime you're not generating an arbitrary length sequence of tokens, decoder models don't seem to make much sense. Now the interesting thing is, you see on like Kaggle competitions, that decoder models still are at least competitive with things like DeBERTa v3. They have to be way bigger to be competitive with things like DeBERTa v3. And the only reason they are competitive is because people have put a lot more time and money and effort into training the decoder-only ones, you know? There isn't a recent DeBERTa. There isn't a recent BERT. Yeah, it's a whole part of the world that people have slept on a little bit. And this is just what happens. This is how trends happen, rather than like, to me, everybody should be like, oh, let's look at the thing that has shown signs of being useful in the past, but nobody really followed up with properly. That's the more interesting path, you know, where people tend to be like, oh, I need to get citations. So what's everybody else doing? Can I make it 0.1% better, you know, or 0.1% faster? That's what everybody tends to do. Yeah. So I think it's like, Yi Tay's work commercially now is interesting because here's like a whole model that's been trained in a different way. So there's probably a whole lot of tasks it's probably better at than GPT and Gemini and Claude. So that should be a good commercial opportunity for them if they can figure out what those tasks are.

Swyx [00:29:07]: Well, if rumors are to be believed, and he didn't comment on this, but, you know, Snowflake may figure out the commercialization for them. So we'll see.

Jeremy [00:29:14]: Good.

Alessio [00:29:16]: Let's talk about FSDP, QLoRA, QDoRA, and all of that awesome stuff. One of the things we talked about last time, some of these models are meant to run on systems that nobody can really own, no single person. And then you were like, well, what if you could fine tune a 70B model on like a 4090? And I was like, no, that sounds great, Jeremy, but like, can we actually do it? And then obviously you all figured it out. Can you maybe tell us some of the war stories behind that, like the idea behind FSDP, which is kind of taking sharded data-parallel computation, and then QLoRA, which is do not touch all the weights, just go quantize some of the model, and then within the quantized model only do certain layers instead of doing everything.

Jeremy [00:29:57]: Well, do the adapters. Yeah.

Alessio [00:29:59]: Yeah. Yeah. Do the adapters. Yeah. I will leave the floor to you. I think before you published it, nobody thought this was like a short-term thing that we're just going to have. And now it's like, oh, obviously you can do it, but it's not that easy.

Jeremy [00:30:12]: Yeah. I mean, to be honest, it was extremely unpleasant work to do. It's like not at all enjoyable. I kind of did version 0.1 of it myself before we had launched the company, or at least the kind of like the pieces. They're all pieces that are difficult to work with, right? So for the quantization, you know, I chatted to Tim Dettmers quite a bit and, you know, he very much encouraged me by saying like, yeah, it's possible. He actually thought it'd be easy. It probably would be easy for him, but I'm not Tim Dettmers. And, you know, so he wrote bitsandbytes, which is his quantization library. You know, he wrote that for a paper. He didn't write that to be production-like code. It's now like everybody's using it, at least the CUDA bits. So like, it's not particularly well structured.
There's lots of code paths that never get used. There's multiple versions of the same thing. You have to try to figure it out. So trying to get my head around that was hard. And you know, because the interesting bits are all written in CUDA, it's hard to like step through it and see what's happening. And then, you know, FSDP is this very complicated library in PyTorch, which is not particularly well documented. So the only really, really way to understand it properly is again, just read the code and step through the code. And then like bitsandbytes doesn't really work in practice unless it's used with PEFT, the HuggingFace library, and PEFT doesn't really work in practice unless you use it with other things. And there's a lot of coupling in the HuggingFace ecosystem where like none of it works separately. You have to use it all together, which I don't love. So yeah, trying to just get a minimal example that I can play with was really hard. And so I ended up having to rewrite a lot of it myself to kind of create this like minimal script. One thing that helped a lot was Meta had this llama-recipes repo that came out just a little bit before I started working on that. And like they had a kind of role model example of like, here's how to train with FSDP and LoRA (it didn't work with QLoRA) on Llama. A lot of the stuff I discovered, the interesting stuff, would be put together by Less Wright, who's, he was actually the guy in the Fast.ai community I mentioned who created the Ranger optimizer. So he's doing a lot of great stuff at Meta now. So yeah, I kind of, that helped get some minimum stuff going and then it was great once Benjamin and Jono joined full time. And so we basically hacked at that together and then Kerem joined like a month later or something. And it was like, gee, it was just a lot of like fiddly detailed engineering on like barely documented bits of obscure internals. So my focus was to see if it kind of could work and I kind of got a bit of a proof of concept working and then the rest of the guys actually did all the work to make it work properly. And, you know, every time we thought we had something, you know, we needed to have good benchmarks, right? So we'd like, it's very easy to convince yourself you've done the work when you haven't, you know, so then we'd actually try lots of things and be like, oh, in these like really important cases, the memory use is higher, you know, or it's actually slower. And we'd go in and we just find like all these things that were nothing to do with our library that just didn't work properly. And nobody had noticed they hadn't worked properly because nobody had really benchmarked it properly. So we ended up, you know, trying to fix a whole lot of different things. And even as we did so, new regressions were appearing in like transformers and stuff that Benjamin then had to go away and figure out like, oh, how come flash attention doesn't work in this version of transformers anymore with this set of models and like, oh, it turns out they accidentally changed this thing, so it doesn't work. You know, there's just, there's not a lot of really good performance-type evals going on in the open source ecosystem. So there's an extraordinary amount of like things where people say like, oh, we built this thing and it has this result. And when you actually check it, so yeah, there's a shitload of war stories from getting that thing to work.
And it did require a particularly like tenacious group of people and a group of people who don't mind doing a whole lot of kind of like really janitorial work, to be honest, to get the details right, to check them. Yeah.

Alessio [00:34:09]: We had Tri Dao on the podcast and we talked about how a lot of it is like systems work to make some of these things work. It's not just like beautiful, pure math that you do on a blackboard. It's like, how do you get into the nitty gritty?

Jeremy [00:34:22]: I mean, flash attention is a great example of that. Like it's, it basically is just like, oh, let's just take the attention and just do the tiled version of it, which sounds simple enough, you know, but then implementing that is challenging at lots of levels.

Alessio [00:34:36]: Yeah. What about inference? You know, obviously you've done all this amazing work on fine tuning. Do you have any research you've been doing on the inference side, how to make local inference really fast on these models too?

Jeremy [00:34:47]: We're doing quite a bit on that at the moment. We haven't released too much there yet. But one of the things I've been trying to do is also just to help other people. And one of the nice things that's happened is that a couple of folks at Meta, including Mark Saroufim, have done a nice job of creating this CUDA mode community of people working on like CUDA kernels or learning about that. And I tried to help get that going well as well and did some lessons to help people get into it. So there's a lot going on in both inference and fine tuning performance. And a lot of it's actually happening kind of related to that. So the PyTorch team have created this Torch AO project on quantization. And so there's a big overlap now between kind of the Fast.ai and Answer.AI and CUDA mode communities of people working on stuff for both inference and fine tuning. But we're getting close now. You know, our goal is that nobody should be merging models, nobody should be downloading merged models, everybody should be using basically quantized plus adapters for almost everything and just downloading the adapters. And that should be much faster. So that's kind of the place we're trying to get to. It's difficult, you know, because like Kerem's been doing a lot of work with vLLM, for example. These inference engines are pretty complex bits of code. They have a whole lot of custom kernel stuff going on as well, as do the quantization libraries. So we've been working on, we're also doing quite a bit of collaborating with the folks who do HQQ, which is a really great quantization library and works super well. So yeah, there's a lot of other people outside Answer.AI that we're working with a lot who are really helping on all this performance optimization stuff, open source.

Swyx [00:36:27]: Just to follow up on merging models, I picked up there that you said nobody should be merging models. That's interesting because obviously a lot of people are experimenting with this and finding interesting results. I would say in defense of merging models, you can do it without data. That's probably the only thing that's going for it.

Jeremy [00:36:45]: To explain, it's not that you shouldn't merge models. You shouldn't be distributing a merged model. You should distribute a merged adapter 99% of the time. And actually often one of the best things happening in the model merging world is actually that often merging adapters works better anyway.
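(As a rough sketch of the quantized-base-plus-adapters recipe being described, using the bitsandbytes and PEFT libraries mentioned above; this is our own illustration, not code from the episode, and the model name and hyperparameters are placeholders.)

```python
# Hypothetical sketch: freeze a 4-bit quantized base model and train only
# small floating-point adapters on top of it.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                       # base weights stay quantized
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",              # placeholder model name
    quantization_config=bnb,
)

# Only these low-rank adapter weights are trained; everything else is frozen.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # a tiny fraction of the full model

# Distribute just the adapter: users who already have the quantized base
# only download tens of megabytes instead of the full merged model.
model.save_pretrained("my-task-adapter")
```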
The point is, Sean, that once you've got your new model, if you distribute it as an adapter that sits on top of a quantized model somebody's already downloaded, then it's a much smaller download for them. And the inference should be much faster too, because you're not having to transfer FP16 weights from HBM at all, or ever load them off disk: all the main weights are quantized, and the only floating point weights are in the adapters. So that should make both inference and fine tuning faster. Okay, perfect.

Swyx [00:37:33]: We're moving on a little bit to the rest of the fast universe. I would have thought that once you started Answer.AI, the fast universe would be kind of on hold, and then today you just dropped fastlite, and it looks like there's more activity going on in Fastland.

Jeremy [00:37:49]: Yeah. So Fastland and Answerland are not really distinct things. Answerland is kind of Fastland grown up and funded. They both have the same mission, which is to maximize the societal benefit of AI broadly. We want to create thousands of commercially successful products at Answer.AI, and we want to do that with like 12 people. So that means we need a pretty efficient stack, you know, quite a few orders of magnitude more efficient, not just for creation but for deployment and maintenance, than anything that currently exists. People often forget about the D part of our R&D firm, so we've got to be extremely good at creating, deploying, and maintaining applications, not just models. Much to my horror, the story around creating web applications is much worse now than it was 10 or 15 years ago. If I say to a data scientist, here's how to create and deploy a web application, well, either you have to learn JavaScript or TypeScript, and all the complex libraries like React and stuff, and all the complex details around security, and the web protocol stuff around how you talk to a backend, and then all the details about creating the backend. If that's your job, and you have specialists who each work in just one of those areas, it is possible for all of that to work. But compare that to, oh, write a PHP script and put it in the home directory you get when you sign up to this shell provider, which is what it was like in the nineties: here are those 25 lines of code, you're done, and now you can pass that URL around to all your friends. Or put this .pl file inside the cgi-bin directory you got when you signed up to this web host. So yeah, the thing I've been mainly working on the last few weeks is fixing all that. And I think I fixed it. I don't know if this is an announcement, but I'll tell you guys: there's this thing called FastHTML, which basically lets you create a complete web application in a single Python file. Unlike excellent projects like Streamlit and Gradio, you're not working on top of a highly abstracted thing that's got nothing to do with web foundations. You're working with web foundations directly, but you're able to do it using pure Python. There are no templates, there's no Jinja, there are no separate CSS and JavaScript files. It looks and behaves like a modern SPA web application. And you can create components for DaisyUI, or Bootstrap, or Shoelace, or whatever fancy JavaScript and/or CSS (Tailwind, etc.) library you like, but you can write it all in Python.
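Here is a minimal single-file sketch of what's being described, assuming the fast_app/rt/serve interface shown in the FastHTML docs; the routes and markup are illustrative, not a definitive pattern.

```python
from fasthtml.common import *

app, rt = fast_app()   # returns the app and a route decorator, with sane defaults

@rt("/")
def get():
    # Python functions stand in for HTML tags; no templates, no Jinja.
    return Titled("Hello",
                  P("A complete web app in one Python file."),
                  Button("Click me", hx_get="/clicked", hx_target="#out"),
                  Div(id="out"))          # HTMX attributes are plain kwargs

@rt("/clicked")
def get():   # FastHTML dispatches on the function name (get/post)
    return P("Swapped in by HTMX, no full page reload.")

serve()      # dev server: `python app.py` and you're done
```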
You can pip install somebody else's set of components and use them entirely from Python. You can develop and prototype it all in a Jupyter notebook if you want to; it all displays correctly, so you can work interactively. And then you mentioned fastlite: specifically, if you're using SQLite, it's ridiculously easy to get persistence, and all of your handlers will automatically be passed database-ready objects that you can just call .delete, .update, and .insert on. You get sessions, you get security, you get all that. So again, like with most everything I do, it's very little code; it's mainly tying together really cool stuff that other people have written. You don't have to use it, but a lot of the best stuff comes from its incorporation of HTMX, which to me is basically the thing that changes your browser to make it work the way it always should have. It just does four small things, but those four small things remove constraints that HTML should never have had. It sits on top of Starlette, which is a very nice lower-level platform for building these kinds of web applications. The actual interface matches as closely as possible to FastAPI, which is a really nice system for creating the classic JSON API type of application. And Sebastián, who wrote FastAPI, has been kind enough to help me think through some of these design decisions. I mean, everybody involved has been super helpful. I chatted to Carson, who created HTMX, about it, and to some of the folks involved in Django. Everybody in the community I've spoken to definitely realizes there's a big gap to be filled around a highly scalable, web-foundation-based, pure Python framework with a minimum of fuss. So yeah, I'm getting a lot of support and trying to make sure that FastHTML works well for people.

Swyx [00:42:38]: I would say, when I heard about this, I texted Alessio: I think this is going to be pretty huge. People consider Streamlit and Gradio to be the state of the art, but I think there's so much to improve, and having what you call web foundations and web fundamentals at the core of it, I think, would be really helpful.

Jeremy [00:42:54]: I mean, it's based on 25 years of thinking and work for me. FastMail was built on a system much like this one, but that was in Perl, and I spent 10 years working on that. We had millions of people using it every day, really pushing it hard, and I always enjoyed working in it. So I've been thinking about how to pull together the best of the web framework I created for FastMail with HTMX. There are also things like Pico CSS, which is the CSS system FastHTML comes with by default. Although, as I say, you can pip install anything you want to, we try to make it so that out of the box you don't have any choices to make. You can make choices, but for most people, it's like the PHP-in-your-home-directory thing: you just start typing, and by default you'll get something which looks and feels pretty okay. And if you want to then write a version of Gradio or Streamlit on top of that, you totally can.
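And a sketch of the SQLite persistence mentioned above, assuming the fast_app database arguments shown in the FastHTML docs (which use fastlite underneath); the todos table and its fields are illustrative.

```python
from fasthtml.common import *

# fast_app can create a table (via fastlite) and hand back a row dataclass
# alongside the app and router.
app, rt, todos, Todo = fast_app("data.db", id=int, title=str, done=bool, pk="id")

@rt("/")
def get():
    return Titled("Todos", Ul(*[Li(t.title) for t in todos()]))  # a table is callable

@rt("/add")
def post(todo: Todo):        # the handler receives a database-ready object...
    todos.insert(todo)       # ...supporting .insert/.update/.delete
    return Li(todo.title)

serve()
```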
And then the nice thing is, if you write it in kind of the Gradio equivalent, which, you know, I imagine we'll create some kind of pip-installable thing for, then once you've outgrown that, or if you outgrow it, it's not, okay, throw all that away and start again in some whole separate language. It's this kind of smooth, gentle path you can take step by step, because it's all just standard web foundations all the way down.

Swyx [00:44:29]: Just to wrap up the open source work that you're doing: you're aiming to create thousands of projects with a very, very small team. I haven't heard you mention once AI agents, or AI developer tooling, or AI code maintenance. I know you're very productive, but what is the role of AI in your own work?

Jeremy [00:44:47]: So I'm making something. I'm not sure how much I want to say just yet.

Swyx [00:44:52]: Give us a nibble.

Jeremy [00:44:53]: All right, I'll give you the key thing. So I've created a new approach. It's not called prompt engineering; it's called dialogue engineering. And I'm creating a system for doing dialogue engineering. It's currently called AI Magic. I'm doing most of my work in this system, and it's making me much more productive than I was before I used it. I always just build stuff for myself and hope it'll be useful for somebody else. Think about ChatGPT with Code Interpreter, right? The basic UX is the same as a 1970s teletype. If you wrote APL on a teletype in the 1970s, you typed onto a thing, your words appeared at the bottom of a sheet of paper, you'd hit enter and it would scroll up, and then the answer from APL would be printed out, scroll up, and then you'd type the next thing. Which is also the way, for example, a shell works, like bash or zsh or whatever. It's not terrible; we all get a lot done in these very, very basic teletype-style REPL environments. But I've never felt like it's optimal, and everybody else has just copied ChatGPT. It's also the way Bard and Gemini work; it's also the way the Claude web app works. And then you add Code Interpreter, and the most you can do is plead with ChatGPT to write the kind of code you want. It's pretty good for very, very beginner users who can't code at all; by default now the code's even hidden away, so you never have to see that it ever happened. But for somebody who wants to learn to code, or who already knows a bit of code, it seems really not ideal. So okay, that's one end of the spectrum. The other end of the spectrum, which is where Sean's work comes in, is: oh, you want to do more than ChatGPT? No worries, here is Visual Studio Code. I run it; there's an empty screen with a flashing cursor; okay, start coding. And sure, you can use systems like Sean's, or like Cursor or whatever, to be like, okay, Cmd-K in Cursor, "create a form that...", blah, blah, blah. But in the end it's a convenience over the top of this incredibly complicated system that full-time, sophisticated software engineers have designed over the past few decades, in a totally different environment, as a way to build software. And so we're trying to shoehorn AI into that, and it's not easy to do. I think there are much better ways of thinking about the craft of software development in a language-model world, ways that are much more interactive, you know.
So the thing that I'm building is neither of those things. It's something between the two, and it's built around this idea of crafting a dialogue, where the outcome of the dialogue is the artifact that you want, whether that's a piece of analysis, or a Python library, or a technical blog post, or whatever. As part of building that, I've created something called Claudette, which is a library for Claude, and something called Cosette, which is a library for OpenAI. They're libraries designed to make those APIs much more usable, much easier to use, much more concise. And then I've written AI Magic on top of those. That's been an interesting exercise, because I did Claudette first, and I was looking at what Simon Willison did with his fantastic LLM library. His library is designed around, let's make something that supports all the LLM inference engines and commercial providers. I thought, okay, what if I did something different, which is to make something that's as Claude-friendly as possible and forget everything else? So that's what Claudette was. For example, one of the really nice things in Claude is prefill: by telling the assistant, this is what your response starts with, there are a lot of powerful things you can take advantage of. So I created Claudette to be as Claude-friendly as possible. And then after I did that, and particularly with GPT-4o coming out, I thought, okay, now let's create something that's as OpenAI-friendly as possible. Then I tried to see where the similarities and differences were, and whether I could make them compatible in the places where it makes sense, without losing the things that make each one special for what it is. So those are some of the things I've been working on in that space. And I'm thinking we might launch AI Magic via a course called How To Solve It With Code. The name is based on the classic Pólya book, How to Solve It, one of the classic math books of all time. We're basically going to try to show people how to solve challenging problems that they didn't think they could solve, without doing a full computer science course, by taking advantage of a bit of AI and a bit of practical skill. It's particularly for this whole generation of people who are learning to code with, and because of, ChatGPT. I love it: I know a lot of people who didn't really know how to code but have created things because they use ChatGPT, yet they don't know how to maintain them or fix them or add the things ChatGPT can't do, because they don't really know how to code. So this course is designed to show you how to either become a developer who supercharges their capabilities by using language models, or become a language-model-first developer who supercharges their capabilities by understanding a bit about process and fundamentals.
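As an aside, here is a sketch of the prefill feature mentioned above, assuming the Chat interface from Claudette's docs; the model name and strings are illustrative.

```python
from claudette import Chat

chat = Chat("claude-3-5-sonnet-20240620", sp="You are a concise assistant.")

# Prefill puts words in the assistant's mouth: the response is forced to
# begin with this text, a simple but powerful way to steer format and voice.
r = chat("Concisely, what is the meaning of life?",
         prefill="According to Douglas Adams,")
print(r)
```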
Alessio [00:50:19]: Nice. That's a great spoiler. I guess the fourth time you come on Latent Space, we're going to talk about AI Magic. Jeremy, before we wrap, this was just a great run through everything. What are the things that, when you next come on the podcast in nine or twelve months, we're going to be like, man, Jeremy was really ahead of it? Is there anything you see in the space that maybe people are not talking about enough? What's the next company that's going to fall, or have drama internally; anything on your mind?

Jeremy [00:50:47]: You know, hopefully we'll be talking a lot about FastHTML and the international community that by then has grown up around it. And also about AI Magic and about dialogue engineering. Hopefully dialogue engineering catches on, because I think it's the right way to think about a lot of this stuff. What else? Trying to think about everything on the research side. I think we've talked about a lot of it: encoder-decoder architectures, encoder-only architectures, and hopefully the whole renewed interest in BERT that BERT24 stimulated.

Swyx [00:51:17]: There's a state space model that came out today that might be interesting for this general discussion. One thing that stood out to me in Cartesia's blog post was that they were talking about real-time ingestion of billions and trillions of tokens, and keeping that context, obviously, in the state space that they have.

Jeremy [00:51:34]: Yeah.

Swyx [00:51:35]: I'm wondering what your thoughts are, because you've been entirely transformers the whole time.

Jeremy [00:51:38]: Yeah, no. So obviously my background is RNNs and LSTMs. Of course. And I'm still a believer in the idea that state is something you can update, you know? So obviously Sepp Hochreiter came out with xLSTM recently. Oh my God, okay, another whole thing we haven't talked about, just somewhat related: I've been going crazy for a long time about, why can I not pay anybody to save my KV cache? I just ingested the Great Gatsby, or the documentation for Starlette, or whatever, and I'm sending it as my prompt context. Why are you redoing it every time? So Gemini is about to finally come out with KV caching, and this is something that Austin, in gemma.cpp, had had on his roadmap for, well, not years, months, a long time. The idea is that the KV cache is a third thing, right? There's RAG, there's in-context learning and prompt engineering, and there's KV cache creation. I think it creates almost a whole new class of applications, or techniques, where, for example, I very often work with really new libraries, or I've created my own library that I'm now writing with rather than on. So I want all the docs of my new library to be there all the time. I want to upload them once, and then we can have a whole discussion about building this application using FastHTML. Nobody's got FastHTML in their language model yet, and I don't want to send all the FastHTML docs across every time. So one of the things I'm looking at doing in AI Magic, actually, is taking advantage of some of these ideas, so that the documentation of the libraries you're working on can be always available. Something people will be spending time thinking about over the next 12 months is where to use RAG, where to use fine-tuning, and where to use KV cache storage. And how to use state, because in state space models and xLSTM, again, state is something you update. So how do we combine the best of all of these worlds?
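A concrete sketch of the "KV cache as a reusable artifact" idea, using plain transformers, which exposes the cache as past_key_values; the hosted-API caching being asked for here is the same idea behind a server boundary. The model name and prompts are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"                                   # illustrative small model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

# Ingest the big, unchanging context once (imagine a library's full docs here).
ctx = tok("<library documentation goes here>", return_tensors="pt")
with torch.no_grad():
    out = model(**ctx, use_cache=True)
cache = out.past_key_values                     # the reusable artifact

# Each follow-up pays only for its own tokens; the context's keys/values
# are read from the saved cache instead of being recomputed.
q = tok(" How do I add a route?", return_tensors="pt")
with torch.no_grad():
    out = model(input_ids=q.input_ids, past_key_values=cache, use_cache=True)
next_id = out.logits[0, -1].argmax()            # greedy next token, for illustration
```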
Alessio [00:53:46]: And Jeremy, I know before you talked about how some of the autoregressive models are maybe not a great fit for agents. Any other thoughts on JEPA, diffusion for text, or any interesting things you've seen pop up?

Jeremy [00:53:58]: In the same way that we probably ought to have state you can update, i.e., xLSTM and state space models, and in the same way that a lot of things probably should have an encoder, JEPA and diffusion both seem like the right conceptual mapping for a lot of things we probably want to do. The idea is that there should be a piece of the generative pipeline which is thinking about the answer and coming up with a sketch of what the answer looks like before you start outputting tokens. That's where it feels like diffusion ought to fit, you know. Because diffusion is not autoregressive, it's like, let's gradually de-blur the picture of how to solve this. This is also where dialogue engineering fits in, by the way: one of the reasons it's working so well for me is that I use it to craft the thought process before I generate the code. So yeah, there are a lot of different pieces here, and I don't know exactly how they'll all fit together. I don't know if JEPA is going to end up working in the text world. I don't know if diffusion will end up working in the text world. But they seem to be trying to solve a class of problem which is currently unsolved.

Alessio [00:55:13]: Awesome, Jeremy. This was great, as usual. Thanks again for coming back on the pod, and thank you all for listening. Yeah, that was fantastic.

Get full access to Latent Space at www.latent.space/subscribe
Episode Description: Welcome to The Cocktail Academy Podcast! In this episode, Damian welcomes the legendary mixologist Lynnette Marrero. From her diamond status with airlines to her influential role in shaping the modern cocktail scene, Lynnette shares her incredible journey through the world of bartending. Tune in as we explore her experiences, inspirations, and her pioneering efforts to elevate women in the cocktail industry.

Episode Highlights:

* Meet Lynnette Marrero: Discover Lynnette's fascinating background and how she transitioned from the theatre world to becoming a globally recognized bartender and mixologist.
* Industry Insights: Lynnette shares her thoughts on the evolving perception of bartending and the opportunities within the hospitality industry.
* Cocktail Family Tree: Learn about influential figures like Dale DeGroff, Julie Reiner, and Sasha Petraske, who have shaped Lynnette's career and the broader cocktail community.
* Journey Through Bars: From her early days at Punch and Judy to iconic bars like the Flatiron Lounge, hear stories of Lynnette's progression and the vibrant bar culture in New York.
* Speed Rack Revolution: Dive into the inception and impact of Speed Rack, a competition Lynnette co-founded to highlight and support women in bartending globally.
* Brand Ambassador Life: Lynnette discusses her experiences working with renowned brands like Zacapa Rum and St. Germain, and her views on the role of a brand ambassador.
* New Ventures: Explore Lynnette's current projects, including her work with Aplós, a functional spirits company, and her collaboration with Jennifer Lopez on the ready-to-drink cocktail line, Delola.
* Masterclass Experience: Get behind-the-scenes insights into Lynnette's collaboration with Ryan Chetiyawardana on their masterclass, emphasizing classic cocktail techniques and innovative approaches.

Keywords: Cocktail Academy Podcast, Lynnette Marrero, mixology, bartending, Speed Rack, women in bartending, cocktail competitions, brand ambassador, Jennifer Lopez, Delola, Aplós, modern mixology, cocktail recipes, hospitality industry, New York bars, Flatiron Lounge, Dale DeGroff, Julie Reiner, Sasha Petraske, functional spirits, masterclass, Ryan Chetiyawardana.

Connect with Us: Follow Lynnette Marrero on Instagram or her website to stay updated on her latest projects. Follow us on Instagram, TikTok, or Facebook.

Enjoyed this episode? Don't forget to rate, review, and subscribe to The Cocktail Academy Podcast on Apple Podcasts. Share your favorite moments from this episode on social media using #CocktailAcademyPodcast! Hosted on Acast. See acast.com/privacy for more information.
To better understand the mechanisms that drive antiphospholipid syndrome (APS), Dr. Yu Zuo and his team evaluated circulating calprotectin (cCLP) in a cohort of primary APS and aPL-positive patients, looking for clinical associations and a possible mechanistic role. Dr. Zuo sits down with us this week to discuss whether calprotectin can serve as a functional biomarker for thrombocytopenia in APS, and what the future holds for this study's conclusions.
In this episode of the Power of Peacefulness Podcast, host Sharon McLaughlin is joined by Dr. Richard Gajdowski to discuss the topic of divorce, with a focus on the challenges faced by physicians. Dr. Gajdowski, an emergency room physician with a law degree, practices in Pittsburgh and New York, specializing in family law, wills, estates, and trusts. His unique background offers a comprehensive view of divorce for medical professionals.

Dr. Gajdowski highlights that divorce rates among physicians are lower than the national average, but points out that female physicians face a higher risk, largely due to work-related factors. He emphasizes the importance of seeking competent legal advice early in the process and understanding the specific divorce laws in one's state, as they can vary significantly.

Key considerations in a divorce include the marital home and its financial implications. Dr. Gajdowski advises clients to live within their means and prepare for lifestyle adjustments. He suggests using mediation and arbitration to manage legal costs effectively, and stresses the importance of clear communication with legal counsel.

Dividing marital assets fairly is crucial, and Dr. Gajdowski explains the differences between equitable distribution and community property states. Retirement savings and their tax implications are also important, and he recommends consulting a tax advisor for optimal asset division. For physicians with practices, obtaining a professional appraisal is essential.

Infidelity can affect divorce proceedings, influencing alimony and asset division depending on the state. Dr. Gajdowski advises considering the financial and logistical challenges of maintaining two households during separation. He explains alimony pendente lite (APL) and its rehabilitative purpose.

Dr. Gajdowski provides his contact information for those seeking legal assistance, noting his practice spans Pennsylvania, New York, Ohio, California, and soon West Virginia. Listeners can reach him via gajdowskilaw.com for personalized guidance.

This episode offers valuable insights and practical advice for anyone facing divorce, especially medical professionals. Dr. Gajdowski's dual expertise in law and medicine makes this a must-listen for physicians navigating marital challenges.

About Dr. Gajdowski: Dr. Gajdowski is an accomplished physician and health system administrator with more than 38 years of experience as an emergency physician and 20 years as a health insurance executive prior to practicing law.

Website and Social Media Links:
https://www.gajdowskilaw.com/
https://www.linkedin.com/in/richard-gajdowski-md-jd-mba-mph-frcpc-facep-fclm-745b7b8/
#DivorceLaw #PhysicianLife

The Power of Peacefulness and Stress Relief Podcast was created by Sharon McLaughlin MD FACS to help normalize mental health. If you need help creating peace in your life, be sure to download our peacefulness workbook: https://sharonmclaughlinmd.com/workbook

I would love to hear your thoughts.
Instagram: https://www.instagram.com/sharonmclaughlinmd/
TikTok: https://www.tiktok.com/@sharonmclaughlinmd
LinkedIn: https://www.linkedin.com/in/sharonmclaughlinmd/
Facebook: https://www.facebook.com/sharon.t.mclaughlin/
Email: sharon@sharonmclaughlinmd.com
My very special guest for this episode is none other than my co-bitch Gen George! Yes, my co-founder of Like Minded Bitches Drinking Wine. In case you missed it, it's a women's business group that we started 10 years ago as a passion project, and it has now grown to over 180,000 members. The group is on Facebook and the Boa app if you want to join us, but also, very excitingly, we're kicking off in-person events again, starting with Sydney, Melbourne, and Brisbane.

Okay, now a bit more about Gen, because she is one hell of a serial entrepreneur. At the age of X, Gen started OneShift, an online talent marketplace that instantly connects local candidates with local businesses. Gen grew OneShift to over 40k employers and 700k potential candidates, and achieved a valuation of $20m in its first year alone. Five years ago, the business was successfully acquired. The process of growing OneShift allowed her to see a clear market gap, which led her to launch Tamme, a marketing AI and analytics platform for two-sided marketplaces.

In 2019, Gen established Genry Capital, which is focused on companies in manufacturing, research and development, logistics, and the consumer brand space. In 2020, she established Australian Private Label, which develops and manufactures products for brands, musicians, and influencers locally in Australia. APL has already grown into a seven-figure business and is growing at 35% year on year. And yes, there's more: in 2023 she launched Daily Shake, a premium supplements brand known for its fun, unique flavours. Launched only a year ago, the brand is already rolling out into Coles and Mr Vitamins in Australia, and later this year is launching with a Middle Eastern retailer that has 6,000 points of sale. She's just incredible!! Hosted on Acast. See acast.com/privacy for more information.
In today's podcast we cover everything you need to know this week as a stock market investor, and look at what to expect from the financial markets in the week ahead. 15% DISCOUNT (limited to 4 spots) at Boring Capital with the code VERANOBC. Apply it at: https://boringcapital.net/contrata Join the FREE WhatsApp channel: https://whatsapp.com/channel/0029VaTrH1L72WTwHEGtyr0m Follow me on Instagram: https://instagram.com/arnau_invertirbolsa Everything we do at Boring Capital: https://boringcapital.net/ Check our past returns at Boring Capital: https://boringcapital.net/informes-rentabilidad Follow me on Twitter: https://twitter.com/ajnogues Subscribe to our newsletter: https://mailchi.mp/1a1f327fc3d5/ideas-de-swing
In this episode, Bernie and Anthony review the full history of APL treatment, in preparation for the upcoming plenary presentation of the APOLLO study THIS WEEKEND at EHA 2024! How did we get to our current standard of care in APL? How and why did the PETHEMA, GIMEMA, UK MRC, and other cooperative group regimens evolve over time into what they are today? And importantly, is the APOLLO control arm (no arsenic!) okay? Tune in to find out!
CLL #2325 (feat. Black Eyed Peas) – (Director's Cut) 08/26/2004 – Thursday Night Show. Source – Official Board Captured KROQ CD (2004) with Tucker Stream Recording Patches. This episode is 100% complete, with a major audio upgrade. Black Eyed Peas make their only appearance on CLL: Will.I.Am, Taboo, APL.DE.AP, and Fergie are all in studio, and they stay the entire episode. The Love Between The Two Hosts – CLL on YouTube, with video for select episodes. https://adamanddrdrewshow.com/1743-loveline-nostalgia-with-superfan-giovanni/ Paid Link – As an Amazon Associate I earn from qualifying purchases. Music provided by Rich Banks. Check out his website and SoundCloud to hear more of his awesome work, and perhaps commission him for your next project. Venmo
CLL #2325 (feat. Black Eyed Peas) 08/26/2004 – Thursday Night Show. Source – Official Board Captured KROQ CD (2004). This episode is 100% complete, with a major audio upgrade. Black Eyed Peas make their only appearance on CLL: Will.I.Am, Taboo, APL.DE.AP, and Fergie are all in studio, and they stay the entire episode. The Love Between The Two Hosts – CLL on YouTube, with video for select episodes. https://adamanddrdrewshow.com/1743-loveline-nostalgia-with-superfan-giovanni/ Paid Link – As an Amazon Associate I earn from qualifying purchases. Music provided by Rich Banks. Check out his website and SoundCloud to hear more of his awesome work, and perhaps commission him for your next project. Venmo
The Jim Crow South tried to destroy MLK's reputation by arresting him. Instead, they sent his popularity soaring to new heights. Could Donald Trump benefit the same way? Rich Baris joins Charlie to talk about Donald Trump's rising political strength in the face of non-stop lawfare, and weighs in on the potential Glenn Youngkin presidential run that many Never Trumpers are now hoping for. Plus, APL joins with an update on the House's investigation into Hunter Biden in the wake of David Weiss's appointment as special counsel. Support the show: http://www.charliekirk.com/support See omnystudio.com/listener for privacy information.