Podcasts about Copi

  • 259 podcasts
  • 305 episodes
  • 37m average duration
  • 1 episode every other week
  • Latest: May 23, 2025

POPULARITY (2017–2024)


Best podcasts about Copi

Latest podcast episodes about Copi

On est tous debout... toute la journée au Saguenay-Lac-Saint-Jean
POUF, ALEXE, BÉGONIA, COME'ON MAN & C'EST BON DU PAIN !

May 23, 2025 · 58:55


This morning, Friday, May 23, with Vincent, Jean-Michel, and Megan! We talk with singer Alexe about the release of her new EP, Copié collé! The best moments of the week in Jean-Michel's L'à-côté. Do you know which artists are about to release albums? That's what Megan has prepared for us.

Gyno Girl Presents: Sex, Drugs & Hormones
Dr. Janeane Anderson: What Black Women's Experiences Reveal About Our Healthcare System

May 2, 2025 · 52:29 · Transcription available


What if the biggest reason women stop life-saving treatment isn't the medication itself, but how clinicians talk to them about it?

In this eye-opening episode, I talk with Dr. Janeane Anderson, a powerhouse researcher and faculty member at the International Society for the Study of Women's Sexual Health, about the hidden reasons so many women stop taking critical medications like tamoxifen. It's not just about the side effects; it's about the silence surrounding them.

We dig into her research on how poor communication, racial bias, trauma, and a lack of sexual health conversations lead to lower adherence rates, especially for Black women. We also explore the idea of epistemic injustice: how patients are often dismissed, even when they know something is wrong. Janeane shares how this harm shows up in the room and what clinicians can do to build trust and improve care.

From religious shame to relationship dynamics, sexual trauma, and systemic inequality, this conversation doesn't shy away from the messy, painful, and very real barriers women face in their health journeys. But we also talk about hope: what it looks like to listen better, ask different questions, and create safer spaces for patients to advocate for themselves.

If you're a patient who's ever felt unheard, or a clinician who wants to do better, this one's for you.

Highlights:
  • Why Black women are disproportionately affected by advanced-stage breast cancer.
  • The link between sexual dysfunction and stopping cancer treatment.
  • How religion, shame, and duty shape sexual health after diagnosis.
  • What epistemic injustice means and how it plays out in exam rooms.
  • Simple but powerful questions doctors can ask to avoid retraumatizing patients.

If this episode resonated with you, please hit subscribe, leave a review on Apple Podcasts, and share it with someone who needs to hear it. Let's change how we talk about women's health, together.

Dr. Janeane N. Anderson Bio:
Janeane N. Anderson is an Assistant Professor in the Department of Community and Population Health in the College of Nursing at the University of Tennessee Health Science Center (UTHSC) in Memphis, TN. Dr. Anderson completed postdoctoral research fellowships at Emory University and UTHSC. She earned a Ph.D. in Communication and a Master of Public Health degree from the University of Southern California.

Dr. Anderson's research targets the relationship between patient-clinician communication practices and clinical and quality-of-life outcomes among Black adults with chronic health conditions, specifically breast cancer, HIV/AIDS, and vulvovaginal and pelvic pain.

Past extramural funding from the National Cancer Institute supported studies that explored patient-clinician communication, treatment adherence, and sexual health challenges among women with early-stage, HR+ breast cancer. Funding from the Washington DC Center for AIDS Research supported development of a shared decision-making tool to improve uptake of pre-exposure prophylaxis (PrEP) among Black sexual minority men; Tennessee Department of Health funding supported development and implementation of a training for healthcare professional students to improve communication practices for PrEP education and counseling.

Currently, she is the Co-PI of a $1.58 million industry-sponsored grant to investigate multilevel barriers to healthcare access and utilization among Black women with de novo metastatic breast cancer and those at increased risk for advanced breast disease in the U.S. Mid-South region.

Dr. Anderson's professional activities also include developing faculty resources and university-level programming to address diversity, equity, and inclusion goals and objectives. She is frequently invited to give lectures on systems of oppression, patient-centered communication practices, and sensitive and socially...

Nota Bene
NOTA BENE - Les Romains ont vraiment tout copié sur les Grecs ?

Apr 30, 2025 · 23:52


My dear comrades, a very good day to you! If I tell you about the twelve labors of Hercules in Greece, some of you will immediately jump up: "Careful! In Rome they say Hercules, but in Greece they say Herakles!" And that's completely true! But honestly, if we confuse them, it's not for nothing: the Romans sometimes drew heavily on the gods of the Greeks. Careful, though: don't go believing that Roman religion has nothing original about it, or that it didn't radically change certain things while borrowing ideas from other peoples! And since 1,200 years of history is still a very long time, I'm offering you not one but two episodes on Roman religion! Today we'll see how Roman mythology is foundational: not only does it recount the founding of the city, it also founds the social, public, and civic relationships among Romans… Happy listening!

➤ A few corrections:
➜ The 9 labors of Hercules… And to think we even made an episode dedicated to the 12 labors… Sorry for that majestic fail!
➜ Watching the flight of birds is not called "observing the oracles" but "observing the augurs."
➜ The Roman year did not begin in January but in March!
➜ The exact expression is "regagner ses pénates" (to return to one's penates).

Métrica Latina
Ep. 226 | bonny lovy: unir a bolivia con su música, ser infiel varias veces y si bad bunny le copió el nombre

Apr 13, 2025 · 47:09


In this podcast, Matías and Fer from Métrica interview Bolivian artist Bonny Lovy. We talk about his start on TV, his move to Puerto Rico, when Luny Tunes wanted to sign him, whether Bad Bunny copied his name, his surfer image, the stories behind "Enamorado" and "Desde Que La Vi", his music with Mike Bahía, his Four Loko flavor and his investments, the music scene in Bolivia, his hit "La Cumbia Boliviana", and his collaboration with Flavia Laos.

Sausage of Science
SoS 235: Michael Muehlenbein on his discoveries in COVID-19 and the importance of students training

Apr 2, 2025 · 47:38


Dr. Michael Muehlenbein is a prominent figure in anthropology and biology, currently serving as a professor at Baylor University. His academic journey has been marked by a deep commitment to understanding human evolution, behavior, and health through an interdisciplinary lens. Michael earned an MSPH in both Tropical Medicine and Biostatistics from Tulane University, and an MPhil and PhD in Biological Anthropology from Yale University. His research interests are diverse, encompassing topics such as the evolutionary basis of disease susceptibility, reproductive strategies, and the interplay between environmental factors and human physiology. At Baylor, he has contributed significantly to both teaching and research, mentoring students while also publishing extensively in peer-reviewed journals. His work often integrates insights from evolutionary theory with practical applications in public health and medicine, making him a key contributor to discussions on how our evolutionary past shapes contemporary health challenges. Michael is also the Co-PI on the NSF-funded project, "Shared markers of identity on inflammation and stress."

------------------------------
Find the papers discussed in this episode:
Muehlenbein MP, Gassen J, Nowak TJ, Henderson AD, Weaver SP, Baker EJ. (2023). Waco COVID Survey: A Community-Based SARS-CoV-2 Serological Surveillance Study in Central Texas. J Community Health, 48(1):104-112. doi: 10.1007/s10900-022-01143-y.
Muehlenbein M, Gassen J, Nowak T, Henderson A, Morris B, Weaver S, Baker E. (2023). Age-Dependent Relationships Between Disease Risk and Testosterone Levels: Relevance to COVID-19 Disease. Am J Mens Health. doi: 10.1177/15579883221130195.
------------------------------
Contact Dr. Michael Muehlenbein: Michael_Muehlenbein@baylor.edu
------------------------------
Contact the Sausage of Science Podcast and Human Biology Association:
Facebook: facebook.com/groups/humanbiologyassociation/, Website: humbio.org, Twitter: @HumBioAssoc
Chris Lynn, Co-Host, Website: cdlynn.people.ua.edu, E-mail: cdlynn@ua.edu, Twitter: @Chris_Ly
Courtney Manthey, Guest Co-Host, HBA Junior Fellow, Website: holylaetoli.com/, E-mail: cpierce4@uccs.edu, Twitter: @HolyLaetoli
Anahi Ruderman, SoS Co-Producer, HBA Junior Fellow, E-mail: aniruderman@gmail.com, Twitter: @ani_ruderman

El Gordo y La Flaca
¿Shakira copió o se inspiró en las giras de Beyonce y Taylor Swift?

Mar 18, 2025 · 24:18


While Shakira keeps her fans dancing at the concerts of her "Las mujeres ya no lloran" tour, some can't help noticing certain similarities between her shows and the hugely successful tours of Beyoncé and Taylor Swift. We break it down here. Also on El Gordo y La Flaca: the trial of music producer Angel del Villar, Chiquis Rivera's ex-boyfriend, who faces charges tied to drug trafficking, has begun in Los Angeles. And Ana de Armas: is she in a new romance with Tom Cruise?

The Best of Weekend Breakfast
Weather predicting system piloted in Isipingo and Umgababa.

Feb 23, 2025 · 8:30


Professor Saloshni Naidoo, Co-PI of the project and head of public health medicine at UKZN, discusses their Early Warning System for Extreme Weather Events project, a collaboration involving UKZN, the University of the West of Scotland (UWS), the Royal College of Surgeons in Ireland (RCSI) Faculty of Nursing and Midwifery, and the University of Portsmouth (UoP).

See omnystudio.com/listener for privacy information.

Beyond The Tracks Podcast
Episode 103: Jeff Copi Speaks

Jan 24, 2025 · 32:34


Tune in to this episode as Jeff Cope sits down with DJ Big Mike and Lloyd "The Angry Artist" and discusses his Trust & Faith in GOD, as well as opening up about being vulnerable.

Sponsored by: Nina's Candles, Up Start Comics, Husky Life Clothing, 6 Blessings Cafe & Catering

Auto-Radio
L'ÉMISSION - Que faire si ma plaque d'immatriculation est copiée ? du 17 novembre 2024

Nov 17, 2024 · 4:27


It's what's known as a "doublette": scammers copy your car's license plate, and you receive their tickets in their place. The phenomenon is booming: up 70% in one year and up 49% in five years. An estimated 400,000 fake plates are in circulation in France each year...

Terry Mize Podcast
Episode 367: REPORT: We're in EATON, OH at the 2024 "RENEWED" COPI Missions Conference

Nov 1, 2024 · 28:31


"RENEWED" at COPI Missions ConferenceWebsite: https://terrymize.comListen to the Terry Mize Podcast- https://cutt.ly/TfnK8I6Follow Terry Mize Ministries on FACEBOOK: https://cutt.ly/terrymizeministries-FACEBOOKYOUTUBE: https://youtube.com/user/terrymizeministriesListen to the Terry Mize Podcast- https://cutt.ly/TfnK8I6Orphan Giving Site: https://orphan1.comGIVE HERE! https://cutt.ly/ttW2I5ZABOUT THE MINISTRY OF TERRY L. MIZE In short, World Missions and International Relief.For over 50 years,  Dr. Terry L. Mize has had a heart to "give living bread, to dying men, around the world". His mission, IS missions, with a mindset that we must GO, in order to do the work of Biblical missions.His ministry seeks to show every person the living authority they can have in a relationship with Jesus Christ while supplying what he calls the "5 Basic Needs of Man":#1 A roof over your head #2 Clothes on your back#3 Food on your table#4 A healthy body#5 Able to take care of your familyThrough numerous leadership teaching and training events, as well as, connecting donors, resources and ministry partners with trusted local leadership in numerous countries, he has been able to bring practical help, hope and hands-on relief to those who need it most.MORE ABOUT TERRY & RENEE' MIZEWhen Terry and Reneé, aren't traveling overseas, they are coordinating relief efforts for orphans through JMICF, speaking in churches, bible schools, and conventions in the United States. Over the years of their combined ministry, they've witnessed an incalculable number of God-given miracles, and hundreds of thousands, if not millions, come to know a personal relationship with Jesus Christ.

El Garaje Hermético de Máximo Sant
Historias que las marcas de coches esconden

Sep 29, 2024 · 15:15


There are stories the brands would rather you didn't know, but!... that's what we're here for… There's a bit of everything: "failed" engines, models that are blatant copies, burned prototypes, collaboration with the Nazis… even suicides! I promise you won't be bored.

1. PRV engine: two cylinders missing! Let's talk about an engine developed for years as a V8 that, at the last moment, they decided to strip of two cylinders… and they botched it…

2. Honda NSX: Porsche "inspiration"… and good humor. When Honda was developing the new generation of the Honda NSX, it decided in 2014 to buy a Porsche 911-991 GT3… It so happened that there was a recall of these cars… and the German engineers found that this unit had sustained high speeds, up to 328 km/h, for long periods… in a car that Porsche recommended not taking beyond 310 km/h. They investigated and found that this particular car had been bought, through third parties, by Honda. When they returned the car, they left a note on the hood that read: "Good luck, Honda. From Porsche."

3. Beetle: the copy of a copy. Ferdinand Porsche was a copycat. He copied the designs of journalist and designer Josef Ganz and his Standard Superior Type 1. When Hitler asked Porsche to design the famous "people's car", or Volkswagen, Ferdinand told him there were some very interesting designs by this Josef Ganz… who, to his misfortune, was Jewish. Ganz's German citizenship was revoked, so he lost his patent, and Porsche could use the design as he pleased. The curious thing is that Ganz had, let's be "polite" again, drawn inspiration from the Tatra T97, designed by the brilliant engineers Ledwinka and Jaray. After the war, Tatra sued VW for plagiarism… and won the case.

4. BMW, backed by the Nazis. In Nazi Germany there was no choosing: you were either with them or against them. That's why one can't be too harsh when speaking of the German brands' collaboration with the Nazi regime… they had no other option. The thing is, you can "embrace" the regime reluctantly, because you have no choice… or embrace it out of conviction.

5. A true Mercedes! With a Renault engine. No, I'm not going to say Mercedes hides the fact that it uses Renault engines in quite a few of its models… they don't hide it, but they'd rather not mention it and let it go unnoticed… that much is true…

6. The 124 that could have been… and wasn't. In the early 1960s, Fiat needed a successor to its veteran Fiat 1100 Type 103, and Agnelli entrusted the project to the brilliant Dante Giacosa… Giacosa designed a very modern prototype, with front-wheel drive and a transverse engine, a gearbox with separate lubrication, and rack-and-pinion steering. Meanwhile, Oscar Montabone, who had come from Simca, which Fiat had sold, proposed a much more conservative model… Agnelli remembered that the car that could have been Fiat's first front-wheel-drive model had suffered an accident, fire included, that nearly killed its occupants… And Agnelli, who as I said at the start had a very good memory, didn't like that front-wheel-drive business at all…

7. When pressure leads to suicide. Now for the uncomfortable truths, and let's get serious… in 2006, three Renault workers took their own lives… Coincidence? Everything suggests not. One of them left a farewell letter at home explaining the hardships he suffered at his workplace.

8. The Renault… designed by Porsche. At the end of the Second World War, poor Ferdinand was arrested by the French gendarmerie and jailed. But the French, very cleverly, told him: "If you help us with the design of our new Renault, we'll reduce your sentence." And Ferdinand collaborated on the design of the Renault 4CV.

9. The Audi designed by Porsche. Audi's supercar was designed by Ferdinand Porsche… as we said, a very prolific man. But unfortunately it never got past the prototype stage. The impressive Schnellsportwagen Auto Union Type 52 of 1930 was a "street" version of the mid-engined 16-cylinder single-seaters.

10. Watch out if you're a Ferrari fan! Few brands can "boast", in quotes, of having sued some of their own admirers… Ferrari, a very special brand, can. Many of them. But my favorite is when a fan of the brand, 15-year-old Summy Wasem, created a fan page for the brand in 2008 that reached 10 million followers. Ferrari sued him, then they reached an agreement for the boy to stay on as administrator, and then they "swiped" the page from him…

Conclusion: car brands, like all brands and all people, like to show off the good and try as far as possible to hide what doesn't look so good… But that's what journalists and this channel are for… and there are more hidden truths…

Canard PC
[SCROLL NEWS #124] Nintendo attaque Palworld | Valve et les CPU ARM | les jeux copiés avant leur sortie

Sep 29, 2024 · 84:12


Subscribe and support this channel: https://fr.ulule.com/canardpc/
All our magazines and subscription offers: https://boutique.canardpc.com/
Our subscription-based web edition: https://www.canardpc.com/
Our newsletter on new technologies: lepavenumerique.substack.com/about
► Twitch: https://www.twitch.tv/canardpc
► Bluesky: https://bsky.app/profile/canardpc.com
► X: https://twitter.com/Canardpcredac
► Discord: https://discord.gg/nJJFe9r
► Facebook: https://www.facebook.com/CanardPCmagazine
► Instagram: https://www.instagram.com/canardpc/
► Tiktok: @canardpcredac
All rights reserved Presse Non-Stop / Canard PC. No YouTuber was mistreated during filming.

Mother Soccer - podcast futbol
La NFL le copió a Liga MX

Sep 6, 2024 · 41:06


They're back... the Teorías Mamalonas! The NFL's "robbery" of Liga MX, El Tri's new Carlos Vela, and the Mexican who got Oasis back together, on this #MotherSoccer Friday. A new episode with Pollo Ortiz, Rodolfo Landeros, Santiago Padilla, and José Ramón Llaca. An exclusive futvox podcast. Learn more about your ad choices. Visit megaphone.fm/adchoices

Think Aloud with Dr. G.
E47 - Bill Therrien

Aug 11, 2024 · 50:00


Bill Therrien is the Thomas G. Jewell Professor of Education at the University of Virginia. He also is the coordinator of the Research in Practice group for the STAR (Supporting Transformative Autism Research) project and is Co-PI for the Special Education Research Accelerator (SERA). He is the co-editor of Exceptional Children, the flagship research journal of the Council for Exceptional Children (CEC). Therrien has extensive experience designing and evaluating academic programming for students with autism and learning disabilities, particularly in the areas of science and reading. In his work, Therrien employs a variety of methods, including single-subject, experimental, and quasi-experimental group research designs. Therrien has also conducted numerous meta-analyses in the areas of reading, science, and special education. He has successfully directed or co-directed over 15 federal and state grants totaling more than $21 million in funding.

Websites and clickable links:
Bill's faculty page
DLD's website
TECBD Conference page
Alethia Society page
Flint Michigan Lead Crisis: Settlement

Other Think Aloud guests/episodes we mentioned:
David Bateman - E10 and E13
Peggy Weiss - E30
Erica Lembke - E09

To read (check out your local bookstore or favorite online provider):
Slow Productivity: The Lost Art of Accomplishment Without Burnout by Cal Newport
Books on Stoicism

Diverse Thinking Different Learning
Ep. 197: Five Best Practices for Math Instruction - Dr. Sarah Powell

Aug 6, 2024 · 41:02


Welcome back, listeners, to Diverse Thinking Different Learning! In this episode, we're having a conversation with Dr. Sarah Powell, a distinguished professor at the University of Texas at Austin and Associate Director of the Meadows Center for Preventing Educational Risk. Dr. Powell's expertise in math education sheds light on effective strategies to support students who face challenges with math!

The discussion explores the crucial role early math education plays in shaping a student's future academic success, emphasizing that early struggles can lead to long-term difficulties if not addressed properly. Dr. Powell elaborates on how cumulative math skills impact later learning, stressing the importance of early intervention and continuous support throughout a student's educational journey. Dr. Powell also highlights several best practices for math instruction, including the use of multiple representations to deepen understanding and systematic, explicit teaching methods to ensure mastery of concepts. She also addresses the role of math vocabulary and its significance in helping students grasp mathematical ideas more effectively.

Tune in to gain valuable insights into how targeted interventions and effective teaching strategies can make a significant difference in students' math achievements. If you are an educator yourself seeking to enhance your math instruction or perhaps a parent looking to support your child's learning, this episode of the show is sure to offer practical advice and actionable strategies to help all students excel in math!

Show Notes:
[3:14] - Early math performance predicts future success, making early intervention important for long-term achievement.
[6:06] - Dr. Powell points out how schools often prioritize reading over math, but early math interventions are just as important.
[9:01] - Dr. Powell argues that teaching math vocabulary is essential for understanding concepts and participating effectively in the classroom.
[11:59] - Difficulties in math may be linked to language issues, including reading, writing, and speaking.
[13:04] - Using multiple representations, like manipulatives and drawings, can help students better understand math concepts.
[15:24] - Dr. Powell feels that students should understand math deeply by using various representations, not just by memorizing symbols.
[18:55] - Identifying common mistakes helps target instruction better than addressing isolated mistakes.
[20:02] - Dr. Powell argues that effective math learning involves modeling, repeated practice, and building fluency through both speed and accuracy.
[23:53] - Incorporating short fluency practices into the school day enhances math skills and helps reduce cognitive overload.
[25:34] - Older students should develop fluency to avoid using basic strategies like tick marks, which can lead to mistakes.
[26:55] - Effective strategies for solving word problems include the U.P.S. check method and recognizing common problem types.
[31:16] - Dr. Powell explains how parents can help with word problems by discussing the problem and identifying consistent frameworks.
[32:43] - Parents can also support math learning through discussions, games, and incorporating math into daily activities.
[35:25] - Engaging in practical math activities, like measuring ingredients, makes math fun and relevant!
[38:57] - For additional support, resources include emailing Dr. Powell as well as videos on representations, a free math course, and teacher-friendly materials!

About Our Guest:
Dr. Sarah R. Powell is a Professor in the College of Education at The University of Texas at Austin and Associate Director of the Meadows Center for Preventing Educational Risk. Her research, teaching, and service focus on mathematics, particularly for students who experience mathematics differently. Dr. Powell is currently Principal Investigator (PI) of an Institute of Education Sciences (IES) efficacy grant (RAAMPS) related to word-problem solving at Grade 4. Dr. Powell is also PI of SPIRAL, an IES grant which works collaboratively with Grade 4 and 5 teachers who provide mathematics instruction to students with mathematics difficulty. Dr. Powell is Co-PI of STAIR 2.0 (funded by IES), in which the team works with middle school special education math teachers, and SCALE (funded by the US Department of Education), in which the team is replicating a fraction intervention in Grades 4-8. Dr. Powell collaborates on Math Words, an IES development grant about mathematics vocabulary. She also assists with a word-problem project funded as a Small Business Innovation Research (SBIR) grant to Querium. To help create the next generation of researchers focused on mathematics, Dr. Powell is PI of a doctoral leadership grant (LIME) funded by the Office of Special Education Programs. Dr. Powell was awarded the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2019. Dr. Powell understands all of these efforts are a team effort, and she thanks her project leads, graduate students, research assistants, and research collaborators as well as the teachers and students who participate in these projects.

Links and Related Resources:
ChildNEXUS - "Important Components of Effective Math Intervention"
Diverse Thinking Diverse Learning - "Ep. 60: A Multisensory Intervention for Kids Who Struggle with Math with Adrianne Meldrum"
Diverse Thinking Diverse Learning - "Ep. 122: Accommodations for Students Who Struggle with Math with Adrianne Meldrum"
"Intensive Intervention in Mathematics Course Content"
"Specialized Math Intervention to Reach All Learners"
"Pirate Math Equation Quest"
Texas SPED Support - "Instructional Routines for Mathematics Intervention"
YouTube - Project STAIR

Connect with Dr. Sarah Powell:
The University of Texas at Austin College of Education - Dr. Sarah Powell
Email: srpowell@utexas.edu
Phone: 1-512-475-6556

Connect with Us:
Get on our Email List
Book a Consultation
Get Support and Connect with a ChildNEXUS Provider
Register for Our Self-Paced Mini Courses for Better Understanding and Supporting Your Child with ADHD, Dyslexia & Anxiety

The Diverse Thinking Different Learning podcast is intended for informational purposes only and is not a substitute for medical or legal advice, diagnosis, or treatment. Additionally, the views and opinions expressed by the host and guests are not considered treatment and do not necessarily reflect those of ChildNEXUS, Inc or the host, Dr. Karen Wilson.

The 21st Show
Has Illinois’ copi rebrand helped curb invasive carp?

Jul 3, 2024


Les matins
Copi, sur la scène et dans les livres

Jun 27, 2024 · 3:18


Duration: 00:03:18 - Le Regard culturel - by Lucile Commeaux - The publication of Copi's first play, previously unavailable in French, is an occasion for a brief reflection on the renewed success of this author, born in Argentina, exiled in France, and a figure of gay Paris in the seventies.

En sol majeur
De Buenos Aires à Shakespeare avec Marilù Marini

May 25, 2024 · 48:30


She's not a Marilyn, she's a Marilù. Mischievous and luminous, clownish and tragic, childlike and ageless: those are a few of the faces of actress Marilù Marini, filmed by Sandrine Dumas in a beautiful documentary entitled Marilù Marini, rencontre avec une femme remarquable. All those faces are filmed as time passes, along with all the bodies of a Marini who narrowly escaped the Argentine dictatorship of 1976 by following her instincts as a dancer, then as an actress, all the way to Paris, where she would perform the scandalous Copi, but also Fassbinder, Shakespeare, and Beckett. An immense Argentine performer whose body is tattooed with French culture and who only feels free and alive on a theater stage, in the wings, or behind her mirror, sometimes making herself up monstrously. I'm telling you, she's a Marilù, not a Marilyn.

Lead To Greatness Podcast
174. Copi Fish Co: Reeling In Success with Nicholas Melosi | Cedric Francis

May 6, 2024 · 17:01


NICHOLAS MELOSI is a visionary leader with a proven track record in the business world, now leading his groundbreaking venture, Copi Fish Co. This startup is poised to revolutionize the market by providing consumers with high-quality protein directly, bypassing intermediaries to ensure both quality and affordability. Their mission is simple yet transformative: to make premium protein accessible to everyone.

CONNECT WITH Nicholas Melosi
Website: https://copifishco.com/
X (Twitter): https://twitter.com/nicholasmelosi
Instagram: https://www.instagram.com/copifishco/
LinkedIn: linkedin.com/in/nicholas-melosi/

CONNECT WITH Cedric Francis
Website: https://www.lead2greatness.com/
Facebook: https://www.facebook.com/cedricbfrancis
X (Twitter): https://twitter.com/cedricbfrancis
Instagram: https://www.instagram.com/leadtogreatness/
LinkedIn: https://www.linkedin.com/in/cedric-b-francis-a0544037/

DONATE TODAY to assist poverty-stricken communities!
Website: https://www.mtsoutreach.org

La Story
Van Gogh : peintre aimé, copié, jamais égalé

Apr 17, 2024 · 25:27


A genius of painting and a tortured soul, Van Gogh is unique in art. Yet one Chinese painter has produced tens of thousands of copies of his canvases in Dafen, the world capital of oil painting. In "La Story", the news podcast of "Les Echos", Pierrick Fay and his guests revisit this fascinating painter.

Find the essentials of economic news with our Access subscription offer: abonnement.lesechos.fr/lastory

La Story is a podcast from "Les Echos" presented by Pierrick Fay. This episode was recorded in April 2024. Editor-in-chief: Clémence Lemaistre. Guests: Eric Mercier (author of the crime novel "Le Secret de Van Gogh", published by La Martinière) and Frédéric Schaeffer ("Les Echos" correspondent in China). Production: Willy Ganne. Production and editing manager: Michèle Warnet. Music: Théo Boulenger. Graphic identity: Upian. Photo: GREG BAKER/AFP. Sounds: Euronews, Nobodyplaylists, GLOBIK, Les Inconnus, Pluto "Cherished Memories" (2023), "Van Gogh" (1991). Hosted by Acast. Visit acast.com/privacy for more information.

Sausage of Science
SoS 212: Melanie Martin talks mother-infant COVID-19 transmission and social jetlag

Mar 29, 2024 · 37:49


Chris and Eric catch up with Dr. Melanie Martin, an Associate Professor in the University of Washington Department of Anthropology, whose research examines biocultural influences on health, growth, and development across the life course. In addition to being the Co-PI of the Biodemography Lab at the University of Washington Center for Studies in Demography and Ecology, she conducts field research with two international projects on Indigenous community health and well-being: the Chaco Area Reproductive Ecology Program (Co-Director) and the Tsimane Health and Life History Project (Affiliate). In this episode, Dr. Martin breaks down two of her papers, one looking at COVID-19 transmission in mothers and infants and another examining sleep health in undergraduates before and during the COVID-19 pandemic.

------------------------------
Find the papers discussed in this episode:
Martin MA, Keith M, Pace RM, Williams JE, Ley SH, Barbosa-Leiker C, Caffé B, Smith CB, Kunkle A, Lackey KA, Navarrete AD, Pace CDW, Gogel AC, Eisenberg DTA, Fehrenkamp BD, McGuire MA, McGuire MK, Meehan CL and Brindle E (2022) SARS-CoV-2 specific antibody trajectories in mothers and infants over two months following maternal infection. Front. Immunol. 13:1015002. https://doi.org/10.3389/fimmu.2022.1015002
Alicia Rice, Olivia Sather, Kenneth P Wright, Céline Vetter, Melanie A Martin, Horacio O de la Iglesia, COVID-19 stay-at-home restrictions increase the alignment in sleep and light exposure between school days and weekends in university students, Sleep, Volume 46, Issue 7, July 2023, zsad059, https://doi.org/10.1093/sleep/zsad059
------------------------------
Contact Melanie: martinm7@uw.edu, Website: https://www.melaniemartin-anthropologist.com/
------------------------------
Contact the Sausage of Science Podcast and Human Biology Association:
Facebook: facebook.com/groups/humanbiologyassociation/, Website: humbio.org, Twitter: @HumBioAssoc
Chris Lynn, Co-Host, Website: cdlynn.people.ua.edu/, E-mail: cdlynn@ua.edu, Twitter: @Chris_Ly
Eric Griffith, Guest Co-Host, HBA Junior Fellow, E-mail: eric.griffith@duke.edu
Cristina Gildee, HBA Junior Fellow, SoS Producer, Website: cristinagildee.org, E-mail: cgildee@uw.edu, Twitter: @CristinaGildee

Cities and Memory - remixing the sounds of the world

"Quite coincidentally, the track "CO-PI-I" was made on the go during my recent travels in late 2023. I believe it happened for a good reason since it serves the purpose of the "Sound of Adventure" project.  "Over the years I made an interesting observation. Before venturing into new territory, two things happen: first - we suddenly are opened to fantasy, meditating on possible/probable experiences and adventures; and second - activate that sleeping inner child in us, eager to play "make believe" and explore uncharted worlds.  "It is a certain kind of excitement and anticipation many of us indulge in before setting foot on a new playground.  "I felt these dynamics and characteristics in the chosen field recording, then built up on the inspiration. Curiosity is a hunger that makes us seek new skies, smell fresh air, and try flavors we've never tasted.  "I find our ability to make up a version of a place apriori intriguing. This version is made of bits and pieces of various knowledge, from previous cultural encounters to tales and the internet, etc. All these elements contribute to our collage of a place, creating a potential for the next journey.  "Often, the real deal drastically differs from the product of imagination. Still, the fantasy is exciting and rewarding.  "'CO-PI-I' is just that, a fantasy about a place never experienced; a puzzle I put together with the field recording in the very center. "Why did I choose Vietnam? I could've easily picked a country I am familiar with, though, I wanted to create something based out of pure curiosity. A projection of a place never visited, based on all the elements mentioned. Hoping one day to compare fantasy with reality." "'COPII' incidentally stands for quite a few meaningful things. First, it means children in the Romanian language. Second, an existing organization that helps kids in Vietnam is called COPI, which I accidentally found upon googling.  "Third, COPII is also a protein complex, and fourth - COP stands for Conference of Parties that holds United Nations Climate Change Conferences. "All things considered, I hope there is enough room left for creating a fantasy, aided by the sounds of Vietnam." Nature in Vietnam reimagined by Serge Bulat. Part of the Sound of Adventure project in partnership with Exodus Travels. To learn more and explore the full collection, visit https://citiesandmemory.com/adventure.

The Answer is Yes
#332 - What is a Copi Fish? Special guest Nick Melosi with Jim Riley

Mar 4, 2024 · 24:12


Nicholas Melosi is a visionary leader with a proven track record in the business world and now he would love to introduce his groundbreaking venture, Copi Fish Co. This startup is poised to revolutionize the market by providing consumers with high-quality protein directly, bypassing intermediaries to ensure both quality and affordability. Their mission is simple yet transformative: to make premium protein accessible to everyone. Nicholas brings a wealth of experience and expertise to the table, ensuring that each consumer receives a product that not only meets, but exceeds industry standards. Their commitment to transparency and sustainability sets them apart, guaranteeing consumers a direct link to the source of their protein. From River to Table, they prioritize quality assurance, ethical practices, and environmental responsibility. www.copifishco.com

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-answer-is-yes--2903418/support.

Healing Pet Loss Podcast
‘Keep your wild soul alive' – Comfort, Healing, and Empowerment from the Sacred Spirit Journey for angel dog Max

Feb 16, 2024 · 9:23


In this episode of the Healing Pet Loss Podcast, you will meet Max the angel dog and two of his guides. It is a powerful healing journey that will not only bring comfort and peace to a grieving heart, but also inspire and empower us on our path forward, reminding us that we are not alone, but always Divinely guided and supported. As Max says in the journey: "Let yourself be at peace knowing I am near."

When you have listened to the Sacred Spirit Journey here, visit the Healing Pet Loss blog, where you can read the journey and see a photo of beautiful Max: https://healingpetloss.com/keep-your-wild-soul-alive-comfort-healing-and-empowerment-from-the-sacred-spirit-journey-for-angel-dog-max/

If you would like Marianne Soucy to connect with your beloved pet that has passed via her Sacred Spirit Journeys, you can learn more on her website Healing Pet Loss: https://healingpetloss.com/receive-a-message-from-your-pet/

WeatherBrains
WeatherBrains 941: Two Doctors And A Mic

Jan 30, 2024 · 86:51


Our episode tonight is all about the AMS Annual Meeting 2024, live from Baltimore. First up from the conference is Ryan Lagerquist, a NOAA employee and Research Scientist at CIRA (Cooperative Institute for Research in the Atmosphere). He is a meteorologist by training and is heavily involved in machine learning research. Joining us next on the show is the Chair for Coastal Artificial Intelligence at Texas A&M University-Corpus Christi and a Co-PI for the National Science Foundation Artificial Intelligence Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography, or AI2ES. Dr. Phillipe Tissot, thanks for dropping by tonight. Our email officer Jen is continuing to handle the incoming messages from our listeners. Reach us here: email@weatherbrains.com.

Early reflections on AMS Annual Meeting (13:15)
AI and the future of meteorology in general (16:00)
Broad overview of AI and NWS Operations/Numerical weather prediction models (24:00)
Community modeling/EPIC/UFS (32:00)
The Astronomy Outlook with Tony Rice (No segment this week)
This Week in Tornado History With Jen (53:03)
National Weather Round-Up (01:02:45)
E-Mail Segment (55:00)
and more!

Web Sites from Episode 941:
2024 AMS Annual Meeting

Picks of the Week:
James Aydelott - Brian Brettschneider on X: Fairbanks upper air sounding
Jen Narramore - Foghorn
Rick Smith - NWS SPC on X: 9 Years of SPC Outlooks
Neil Jacobs - Foghorn
Troy Kimmel - Foghorn
Kim Klockow-McClain - Foghorn
Bill Murray - Out
James Spann - A Change in the Weather: Understanding Public Usage of Weather Apps
James Spann - Kevin Kloesel on X: Rick Smith photo/meme

The WeatherBrains crew includes your host, James Spann, plus other notable geeks like Troy Kimmel, Bill Murray, Rick Smith, James Aydelott, Jen Narramore, Dr. Neil Jacobs, and Dr. Kim Klockow-McClain. They bring together a wealth of weather knowledge and experience for another fascinating podcast about weather.

Cult
Cult di lunedì 29/01/2024

Jan 29, 2024 · 55:17


Today on Cult: Maria Sole Tognazzi on the film "Dieci minuti", based on Chiara Gamberale's novel; at the Passante Ferroviario of Garibaldi Station, the section devoted to the young artists of "La memoria non tace"; Giuseppe Isgrò's company Phoebe Zeitgeist at the PAC in Milan with a special performance of "Madame Delirio", based on Copi; the classical music column curated by Giuseppe Califano...

Les matins
Copi, le burlesque et le "jeune" théâtre

Jan 18, 2024 · 3:09


Duration: 00:03:09 - Le Regard culturel - by Lucile Commeaux - On the occasion of the show "40° sous zéro", adapted from Copi by the Munstrum Théâtre company, a brief reflection on our relationship to burlesque and to taking things at face value on stage.

Les interviews d'Inter
Louis Arene, digne successeur de Copi

Jan 11, 2024 · 6:51


Duration: 00:06:51 - Nouvelles têtes - by Mathilde Serrell - He is a young director who pressure-washes theatrical conventions and instantly scrubs brains clean! Louis Arène, a gifted former member of the Comédie-Française, is this morning's guest of Mathilde Serrell.

Au cœur de l'histoire
Le Journal de santé de Louis XIV

Jan 10, 2024 · 17:02


Discover the "Au Coeur de l'Histoire +" subscription and access hours of programs, exclusive archives, episodes in preview, and a selection of episodes on major themes. Take advantage of this offer on Apple Podcasts today!

Louis XIV died on September 1, 1715, a few days before his 77th birthday. The longest reign in the history of France had just ended. What was the secret of this exceptional longevity? Virginie Girod takes you to the bedside of the Sun King for a health checkup of the sovereign. In an era with neither vaccines nor antibiotics, infant mortality was high. In 1647, Louis XIV, aged 9 and already king, was diagnosed with smallpox, a deadly disease. In keeping with the theory of humors that governed medicine, he was bled repeatedly to bring down the fever and made to ingest mercury chloride. The treatment was as harsh as the disease! The young king endured the pain and recovered miraculously. Louis XIV was only at the beginning of his health troubles: gonorrhea, typhoid fever, attacks of gout; the king survived everything. But the worst was yet to come. At 48, Louis XIV suffered from an anal fistula! It was so painful that the king could no longer do anything and had to turn to surgery, a discipline then held in contempt. The operation was a success, and it is said that Lully composed a Te Deum in honor of the king's health. Copied across the Channel, the air composed for the king's anal fistula is said to have later inspired the English anthem: God Save The King! In 1715, the sovereign complained of a sharp pain in his leg. The limb was ravaged by gangrene and turned completely black. Despite the suffering, the king tried to keep living by the rigid etiquette he himself had established. But he was soon forced to withdraw to his chamber to await death. After a painful agony, Louis XIV finally passed away. The body of the Sun King was laid in the royal necropolis of Saint-Denis. His heart was embalmed and placed in a silver-gilt cardiotaph, then offered to the church of Saint-Paul-Saint-Louis in Paris. In the chaos of the French Revolution, the heart was sold to an artist who turned it into… paint!

Topics covered: Louis XIV, health, illness, smallpox, Versailles

"Au cœur de l'histoire" is a Europe 1 Studio podcast - Host: Virginie Girod - Production: Camille Bichler - Direction: Pierre Cazalot - Original music composition: Julien Tharaud - Editing and distribution: Nathan Laporte - Communication: Kelly Decroix - Visual: Sidonie Mangin

Tribo Forte Podcast: Saúde. Boa Forma. Estilo De Vida!
TF Extra #433 - O (Real) Segredo Dos Países Mais Magros do Mundo (Como Copiá-los)

Nov 20, 2023 · 24:14


Today we take a trip around the world through the leanest countries and see what their diets have in common. We'll look at the "secrets" of these populations that stay in shape in a world that keeps getting more obese. Enjoy :)

▶️ Recommended videos:
- 9 Gorduras ÓTIMAS e 4 PÉSSIMAS Para Fritar e Cozinhar | Óleos, Ponto de Fumaça, Oxidação, Sabor: https://www.youtube.com/watch?v=BNm8K4idgAE&t=180s
- Este Óleo Comum Faz Você Engordar e Dá Fome!: https://www.youtube.com/watch?v=0er3suy29ZE&t=169s
- OS MELHORES E PIORES ÓLEOS E GORDURAS PARA COZINHAR E CONSUMIR | Guia Completo Sobre Gorduras: https://www.youtube.com/watch?v=KAdDqQ22F_4&t=41s
- As Melhores GORDURAS p/ Emagrecimento Fácil e Saúde (Alimentação Forte): https://www.youtube.com/watch?v=RUEhW1W9kgA

* Have you ever tried a METABOLIC weight-loss method?

Le surf de l'info
Le panier RTL, souvent copié jamais égalé

Nov 9, 2023 · 2:35


Listen to Le surf de l'info from November 9, 2023, with Cyprien Cini.

Gossip in Spanish
Dame todo el candy! Kourtney se copió de la Colo!

Oct 31, 2023 · 35:27


Confluence
Ep. 90: Holly Schleicher, Co-PI on the M-HOPES Grant

Oct 19, 2023 · 34:53


Licensed clinical psychologist Holly Schleicher kicks off Confluence's newest series on graduate student mental health, but with a twist! Holly, along with Annie Belcourt and Bryan Cochran, offered a three-part educational series for UM faculty through a grant called the Mental Health Opportunities for Professional Empowerment in STEM, or M-HOPES. In this episode, hear real clips from the trainings, including a mock conversation between a professor and student, as well as a sit-down interview with Holly about what this training involves and her biggest takeaways. Learn more about the training here, then register for the asynchronous model and complete it on your own time.

El Garaje Hermético de Máximo Sant
Historia del Volkswagen Escarabajo: Cuando Porsche copió

Aug 31, 2023 · 18:55


What a disappointment! One of the most popular cars in history, the car that for the Third Reich was to be the people's car, a model that is a true benchmark, the model designed by the brilliant Ferdinand Porsche… turns out to be a copy. With this model, Ferdinand Porsche invented nothing. And Hitler had a lot to do with it. You don't believe it? It's not me saying it… VW itself acknowledges it… How much work this script cost me! I researched for many hours, in spare moments, on weekends, digging through books, magazines, websites, and documentaries, with the help of Rodrigo, who spends hours hunting for images and information… We'll tell how Ferdinand Porsche invented nothing in his famous Beetle, but instead copied a design by the Jewish journalist and designer Josef Ganz, whose life Nazi and post-Nazi Germany made impossible, and also copied from the Czech brand Tatra, from its T77 and T97, a brilliant design by the engineers Ledwinka and Jaray.

Ganz, the great forgotten one. In 1933, that is, five years before Adolf Hitler announced the project of his "people's car", his Volkswagen, the German engineer Josef Ganz had already designed and built the Standard Superior Type I. Before that he had produced other brilliant designs for affordable, modern cars. According to Ganz, the German cars of the era were "outdated and unsafe"; he even carried out a detailed accident study, concluding that, given their shape, they were veritable "boxes", and deadly ones given how dangerous they were. What is special about this car? Well, seen from the outside, one thing jumps out: its design looks very much like an "older" VW. And inside, its separate chassis, rear engine, and swing-axle suspension are of the same or a very similar type to those used by the VW Beetle of supposed Porsche design. What was Ganz's biggest problem? A very grave and serious problem in Nazi Germany: he was Jewish. Josef Ganz, who was no fool, had patented many of the solutions in his Standard, and Ferdinand Porsche even drew Hitler's attention to this "small detail", according to the Dutch historian Paul Schilperoord, a true expert on the subject who, through twists of history too long to recount, has had access to a great deal of detailed original documentation. How did Nazi Germany resolve this hitch? Simple: they stripped Josef Ganz of his citizenship, and since he was no longer German, which for the Third Reich was the same as not existing, he could not hold patents… problem solved. Does that strike you as a radical solution? I warn you that in this story there are worse ones. Josef Ganz, stripped of his German nationality and persecuted by the regime, fled to Switzerland, where he wanted to build the "Swiss people's car", but the outbreak of the Second World War buried the project. When the war ended, it was the British who restarted the VW factory, and it was an enormous success. But VW and the German government never recognized Ganz's contribution. Nordhoff, then president of the company, sent a letter offering Ganz, by then retired and alone in Australia, a job and/or a small pension. It seems the pension was never paid, and he died before he could return to Germany. But in the VW museum, in a room called "Rémy Markowitsch", reference is made to the models that "inspired" the VW, among them Ganz's Standard Superior and the Tatra T77 and T97.

And now for the second story. Adolf Hitler, in one of his fiery speeches, declares that Germany will motorize all Germans with its "People's Car", its Volkswagen, initially called the KDF-Wagen, that is, Kraft durch Freude, which translates as "Strength through Joy". For this project, Hitler picked his trusted engineer, Ferdinand Porsche, who had long been working on a car that Porsche himself defined as "the car for everyone"… on the premise, at least, they agreed. But Hitler wanted the car now… and now means now. Only two years later, the first Beetle prototype, known as the Type 60, was presented. From this prototype, the Beetle was built by… Mercedes-Benz. Yes, the Mercedes 130. How could Ferdinand Porsche be so fast? By copying Ganz? Yes, but not only Ganz… Because a few months after the presentation of the VW, the car that would end up being called the Volkswagen Beetle, among many other names, the engineers Ledwinka and Jaray sued Volkswagen and Ferdinand Porsche for plagiarizing the design of their Tatra T97. They weren't far off, because the Tatra's engine was… a 4-cylinder boxer… And as if the styling, the engine, and the similar or identical technical solutions weren't enough, in the book "Car Wars" Adolf Hitler says that "the Tatra is the kind of car I want on my roads". This lawsuit worried Ferdinand Porsche a great deal, surely more for prestige than anything else, and he contacted Adolf Hitler, who reassured him, saying he would "settle the Tatra-Volkswagen question his own way". And what way was that? Very simple: that same year, Germany invaded Czechoslovakia and converted the Tatra factory into a military armaments plant. It confiscated and destroyed some 500 Tatra T97s so the matter would be forgotten… but it wasn't. Why? Because at the end of the Second World War, Tatra resumed legal action against Volkswagen and Ferdinand Porsche. Was VW guilty or innocent? We don't know, because there was no trial: an out-of-court settlement was reached under which VW paid Tatra three million German marks in compensation. To me, that's one way of admitting guilt… but not the only one. Because in the first legal episodes pitting Tatra against Ferdinand Porsche himself, he uttered a line for the ages, admitting he had "looked over Ledwinka's shoulder from time to time". Plain as day! I hardly need to tell you that with Adolf Hitler there were no half measures: you were either with him or against him. Many German car brands, indeed the whole industry, supported Nazism… but the question is, did they have any other choice? That's why I wanted to vindicate Ferdinand Porsche, a brilliant engineer who, at only 23, designed what almost everyone considers the first hybrid in history, the Lohner-Porsche Mixte Hybrid. He produced a multitude of highly innovative designs, a story so long it deserves another video…

Car of the day. I love Tatra! And I love the Tatra T97, which I think deserves a place in history it has never been given. Really, seeing this car next to the Beetle, the resemblance is obvious…

ASCO eLearning Weekly Podcasts
Cancer Topics – ICC Program Malaysia

Aug 16, 2023 · 22:35


Providing high-quality cancer care to patients is the goal for any oncologist, yet there are many places across the globe that face multiple hurdles in achieving that goal. In this ASCO Education podcast we explore how one group is making a positive impact in the state of Surawak in Malaysia via the efforts of ASCO's International Cancer Corp Program (ICC).  Dr. Roselle de Guzman, past chair of the Asia Pacific Regional Council of ASCO, Dr Voon Pei Jaye medical oncologist and onsite director of the ICC Program at Sarawak and Dr. Evangelia D. Razis medical oncologist focused on neuro-oncology from Athens, Greece and ASCO volunteer of the ICC Malaysia Program describe the benefits of implementing the efforts of Project ECHO (Extension of Community Healthcare Outcomes) (3:38), the challenges in providing quality cancer care in Sarawak (8:31) and details on how to volunteer for the ICC program (19:45).  Speaker Disclosures Dr. Roselle de Guzman:  Honoraria - Roche Oncology (Philippines); AstraZeneca; Merck Serono, MSD Oncology Recipient, Boehringer Ingelheim, Zuellig Pharma Consulting or Advisory Role - Roche Recipient, Novartis, Boehringer Ingelheim, AstraZeneca, Zuellig Pharma (ZP) Therapeutics, Eisai Recipient, MSD Oncology Research Funding - Centus Biotherapeutics Travel, Accommodations, Expenses - Hospira (Philippines), Roche (Philippines), Merck Sharp & Dohme, Eisai, Boehringer Ingelheim, AstraZeneca, Pfizer Dr. Evangelia D. Razis: Honoraria Company - Servier pharmaceuticals. ESMO Research Funding – Tesaro, IQvia, AstraZeneca, Exelixis, PPD Global, MSD Travel, Accommodations, Expenses - Genesis Pharmaceuticals, Roche, Pfizer, Karyo Dr. Pei Jye Voon: Research Funding - Novartis Recipient, Boehringer Ingelheim, Viracta Therapeutics Inc,  ROCHE, Merck KGaA, Merck Sharp & Dohme, BeiGene, AstraZeneca, Janssen-Cilag, Johnson & Johnson Resources  If you liked this episode, please follow the show. To explore other educational content, including courses, visit education.asco.org. Contact us at education@asco.org. TRANSCRIPT Disclosures for this podcast are listed in the podcast page.  Dr. Roselle De Guzman: Providing high-quality cancer care to patients is the goal for any oncologist, yet there are many places across the globe that face multiple hurdles in achieving that goal. One such location has limited trained personnel, financial constraints, geographical challenges, and limited access to healthcare service in rural areas. The location, the state of Sarawak, located in the eastern part of Malaysia. The population is almost evenly split between urban and rural areas, which are the most dispersed in Malaysia.  The major challenge in Sarawak is the inadequate connectivity in the rural area and limited access to healthcare service. To address these issues, in 2020, a collaboration was formed between Sarawak General Hospital, University of Malaysia Sarawak and ASCO through ASCO's International Cancer Corp Program, or ICC for short. The ICC program is focused on three basic goals: incorporating a multidisciplinary approach into cancer care, integration of palliative care into oncology care, and quality improvement through ASCO's Quality Oncology Practice Initiative, or COPI program. This podcast will spotlight all the planning, activities, and results thus far of the ASCO ICC program in Malaysia. Hello, I'm Dr. Roselle de Guzman, past chair of the Asia Pacific Regional Council of ASCO. 
I am pleased to spotlight one of ASCO's collaborations with a lower-resource country to improve the quality of cancer care through a multifaceted approach. This year, we are focusing on Malaysia, where, through the ICC program, ASCO has been providing training in multidisciplinary care, palliative care, and quality measurement. Joining us later in the podcast will be medical oncologist Dr. Voon Pei Jye, who serves as the Onsite Coordinator for the ICC program at Sarawak.   First, we will speak to an ASCO volunteer of the ICC Malaysia Program, a medical oncologist focused on neuro-oncology, Dr. Evangelia Razis from Athens, Greece.  Welcome, Dr. Razis.  Dr. Evangelia Razis: Thank you. Thank you for the opportunity. Dr. Roselle De Guzman: First of all, Dr. Razis, what made you want to volunteer for the ICC Malaysia program, and what has been the most rewarding aspect of this service for you? Dr. Evangelia Razis: So, I've been actually collaborating with ICC for many years through ASCO and other programs as well, such as Honduras, and I find volunteering an extremely rewarding experience because you share and interact with colleagues from all over the world, you offer to those less fortunate, and you actually learn a lot through this process as well. So, volunteering is a very rewarding process for me, and I've been involved in it for many years. Plus, the opportunity to do something in neuro-oncology, which is very close to my heart, is very important, because this is a new field. I feel it needs to be exposed in all countries because it has many intricacies.  Dr. Roselle De Guzman: Well, that's really rewarding and must be really fulfilling work for you, Dr. Razis.   Dr. Razis, you also serve as a lead facilitator of the Project ECHO Neuro-Oncology Mock Tumor Board series, which delivers monthly online training to physicians from Malaysia. Can you tell us more about this project? What are mock tumor boards? Dr. Evangelia Razis: So, Project ECHO, the name stands for Extension for Community Healthcare Outcomes, and it's a project that aims to bring community healthcare delivery closer in low- and middle-income countries, using virtual media to support the healthcare in these areas. And in this particular effort, we have been holding a neuro-oncology tumor board once a month since September with the Malaysia team. It's mock because we don't actually deliver specific patient advice for the purpose of patient care. We actually do it for educational purposes. So, we present cases and then discuss a topic.   The program has been set up for several months now by the Malaysia team based on their needs: which neuro-oncology topics they want to highlight. And we have a once-a-month, one-and-a-half-hour session, whereby cases are presented, and then an invited speaker from several places around the world, as I'll tell you in a minute, highlights this topic and then discusses the cases and discusses the questions that the group from Malaysia has.  And not only have we been able to be joined very regularly by the Sarawak team, but other parts of Malaysia have joined in; other centers in Malaysia have joined in on different occasions. Now, the speakers have been experts from Europe and the United States based on their expertise in particular neuro-oncology topics.  Dr. Roselle De Guzman: So, Project ECHO is one of those innovative ways of delivering healthcare to extraordinarily challenging environments, those which are extremely remote or under-resourced. So to your knowledge, Dr. 
Razis, what improvements have been made since the implementation of Project ECHO? Dr. Evangelia Razis: Over the last nine months, I have noticed more insightful questions that show that some understanding of the standard neuro-oncology way of thinking, if you will, has come through to the colleagues that are joining us, though I must say that they were very knowledgeable from the beginning. I also hope that certain intricacies of neuro-oncology, such as, for example, the way to read scans and evaluate the fact that there may be pseudoprogression or pseudoresponse, and the way to integrate molecular parameters into the decision-making process, have now become part of the way they think about patients. And ultimately, the most important aspect has been the multidisciplinary approach to neuro-oncology and the constant use of all specialties to make a decision. Surgery, radiotherapy, radiology, pathology: all of these specialists need to come together to produce an appropriate decision for the patient. Dr. Roselle De Guzman: So one thing that's interesting as well is in 2013, Dr. Razis, your institution, HYGEIA Hospital in Athens, Greece, was one of the first outside the United States to join the Quality Oncology Practice Initiative, or QOPI, program of ASCO. And your program was also accredited. So, Sarawak General Hospital in Malaysia is collaborating with ASCO as well for the QOPI program that focuses on quality improvement. So, based on your experience, what benefits does the QOPI program bring to an institution? Dr. Evangelia Razis: So, QOPI, in fact, is an extremely useful way to streamline one's work and increase patient safety and patient satisfaction. I would also say that it helps reduce waste of resources, which is particularly important in resource-limited settings. And we do have a QOPI version that is for limited resource settings. It's amazing, but just doing one's work lege artis does result not only in better outcomes but less waste. And that I think is extremely important for Sarawak. So, I think they will find it very useful to streamline their work through QOPI. Dr. Roselle De Guzman: Thank you, Dr. Razis, for sharing your experience, your expertise, and your insights. Now, at this point, I would also like to introduce medical oncologist Dr. Pei Jye Voon, who serves as the Onsite Coordinator for the ICC program at Sarawak.  Dr. Voon, welcome. Dr. Pei Jye Voon: Thank you so much.   Dr. Roselle De Guzman: Dr. Voon, can you describe what cancer care was like in this area of Malaysia for the past few years and what are the main challenges in providing quality cancer care? Dr. Pei Jye Voon: Yes, of course. So first of all, I would like to give a brief introduction of Sarawak, which is situated on the island of Borneo and is the largest state in Malaysia, with a very large land area populated by only 2.9 million people, meaning it is very sparsely populated. And for information, newly diagnosed cancer cases in our state number about 2,300 a year, and the common cancers include breast cancer, followed by colorectal and lung cancer, as well as a cancer that is peculiar to our setting here: nasopharyngeal cancer.   Half of our 2.9 million population, as mentioned before, are residing outside the urban area, which causes the issue of accessibility of health care, particularly good cancer care, for this rural population. 
It has always been a great challenge as we have only one public comprehensive cancer center, and thus inequity of access to cancer care is one of the major hurdles in providing good quality cancer care in our state here. In addition, inadequate numbers of formally trained oncologists and palliative care physicians, as well as other healthcare personnel like oncology nurses and perioperative nurses, have also negatively impacted the quality of care that we are providing here.   Furthermore, limited availability of good, top-notch cancer infrastructure, especially at the district hospitals outside our capital city of Kuching, also poses a great challenge to us in developing good quality cancer care across the whole state. Moreover, similar to many parts of the world, the ever-increasing cost of cancer treatment, especially for expensive new anti-cancer drugs, is another pressing issue for us as well.  In summary, I can say that inequity of access due to the geographical barrier, lack of human resources, inadequate infrastructure, and also the ever-increasing cost of cancer treatment are the major challenges that we are facing here in Sarawak.  Dr. Roselle De Guzman: Thank you, Dr. Voon. I'm sure the situation in Sarawak resonates with other countries, low- and middle-income countries. Of course, there are truly challenges, but of course, with the challenges come opportunities. So what benefits or changes have taken place through this collaborative ICC program? Dr. Pei Jye Voon: I have to say that participating in the ASCO ICC program is one of the greatest things that has happened to our radiotherapy, oncology and palliative care department at Sarawak General Hospital. We have gained tremendously, definitely, from that. For instance, we have been actively participating in a highly personalized palliative care education program, which is one of the highlights of this collaboration. Various projects have been successfully conducted, including the ASCO Palliative Care e-Course, which subsequently led to the Train the Trainer program. This program benefited not only the Sarawak team, but also healthcare providers across Malaysia as well. And this aspect of human development in palliative care was further consolidated with the in-person training by Dr. Frank Ferris as well as Dr. Shannon Moore in November last year when they came to visit us physically. We are very grateful for that.  And in addition to enhancing palliative care, another very interesting project that is actively ongoing is the Project ECHO Neuro-Oncology Tumor Board Series, which delivers online monthly training to physicians across Malaysia on neuro-oncology care. This was discussed by Dr. Razis earlier on in the podcast, so I'm not going to elaborate at length here. But essentially, the idea of this project was conceived initially in view of the gap that we noted in our neuro-oncology management in our hospital, as compared to that of the common cancers that we are actually treating. So through the diverse lectures and many case discussions during the recent in-person visit by the ASCO team, the management of our neuro-oncology cases has definitely been enhanced, and we are looking forward to Dr. Razis coming to visit us physically as well.   At the same time, we are also looking forward to the upcoming multidisciplinary board project under the ASCO ICC program on breast cancer management in August this year. I believe that Dr. 
Guzman is going to come to visit us, and we are very much looking forward to this as well. And at the same time, this exciting project is under active planning now. Furthermore, we are also eagerly awaiting the improvement of quality cancer care programs using evidence-based quality measures via the QOPI project in the near future. Dr. Roselle De Guzman: Dr. Voon, it seems there are a lot of things happening at Sarawak General Hospital, and we know that there are so many patients globally that do not get the comforts and benefits of a palliative care program. You have mentioned the palliative care program. Has the ICC Sarawak program made a difference in patient quality of life thus far?  Dr. Pei Jye Voon: Again, the answer is yes. Definitely yes. So the ASCO Sarawak Palliative Care program has definitely made a great difference in patients' quality of life. This collaborative work between Sarawak General Hospital, our university UNIMAS, and ASCO is now in its third year, and many important palliative care milestones in Sarawak have been accomplished. This specially designed program—I would say that this is a specially designed program that fits us, that fits our needs—has been mentioned before and includes the ASCO e-course, the Train the Trainer program, the mentorship program through the International Development and Education Awards of the Conquer Cancer Foundation, and last but not least the translation of the ASCO Palliative Care Interdisciplinary Curriculum Resources into our national language to reduce the language barrier in training and education for our people here.   All these innovative programs have provided a fundamental framework of palliative care education that is invaluable in equipping our oncologists as well as oncology trainees with the necessary knowledge and skill set to better identify and also meet the palliative care needs amongst our patients. It also ensures a more competent and timely palliative care provision at a general level by the oncology team of our hospital. I think that is extremely important. And it enables the team to incorporate the best palliative care management early in the course of the disease. We call this the early introduction of palliative care in our hospital. And in some ways, actually, the ASCO collaboration has enhanced the teamwork and helped the oncology team to recognize our own limitations while providing general palliative care, thereby encouraging timely palliative care referral whenever appropriate to ensure that patients with more complex physical, psychosocial, and spiritual needs have the necessary input and support from our palliative care team throughout the course of their illnesses.  Dr. Roselle De Guzman: So we have been discussing important points on the ICC program, focusing on multidisciplinary cancer care management, the palliative care program, and the QOPI program. What other solutions do you think exist to overcome hurdles to providing quality cancer care to people in Malaysia, Dr. Voon? Dr. Pei Jye Voon: Yes, definitely yes, as we have discussed in our conversation. So besides the ASCO ICC program, various projects to overcome hurdles to providing quality care to the people in Sarawak have been implemented or are currently in a very active planning phase. So in terms of inequity of access to good cancer care due to the geographical barrier, we have actually undertaken decentralization efforts of cancer care here in Sarawak. 
One of these initiatives is to post our senior oncology liaison medical officers, who have adequate oncology experience, to other district hospitals in Sarawak so that better cancer care can be delivered to patients closer to their homes. This is also consolidated with regular visits by our oncologists to these district hospitals as part of the decentralization effort as well. There is also a nursing training program for systemic treatment administration, conducted since last year in all major district hospitals, with the aim of credentialing all nurses in the state who manage cancer patients in this essential nursing skill of administering systemic therapy in their own hospital.   In addition to that, a weekly oncology and palliative care continuing medical education program across the state has been conducted since the fourth quarter of last year, to disseminate oncology knowledge rapidly to healthcare providers, especially those outside our capital city who have inadequate exposure to oncology care. And upgrading of our cancer care infrastructure has also been actively planned, and we are looking forward to a new comprehensive cancer center in our city in the next few years.   Besides that, our center is also robustly developing our clinical trial capacity in the hope that we can provide additional treatment options to our patients who have limited treatment options due to cost constraints. In summary, I can say that various initiatives have been implemented to enhance cancer care in Sarawak, and one thing for sure is the ASCO ICC program has been facilitating all this positive development. Dr. Roselle De Guzman: So many things are happening, so many things are being done. And with all your efforts, knowledge, and expertise, of course, nothing is impossible. And it's always helpful if you have a very dedicated and committed team, right? Dr. Pei Jye Voon: Yeah, definitely. We have a very dedicated team, that's for sure. Dr. Roselle De Guzman: So Dr. Voon, thank you for being with us today and for your onsite coordination of the program. And Dr. Evangelia Razis, thank you for volunteering your time and insights to the ICC program and to our podcast.  Malaysia is not the only location that the ICC program has been implemented in. There are currently nine sites in Asia, Africa, and South America accepting volunteers. Now I would like to give some brief information for volunteers wanting to participate. ASCO pairs eligible oncology professionals with a medical center whose needs match the expertise of the volunteer. Volunteers must be appropriately trained and credentialed medical professionals who specialize in oncology. This includes physicians specializing in medical, radiation, and surgical oncology, laboratory professionals, and nurses. Final-year oncology fellows may also participate if paired with an experienced volunteer. Volunteers spend one to four weeks on site. During that time, they teach and train staff, residents, and students, and gain insight into cancer management needs and challenges at that institution. As an added benefit, the program enables volunteers to form long-term supportive relationships with clinicians in participating countries. If you are interested in volunteering for the ASCO ICC program, please go to volunteer.asco.org - that's volunteer.asco.org - to apply. I'm Dr. Roselle De Guzman, past Chair of the Asia Pacific Regional Council of ASCO.   Thank you for listening to this ASCO Education Podcast. 
The ASCO Education Podcast is where we explore topics ranging from implementing new cancer treatments and improving patient care to oncology well-being and professional development. If you have an idea for a topic or guest you would like to see on the show, please email us at education@asco.org. To stay up to date with the latest episodes and explore other educational content, visit education.asco.org.  The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions.   Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement.    

Astrophiz Podcasts
Astrophiz 177: A/Prof Michelle Cluver

Astrophiz Podcasts

Play Episode Listen Later Aug 14, 2023 92:50


In this extended and enthralling interview, Associate Professor Michelle Cluver from Swinburne University's Centre for Astrophysics and Supercomputing reveals the captivating world of mid-infrared research. With boundless enthusiasm, she unravels the mysteries of this innovative field, igniting our imagination and highlighting her powerful results and the immense potential of being able to peer deep through previously impenetrable interstellar dust clouds. Her contagious passion for discovery is palpable as she reveals the astonishing insights obtained through powerful instruments like Spitzer, WISE, MeerKAT, SKA pathfinders and the JWST, and, as Co-PI, the promise of the 4MOST survey in cataloging the spectral properties of 6 million distant galaxies. Dr Cluver unveils the cutting-edge radio and optical technologies used to explore the depths of the mid-infrared spectrum, enabling fellow scientists to delve into uncharted territories of the universe. You will also love her insights into the nature of collaborative science, and her commitment and style in nurturing the learning and research trajectories of her graduate and undergraduate students.

Science Friday
Lab-Grown Meat Approval, Underground Climate Change, Utahraptor. July 14, 2023, Part 2

Science Friday

Play Episode Listen Later Jul 14, 2023 47:07


We have a new podcast! It's called Universe Of Art, and it's all about artists who use science to bring their creations to the next level. Listen on Apple Podcasts, Spotify, or wherever you get your podcasts.   Where's The Beef? Lab-Grown Meat Gets U.S. Approval People have been looking for meat alternatives for decades. Vegetarians avoid animal products for many reasons, from concerns over animal treatment and slaughtering practices to the meat industry's climate impacts. Methane from cows and other livestock contributes about 15% of all greenhouse gas emissions. There have been plant-based alternatives on the market for a while now, but another method has quietly gained steam over the past decade: meat grown in a lab, using cultured cells. This past June, the U.S. Department of Agriculture approved two companies—Eat Just and Upside—to grow and sell cultivated chicken products in the U.S. Lab-developed beef will likely be next, while some companies are even working on cultivated pet food meat. (Lab-grown mouse meat kibble, anyone?) But will growing tissue in a lab actually reduce greenhouse gas emissions, and … will people even want to eat it? Joining Ira to discuss this beefy topic is Casey Crownhart, climate reporter at the MIT Technology Review, who talks about how this kind of meat is made in a lab, the challenges the industry faces, and what a lab-grown beef patty tastes like.   How Rising Temperatures Are Shifting The Ground Beneath Chicago As global temperatures rise, cities are typically hotter than rural areas. Tall buildings trap heat and temperatures don't drop nearly as low at night. Out of sight, just below the surface, it's also getting hotter. Scientists are beginning to document the unexpected consequences of underground climate change. A new study measuring the phenomenon used sensors to track increasing temperatures underground in Chicago and map how the earth has shifted beneath the city as a result. Ira talks with the lead researcher of the study, Dr. Alessandro Rotta Loria, assistant professor of civil and environmental engineering at Northwestern University, based in Chicago, Illinois.   A Fish By Any Other Name: Inside The Effort To Bring 'Copi' To Dinner People who live near freshwater rivers or lakes are likely familiar with Asian Carp. The fish are not native to the U.S., but over the last few decades their populations have exploded in waterways like the Mississippi River Basin and the Illinois River. Over the last few years, there's been a major PR campaign to move away from the name Asian Carp, in favor of a new name: “Copi.” The reason is two-fold: First, it joins a general trend of moving species' names away from nationalistic associations, considering anti-Asian hate crimes during the COVID-19 pandemic. The other goal is to make the fish sound more delicious—creating a market that would incentivize fishing the Copi, hopefully reducing their populations. Joining Ira to talk about this is Jim Garvey, director of fisheries, aquaculture and aquatic sciences at Southern Illinois University in Carbondale, Illinois.   Thanks To A Mesozoic Hot Spot, We Finally Know How Old The Utahraptor Is Sometimes Jim Kirkland wishes he had been alive 150 years ago. That's when the golden age of North American dinosaur discovery began, and early titans of paleontology crisscrossed the Rocky Mountains unearthing dozens of new species that became household names, from the Stegosaurus to the Brontosaurus to the Triceratops. 
But a close second to that era is what Kirkland gets to see these days in Utah. “I am doing that kind of discovery right now,” Kirkland said. “I'm just lucky to be alive.” Kirkland, Utah's state paleontologist, uncovered and named the Utahraptor in 1993. The deadly predator became the official state dinosaur in 2018. To read the rest, visit sciencefriday.com.   To stay updated on all-things-science, sign up for Science Friday's newsletters. Transcripts for each segment will be available the week after the show airs on sciencefriday.com.

The Learning Future Podcast with Louka Parry
Redefining Good-Behaviour and Engagement: Professor Stephanie Jones

The Learning Future Podcast with Louka Parry

Play Episode Listen Later Jun 30, 2023 33:58


Are we truly promoting self-control or just compliance with adult demands? How can we engage students in deep, effortless, and meaningful learning experiences? Stephanie M. Jones is the Gerald S. Lesser Professor in Child Development and Education and Director of the EASEL Lab at the Harvard Graduate School of Education. Her research, anchored in prevention science, focuses on the effects of poverty and exposure to violence on social, emotional, and behavioral development from early childhood through early adolescence. Over the past fifteen years, her work has centered on evaluation research addressing the impact of preschool- and elementary-level social-emotional learning interventions on behavioral and academic outcomes and classroom practices, as well as new curriculum development, implementation, and testing. Stephanie is also co-Director (with Nonie Lesaux) of the Saul Zaentz Early Education Initiative and Co-PI of the Early Learning Study at Harvard (ELS@H). She serves on numerous national advisory boards and expert consultant groups related to social-emotional development, early childhood education, and child and family anti-poverty policies, including recently as a member of the Council of Distinguished Scientists for the Aspen National Commission on Social, Emotional, and Academic Development. Her research is published in academic and educational journals as well as in trade publications, and she regularly presents her work to national academic and practitioner audiences. Jones holds a Ph.D. from Yale University and a B.A. from Barnard College. This season is done in partnership with Salzburg Global Seminar: https://www.salzburgglobal.org/ Please check out our partner's publication advocating for education transformation: https://www.diplomaticourier.com/issue/transformed-the-case-for-education-transformation Transcript available at www.thelearningfuture.com

From Lab to Launch by Qualio
Amazing biotech at high school incubators with Dr. Linnea Fletcher

From Lab to Launch by Qualio

Play Episode Listen Later Jun 28, 2023 24:10 Transcription Available


Dr. Linnea Fletcher is a true pioneer in bridging biotech and education. On the show she explains how high school biotech incubators got started and how others can get involved. She also shares more about the upcoming innovATEBIO conference.  Dr. Fletcher simultaneously joined the first National Science Foundation-funded National Biotechnology Education Center, Bio-Link, and received her first NSF-funded Advanced Technological Education grant to start biotechnology high school programs in Texas. In 2015, she received an Emerging Technology Fund grant to build a Bioscience Incubator at ACC and several Wagner Peyser grants to equip it. Today, the incubator is full of start-up companies and students interning or working for these companies. About innovATEBIO www.innovATEBIO.org InnovATEBIO is the National Biotechnology Education Center funded by NSF (National Science Foundation). The center was funded 4 years ago at $7.5M for 5 years to coordinate over 134 two-year biotechnology programs and their educational partners for the purpose of creating a biotechnology workforce focusing on technician education. Every senior scientist needs 5 to 7 technicians for R&D, biomanufacturing, and quality assurance and regulatory matters (National Science Board report, 2019). At the moment, there are not enough technicians being produced to meet the needs of the US biotechnology industry. About Dr. Fletcher https://innovatebio.org/iab-leadership Dr. Linnea Fletcher enjoys all forms of exercise but especially biking, hiking, and swimming. Her favorite pastimes are family events in the outdoors and travel. She received her Ph.D. in microbiology from the University of Texas at Austin, and did two postdocs, one at the Southwestern Medical Center and another in the Biochemistry Department at the University of Texas. She joined Austin Community College as a Department Chair in Biology and started the Biotechnology Program in 1999. At the same time, she joined the first NSF-funded National Biotechnology Education Center, Bio-Link, and received her first NSF-funded ATE grant to start biotechnology high school programs in Texas. She worked as an NSF Program Officer from 2008 to 2010 and was involved in setting up the first Vision and Change Meeting. Once back on the job as Biotechnology Department Chair in 2015, she received an Emerging Technology Fund Grant to build a Bioscience Incubator at ACC and several additional grants to equip it. Today the incubator is full of startup companies and students interning or working for these companies. She was PI of the AC2 Bio-Link Regional Center, and is now the PI of InnovATEBIO, the NSF-funded National Biotechnology Center. Combining economic development with educational opportunities is her passion. She is also PI and Co-PI on several other grants associated with the work of the center. Linnea Fletcher believes the best way to engage and educate students is to involve them in industry projects from high school on, and show them that their education has a purpose and matters. Involvement in startup companies does this! Qualio website: https://www.qualio.com/ Previous episodes: https://www.qualio.com/from-lab-to-launch-podcast Apply to be on the show: https://forms.gle/uUH2YtCFxJHrVGeL8 Music by keldez

The Nurse Keith Show
The Latest Developments in Psychedelic Medicine

The Nurse Keith Show

Play Episode Listen Later Jun 9, 2023 63:03


On episode 425 of The Nurse Keith Show nursing and healthcare career podcast, Keith welcomes back Andrew Penn, MS, PMHNP to discuss the latest news in the study of the therapeutic uses of psychedelics. Among the topics discussed by Keith and Andrew are updates regarding the state of the research and the pending FDA approval of both MDMA (aka Molly or Ecstasy) and psilocybin for the treatment of various psychological conditions, as well as how nurses may end up fitting into the psychedelic treatment paradigm. Andrew Penn, MS, PMHNP is a Clinical Professor in the University of California, San Francisco, School of Nursing where his teaching has received the UCSF Academic Senate Distinction in Teaching Award, among other recognitions. He practices as a psychiatric/mental health nurse practitioner, treating veterans and training residents at the San Francisco Veterans Administration Hospital. As a researcher, Andrew collaborates on psychedelic studies of psilocybin and MDMA in the Translational Psychedelics Research (TrPR) lab at UCSF, serving as Co-PI on a phase 2 study of psilocybin for depression, and is currently working on a study using psilocybin to treat depression in patients with Parkinson's disease. A leading voice in nursing, he is a cofounder of the Organization of Psychedelic and Entheogenic Nurses, advocating for the perspective of nurses in psychedelic therapy. He has published on psychedelics in the American Journal of Nursing, Frontiers in Psychiatry, and The Journal of Humanistic Psychotherapy. An internationally invited speaker, he has lectured at SXSW, Aspen Health Ideas Festival, the Singapore Ministry of Health, and Oxford University, and can be found at AndrewPennNP.com. Connect with Andrew Penn AndrewPennNP.com LinkedIn OPENurses on Facebook OPENurses website ----------- Did you know that you can now earn CEUs from listening to podcasts? That's right — over at RNegade.pro, they're building a library of nursing podcasts offering continuing education credits, including episodes of The Nurse Keith Show! So just head over to RNegade.pro, log into the portal, select Nurse Keith (or any other Content Creator) from the Content Creator dropdown, and get CEs for any content on the platform! Nurse Keith is a holistic career coach for nurses, professional podcaster, published author, award-winning blogger, inspiring keynote speaker, and successful nurse entrepreneur. Connect with Nurse Keith at NurseKeith.com, and on Twitter, Facebook, LinkedIn, and Instagram. Nurse Keith lives in beautiful Santa Fe, New Mexico with his lovely fiancée, Shada McKenzie, a highly gifted traditional astrologer and reader of the tarot. You can find Shada at The Circle and the Dot. The Nurse Keith Show is a proud member of The Health Podcast Network, one of the largest and fastest-growing collections of authoritative, high-quality podcasts taking on the tough topics in health and care with empathy, expertise, and a commitment to excellence. The podcast is adroitly produced by Rob Johnston of 520R Podcasting, and Mark Capispisan is our stalwart social media manager and newsletter wrangler.

El Gusto de las 12
Did The Little Mermaid copy? Did #LaSirenita copy Marimar?

El Gusto de las 12

Play Episode Listen Later May 31, 2023 1:26


Juan Carlos Pichardo, Ñonguito, Harold Diaz, Oscar Carrasquillo, Katherin Amesty, Begoña Guillen y Anier Barros

No Such Thing: K12 Education in the Digital Age
Live from SXSW Edu 2023: Research Storytelling in The Digital Age

No Such Thing: K12 Education in the Digital Age

Play Episode Listen Later Apr 25, 2023 74:05


Dr. Elizabeth Bishop is an educator, researcher and youth advocate with two decades of instructional and administrative experience in public schools, universities and non-profit organizations across the United States. Bishop currently teaches on the faculty of the City University of New York and the University of San Francisco. She is Co-Founder of Global Turning Points, an international consulting collective based on the praxis of critical pedagogy. Bishop's writing includes her 2015 “Becoming Activist: Critical Literacy and Youth Organizing” and her 2018 “Embodying Theory: Epistemology, Aesthetics and Resistance,” which she created in collaboration with artist Tamsen Wojtanowski. She has two new books expected out in 2022 and 2023. Dr. Bishop holds a Ph.D. in Education: Language, Literacy and Culture and has been featured in numerous articles on youth activism, civic engagement and voting including on Good Morning America, PBS NewsHour, Business Insider and PolitiFact. Find her online @DrBishopDigital. An artist by training, Dr. Kylie Peppler is a professor of Informatics & Education at University of California, Irvine where she designs and studies creative educational technologies together with industry partners. She holds a Ph.D. in Urban Schooling from UCLA, where she was part of the NSF-sponsored team that designed and studied the Scratch platform, which has grown to over 93 million users. Her research group, the Creativity Labs, is part of UCI's Connected Learning Lab, which reaches over 8,000 newsletter subscribers and a website which averages over 11,500 views per month. Recent projects include partnerships with Merlyn Mind on the innovative uses of AI in classrooms, and the development of new XR solutions with Purdue University for the future manufacturing workforce. Her work has been consistently supported by a range of foundations, federal and industry partners, including the Gordon and Betty Moore Foundation, the MacArthur Foundation, the Wallace Foundation, Google.org, US Department of Education, Boeing, Best Buy, Fossil Foundation, GAP Inc., and National Geographic. Dr. Sangita Shresthova is a writer, researcher, thinker, speaker and doer. She is an expert in mixed research methods, online learning, media literacies, popular culture, performance, new media, politics, and globalization. She is currently the Director of Research and Programs and Co-PI of the Civic Paths Group based at the University of Southern California, where her current work is focused on the civic imagination. Sangita is one of the creators of the Digital Civics Toolkit (digitalcivicstoolkit.org), a collection of resources for educators, teachers and community leaders to support youth learning. Her own artistic work has been presented in creative venues around the world including the Pasadena Dance Festival, Schaubuehne (Berlin), the Other Festival (Chennai), the EBS International Documentary Festival (Seoul), and the American Dance Festival (Durham, NC). She holds a Ph.D. from UCLA's Department of World Arts and Cultures and MSc degrees from MIT and LSE. She received her BA from Princeton University. She is also a faculty member at the Salzburg Academy on Media and Social Change in Austria. Hosted on Acast. See acast.com/privacy for more information.

City Cast Chicago
Rebranding Election Season, Northerly Island, and Carp to Copi

City Cast Chicago

Play Episode Listen Later Apr 4, 2023 27:04


It's election day! So by tonight, we may know who will be leading the country's third-largest city as voters cast their ballots in the runoff elections for mayor and City Council. But between voter turnout, back-to-back (-to-back-to-back) trips to the polls, and the timing of our elections, the City Cast Chicago team is wondering if election season needs a rebranding. We also discuss the ongoing rebranding of Northerly Island and check in on Copi a year after it got its new name! You can vote until 7 p.m. tonight! Check with the Chicago Board of Elections for your voter information and the polling places in your ward! Want some more City Cast Chicago news? Then make sure to sign up for our Hey Chicago newsletter.  Follow us @citycastchicago You can also text us or leave a voicemail at: 773 780-0246 Interested in advertising with City Cast? Find more info HERE  Learn more about your ad choices. Visit megaphone.fm/adchoices

The Daily Happy
Plenty of Copi in the Sea!

The Daily Happy

Play Episode Listen Later Mar 28, 2023 10:00


Lulu shares the marketing ploys of fish!
READ MORE:
https://www.wired.com/story/copi-invasive-species-rebranding-campaign/?utm_campaign=rtb&utm_medium=newsletter&utm_source=morning_brew
https://www.cbsnews.com/news/beethoven-dna-hair-study-hearing-loss-health-issues-research/?utm_campaign=later-linkinbio-cbsnews&utm_content=later-33908037&utm_medium=social&utm_source=linkin.bio
Support the show FOLLOW US: Facebook Instagram Youtube Twitter Pinterest Apple Podcasts Google Podcasts Spotify Stitcher Send us your stories & support the show: https://www.buymeacoffee.com/thedailyhappy

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

OpenAI just rollicked the AI world yet again yesterday — while releasing the long awaited ChatGPT API, they also priced it at $2 per million tokens generated, which is 90% cheaper than the text-davinci-003 pricing of the “GPT3.5” family. Their blogpost on how they did it is vague: Through a series of system-wide optimizations, we've achieved 90% cost reduction for ChatGPT since December; we're now passing through those savings to API users. We were fortunate enough to record Episode 2 of our podcast with someone who routinely creates 90%+ improvements for their customers, and in fact has started productizing their own infra skills with Codeium, the rapidly growing free-forever Copilot alternative (see What Building “Copilot for X” Really Takes). Varun Mohan is CEO of Exafunction/Codeium, and he indulged us in diving deep into AI infrastructure, compute-optimal training vs inference tradeoffs, and why he loves suffering. Recorded in-person at the beautiful StudioPod studios in San Francisco. Full transcript is below the fold.

Timestamps
* 00:00: Intro to Varun and Exafunction
* 03:06: GPU Efficiency, Model Flop Utilization, Dynamic Multiplexing
* 05:30: Should companies own their ML infrastructure?
* 07:00: The two kinds of LLM Applications
* 08:30: Codeium
* 14:50: “Our growth is 4-5% day over day”
* 16:30: Latency, Quality, and Correctability
* 20:30: Acceleration mode vs Exploration mode
* 22:00: Copilot for X - Harvey AI's deal with Allen & Overy
* 25:00: Scaling Laws (Chinchilla)
* 28:45: “The compute-optimal model might not be easy to serve”
* 30:00: Smaller models
* 32:30: Deepmind Retro can retrieve external information
* 34:30: Implications for embedding databases
* 37:10: LLMOps - Eval, Data Cleaning
* 39:45: Testing/User feedback
* 41:00: “Users Is All You Need”
* 42:45: General Intelligence + Domain Specific Dataset
* 43:15: The God Nvidia computer
* 46:00: Lightning round

Show notes
* Varun Mohan LinkedIn
* Exafunction
* Blogpost: Are GPUs Worth it for ML
* Codeium
* Copilot statistics
* Eleuther's The Pile and The Stack
* What Building “Copilot for X” Really Takes
* Copilot for X
* Harvey, Copilot for Law - deal with Allen & Overy
* Scaling Laws
* Training Compute-Optimal Large Language Models - arXiv (Chinchilla paper)
* chinchilla's wild implications (LessWrong)
* UL2 20B: An Open Source Unified Language Learner (20B)
* Paper - Deepmind Retro
* “Does it make your beer taste better”
* HumanEval benchmark/dataset
* Reverse Engineering Copilot internals
* Quora Poe
* Prasanna Sankar notes on FLOPs and Bandwidth
* NVIDIA H100 specs - 3TB/s GPU memory, 900GB/s NVLink Interconnect
* Optimizer state is 14x size of model - 175B params => 2.5TB to store state → needs at least 30 H100 machines with 80GB each
* Connor Leahy on The Gradient Podcast

Lightning Rounds
* Favorite AI Product: Midjourney
* Favorite AI Community: Eleuther and GPT-J
* One year prediction: Better models, more creative usecases
* Request for Startup: Superathlete Fitness Assistant
* Takeaway: Continue to tinker!

Transcript

[00:00:00] Alessio Fanelli: Hey everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners. I'm joined by my cohost, swyx, writer, editor of L Space Diaries.

[00:00:20] swyx: Hey, and today we have Varun Mohan from Codeium / Exafunction on. I should introduce you a little bit because I like to get the LinkedIn background out of the way.

[00:00:30] So you did CS at MIT and then you spent a few years at Nuro where you were ultimately tech lead manager for autonomy. And that's an interesting dive. 
Self-driving cars in AI, and then you went straight into Exafunction with a few of your coworkers, and that's where I met some of them and started knowing about Exafunction.[00:00:51] And then from out of nowhere you cloned GitHub Copilot. That's a lot of progress in a very short amount of time. So anyway, welcome.[00:00:59] Varun Mohan: That's high praise.[00:01:00] swyx: What's one thing about you that doesn't appear on LinkedIn that is a big part of what people should know?[00:01:05] Varun Mohan: I actually really like endurance sports actually.[00:01:09] Like I, I've done multiple triathlons. I've actually biked from San Francisco to LA. I like things that are like suffering. I like to suffer while I, while I do sports. Yeah.[00:01:19] swyx: Do you think a lot about like code and tech while you're doing those endurance sports or are you just,[00:01:24] Varun Mohan: your mind is just focused?[00:01:26] I think it's maybe a little bit of both. One of the nice things about, I guess, endurance athletics, it's one of the few things you can do where you're not thinking about, you can't really think about much beyond suffering. Like you're climbing up a hill on a bike and you see like, uh, you see how many more feet you need to climb, and at that point you're just struggling.[00:01:45] That's your only job. Mm-hmm. Yeah. The only thing you can think of is, uh, pedaling one more pedal. So it's actually like a nice, a nice way to not think about work. Yeah,[00:01:53] Alessio Fanelli: yeah, yeah. Maybe for the audience, you wanna tell a bit about Exafunction, how that came to be and how Codeium came out[00:01:59] Varun Mohan: of that. So a little bit about Exafunction.[00:02:02] Before working at Exafunction, I worked at Nuro as Sean was just saying, and at Nuro, I sort of managed large scale offline deep learning infrastructure. Realized that deep learning infrastructure is really hard to build and really hard to maintain for even the most sophisticated companies, and started Exafunction to basically solve that gap, to make it so that it was much easier for companies.[00:02:24] To serve deep learning workloads at scale. One of the key issues that we noticed is GPUs are extremely hard to manage fundamentally because they work differently than CPUs. And once a company has heterogeneous hardware requirements, it's hard to make sure that you get the most outta the hardware. It's hard to make sure you can get, get great GPU utilization and Exafunction was specifically built to make it so that you could get the most outta the hardware.[00:02:50] Make sure your GPU was effectively virtualized and decoupled from your workload to make it so that you could be confident that you were running at whatever scale you wanted without burning the bank.[00:03:00] swyx: Yeah. You gave me this metric about inefficiency,[00:03:03] Varun Mohan: right? Oh, okay. Like flop efficiency. Yeah. Yeah. So basically, I think it comes down to, for most people, one of the things about CPUs that's really nice is with containers, right?[00:03:13] You can end up having a single machine, and you can place many containers on it, and all the containers will slowly start eating the compute. It's not really the same with GPUs. Like let's say you have a single GPU. For the most part, you'll only have one container using that GPU. And because of that, people heavily underestimate what a single container can sort of do.[00:03:33] And the GPU is left like heavily idle. 
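To make that idle-GPU point concrete, here is a back-of-the-envelope sketch of the flop-efficiency metric (MFU) that comes up next in the conversation. The 2 x parameters FLOPs-per-token rule of thumb and every number below are our own illustrative assumptions, not Exafunction's methodology:

```python
# Back-of-the-envelope model FLOP utilization (MFU) estimate.
# Rule of thumb: one generated token costs roughly 2 * n_params FLOPs
# (one multiply and one add per parameter). All numbers are illustrative.

def mfu(tokens_per_second: float, n_params: float, peak_flops: float) -> float:
    """Fraction of the accelerator's peak FLOP/s the workload actually uses."""
    achieved_flops = tokens_per_second * 2 * n_params
    return achieved_flops / peak_flops

# Hypothetical setup: a 6B-parameter model decoding 400 tokens/s, unbatched,
# on a GPU with ~312 TFLOP/s of fp16 peak (an A100-class spec-sheet figure).
print(f"MFU: {mfu(tokens_per_second=400, n_params=6e9, peak_flops=312e12):.1%}")
# -> MFU: 1.5% -- unbatched autoregressive decoding really can sit this low,
# which is why the batching and multiplexing discussed below matter so much.
```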
And I guess the common term now with a lot of LLM workloads is like the flop efficiency of these workloads. MFU, yeah. Yeah. Model flop utilization. The model flop utilization, which is basically like what fraction of the flops or compute on the hardware is actually getting used.[00:03:49] And sort of what we did at Exafunction: not only make it so that the model was always running, we also built compiler technology to make it so that the model was also running more efficiently. And some of these things are with tricks like operator fusion, like basically you could imagine fusing two operations together such that the time it takes to compute[00:04:07] the fused operation is lower than the time it takes for each individual operation. Oh my God. Yeah.[00:04:13] Alessio Fanelli: Yeah. And you have this technique called dynamic multiplexing, which is basically, instead of having a one-to-one relationship, you have one GPU for multiple clients. And I saw one of your customers, they went from three clients to just one single GPU and cut the cost by 97%. What were some of those learnings, seeing hardware usage and efficiencies, and how that then played into what, what[00:04:34] Varun Mohan: you're building? Yeah, I think it basically showed that there was probably a gap with even very sophisticated teams. Making good use of the hardware is just not an easy problem. I think that was the main thing. It's not that these teams were like not good at what they were doing, it's just that they were trying to solve a completely separate problem.[00:04:50] They had a model that was trained in-house and their goal was to just run it, and that should be an easy, easy thing to do, but surprisingly still, it's not that easy. And that problem compounds in complexity with the fact that there are more accelerators now in the cloud. There's like TPUs, Inferentia, and there's a lot of decisions, uh, that users need to make even in terms of GPU types.[00:05:10] And I guess sort of what we had was we had internal expertise on what the right way to run the workload was, and we were basically able to build infrastructure and make it so that companies could do that without thinking. So most[00:05:21] Alessio Fanelli: teams are underutilizing their hardware. How should they think about what to own?[00:05:26] You know, like should they own the inference architecture? Like should they use XLA to get it to production? How do you think[00:05:32] Varun Mohan: about it? So I think one thing that has proven to be true over the last year and a half is companies, for the most part, should not be trying to figure out what the optimal ML architecture is or training architecture is.[00:05:45] Especially with a lot of these large language models. We have generic models and transformer architecture that are solving a lot of distinct problems. I'll caveat that with most companies. Some of our customers, which are autonomous vehicle companies, have extremely strict requirements like they need to be able to run a model at very low latency, extremely high precision recall. You know, GPT-3 is great, but the precision recall, you wouldn't trust someone's life with that, right? So because of that, they need to innovate new kinds of model architectures. For a vast majority of enterprises, they should probably be using something off the shelf, fine tuning BERT models. 
If it's vision, they should be fine tuning ResNet or using something like CLIP. Like, the less work they can do, the better.[00:06:25] And I guess that was a key turning point for us, which is like we start to build more and more infrastructure for the architectures that were the most popular, and the most popular architecture was the transformer architecture. We had a lot of LLM companies explicitly reach out to us and ask us, wow, our GPT-3 bill is high.[00:06:44] Is there a way to serve GPT-3 or some open source model much more cheaply? And that's sort of what we viewed as why we were maybe prepared for when we internally needed to deploy transformer models ourselves.[00:06:58] Alessio Fanelli: And so the next step was, Hey, we have this amazing infrastructure. We can build kind of consumer facing products, so to speak, with much better unit economics, much better performance.[00:07:08] And that's how Codeium kind[00:07:10] Varun Mohan: of came to be. Yeah. I think maybe the, the play is not maybe for us to be just, we make a lot of consumer products. We want to make products with like clear ROI in the long term in the enterprise. Like we view Codeium as maybe one of those things. Uh, and maybe we can, we can talk about Codeium maybe after this.[00:07:27] We view products like co-pilot as being extremely valuable and something that is generating a lot of value to professionals. We saw that there was a gap there where a lot of people probably weren't developing high intensive LLM applications because of cost, because of the inability to train models the way they want to.[00:07:44] And we thought we could do that with our own infrastructure really quickly.[00:07:48] swyx: I wanna highlight when you say high intensive, you mean basically generate models every key, uh, generate inferences on every keystroke? That's[00:07:55] Varun Mohan: right. Yeah. So I would say like, there's probably two kinds of LLM applications here.[00:07:59] There's an LLM application where, you know, it rips through a bunch of data and maybe you wait a couple minutes and then you see something, and then there's an application where the quality is not exactly what you want, but it's able to generate at low enough latency, it's still providing a ton of value.[00:08:16] And I will say there's like a gap there where the number of products that have hit that co-pilot spot is actually not that high. Mm. A lot of them are, are kind of like wait and, you know, just generate a lot of stuff and see what happens, because one is clearly more compute intensive than the other, basically.[00:08:31] swyx: Well, Codeium, uh, I don't know if we told the whole story yet, you were going to[00:08:35] Varun Mohan: dive into it. Yeah, so I guess, I guess the story was I guess four or five months ago we sort of decided internally as a team we were like very early adopters of co-pilot. I'm not gonna sit here and say co-pilot, it's not a great tool. We love co-pilot. It's like a fantastic tool. We all got on the beta. The moment it came out we're like a fairly small team, but we, like we all got in, we were showing each other completions. We end up writing like a lot of CUDA and C++ inside the company. And I think there was probably a thought process within us that was like, Hey, the code we write is like very high IQ.[00:09:04] You know? So like there's no way it can help. And one of the things in C++ that's like the most annoying is writing templates. Writing template programming is maybe one of those things. 
No one, maybe there's like some people in the C++ standards community that can do it without looking at the, looking at anything online.[00:09:19] But we struggle. We struggle writing variadic templates and Copilot just like ripped through. Like we had a 500 line file and it was just like writing templates like, and we didn't really even test it while we were running it. We then just compiled it and it just, we're like, wow. Like this is actually something that's not just like it's completing for loops, it's completing code for us.[00:09:38] That is like hard in our brains to reach, but fundamentally and logically is not that complicated. The only reason why it's complicated is there's just a lot of rules, right. And from then we were just like, wow, this is, that was maybe the first LLM application for us internally, because we're not like marketers that would use, uh, Jasper, where we were like, wow, this is like extremely valuable.[00:09:58] This is not a toy anymore. So we wanted to take our technology to build maybe apps where these apps were not gonna be toys, right? They were not gonna be like a demo where you post it on Twitter and then you know there's hype and then maybe like a month later, no one's using it.[00:10:11] swyx: There's a report this morning, um, from co-pilot where they, they were estimating the amount of code generated by co-pilot that is then left in code repos and checked in, and it's something like 60 to 70%.[00:10:24] Varun Mohan: That's, that's nuts, but I totally believe it given, given the stats we have too. There's this flip in your head once you start using products like this, where in the beginning there's like, there's like skepticism, like how, how valuable can it be? And suddenly now like user behavior fundamentally changes so that now when I need to write a function, I'm like documenting my code more because I think it's prompting the model better, right?[00:10:43] So there's like this crazy thing where it's a self-fulfilling prophecy where when you get more value from it, more of your code is generated from co-pilot.[00:10:50] swyx: Just to walk through the creation process, I actually assumed that you would have grabbed your data from The Pile, which is the EleutherAI, uh, open source, uh, code information.[00:11:00] But apparently you scraped your own[00:11:01] Varun Mohan: stuff. Yeah. We ended up basically using a lot of open, I guess, permissively licensed code, uh, in the public internet, mainly because I think also The Pile is, is fairly a small subset. Uh, I think maybe after we started, that also came to be, but for us, we had a model for ourselves even before that, uh, was the point.[00:11:21] Ah, okay. So the timing was just a little bit off. Yeah, exactly. Exactly. But it's awesome work. It's, it seems like there's a good amount of work that's getting done decentrally. Yeah. Which is a little bit surprising to me because I'm like more bullish on everyone needs to get together in a room and make stuff happen.[00:11:35] Like we're all in person in Mountain View. But yeah, no, it's pretty impressive. Yeah, Eleuther in general, like everything they've done, I'm pretty impressed with it. Yeah, and we're[00:11:42] swyx: gonna talk about that. Cause I, I didn't know you were that involved in the community[00:11:45] Varun Mohan: that early on I wasn't involved. It was more of like a, I was watching and maybe commenting from time to time.[00:11:50] So they're a very special community for sure. 
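Varun's point about documenting code more because it prompts the model better is easy to see mechanically. Below is a minimal sketch of how a copilot-style client might turn the text before your cursor into a prompt; the truncation strategy and the complete() call are hypothetical stand-ins, not Codeium's actual context engine:

```python
# Illustrative only: a copilot-style client sends the code around the cursor
# to the model, so a descriptive comment becomes conditioning text that
# steers the completion. `complete` is a stand-in for a real inference call.

def build_prompt(file_text: str, cursor: int, max_chars: int = 2000) -> str:
    """Use the text immediately before the cursor as the model prompt."""
    prefix = file_text[:cursor]
    return prefix[-max_chars:]  # crude truncation; real products pick context far more carefully

buffer = """def median(xs):
    # Return the middle value; average the two middle values for even-length input.
"""
prompt = build_prompt(buffer, cursor=len(buffer))
# suggestion = complete(prompt)  # hypothetical model endpoint
print(prompt)
```

The comment inside `median` rides along in the prompt, which is exactly why writing better comments tends to produce better completions.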
Yeah,[00:11:52] swyx: yeah, yeah. That's true. That's true. My impression is a bunch of you are geniuses. You sit down together in a room and you get all your data, you train your model, like everything's very smooth sailing. Um, what's wrong with that[00:12:02] Varun Mohan: image? Yeah, so probably a lot of it just in that a lot of our serving infrastructure was already in place, uh-huh, before then.[00:12:09] So like, hey, we were able to knock off one of these boxes that I think a lot of other people maybe struggle with. The open source serving offerings are just, I will say, not great in that they aren't customized to transformers and these kind of workloads where I have high latency and I wanna like batch requests, and I wanna batch requests while keeping latency low.[00:12:29] Mm-hmm, right? One of the weird things about generation models is they're like autoregressive, at least for the time being. They're autoregressive. So the latency for a generation is a function of the amount of tokens that you actually end up generating. Like that's like the math. And you could imagine while you're generating the tokens though, unless you batch,[00:12:46] it's gonna end up being the case that you're not gonna get great flop utilization on the hardware. So there's like a bunch of trade offs here where if you end up using something completely off the shelf, like one of these serving, uh, serving frameworks, you're gonna end up leaving a lot of performance on the table.[00:13:00] But for us, we were already kind of prepared to sort of do that because of our infrastructure that we had already built up. And probably the other thing to sort of note is early on we were able to leverage open source models, sort of bootstrap it internally within our company, but then to ship, we finally had some requirements like, Hey, we want this model to have fill in the middle capabilities and a bunch of other things.[00:13:20] And we were able to ship a model ourselves. So we were able to time it so that over the course of multiple months, different pieces were like working out properly for us. So it wasn't, you know, we started out and we were just planning the launch materials. The moment we started, there was like maybe some stuff that was already there, some stuff that we had already figured out how to train models at scale internally.[00:13:38] So we were able to just leverage that muscle very quickly. I think the one[00:13:41] swyx: thing that you had figured out from the beginning was that it was gonna be free forever. Yeah. Yeah, co-pilot costs $10[00:13:47] Varun Mohan: a month. Co-pilot costs $10 a month. I would argue significantly more value than $10 a month. The important thing for us though, was we are gonna continue to build more great products on top of code completion.[00:13:58] We think code completion is maybe day one of what the future looks like. And for that, clearly we can't be a product that's like we're $10 a month and we're adding more products. We want a user base that loves using us and will continue to stay with us as we continue to layer on more products. And I'm sure we're gonna get more users from the other products that we have, but we needed some sort of a differentiator.[00:14:17] And along the way we realized, hey, we're pretty efficient at running these workloads. We could probably do this. Oh, so it wasn't,[00:14:23] swyx: it was a plan to be free from the start. You just[00:14:25] Varun Mohan: realized we, yeah. 
Part of the reasoning here was we were confident we could probably build a pro tier and go to the enterprise.[00:14:35] But originally, when we started, we weren't like, we're just gonna go and give all pieces of software away for free. That wasn't the goal there. And[00:14:43] swyx: since you mentioned, uh, adoption and, you know, traction and all that, uh, what can you disclose about user growth? Yeah, user adoption.[00:14:50] Varun Mohan: Yeah. So right now we probably have over 10,000 users and thousands of daily actives, and people come back day over day. Our growth is around, you know, four to 5% day over day right now. So all of our growth right now is word of mouth, and that's fundamentally because the product is actually one of those products where,[00:15:08] even if you use Copilot and use us, it's hard to tell the difference, actually. And a lot of our users have actually churned off of Copilot.[00:15:14] swyx: I switched, yeah, to support you guys, but also to try[00:15:17] Varun Mohan: it out. Yeah, exactly. So the crazy thing is, it wasn't like, hey, we're gonna figure out a marketing motion of going to the people that have never heard of Copilot and we're gonna get a bunch of users.[00:15:27] We wanted to just get users so that in our own right we're a really great product. Uh, and we've spent a lot of engineering time, and obviously we co-wrote a blog post with you, Sean, on this: there's a lot of engineering work, even beyond the latency, making sure that you can get your cost down to make a product like this actually work.[00:15:44] swyx: Yeah. That's a long tail of stuff that you referenced,[00:15:47] Varun Mohan: right? Yes. Yeah, exactly.[00:15:48] swyx: And you said something to the order of, um, and this maybe gets into Copilot for X, uh, which is something that everybody is keen about cuz they see the success of Copilot. They're like, okay, well first of all, developer tools, there's more to do here.[00:16:00] And second of all, let's take the Copilot idea and apply it to other disciplines. I don't know if you wanna... Yeah.[00:16:06] Varun Mohan: There's[00:16:06] Alessio Fanelli: gonna be some key points that you touched on. Um, how to estimate inference at scale, you know, and the latency versus quality trade-offs. Building on first party models. So this is free forever because you run your own models, right?[00:16:19] That's right. If you were building on OpenAI, you wouldn't be able to offer it for free in real time. You know, when I first used Codeium, it was literally the same speed as Copilot.[00:16:29] swyx: It's a little bit faster. I don't know how to quantify it,[00:16:31] Varun Mohan: but we are faster. But it's one of those things that we're not gonna market as the reason, because it's not in and of itself, I'm just gonna be open with you,[00:16:39] it's not a reason for you to suddenly turn off Copilot if our answers were trash but we were faster. You know what I mean? But your focus[00:16:46] Alessio Fanelli: was there. We used the alpha. I think Prem on our Discord came to us and said, you guys should try this out. So it was really fast.
Even then, prompt optimization is another big thing, and model outputs, and UX, kind of how you bring them together.[00:17:00] Which one or two of these things should new founders really think about first?[00:17:07] Varun Mohan: Yeah, I think my feeling on this is, you probably should always bootstrap on top of an existing API. Because even if you were to... the only reason why we didn't is because we knew that this product was actually buildable.[00:17:22] Probably if we worked hard enough to train a model, we would actually be able to build a great product already. But if you're actually going out and trying to build something from scratch, unless you genuinely believe "I need to fine-tune on top of, you know, terabytes of data", and a terabyte is a very large amount of data, but even tens of gigabytes of data,[00:17:37] probably go out and build on top of an API and spend most of your time making it so that you can hit that quality-latency trade-off properly. And if I were to think about the three categories of an LLM product, it's probably latency, quality, and correctability. The reality is, you know, if I were to take a product like Copilot or Codeium, the latency is very low,[00:17:58] the quality, I think, is good enough for the task, and the correctability is very easy. What is correctability? Correctability means, let's say the quality is not there, like you consider the case where the answer is wrong: how easy is it for your user to actually go and leverage parts of the generation?[00:18:16] Maybe a concrete example. There's a lot of things people are excited about right now where I write a comment and it generates a PR for me, and that's like really awesome in theory. I think that's a really cool thing and I'm sure at some point we will be able to get there. That will probably require an entirely new model, for what it's worth, that's trained on diffs and commits and all these other things, that looks at improvements in code and stuff.[00:18:37] It's probably not gonna be just trained on generic code. But the problem with those sorts of, I would say, applications is that, let's suppose something does change many files, makes large amounts of changes. First of all, it's guaranteed not gonna be right, because even the idea of reviewing the change takes a long time.[00:18:54] So if the quality and the correctability are just not there, let's say you had a 10-file change and you modified, like, you know, files two and four, and those two modifications were consistent, but the other eight files were not consistent, then suddenly the correctability is really hard.[00:19:10] It's hard to correct the output of the model. And so the user interface is 100% really important. But maybe until you get the latency down, or the correctability a lot better, it's probably not gonna be shippable. And I think that's what you gotta spend your time focusing on:[00:19:26] can you deliver a product that is actually something users want to use? And I think this is why I was talking about demos. It's very easy to handpick something that works for a demo; exceedingly hard for something that has large scope, like a PR, to work consistently.
It will take a lot of engineering effort to make it work on small enough chunks so that a user is like, wow, this is value generative to me.[00:19:49] Because eroding user trust or consumer trust is very easy. It is much, much easier to erode consumer trust than enterprise trust. So just be mindful of that, and I think that's probably the mantra that most of these companies need to operate under. Have you done any[00:20:05] Alessio Fanelli: analysis on what the ratio between code generated and latency is?[00:20:11] So you can generate one line, but you could also generate a whole block, or, yeah, a whole class, and, you know, the more you generate, the more time it takes. What's the sweet spot that you[00:20:21] Varun Mohan: found? Yeah, so there was a great study, and I'm not sure if it's possible to link it, but there was a great study about Copilot that came out.[00:20:28] Basically what they said was there are two modes that developers usually develop in with a code assistant technology: they're either in what's called acceleration mode or exploration mode. And exploration mode is basically the case where you don't even know what the solution space for the function is,[00:20:43] and you just wanna generate a lot of code because you don't even know what that looks like. Like, it might use some API that you've never heard of. And what you're actually doing at that point is writing a clean comment, just wishing and praying that, you know, the generation is long enough and gets you far enough, right?[00:20:57] Acceleration mode is basically you are doing things where you are very confident in what you're doing, and effectively the code assistant gives you that muscle so that you can basically stay in flow state. You're not thinking about exactly what the APIs look like; push comes to shove, you would figure out what the APIs look like, but mentally it takes a load off your head, where you're like, oh wow,[00:21:18] I can just do this. The gap between intent and execution is just a lot lower there. And I think effectively you want a tool that captures that a little bit. And we have heuristics in terms of capturing whether or not you're in acceleration versus exploration mode. And a good heuristic is, let's say you're inside a basic block of a piece of code,[00:21:37] let's say a block of code or an if statement: you're probably already in acceleration mode, and you would feel really bad if I started generating the else clause. Because what happens if that else clause is really wrong? That's gonna cause mental load for you, because of the way programmers think:[00:21:51] they only want to complete the if statement first, if that makes sense. So there are things where we are mindful of how many lines we generate. If you use the product, multi-line generations happen, and we are happy to do them, but we don't want to do them when we think it's gonna increase load on developers, if that makes sense.[00:22:07] Alessio Fanelli: That makes sense.
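As a toy illustration of that heuristic (Codeium's real logic isn't public, so everything below is an assumption): a completion engine might cap the generation length based on what the cursor context looks like.

```python
# Toy acceleration-vs-exploration gate; an assumption, not Codeium's heuristic.
# Idea: inside an unfinished block (e.g. an `if` body), prefer short
# single-line completions; after a fresh top-level comment, allow a long,
# exploratory multi-line generation.
def completion_budget(lines_above: list[str]) -> int:
    """Return the max number of lines we're willing to generate."""
    last = lines_above[-1].rstrip() if lines_above else ""
    indented = last.startswith((" ", "\t"))
    opens_block = last.endswith(":")            # `if ...:`, `for ...:`, `def ...:`
    after_comment = last.lstrip().startswith("#")

    if after_comment and not indented:
        return 20   # exploration mode: user is "wishing and praying"
    if indented or opens_block:
        return 1    # acceleration mode: don't generate the else clause yet
    return 5        # default: modest multi-line budget

# Usage: completion_budget(["def parse(x):", "    if x is None:"]) -> 1
```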
[00:22:07] Alessio Fanelli: So, Copilot for X: what are the X's that you think are interesting for people to build[00:22:13] in? Varun Mohan: Didn't we see some tweet recently about Harvey AI, uh, a company that is trying to sell legal assistance? That's pretty impressive, honestly. That's very impressive.[00:22:23] I would really love to see what the product looks like there, because there's a lot of text there. You know, looking at Bing AI, like, I mean, it's pretty cool, but it seems like groundedness is something a lot of these products struggle with, and I assume with legal, if there's one thing you want them to[00:22:39] get right, it's the groundedness. Yeah.[00:22:42] swyx: Yeah. I've made the analogy before that law and legal language is basically just another form of programming language. You have to be that precise. Yes. Definitions must be made, and you can scroll to find the definition. It's the same thing. Yes.[00:22:55] Varun Mohan: Yes. Yeah. But like, I guess there's a question of comprehensiveness.[00:22:59] So let's say the only way it generates a suggestion is it provides, you know, citations to other legal documents. You don't want it to be the case that it misses things, so you somehow need the comprehensiveness, but at the same time, you also don't want it to make conclusions that are not from the things it cites.[00:23:15] So, I don't know, that's very impressive. It's clear that they've demonstrated some amount of value, because they've been able to close a fairly sizable enterprise contract. It was like a firm with 3,500 lawyers, something nuts, honestly. Very cool. So it's clear this is gonna happen, uh, and I think people are gonna need to be clever about how they actually make it work[00:23:34] within the constraints of whatever workload they're operating in. Also, you guys[00:23:37] swyx: are so good at training stuff, why don't you try[00:23:39] Varun Mohan: cloning it? Yeah. So I think that's, uh, a preview of the roadmap. Yeah, yeah, yeah. No, no, no, but I'm just kidding. I think one of the things that we genuinely believe as a startup is most startups can't really even do one thing properly.[00:23:52] Mm-hmm. Focus. Yeah. Yeah. Usually doing one thing is really hard. Most companies that go public have like maybe a couple big products. They don't really have like 10, so we're under no illusions. To give the best product experience, the amount of engineering and attention to detail it takes to build one good product is hard.[00:24:08] So it's probably gonna be a while before we even consider leaving code. That's gonna be a big step, because the amount of learning we need to do is gonna be high. We need to get users right. We've learned so much from our users already, so yeah, I don't think we'd go into law anytime soon.[00:24:22] swyx: 3,500 lawyers with Allen & Overy, uh, is apparently the new...[00:24:27] Varun Mohan: That's actually really big.[00:24:28] Yeah. Yeah. Congrats to them.[00:24:29] swyx: Yeah, it's funny, cuz it seems like these guys are moving faster than Copilot. You know, Copilot just announced enterprise, uh, like Copilot for teams or Copilot for Enterprise, yeah, after like two years of testing.[00:24:40] Varun Mohan: Yeah, it does seem like the Copilot team has built a very, very good product.[00:24:44] Um, so I don't wanna say anything, but I think it is the case that startups will be able to move faster. I feel like that is true, but hey, GitHub has great distribution. Whatever product they do have, they will be able to sell it really well. Shall[00:24:56] swyx: we go into model numbers and infra estimates? Our favorite[00:25:01] Varun Mohan: topics.[00:25:02] Nice small models.
Nice.[00:25:04] swyx: So this is, um, relevant to basically, I'm researching a lot of scaling law stuff. You have a lot of thoughts. You host paper discussions[00:25:12] Varun Mohan: in your team. Yeah, we try to read papers that we think are really interesting and relevant to us. Recently there's just been a fire hose of papers;[00:25:21] you know, even just curating what papers we should read internally as a company is work. Yeah, I think there's so much good content[00:25:28] swyx: out there. You guys should have a podcast. I mean, I told you this before. Just put a mic near where you guys are[00:25:33] Varun Mohan: talking.[00:25:34] We gotta keep developing Codeium, though. No, but you're doing this discussion[00:25:38] swyx: anyway. You[00:25:38] Varun Mohan: might as well just put the discussion on a podcast. I feel like some of the thoughts are raw, right? Like, they're not gonna be as nuanced. Like, we'll just say something completely stupid during our discussions.[00:25:48] I don't know, maybe that's exciting. Maybe it's kinda like a justin.tv, but for ML papers. Okay, cool. I'd watch that.[00:25:55] swyx: Okay, so Copilot is 12 billion parameters. Salesforce CodeGen is up to 16. GPT-3 is 175. GPT-4 is gonna be 100 trillion billion. Yeah. So what we landed on with you, with Chinchilla, is that we now have an idea of what compute-optimal data scaling is,[00:26:14] yeah, which is about 20 tokens per parameter. Is that intuitive to you? Like, what did that[00:26:18] Varun Mohan: unlock? I think basically what this shows is that bigger models are more data efficient: given the same number of tokens, a bigger model trained on that same number of tokens is gonna learn more, basically.[00:26:32] But also, at the same time, the way you have to look at it is there are more FLOPs to train a bigger model on the same number of tokens. So let's say I had a 10 billion parameter model and I trained it on 1 million tokens, and then I had a 20 billion parameter model: at the end of it, the bigger one will be a better model.[00:26:47] It will have better perplexity numbers, which means the probability of its prediction for the next token is gonna be better. But at the end of it, you did burn twice the amount of compute on it, right? So Chinchilla is an interesting observation, which says, if you have a fixed compute budget and you want the best model that comes out of it, because there's a difference here where a model that is smaller, trained on the same number of tokens, uses fewer FLOPs,[00:27:12] there's a sweet spot of number of tokens and size of model. I will say people are probably talking about it more than they should, and I'll explain why, but it's a useful result, which is like, let's say I have, you know, some compute budget and I want the best model: it tells you what you should train.[00:27:31] The problem I think here is there is a real trade-off: you do need to run this model somewhere. You need to run it on a piece of hardware. So then it comes down to how much memory that piece of hardware has. Let's say for a fixed compute budget, you could train a 70 billion parameter model. What are you gonna put that on?[00:27:47] Yeah, maybe you could... could you put that on an 80 gig A100? It would be a stretch.
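As a back-of-envelope in Python: using the standard C ≈ 6·N·D approximation for training compute (an assumption; the Chinchilla paper fits its constants empirically) and the roughly 20-tokens-per-parameter rule just mentioned, a fixed FLOP budget pins down both the model size and the dataset size:

```python
# Back-of-envelope Chinchilla sizing. Assumptions: training compute
# C ~= 6 * N * D (N params, D tokens), and the compute-optimal point puts
# D ~= 20 * N, per the rule of thumb discussed above. Treat as a sketch.
def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    # C = 6 * N * D and D = k * N  =>  C = 6k * N^2  =>  N = sqrt(C / 6k)
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: ~5.76e23 FLOPs (roughly Chinchilla's budget) -> ~70B params, ~1.4T tokens
n, d = chinchilla_optimal(5.76e23)
print(f"params ~ {n/1e9:.0f}B, tokens ~ {d/1e12:.1f}T")
```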
You could do things like, you know, int8 or FP8 quantization to reduce the amount of memory that's on the box, and do all these other things. But you have to think about that first, right, when you want to go out and train that model.[00:27:59] The worst case is you end up training that model and you cannot serve it. So actually what you end up finding is, for a lot of these code completion models, they are actually what you would consider over-trained. So by that I mean, let's look at a model like CodeGen, and I could be wrong by, you know, a hundred billion here or there; I've got some data.[00:28:18] Okay, let's look at the 3 billion parameter model. I think it's actually a 2.7 billion parameter model. It's weird because they also trained on natural language on top of code, but it's trained on hundreds of billions of tokens. If you applied that Chinchilla optimization to it, you'd be like, wow, this is a stupid use of compute,[00:28:36] right? Because at around three billion parameters, they should be going to about 60 billion tokens, and for anything more than 60 they should have just increased the model size. But the reality is, the compute-optimal model might not be one that's easy to serve, right? It could just have more parameters. And in our case, the models that we train internally might not be the most compute-optimal.[00:28:56] In other words, we probably could have had a better model by making it larger, but the trade-off would've been latency. We know what the impact of having higher latency is, and on top of that, being able to fit properly within our hardware constraints would've also been a concern.[00:29:08] swyx: Isn't the classic stopping point when you see the loss kind of level off?[00:29:12] Right now you're just letting Chinchilla tell you.[00:29:16] Varun Mohan: But like, you could just look at loss. The problem is the loss will continue to go down. It'll just continue to go down in a way that's not that pleasing. It's gonna take longer and longer. It's gonna be painful, but it's one of those things where, if you look at the perplexity difference between,[00:29:31] let's say, a model that's 70 billion versus 10 billion, it's not massive. It's not like tens of percentage points. It's very small, right? Mm. The reality here, and this comes down to the IQ of these models in some sense, is that small wins at the margins are massive wins in terms of IQ.[00:29:47] It's harder to get those, and they don't look as big, but they're massive wins in terms of reasoning. They can now do chain of thought, all these other things. Yeah, yeah, yeah.[00:29:55] swyx: And so apparently that unlocked around the[00:29:57] Varun Mohan: 20 billion mark. Yes, that's right. Some kind of magic. Yeah. I think that was from the UL2 paper, or maybe one of those others.[00:30:03] Any thoughts on why? I don't know. I mean, emergence of intelligence, I think. I think maybe one of the things is, we don't even know, maybe five years from now, whether what we're gonna be running are transformers. But we don't 100% know that that's true.
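Circling back to the "it would be a stretch" point: weight-memory arithmetic alone shows why serving constraints push against compute-optimal sizing. A sketch; real serving also needs KV-cache and activation memory on top, which makes it worse:

```python
# Why serving dictates model size: weight memory alone at different precisions.
# Back-of-envelope only; KV cache and activations are extra.
def weight_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

for name, bpp in [("fp32", 4), ("fp16", 2), ("int8/fp8", 1)]:
    print(f"70B in {name}: {weight_gb(70e9, bpp):.0f} GB (one A100 has 80 GB)")
# fp16 -> 140 GB: doesn't fit one A100. 8-bit -> 70 GB: fits, barely,
# which is the "it would be a stretch" point made above.
```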
I mean, there are a lot of maybe-issues with the current version of transformers, which is the way the attention layers work: the amount of compute is quadratic in the context size, because you're doing an n-squared operation on the attention blocks, basically.[00:30:30] And obviously, you know, one of the things that everyone wants right now is infinite context. They wanna shove as much prompt as possible in there. And the current version of what a transformer looks like is maybe not ideal. You might just end up burning a lot of FLOPs on this when there are probably more efficient ways of doing it.[00:30:45] So I'm sure in the future there are gonna be tweaks to this. Yeah. Uh, but it is interesting that we found out interesting things like, hey, bigger is pretty much always better. There are probably ways of making smaller models significantly better through better data; that is definitely true. Um, and I think one of the cool things that the Stack showed, actually, was they did some ablation studies where they were like, hey, what happens if we do decontamination of our data, what happens if we do de-duplication,[00:31:14] what happens if we do near-dedup of our data, and how does the model get better? And they have some compelling results that showcase that data quality really matters here. But ultimately, yeah, I think it is an interesting result that at 20 billion there's something happening. But I also think some of these things may look materially different in the future than what they look like right now.[00:31:30] Hmm. Do you think[00:31:31] Alessio Fanelli: the token limitation is actually a real architectural limitation? Like, if you think about the tokens needed as kind of asymptotic, right: once you have 50,000 tokens of context, like, 50,000 or infinite, for most use cases it's the same. Where do you think that number is, especially as you think about code? Like, some people have very large code bases, there's a lot.[00:31:53] Have you done any work there to figure out where the sweet[00:31:55] Varun Mohan: spot is? Yeah, look, I think what's gonna really end up happening is people will come up with a clever way, and there was some research that I believe came out of Stanford, I think from the HELM group; I think they came out with some architecture that looks a little bit different than transformers, and I'm sure something like this will work in the future.[00:32:13] What I think is always gonna happen is, if you find a cheap way to embed context, people are gonna figure out a way to put as much as possible in, because LLMs so far have been virtually stateless. So the only thing that they have beyond fine-tuning is just shoveling everything you can inside.
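To put rough numbers on that n-squared growth, here is an illustrative FLOP count for the attention-score term alone; the constants are illustrative assumptions, not a precise accounting:

```python
# Why "infinite context" is expensive with vanilla attention: the score
# matrix is (n x n), so doubling context roughly quadruples this term.
def attention_score_flops(n_tokens: int, d_head: int = 64, n_heads: int = 16) -> float:
    # QK^T plus attention-weighted V: ~4 * n^2 * d per head (rough count).
    return 4.0 * n_heads * d_head * n_tokens ** 2

for n in (2_048, 4_096, 8_192):
    print(f"{n} tokens: {attention_score_flops(n):.2e} FLOPs in attention scores")
# Each doubling of context multiplies this term by ~4x, the n^2 blowup above.
```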
And there are some interesting papers, like RETRO; actually, there are maybe some interesting ideas that have come out recently. Yeah, let's go through them. So one of the really interesting ideas, I think, is RETRO. It's this paper that came out of DeepMind, and the idea is, let's say you send out a prompt.[00:32:44] Okay, send out a prompt. You compute the BERT embedding of that. And then you have this massive embedding database. And by massive, I'm not talking about gigabytes, I'm talking about terabytes. Like, you actually have 10 times the number of tokens as what was used to train the model. So let's say you had a model that was trained on a trillion tokens: you have a 10 trillion token embedding database.[00:33:04] And obviously Google has this, because they have all the content that ever existed in humanity and they have, like, the best dataset, and so they were able to make one of these embedding databases. But the idea here, which is really cool, is you end up taking your prompt, computing the BERT embedding, and finding the things that were nearby.[00:33:20] So you do roughly a semantic search, or an embedding search, within that. And then you take the documents that came from those embeddings and you shove those into the model too, with what's called chunked cross-attention; you shove them into the model with it as well.[00:33:34] Suddenly now the model is able to take in external information, which is really exciting, actually, because suddenly now you're able to get dynamic context in, and the model in some sense is deciding what that context is. It's not deciding it completely, in this case, because the BERT model here was actually frozen;[00:33:50] it wasn't trained with the RETRO model as well. But the idea is you're somehow adding or augmenting context, which I think is quite exciting. There are probably two futures. Either context becomes really cheap: right now it's quadratic; maybe there's a future where it becomes linear in the size of the context. But the future might actually be that the model itself dictates: hey, I have this context,[00:34:10] you have this data source, give me this. The model itself is going out into your database and being like, I want this information. And this is kind of like what Bing chat is sort of looking like, where there's probably some model that's saying, I want this information,[00:34:27] and that is getting augmented into the context. Now the model itself knows what context it has, and it can build a state machine of what it needs. And that's probably what the future of this looks like. So you[00:34:37] swyx: predict monster embedding database[00:34:39] Varun Mohan: companies? Probably monster embedding database companies, yeah.[00:34:43] The model in some sense will need to talk to these embedding databases. I'm actually not convinced that the current breed of embedding database companies are ready for what the future looks like. I'm just looking at their pricing, how much it costs per gigabyte, and it's prohibitive at the scale we're talking about: let's say you actually did want to host a 10 terabyte embedding database.[00:35:03] A lot of them were created, let's say, two, three years ago, when people were like, you know, embedding databases are small, and they needed to make the cost economics work. But maybe, yeah, there's probably gonna be a big workload there. I will just say, for us, we will probably just build this in-house to start with, and that's because I think the technology probably isn't there.[00:35:20] And when the technology isn't there yet, waiting on point solutions to come up is a lot harder, um, than probably building it up yourself.
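A minimal sketch of that retrieve-and-augment flow, with FAISS standing in for the "monster embedding database" and a sentence-transformers encoder standing in for RETRO's frozen BERT. The real RETRO wires retrieved chunks in through chunked cross-attention inside the model rather than prepending text, so this is a simplification, and the model and corpus names are assumptions:

```python
# Retrieve-then-read sketch of the RETRO-style flow described above.
# Simplifications: frozen off-the-shelf encoder, FAISS as the embedding
# database, and naive prompt prepending instead of chunked cross-attention.
import faiss                                   # pip install faiss-cpu
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = ["chunk one ...", "chunk two ...", "chunk three ..."]  # stand-in corpus

# Build the index once, offline. Real systems index billions of chunks.
embs = encoder.encode(corpus, normalize_embeddings=True)
index = faiss.IndexFlatIP(embs.shape[1])       # inner product == cosine here
index.add(np.asarray(embs, dtype="float32"))

def augment(prompt: str, k: int = 2) -> str:
    # Embed the prompt, find nearby chunks, and splice them into the context.
    q = encoder.encode([prompt], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    retrieved = "\n".join(corpus[i] for i in ids[0])
    return f"{retrieved}\n---\n{prompt}"       # naive augmentation, not RETRO's

print(augment("what does chunk two say?"))
```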
The way I like to think about this is, the world in the LLM space probably looks like how the early internet days were, where I think the value accrued to, probably, Google, and Google needed to figure out all the crazy things to make their workload work.[00:35:41] And the reason why they weren't able to outsource is, no one else was feeling the pain.[00:35:46] swyx: They're just solving their own pain points. They're so far ahead of everyone else. Yes, yes. And just wait[00:35:50] Varun Mohan: for people to catch up. Yes. Yes. And that's maybe different than how things like Snowflake look, where the interface was decided by what SQL looked like 50 years ago.[00:35:58] And because of that, you can go out and build the best database, and yeah, everyone's gonna be like, "this doesn't make my beer taste better", and buy your database, basically.[00:36:08] swyx: That's a great reference, by the way. Yeah. We have some friends of the pod that are working on embedding databases, so we'll try to connect you to Chroma[00:36:14] Varun Mohan: and see. Yeah. Oh, I actually know Anton. I worked with him at Nuro. Oh, there you go. Yeah.[00:36:20] swyx: Well, what do you think about Chroma pivoting towards an embedding[00:36:22] Varun Mohan: database? I think it's an interesting idea. I think it's an interesting idea. I wonder what the early set of workloads that they will hit are, and, you know, what the scaling requirements are. This is maybe the classic thing where the teams are great, but you need to pick a workload here that you care about the most. You could build anything. When you're an infrastructure company, you can go in... if I was selling serving infra, I could build serving for, like, linear regression. I could build this, but unless you hit the right niche for the end user, it's gonna be hard.[00:36:44] So I'm excited to see what comes out, and if they're great, then we'll use it. Yeah.[00:36:54] swyx: I also like how you slowly equated yourself to Google there. Oh, we're not Google. You're gonna be the Google of AI.[00:37:00] Varun Mohan: We're definitely, we're definitely not Google. But I was just saying in terms of the style of companies that came out. Yeah. You know? Absolutely. Or maybe we should live on the cutting edge in[00:37:08] swyx: the future. Yeah. I think that's the pitch.[00:37:10] Varun Mohan: Okay, thanks for pitching us.[00:37:13] Alessio Fanelli: So you just mentioned that the older vector embedding solutions are kind of not made for the LLM generation of compute size.[00:37:21] What does LLMOps look like? You know, which pieces need to be drastically different? Which ones can we recycle?[00:37:27] Varun Mohan: Yeah. One of the things that we've found in our own experience of building Codeium, which just shows how much is missing, and this is the thing where I don't know how much of this you can really outsource, is that we needed to build eval infrastructure.[00:37:40] That means: how do you evaluate a code model? And there are things online like HumanEval, right, which is the benchmark; I was telling Sean about this. The idea of HumanEval is really neat for code. The idea is you provide a bunch of functions with docstrings, and the eval, instead of being "did you predict the next token",[00:37:56] is "did you generate the entire function, and does the function run correctly against a bunch of unit tests?" Right.
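A stripped-down sketch of that functional-correctness idea: execute the generated function against unit tests instead of scoring next-token accuracy. The real benchmark sandboxes execution; bare exec() on model output is unsafe outside a toy like this:

```python
# HumanEval-style functional-correctness check: instead of next-token
# accuracy, run the generated function against unit tests.
# WARNING: exec() on untrusted model output is unsafe; real harnesses sandbox.
def passes_tests(generated_code: str, entry_point: str, tests: list) -> bool:
    namespace: dict = {}
    try:
        exec(generated_code, namespace)          # define the candidate function
        fn = namespace[entry_point]
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False                             # crashes count as failures

# Toy usage with a hypothetical completion for `def add(a, b):`
candidate = "def add(a, b):\n    return a + b\n"
print(passes_tests(candidate, "add", [((1, 2), 3), ((0, 0), 0)]))  # True
# pass@1 over a benchmark is then just the mean of this over all problems.
```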
And we've built more sophisticated evals to work on many languages, to work on a bigger variety of code bases. One of the issues that ends up coming up with things like HumanEval is contamination,[00:38:12] because a lot of these, uh, things that train models end up training on all of GitHub, and GitHub itself has HumanEval on it, so they end up training on that. And the numbers are tiny, right? But it doesn't matter that it's tiny, because the model will just remember it. It's not that it's that precise, but it will. It's basically like mixing your training and validation set.[00:38:32] It's like, oh, yeah, yeah, yeah. But we've seen cases online where someone is like, "we have a code model, we did this one thing, and HumanEval jumped a ton", and we were just like, huh, did HumanEval get into your dataset? Is that really what happened there?[00:38:46] But we've needed to build all this eval. And what this shows is that data cleaning is massive, but data cleaning looks different by domain. Like, code data cleaning is different: what a high quality piece of code is, is probably different than what a high quality legal document is. Yeah. And then on top of that, how do you eval this?[00:39:01] How do you also train it at scale at whatever cost you really want to hit? But those are things that the end user is either gonna need to solve, or someone else is gonna need to solve for them. And I guess maybe one of the things I'm a little bearish on is, if another company comes out and solves eval properly for a bunch of different verticals, what was the company that they were selling to really doing at that point,[00:39:21] if they themselves weren't doing eval for their own workload and all these other things? I think there are cases, like for code, where we probably couldn't outsource our eval; like, we wouldn't be able to ship models internally if we didn't know how to eval. But it's clear that there's a lot of different things that people need to take on. Like, hey, maybe there's an embedding piece: how large does this embedding database actually need to be? But hey, this does look very different than what classic MLOps probably did. Mm-hmm.[00:39:47] Alessio Fanelli: How do you compare some of these models? Like, when you're thinking about model upgrading and making changes, what does the testing piece of it look like internally?[00:39:56] Varun Mohan: For us, it's like old school AB testing. We've built infrastructure to be able to say, ramp up users from one to 10 to 50 percent, and slowly roll things out. This is all classic software, uh, which[00:40:09] swyx: you do in-house. You don't buy any[00:40:10] Varun Mohan: services. We don't buy services for that.[00:40:13] There are good services, open source services, that help; you just don't need them. Uh, yeah, I think that's just not the most complicated thing for us. Sure. Basically. Yeah. Uh, obviously we use things like Google Analytics and all this other stuff, but yeah, for ramping our models and finding out if they're actually better, the eval also doesn't tell the whole story, because for us, even before generating the prompt, we do a lot of work,[00:40:36] and the only way to know that it's really good across all the languages is for our users to tell us that it's actually good. And they tell us by accepting completions.
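A minimal sketch of the deterministic bucketing behind that kind of one-to-10-to-50 percent ramp; hypothetical, not Codeium's actual infra:

```python
# Deterministic percentage ramps for AB tests (hypothetical sketch).
# Hashing (experiment, user_id) gives each user a stable bucket, so raising
# the ramp from 1% -> 10% -> 50% only ever adds users, never reshuffles them.
import hashlib

def in_treatment(user_id: str, experiment: str, ramp_percent: float) -> bool:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000       # stable bucket in [0, 10000)
    return bucket < ramp_percent * 100          # e.g. 1% -> buckets 0..99

# Buckets are stable, so anyone in treatment at 1% is still there at 50%:
print(in_treatment("user-42", "new-model-v2", 1.0),
      in_treatment("user-42", "new-model-v2", 50.0))
```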
[00:40:44] swyx: So GitHub Copilot, uh, the extension, does this thing where they'll set a timer, and then within like five minutes, 10 minutes, 20 minutes, they'll check in to see if the code is still there.[00:40:54] I thought it was a[00:40:54] Varun Mohan: pretty creative way. It's honestly a very creative way. We do do things to see, in the long term, if people did accept or kept things that are roughly the same, because they could accept and then change their minds. So we are mindful of things like that.[00:41:09] But for the most part, the most important metric is: at the time, did we actually generate value? And we want to know if that's true. And it's honestly really hard to get signal unless you have a non-trivial amount of usage; non-trivial meaning you're doing hundreds of thousands of completions, if not millions of completions.[00:41:25] That sounds like, oh wow, that's a very small amount, but it's classic. Like, when I used to be an intern at Quora, you know, now more than seven, eight years ago: when I was there, I shipped a change, and Quora had millions of daily actives, and it looked like it was good, and then a week later it was just way worse.[00:41:43] And how is this possible? Like, in a given hour we get hundreds of thousands of interactions... no, you just need way more data. So this is one of those things where I think having users is genuinely very valuable to us, basically. Users is all you need. Yeah.[00:41:59] swyx: Um, by the way, since you brought up Quora, have you tried Poe? Any thoughts[00:42:03] Varun Mohan: on Poe? I have not actually tried Poe.[00:42:05] swyx: I mean, it seems like a question answering website that's been around for 20 years or something would be very good at question answering. Yeah.[00:42:12] Varun Mohan: Also Adam, the CEO, is incredibly brilliant. That guy is insanely smart, so I'm sure they're gonna do well.[00:42:18] swyx: They have accidentally built the perfect data collection company for QA.[00:42:22] Varun Mohan: Yeah. It takes a certain kind of person to go and cannibalize your original company like that. I mean, it was kinda stagnant for a few years. Yeah, that's probably true. That's[00:42:31] swyx: probably true. The observation is, I feel like you have a bias towards domain-specific models, whereas most research is skewed towards general-purpose models. I don't know if there's a deeper insight here that you wanna go into or not, but most people say train on all the things, get all the data, and you're like, no, no, no, everyone needs customized, per-task[00:42:49] Varun Mohan: datasets. Yeah. I'm not gonna say that general intelligence is not good.
You want a base model that's still really good, and that's probably trained on normal text, like a lot of different content.[00:43:00] But I think probably one thing from old school machine learning, even though I'm the kind of person that says a lot of old school machine learning is just gonna die, is that training on a high quality dataset for your workload is always gonna yield better results, and more predictable results.[00:43:15] And we are under no illusions that that's not the case, basically. And[00:43:19] swyx: then the other observation is bandwidth and connectivity, uh, which is not something that people usually think about, but apparently is a big deal. Apparently training, at least synchronous training, needs high GPU coordination.[00:43:29] These are deleted notes from Sam Altman talking about how they think about training, and I was like, oh yeah, that's an insight. And[00:43:34] Varun Mohan: you guys have the same thing. Yeah. So I guess for training, you're right in that it is actually nuts to think about how insane the networks are for NVIDIA's most recent hardware.[00:43:46] For the H100 boxes, you shove eight of these H100s in a box. Between two nodes, the bandwidth is 3,200 gigabits a second, so 400 gigabytes a second between machines. That's nuts when you just sit and think about it. That's like double the memory bandwidth of what a CPU has, but it's between two machines.[00:44:04] On top of that, within the machine, they've created this fabric called NVLink that allows you to communicate at ultra low latency, even lower than PCIe, if you're familiar; that's the communication protocol[00:44:21] between the CPU and other PCIe devices. All of this is to make sure that reductions are fast, low latency, and you don't need to think about it. And that's because a lot of deep learning training has evolved to be synchronous. In the OG days there was a lot of analysis in terms of how good asynchronous training is, which is like, hey, I have a node, it has a current state of the model,[00:44:39] it's gonna update that itself locally, and it'll, like, every once in a while, go to another machine and update the weights. But I think everyone has converged to synchronous. I'm not exactly sure; there's not a lot of good research on asynchronous training right now, or maybe there is and I haven't read it.[00:44:52] It's just that there isn't as much research because people are just like, oh, synchronous works, uh, and the hardware is continually upleveled to handle[00:44:59] swyx: that. Yeah. It was just unintuitive to me, cuz the whole purpose of GPUs is to train a lot of things in parallel.[00:45:05] Varun Mohan: But the crazy thing is, maybe I can give some dumb math here.[00:45:09] Sure. Which is that, uh, let's go with GPT-3, which is 175 billion parameters. The optimizer state while you're training is something like 14 times the size of the model, so in this case, if it's 175 billion parameters, that's probably, I'm not great at mental math here, but that's probably around 2.5 terabytes just to store the optimizer state.[00:45:30] That has gotta be sharded across a lot of machines. Like, that is not a single GPU. Even if you take H100s with 80 gigs each, just to shard that much is 30-something GPUs, around four boxes at least.
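His mental math checks out; here is a quick sanity check. The 14x bytes-per-parameter multiplier is the one quoted in the conversation; exact optimizer-state accounting is an assumption that varies by optimizer and precision (Adam in mixed precision is commonly quoted in the 12 to 16 bytes-per-param range):

```python
# Sanity-checking the "dumb math" above. Assumption: ~14 bytes of
# optimizer/gradient state per parameter, the multiplier quoted in the talk.
params = 175e9
bytes_per_param = 14
state_bytes = params * bytes_per_param            # ~2.45e12 bytes ~ 2.5 TB

gpu_mem = 80e9                                    # one 80 GB H100
gpus_needed = -(-state_bytes // gpu_mem)          # ceiling division -> 31 GPUs
nodes_needed = -(-gpus_needed // 8)               # 8 GPUs per H100 box -> 4 nodes
print(f"{state_bytes/1e12:.2f} TB -> {int(gpus_needed)} GPUs -> {int(nodes_needed)} nodes")
```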
So there's something there where these machines need to communicate with each other too.[00:45:44] swyx: You need to vertically scale horizontally.[00:45:46] Varun Mohan: Yeah. You gotta be co-located. You somehow want to feel like you have this massive computer: the ideal programming paradigm is you feel like you have one massive computer that has no communication overhead at all, but has infinite compute and infinite memory bandwidth.[00:45:59] swyx: That's the AI cluster. Um, okay, well, uh, we want to head to the questions.[00:46:05] Alessio Fanelli: So, favorite AI product that you are not[00:46:08] Varun Mohan: building? Yeah, I'm friends with some of the folks at Midjourney and I really think the Midjourney product is super cool, especially seeing how the team is iterating and the quality of generations consistently gets upleveled. I think it's quite neat, and internally at Exafunction we've been trying out Midjourney for random content, to generate images and stuff.[00:46:26] swyx: Does it bother[00:46:26] you that they have, like, a style? I don't know, it seems like they're hedging themselves into a particular thing: you want Midjourney art, you go there.[00:46:33] Varun Mohan: Yeah, it's a brand of art. Yeah, you're right. I think they do have a style, but it seems more predictably good for that style. Okay. So maybe that's it, too: just get good at, uh, a domain-specific thing.[00:46:41] Yeah, maybe. Maybe I'm just talking my book right now. Yeah. Uh, okay.[00:46:46] swyx: Uh, favorite AI people and[00:46:48] Varun Mohan: communities? Yeah, so I think I mentioned this before, but I think obviously the OpenAI folks are insane; like, we only have respect for them. But beyond that, I think EleutherAI is a pretty special group.[00:46:59] Especially, it's been now probably more than a year and a half since they released GPT-J, which was back when open source GPT-3 Curie... it was comparable in terms of perplexity to GPT-3 Curie, and it wasn't a model that wasn't good. It was comparable, and it was trained by a university student, actually. And it just showed that, you know, in the end, I would say pedigree is great, but if you have people that are motivated, know how computers work, and are willing to just get their hands dirty, you can do crazy things. And that was a crazy project that gave me more hope,[00:47:34] decentralized training being potentially pretty massive. But I think that was a very cool thing, where a bunch of people just got on Discord and were chatting, and they were able to just turn this out. Yeah. I did[00:47:42] swyx: not know this until I looked further into Eleuther, but it was not a formal organization, was not a company, was not a startup. It's a bunch of guys on Discord.[00:47:48] Varun Mohan: They got a, you know, research grant, and they somehow just wrote some code.[00:47:52] Alessio Fanelli: Yeah. Yeah. I listened to a podcast with Connor, who's the person, and basically OpenAI at the time was like, we cannot release GPT-2 because it's too good and so dangerous.[00:48:01] And he was like... he actually said he was sick, so he couldn't leave home for a few weeks. So it was like, what else am I gonna do? And ended up

Shark farmer Podcast/ agriculture farm
345 Roy Sorce & Clint Carter Copi not Carp

Shark farmer Podcast/ agriculture farm

Play Episode Listen Later Jan 10, 2023 54:07


Does the U.S. really import 90% of its seafood? Listen to how the Midwest is starting to utilize the hated flying Asian carp (copi).

High Truths on Drugs and Addiction
Episode # 99 High Truths on Drugs and Addiction with Dr. Eric Wish and Drug Trends

High Truths on Drugs and Addiction

Play Episode Listen Later Nov 14, 2022 48:22


Drug trends are important for public health and public safety. As a physician, if there is a new disease such as COVID or monkeypox, I need to know the signs, symptoms and treatment. Similarly, if there are new drugs and poisonings, I need to be able to make the diagnosis and apply appropriate treatment. That is why I find it important to work with law enforcement and our medical examiner, who are the first to identify drug trends. Dr. Eric Wish tracks drug trends nationally. Dr. Eric Wish received his Ph.D. in psychology from Washington University in St. Louis. He subsequently completed a NIDA post-doctoral fellowship in psychiatric epidemiology in the Department of Psychiatry at the Washington University School of Medicine. Between 1986 and 1990, Dr. Wish served as a Visiting Fellow at the National Institute of Justice in the Department of Justice, where he supervised the development and launching of the Drug Use Forecasting (DUF, later ADAM) program. In 2013, Dr. Wish developed the Community Drug Early Warning System (CDEWS), a new system for detecting emerging drugs by expanded testing of urine specimens obtained from criminal justice drug testing programs. In 2014, Dr. Wish received a 5-year award from NIH/NIDA to establish the Coordinating Center for the National Drug Early Warning System (NDEWS). As part of NDEWS, he oversaw the Drug Outbreak Testing Service (DOTS) pilot study, which collected and analyzed urine specimens from hospitals and treatment facilities. Also, from 2017-2020, he served as Co-PI of the MPowering the State Initiative's Opioid Use Disorders Project. As part of the MPower project, Dr. Wish led development of the Emergency Department Drug Surveillance (EDDS) system to track drug toxicology trends using de-identified electronic health records (EHR) from 7 hospitals in Maryland. In 2021 he received funding from the Office of National Drug Control Policy (ONDCP) to expand the EDDS system to collect EHRs and urine specimens from five hospitals nationally to monitor urine drug trends and identify emerging drugs being used by drug overdose patients. EDDS is now being further expanded to include 20 additional hospitals across the United States. Dr. Wish has published numerous articles and spoken widely about such issues as synthetic cannabinoids and other new psychoactive substances, recent increases in heroin and fentanyl use, the identification of drug use in offenders, relapse to heroin use by Vietnam veterans, and the validity of self-reports of drug use. Since 1990, Dr. Wish has been Director of the Center for Substance Abuse Research (CESAR) at the University of Maryland, College Park.

MORANmente incorrectos.
LA REPE: Jesus Alzamora se copió La Lengua

MORANmente incorrectos.

Play Episode Listen Later Nov 11, 2022 5:13


A short clip from yesterday's episode. Support the show

Beyond Medicine
Starting a podcast, Med-Device & More with Maxwell Cooper, MD

Beyond Medicine

Play Episode Listen Later Oct 27, 2022 60:27


Join the BMG community at https://www.beyondmedicinegroup.com/
Watch the full episode on YouTube & subscribe! https://www.youtube.com/channel/UCpbRzNdbtHJMfhPfCi12qWw
Maxwell is a Resident Physician (PGY3) at Emory University receiving training in both diagnostic and interventional radiology. He is also the host of The DaVinci Hour Podcast, where he interviews physicians, executives, medtech innovators, and entrepreneurs making an impact on healthcare. Maxwell is passionate about medical innovation and currently working as a Co-PI on a grant-funded medical device development project involving a collaboration between Emory and Georgia Tech.
To learn more, follow Maxwell at:
The DaVinci Hour Podcast: https://www.dviacademy.com/the-davinci-hour
DaVinci Academy on YouTube: https://www.youtube.com/c/DaVinciAcademyMed
Website: https://www.dviacademy.com/
My social accounts:
IG: @maxwellcoopermd
LinkedIn: https://www.linkedin.com/in/maxwellcoopermd/

Science Friday
Cancer Vaccines, Planting Wildflowers, Eating Copi Fish. August 5th, 2022, Part 1

Science Friday

Play Episode Listen Later Aug 5, 2022 47:23


White House Declares Monkeypox Outbreak A Public Health Emergency The Biden administration declared the monkeypox outbreak a public health emergency on Thursday. Earlier in the week the White House appointed Robert Fenton, regional administrator at FEMA, to direct the federal government's response to the monkeypox outbreak, along with a deputy director from the CDC. This comes after criticism from activists and public health experts, who have said that the federal government has been dragging its feet on access to vaccines, testing and treatment for the virus. Ira talks with Tim Revell, deputy United States editor for New Scientist, about the latest monkeypox updates and other top science stories, including new research into the shape of the human brain, how hand gestures can improve Zoom calls, and a plant that harnesses the power of a raindrop to gulp down insects.   New Steps Toward a Vaccine For Cancer Vaccines have long been used to prevent infection from viruses. But now, scientists are working on a different kind of vaccine, one that targets cancer. Dr. Kai Wucherpfennig is working on a cancer vaccine that would target tumors that tend to spread quickly and are resistant to treatment, like melanoma and triple negative breast cancer. This type of vaccine is intended to be used after a patient has had their tumor removed. The goal is to prevent the spread of cancer cells to other parts of the body, which is called metastasis. So far, this type of cancer vaccine is effective in animals, and the results were recently published in the journal Nature. Ira talks with Dr. Kai Wucherpfennig, chair of cancer immunology and virology at the Dana-Farber Cancer Institute and professor of neurology at Harvard Medical School, about his latest research into cancer vaccines, and how recent advances in understanding the immune system have jump-started research into new types of cancer immunotherapies.   Restoring A Sensitive Ecosystem, One Wildflower At A Time The New England blazing star is more than just a pretty blossom: it's an integral part of a globally-rare ecosystem called a "sandplain grassland." Just like the name suggests, sandplain grasslands have sandy soil with tall grass, no trees and an exceptionally high number of rare plant and animal species. That includes plants like the New England blazing star, an important food source for various grassland insects. Today volunteers would plant 1,000 of them to help restore Bamford Preserve, a 60-acre parcel of sandplain grassland on Martha's Vineyard. As climate change threatens both human health and the natural world, experts say that protecting biodiversity hotspots like this one will offer the most bang-for-the-buck, protecting threatened species while offering other ecosystem benefits, like open space and flood protection. Read the full story on sciencefriday.com.   A Fish By Any Other Name: Inside The Effort To Bring 'Copi' To Dinner People who live near freshwater rivers or lakes are likely familiar with Asian carp. The fish are not native to the U.S., but over the last few decades their populations have exploded in waterways like the Mississippi River Basin and the Illinois River. Over the last few years, there's been a major PR campaign to move away from the name Asian carp, in favor of a new name: "Copi." The reason is two-fold: First, it joins a general trend of moving species' names away from nationalistic associations, considering anti-Asian hate crimes during the COVID-19 pandemic.
The other goal is to make the fish sound more delicious—creating a market that would incentivize fishing the Copi, hopefully reducing their populations. Joining Ira to talk about this is Jim Garvey, director of fisheries, aquaculture and aquatic sciences at Southern Illinois University in Carbondale, Illinois.   Transcripts for each segment will be available the week after the show airs on sciencefriday.com.