Podcasts about TVM

  • 97 podcasts
  • 143 episodes
  • 51m average episode duration
  • 1 new episode per month
  • Latest episode: Apr 3, 2025



Latest podcast episodes about TVM

Crypto Hipster Podcast
How to Help Web3 Developers Easily Deploy Hybrid dApps on Telegram, with Pavel Altukhov @ TAC (Video)

Apr 3, 2025 · 36:23


Pavel Altukhov is the co-founder of TAC, a Layer 1 blockchain that enables Web3 developers to seamlessly deploy Hybrid dApps accessible to Telegram's 950M+ users. Prior to TAC, he founded and built bemo, the first liquid staking protocol on TON, and co-developed a TVM version of a CDP stablecoin. Pavel has a strong background in traditional finance, with extensive experience in portfolio asset management and the development of systematic trading strategies. He founded vlg.digital, a crypto asset management firm that began as a traditional-finance asset manager before transitioning to digital assets in 2020. Pavel's path from investor to TON builder gives him a rare edge, positioning him to drive the development of novel technologies that empower developers and accelerate innovation across Web3.

Pavel's X | TAC Website | TAC X | TAC Telegram

Crypto Hipster Podcast
How to Help Web3 Developers Easily Deploy Hybrid dApps on Telegram, with Pavel Altukhov @ TAC (Audio)

Apr 3, 2025 · 36:23


Pavel Altukhov is the co-founder of TAC, a Layer 1 blockchain that enables Web3 developers to seamlessly deploy Hybrid dApps accessible to Telegram's 950M+ users. Prior to TAC, he founded and built bemo, the first liquid staking protocol on TON, and co-developed a TVM version of a CDP stablecoin. Pavel has a strong background in traditional finance, with extensive experience in portfolio asset management and the development of systematic trading strategies. He founded vlg.digital, a crypto asset management firm that began as a traditional-finance asset manager before transitioning to digital assets in 2020. Pavel's path from investor to TON builder gives him a rare edge, positioning him to drive the development of novel technologies that empower developers and accelerate innovation across Web3.

Pavel's X | TAC Website | TAC X | TAC Telegram

Regionaljournal Aargau Solothurn
Despite a successful season, HSG Baden-Endingen wants a new coach

Feb 2, 2025 · 17:26


The second half of the season has begun in handball's Nationalliga B. Baden-Endingen sits in a strong second place, which makes the news that the club's leadership does not want to continue with the current coach all the more surprising. Coach Björn Navarin, for his part, has no sympathy for the decision. Other topics: · Unlike HSG Baden-Endingen, TV Möhlin is not having the season it hoped for in handball's Nationalliga B. It also lost the first match of the second half of the season, and the Fricktal side is fighting relegation. · The FC Aarau women made a successful start to the second half of the season: the Red Boots beat FC St. Gallen 2:1. · In the men's Challenge League, FC Aarau is doing well: it also won its second match after the winter break, staying on the heels of leader Thun. · The canton of Aargau has extended its contract with the Krebsregister Aargau foundation, which will continue to run the mandatory cancer registry from 2025 through 2027; the canton is spending 3.7 million francs on it.

Revue de presse Afrique
Front page: after Mayotte, Mozambique also hit by cyclone Chido

Dec 18, 2024 · 3:57


Images of desolation on the website of the Mozambican daily Noticias after cyclone Chido passed through. Roofs torn off, trees uprooted, debris scattered everywhere. The latest toll, the paper notes: "at least 34 dead, at least 319 injured and more than 34,000 families affected. Given the severity of the event, rescue brigades and officials will travel to the disaster areas this Wednesday to assess the damage and bring the population all the support it needs." According to the website of Mozambican television station TVM, "more than 400,000 residents of the Eráti district, in Nampula province, risk going hungry in the coming days because cyclone Chido destroyed part of the agricultural surplus. With the area still in darkness, the government is calling for heightened vigilance, as there are fears the situation could encourage terrorists to enter, TVM says, given the area's proximity to Cabo Delgado province." That northern province is indeed in the grip of a jihadist insurgency by Ansar Al-Sunna, a group affiliated with the Islamic State.

A call for international solidarity
The pan-African site Afrik.com specifies that the three hardest-hit provinces are "Cabo Delgado, then, Nampula and Niassa. Violent winds blew at up to 260 km/h. (…) Faced with the scale of the disaster, the Mozambican authorities have issued an appeal for international solidarity. The needs are immense: shelter, food, medicine, drinking water… The international community is called upon to mobilize to help the stricken populations." Afrik.com also recalls "that before striking Mozambique, cyclone Chido ravaged the French archipelago of Mayotte. The authorities fear a very heavy human toll, even raising the possibility of 'several hundred', or indeed 'thousands', of deaths. Chido is the most intense cyclone Mayotte has seen in 90 years."

Indifference…
But "unlike Mayotte, the focus of all the attention of the French authorities and media, practically no one is talking about Mozambique." So, at least, observes Ledjely in Guinea. "Met with a certain fatalism, the disaster has even been pushed into the background by the post-electoral crisis that erupted last October with Venancio Mondlane's challenge to Daniel Chapo's election, even if the latter has announced a pause of a few days to pay tribute to the hurricane's victims. Nor has there been any communication from sub-regional and pan-African bodies, the Guinean site further laments. It is humanitarian organizations, including Unicef, that are mobilizing to draw the world's attention to what is happening there and to the health risks that could follow. An indifference reminiscent of our states' lack of mobilization on climate change, Ledjely fumes. Indeed, while disasters such as Chido today and Freddy last year remind us of the urgency of mobilizing, African states are plainly dragging their feet on climate change. Very often, it is African civil society that mans this front."

Weak link…
The daily Aujourd'hui au Burkina Faso returns to the situation in Mayotte. "This corner of Africa struck by hurricane Chido, lost in the middle of the sea, (…) where inequality and lagging development are glaring! (…) Mayotte is the weak link among these (far-flung) French territories, the Ouagadougou daily reckons, and Chido has only thrown into relief the abyssal gap between life in Mayotte and in mainland France! (…) Apart from the French passports of these Mahorais, a third of whom come from the Comoros, what sets them apart in the face of Chido from any other corner of Africa? Not much, Aujourd'hui answers, and what enrages the island's inhabitants is realizing that they are indeed French on paper, but in reality have nothing in common with a Frenchman from Paris, Nantes or Bordeaux! Chido adds to the woes of a territory that is an encumbrance for metropolitan France, where political problems pile onto economic ones tied to purchasing power, security and immigration."

BIGtruck Podcast
Partner content: Why hydrogen and battery-electric trucks can't do without each other!

Nov 13, 2024 · 63:29


TVM, Oegema LogisticsPlus, Green Planet and others take a deep dive into a range of interesting topics around hydrogen and battery-electric trucks. How do you set up this new energy supply? What obstacles do you run into, and how do you solve them? And what do we expect from the government? Hear it all in this podcast, recorded during the Green Planet Pesse open house days. This is a video podcast, also available on Spotify, and the conversation is led by Wim Brons of the BIGtruck podcast.

The Thesis Review
[48] Tianqi Chen - Scalable and Intelligent Learning Systems

Oct 28, 2024 · 46:29


Tianqi Chen is an Assistant Professor in the Machine Learning Department and Computer Science Department at Carnegie Mellon University and the Chief Technologist of OctoML. His research focuses on the intersection of machine learning and systems. Tianqi's PhD thesis, titled "Scalable and Intelligent Learning Systems," was completed in 2019 at the University of Washington. We discuss his influential work on machine learning systems, starting with the development of XGBoost, an optimized distributed gradient-boosting library that has had an enormous impact on the field. We also cover his contributions to deep learning frameworks like MXNet and to machine learning compilation with TVM, and connect these to modern generative AI.

- Episode notes: www.wellecks.com/thesisreview/episode48.html
- Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter
- Follow Tianqi Chen on Twitter (@tqchenml)
- Support The Thesis Review at www.patreon.com/thesisreview or www.buymeacoffee.com/thesisreview

MÓKA Podcast
#230 Kárász Róbert

Sep 29, 2024 · 45:23


A New York film premiere? A romantic date? Mokka or M.Ó.K.A.? In this episode we were delighted to welcome Róbert Kárász, one of the best-known and most versatile figures in Hungarian television hosting and journalism. For many years Róbert Kárász was a defining presence on the Hungarian television market, and he has been in the media for decades across several major channels. His talent and charisma made him an audience favourite, and many know and love him from the morning shows of RTL Klub and TV2, where he worked as a host for years.

During our conversation we took a thorough look at the main milestones of Róbert's career, including his early years, when he began his radio career at university in Pécs. He recounted how he first encountered the world of the microphone and what it was like, as a student, to get a taste of the exciting world of radio. Róbert also explained how that period shaped his future and taught him fundamental skills that stayed with him throughout his career.

Moving on, we turned to his television career, with particular attention to morning shows, which became almost synonymous with his name. For many years Róbert was a defining face of RTL Klub's "Reggeli", where he won viewers over with his naturalness and warmth. Over the years he became a fixture of morning television, and generations grew up starting their day with him. His hosting style was always engaging and relaxed, making viewers feel a personal connection with him, as if he were an old friend they could meet every morning. Naturally we could not leave out TV2's legendary Mokka, perhaps one of the most important stops of Róbert's career. The show was long the benchmark of Hungarian morning television, and Róbert's personality and hosting style contributed greatly to making Mokka one of Hungary's most popular morning programmes. During the conversation, Róbert revealed the challenges of producing a daily morning show and how he always managed to appear fresh and energetic on screen, regardless of what was happening behind the scenes.

Alongside his hosting career, Róbert Kárász is also active in social causes. We have seen him at various charity events, and he is committed to issues that matter to him. We talked in detail about what motivates him in these areas and how he sees his own role in public life.

A few questions about his private life could not be missed either. Róbert gladly shared some personal stories, giving viewers a glimpse of the person he is off camera. He spoke about the important turning points that shaped him as a person and brought him to where he is today. One special moment of the episode came when Róbert talked about his new fiancée and how they met. This part was especially heartwarming, as Róbert spoke openly about his feelings and about how happy he is in this new chapter of his life. Although he has always tried to keep his private life away from the public eye, in this episode he nevertheless offered the audience a glimpse and shared the happiness his new relationship has brought him.

This episode is a real treat for anyone who wants to learn more about Róbert Kárász's life, career and personal stories. We gained insight not only into the world of television but also into what motivates him day after day and the values by which he lives.

Mi Última Neurona
Social Neuroscience: Empathy, Violence, and Brain Health with Dr. Hernando Santamaría García

Aug 26, 2024 · 102:48


In this episode of the podcast "Mi Última Neurona", Jessica Chomik-Morales interviews Dr. Hernando Santamaría García, a prominent neuroscientist and psychiatrist from Colombia. The conversation covers his academic path and his interest in social neuroscience. They discuss the importance of interdisciplinarity in research and how social and brain processes are connected. Dr. Santamaría shares his research on how social hierarchy influences the brain and the crucial role played by perspective-taking, and talks about his use of MRI and EEG techniques in his work.

Watch the conversation here: www.youtube.com/@miultimaneurona
Website: https://www.miultimaneurona.com/

TIMESTAMPS
00:00 Intro
00:47 Introduction
01:36 Career path and how his interest in neuroscience began
04:59 Master's in Barcelona and beginnings as a researcher
06:01 Multidisciplinary research spaces
08:02 The paper "If you are good, I get better"
09:13 Building an artificial social hierarchy
12:41 Interacting with a hierarchically inferior person
15:10 Instinctive social information
18:12 Human evolution and social relationships for survival
26:15 Notions of social neuroscience: the brain is not static
29:12 PhD on social hierarchy
30:35 Working with Agustín Ibáñez and Facundo Manes
33:38 His postdoctoral research
36:11 Sharing the other's affective experience (affective sharing)
40:41 Empathy and the mental health of healthcare professionals
49:31 Neural differences between antisocial personality, low empathy, and rarely used empathy
54:10 When children learn to lie
57:02 Differences between people who work with the public and antisocial people
59:35 Violence in Colombia and antisocial personality
1:03:31 Civil society's reaction to war, the Nazis, and Colombia
1:10:46 Different types of violence
1:11:40 Normalization of violence
1:13:30 Cognition and recruiting former guerrillas for the studies
1:18:37 Humanity's violence around the world
1:25:59 Dehumanization of the human being
1:29:43 Computational psychiatry
1:39:38 Advice for listeners
1:42:13 Outro

This season is sponsored by the McGovern Brain Institute, the MIT Department of Brain and Cognitive Sciences, the Picower Center for Learning and Memory, and MIT International Science and Technology Initiatives.
Animation and design by jpdesign.tv
Music and sound design by David Samuel Productions

Mi Última Neurona
Neuroscience, Mindfulness Meditation, and Science Communication with Dr. Lina Becerra

Aug 19, 2024 · 46:17


In this episode of "Mi Última Neurona", Jessica Chomik-Morales interviews Dr. Lina Becerra, a leading researcher at the Universidad del Valle and the Pontificia Universidad Javeriana in Cali, Colombia. The conversation spans a wide range of topics, from brain disorders such as schizophrenia and autism to COVID-19 and meditation. We dig into her research on mindfulness meditation and how it can improve attention and stress management. Dr. Becerra also shares her experience in the laboratory, emphasizing the importance of research training and her role as a mentor.

Watch the episode here: www.youtube.com/@miultimaneurona
Website: https://www.miultimaneurona.com/

TIMESTAMPS
00:00 Intro
01:01 Introduction
01:50 Lina Becerra's academic path
04:48 Her first steps in the laboratory
08:33 A variety of research topics: schizophrenia, autism, COVID-19
09:12 Teaching instead of practicing medicine
11:11 A study on meditation and stress
12:49 What is the focus of attention while meditating?
14:00 How meditation helps reduce stress
14:59 The best time of day to practice mindfulness
16:05 The importance of publishing in English and in Spanish
19:43 Storytelling as a method of science communication
20:26 Podcasts as a way to reach the public
22:31 How her YouTube content got started
25:40 One of her current studies: Guillain-Barré syndrome and its link to COVID-19
27:59 The effects of vapes
28:29 ASD and epilepsy studies with immunosuppressant histology
32:09 Different ways of studying autism and epilepsy
33:53 How patients are recruited for the studies
36:46 Her experience as a woman in science and neuroscience in Latin America
42:24 Advice for listeners
45:02 Closing and farewell
45:43 Outro

Links of interest for this episode:
Lina's YouTube channel: www.youtube.com/@encefalina

This season is sponsored by the McGovern Brain Institute, the MIT Department of Brain and Cognitive Sciences, the Picower Center for Learning and Memory, and MIT International Science and Technology Initiatives.
Animation and design by jpdesign.tv
Music and sound design by David Samuel Productions

Mi Última Neurona
Neuroscience, Addiction, and Science in Mexico with Dr. Eduardo Garza-Villarreal

Aug 12, 2024 · 71:06


The conversation reveals how the addicted brain can shift its focus from one substance to another and how difficult it is to find effective treatments. It also covers ethical questions in neuroscience research and the importance of educating the public about science, along with the challenges researchers face in seeking funding and the lack of training in grant applications.

The dialogue offers an engaging look at Dr. Eduardo Garza-Villarreal's career and research in neuroscience, with an emphasis on using music as therapy for chronic pain. It also examines the differences in medical training and research opportunities between Mexico and elsewhere.

If you are interested in neuroscience from an informed and accessible perspective, this video is for you. Subscribe for more content about the mind and the brain!

Watch the episode here: www.youtube.com/@miultimaneurona
Website: https://www.miultimaneurona.com/

TIMESTAMPS
00:00 Intro
00:49 Introduction
01:07 Motivation to study neuroscience
04:28 Differences between studying in Mexico and in Denmark
10:30 Music and pain: how does the body reduce pain when listening to music?
14:52 Using music for patients with Parkinson's
17:30 The difficulty of doing research and clinical work at the same time in Mexico
19:05 Returning to Mexico after Denmark
21:34 What startup funds are
25:52 The importance of philanthropy in science
27:49 Addiction research
31:51 Transcranial magnetic stimulation
39:52 How to change the stigma around addiction
44:19 Genetic predisposition to addiction
50:48 The problem with vapes
51:56 The difficulty of running addiction studies in Mexico
52:38 Advice for listeners

Links of interest for this episode:
Article on the fibromyalgia patient (https://www.frontiersin.org/articles/...)
Marisela Morales (https://irp.nih.gov/pi/marisela-morales)

This season is sponsored by the McGovern Brain Institute, the MIT Department of Brain and Cognitive Sciences, the Picower Center for Learning and Memory, and MIT International Science and Technology Initiatives.
Animation and design by jpdesign.tv
Music and sound design by David Samuel Productions

Mi Última Neurona
From Medicine to Neuroscience: An Exploration of Epilepsy with Dr. Luis Concha

Aug 5, 2024 · 51:04


Welcome to another episode of "Mi Última Neurona", hosted by Jessica Chomik-Morales. Over the course of the interview we explore neuroscience with Dr. Luis Concha of UNAM's Instituto de Neurobiología. Hear about his journey from general medicine to epilepsy research, a discussion of what a "normal" brain is, magnetic resonance imaging as a technique, and his advice for future scientists.

Watch the episode here: www.youtube.com/@miultimaneurona
Website: https://www.miultimaneurona.com/

This season is sponsored by the McGovern Brain Institute, the MIT Department of Brain and Cognitive Sciences, the Picower Center for Learning and Memory, and MIT International Science and Technology Initiatives.
Animation and design by jpdesign.tv
Music and sound design by David Samuel Productions

Mi Última Neurona
Computational Neuroscience, Artificial Intelligence, and Clinical Algorithms with Dr. Oswaldo Pérez

Jul 29, 2024 · 43:29


In this interview with Dr. Oswaldo Pérez, we explore fascinating topics such as computational neuroscience, artificial intelligence, algorithms, and computational models. We also learn how algorithms are contributing to clinical practice, particularly in preventing macular edema in patients with diabetes, and we talk about telemedicine and how language models such as ChatGPT are influencing communication in medicine. Don't miss it if you're interested in science and technology!

Watch the interview here: www.youtube.com/@miutltimaneurona

TIMESTAMPS:
00:00 Intro
01:02 Introduction
01:34 Oswaldo Pérez's academic path
02:53 Reconstructing the neural stimulus
03:50 Programming with neural activity
06:42 Machine learning with convolutional neural networks
08:10 Training in neuroscience
10:10 Applying multiple disciplines to neuroscience
11:22 The millisecond scale: how the nervous system processes timing information
17:46 What are the implications for artificial intelligence of recognizing the passage of time?
21:29 Are there neurons specialized for processing time at longer scales, such as hours?
29:03 What is diabetic macular edema?
39:35 Advice for listeners

Website: https://www.miultimaneurona.com/
This season is sponsored by the McGovern Brain Institute, the MIT Department of Brain and Cognitive Sciences, the Picower Center for Learning and Memory, and MIT International Science and Technology Initiatives.
Animation and design by jpdesign.tv
Music and sound design by David Samuel Productions

Mi Última Neurona
Neurobiology of Sexual Behavior and Brain Plasticity with Dr. Raúl Paredes

Jul 24, 2024 · 56:59


In this episode, Dr. Raúl Paredes and I explore his academic career and his research on the neurobiology of sexual behavior and brain plasticity. We talk about which brain structures and circuits influence different behaviors and how this knowledge can be applied both to typical individuals and to those with pathologies. We also touch on topics such as monogamy in prairie voles and its relationship to oxytocin, as well as the diabetes problem in Mexico and research to help patients who have lost limbs to the disease. Don't miss this informative conversation!

Video of the talk: https://www.youtube.com/watch?v=UJtoi5Bp_qo

TIMESTAMPS:
00:00 Intro
01:02 Introduction
02:21 Dr. Paredes's academic path
04:37 Postdoc with Dr. Michael Baum at Boston University
06:28 Dr. Anders Agmo at the Universidad Anáhuac
09:28 Why study sexual behavior?
10:11 Studying sexual behavior in animals: the prairie vole
11:01 Similarities between the sexual hormone systems of rats and humans (animal models)
13:34 Hormones and the Coolidge effect in prairie voles (Ventura-Aquino et al., 2017)
15:54 Are human beings monogamous by design?
18:13 What is the Coolidge effect? Differences in sexual satiety between males and females
21:24 How to "ask" an animal model what it likes
24:35 Hormones, homosexuality in humans, and same-sex behaviors in animal models
25:16 The Kinsey scale
29:12 Biological differences in physical traits, cognition, and behavior
31:01 Functional MRI for studying brain areas and networks
40:23 Brain plasticity and biomechanics in amputees and athletes
41:55 Diabetes is the number one cause of amputation in the US and Mexico
45:31 Studies of patients with prostheses
48:47 Dr. Pawan Sinha, MIT neuroscientist: https://www.sinhalab.mit.edu
52:02 Advice for listeners

You can listen to the first- and second-season episodes on:
- Spotify: https://open.spotify.com/show/4tif9z6...
- Apple Podcasts: https://podcasts.apple.com/us/podcast...
- Website: https://www.miultimaneurona.com/

This season is sponsored by the McGovern Brain Institute, the MIT Department of Brain and Cognitive Sciences, the Picower Center for Learning and Memory, and MIT International Science and Technology Initiatives.
Animation and design by jpdesign.tv
Music and sound design by David Samuel Productions

Money Girl's Quick and Dirty Tips for a Richer Life
Time Value of Money (TVM) and Calculating Investment Returns

May 29, 2024 · 11:15


The time value of money, or TVM, is a fundamental concept that affects your financial planning and investment success. In this episode, we review what TVM is, why it matters, and how to calculate your investment returns.
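The standard TVM relationships the episode alludes to can be sketched in a few lines of Python. This is a minimal illustration of the textbook formulas, not material from the episode itself; the 7% rate and dollar amounts are made-up examples:

```python
# Time value of money: a dollar today is worth more than a dollar tomorrow,
# because it can be invested and earn a return in the meantime.

def future_value(pv, rate, periods):
    """Future value of a lump sum: FV = PV * (1 + r) ** n."""
    return pv * (1 + rate) ** periods

def present_value(fv, rate, periods):
    """Present value is the inverse: PV = FV / (1 + r) ** n."""
    return fv / (1 + rate) ** periods

def annualized_return(start, end, years):
    """Compound annual growth rate implied by a total gain over several years."""
    return (end / start) ** (1 / years) - 1

# $1,000 invested at 7% per year for 10 years:
fv = future_value(1000, 0.07, 10)
print(round(fv, 2))  # 1967.15

# Working backwards: what annual return turned $1,000 into that amount?
r = annualized_return(1000, fv, 10)
print(round(r, 4))   # 0.07
```

The same three functions cover most everyday TVM questions: discounting a future sum to today's dollars is just `present_value`, and comparing two investments over different horizons is just `annualized_return` on each.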

Cyclist Magazine Podcast
103. Ex-pro Scott Sunderland on 90s cycling, Classics, Cancellara, breaking stuff and RideLondon

Apr 18, 2024 · 72:37


This week, Will and James talk to ex-pro, team DS and race director Scott Sunderland. Today, Sunderland is race director for the RideLondon Classique and the Tour of Flanders, among others, but during the 1990s and 2000s he rode professionally for TVM, Lotto and GAN before taking on sports director roles at Team CSC and Cervélo Test Team. Here, Sunderland talks about racing through the troubled 1990s; a near career-ender when he was hit by a team car; orchestrating Classics wins with Fabian Cancellara; and breaking tens of thousands of pounds of kit in the Arenberg Forest, all in the name of science. Honest. We also discuss the upcoming RideLondon Classique WorldTour women's race, which takes place on 24th-26th May.

Interview begins at 9:28.

For details of the RideLondon Classique, hit this link.

---

This episode is brought to you by ketone experts deltaG. deltaG makes a variety of ketone drinks for different situations, so head over to deltaGketones.com to explore the science, and use the code CYCLIST for 20% off your first purchase.

---

Did you know Cyclist is also a stunning monthly magazine? Subscribe now at store.cyclist.co.uk/cycpod and get every issue for less than in the shops, delivered straight to your door.

Hosted on Acast. See acast.com/privacy for more information.

Gamechangers
#10: het geheim van schaatscoach Gerard Kemkers

Gamechangers

Play Episode Listen Later Feb 17, 2024 53:46


Gerard Kemkers was himself one of the greatest speed skating talents in the Netherlands, but had to retire early because of a 'zwabbervoet', a foot condition. He turned out, however, to also have a gift for the profession of skating coach. He professionalized the commercial TVM skating team by appointing experts and encouraging cross-disciplinary thinking. Talents such as Sven Kramer and Ireen Wüst flourished and grew into the best skaters in the country. Despite all those prizes, Kramer's wrong lane change remains a stain on his career. In this episode of Gamechangers, Gerard talks about getting the most out of a team and dealing with disappointments, and for the first time he listens to the commentary on that wrong lane change.
See the privacy policy at https://art19.com/privacy and the California privacy statement at https://art19.com/privacy#do-not-sell-my-info.

Synaptic Tails
S.M.A.R.T. - T for Tailor

Synaptic Tails

Play Episode Listen Later Jan 19, 2024 29:44


In the final episode of the 'S.M.A.R.T. Approach' podcast series, hosts Dr Emma Hancox and Dr Mark Lowrie explore the notion of tailoring epilepsy management strategies to individual cases. They delve into the complexities of epilepsy management, discussing aspects such as the consideration of euthanasia, the impact of epilepsy on pets and their owners, and the balance of medication options for optimum seizure control. Throughout their conversation, they stress the importance of understanding seizures as unique to each case and emphasise the need for empathy, patience, and persistence in management strategies.
Resources:
Access TVM UK Vet Resources at https://www.tvm-uk.com/registration-page/

Synaptic Tails
S.M.A.R.T. - A for Advise

Synaptic Tails

Play Episode Listen Later Jan 19, 2024 29:34


In this Synaptic Tails episode, hosts Dr Emma Hancox and Dr Mark Lowrie focus on the 'A' in the S.M.A.R.T. approach to Epilepsy - 'Advise.' Dr Lowrie provides an in-depth dive into three key areas of epilepsy treatment: management of seizures, management of the underlying cause, and importantly, management of the owner. The conversation touches on potential therapies, handling status epilepticus, toxicity, the importance of MRIs, and the balance between seizure control and quality of life.
Resources:
Access TVM UK Vet Resources at https://www.tvm-uk.com/registration-page/
A link to the ACVIM Consensus statement on Canine Idiopathic Epilepsy: https://onlinelibrary.wiley.com/doi/full/10.1111/jvim.13841
A link to the International Veterinary Epilepsy Task Force Consensus Reports: https://www.biomedcentral.com/collections/ivetf

Synaptic Tails
S.M.A.R.T. - M for Measure

Synaptic Tails

Play Episode Listen Later Jan 19, 2024 31:13


Join hosts Dr Emma Hancox and Dr Mark Lowrie in this episode of Synaptic Tails as they delve into the 'Measure' aspect of the S.M.A.R.T approach to epilepsy in veterinary practice. Focusing on phenobarbital, they discuss the importance of regular monitoring, assessing liver function, understanding drug serum concentration, variations in metabolism, and the use of other antiepileptic drugs like bromide.
Resources:
Access TVM UK Vet Resources at https://www.tvm-uk.com/registration-page/

Synaptic Tails
S.M.A.R.T. - S for Speak

Synaptic Tails

Play Episode Listen Later Jan 19, 2024 30:10


In our inaugural Synaptic Tails podcast episode, Dr. Emma Hancox and Dr. Mark Lowrie introduce TVM's S.M.A.R.T Approach to Epilepsy. Focusing on 'Speak,' they engage in frank discussions about the challenges of seizure management, the role of veterinary staff, common misconceptions, and the significance of open communication between the vet, the pet, and the owner. The hosts also delve into epilepsy triggers and how varying environments impact seizure occurrence.
Resources:
Access TVM UK Vet Resources at https://www.tvm-uk.com/registration-page/
Book a Lunch and Learn with TVM UK at https://www.tvm-uk.com/book-your-lunch-and-learn/
For the paper referenced in this podcast visit: https://bvajournals.onlinelibrary.wiley.com/doi/10.1002/vetr.2482

Synaptic Tails
S.M.A.R.T. - R for Realistic

Synaptic Tails

Play Episode Listen Later Jan 19, 2024 31:04


In this episode of the Synaptic Tails Podcast, Dr Emma Hancox and Dr Mark Lowrie delve into realistic methods of managing epileptic patients within the veterinary field. They discuss the nuances of patient referrals, the influence of behavioural changes, and the complexities of managing pharmacoresistant or refractory epilepsy. Furthermore, they explore potential adjunct therapies such as dietary modifications, the utilisation of CBD oil, and novel technologies like vagal nerve stimulators.
Resources:
Access TVM UK Vet Resources at https://www.tvm-uk.com/registration-page/

Synaptic Tails
Welcome to Synaptic Tails: Episodes Coming 19th January

Synaptic Tails

Play Episode Listen Later Jan 11, 2024 0:44


Welcome to the Synaptic Tails podcast, where neurology meets practical tips in veterinary care. Hosted by Dr Emma Hancox, a Technical Vet Advisor at TVM UK, a Dômes Pharma Brand, alongside Dr Mark Lowrie of Movement Referrals.
In each episode, we delve into managing neurology cases in first-opinion practice, sharing insights, tips, and tricks we've gained through our experiences.
But that's not all! Over the upcoming episodes, we'll introduce you to TVM's S.M.A.R.T. Approach To Epilepsy. What does S.M.A.R.T. stand for? Speak, Measure, Advise, Realistic, and Tailor. We'll explore how this innovative approach can be applied to real-life cases, providing practical solutions to enhance patient care.

The Last Word with Matt Cooper
Best Non-Alcoholic Drinks For The Holiday Season

The Last Word with Matt Cooper

Play Episode Listen Later Dec 9, 2023 13:00


Vaughan Yates, Founder of TVM, and Alex Morrell, Beverage Director at TVM, joined The Last Word to discuss the best non-alcoholic drinks for your festive season!
Catch the full chat by pressing the 'Play' button on this page.

The Cycling Legends Podcast [free version; no premium access]

In the 1980s, men's pro cycling was a closed door to women, unless you were a podium girl, of course. In 1985, a 25-year-old from Connecticut changed all that. Chris Sidwell catches up with pro cycling's first female soigneur, Shelley Verses. In this hour-long interview, Shelley, who worked with 7-Eleven, La Vie Claire, Toshiba, TVM and Saturn, talks about learning the ropes and becoming accepted in the macho world of the men's peloton and supporting some of the best riders in the world. The Cycling Legends Podcast is proud to be supported by Vive le Velo, performance cycles and accessories. Music by Epidemic Sound

Open Source Startup Podcast
E114: How OctoML Helps Developers Build with Llama 2 & Stable Diffusion

Open Source Startup Podcast

Play Episode Listen Later Nov 7, 2023 42:50


Tianqi Chen is Co-Founder and Chief Technologist of OctoML, the compute infrastructure platform for tuning and running generative models in the cloud. OctoML was founded by the creators of Apache TVM, the machine learning compiler framework for CPUs, GPUs, and accelerators. OctoML has raised $132M from investors including Amplify, Addition, Madrona, and Tiger. In this episode, we discuss the importance of supporting multiple models, the advancements from LLaMA and Stable Diffusion this year, building the TVM and OctoML communities, predictions on GenAI in the enterprise (hybrid ML, for example), whether GenAI is over-invested in & more!

Svensktoppen
Oscar Magnusson: Sorgen hindrar mig inte längre

Svensktoppen

Play Episode Listen Later Oct 19, 2023 15:48


Sven-Ingvars is one of Sweden's biggest bands. When Sven-Erik Magnusson passed away six years ago, Oscar Magnusson took over the frontman role. At first he preferred to stand in his father's shadow; today he feels he can take his place. Listen to all episodes in Sveriges Radio Play.
This week the new album "Jag kan varken leva med dig eller utan dig" was released, with a partly different and more melancholy sound, and with most of the songs written by Oscar Magnusson. Now that the group is heading out on tour, Oscar is also taking a more natural place on stage.
- During the first tour it felt very strange to walk out and stand at the front of the stage. Today it feels completely natural. Even though the grief is still there, I can now handle it, and I have also worked through a lot of it in the new music. The new songs are probably more melancholic than many are used to hearing from Sven-Ingvars, says Oscar Magnusson.
Protective of his private life
Music has been with Oscar since childhood, and so has a life in the public eye. Today he is therefore extra protective of his private life.
- When I grew up I remember various at-home features that were made about us. It didn't harm me, but it was also a different and more innocent time. Today it is important to me to keep my family out of the media. The most important thing for me is to be a dad to my children, to be there when I'm not out on tour.
Brief Svensktoppen facts about Sven-Ingvars: The group has had 55 songs on Svensktoppen, with hits such as Fröken Fräken, Två Mörka Ögon, Byns Enda Blondin and Röda Trådens Slut.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Want to help define the AI Engineer stack? Have opinions on the top tools, communities and builders? We're collaborating with friends at Amplify to launch the first State of AI Engineering survey! Please fill it out (and tell your friends)!
If AI is so important, why is its software so bad?
This was the motivating question for Chris Lattner as he reconnected with his product counterpart on Tensorflow, Tim Davis, and started working on a modular solution to the problem of sprawling, monolithic, fragmented platforms in AI development. They announced a $30m seed in 2022 and, following their successful double launch of Modular/Mojo

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
LLMs Everywhere: Running 70B models in browsers and iPhones using MLC — with Tianqi Chen of CMU / OctoML

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Aug 10, 2023 52:10


We have just announced our first set of speakers at AI Engineer Summit! Sign up for the livestream or email sponsors@ai.engineer if you'd like to support.
We are facing a massive GPU crunch. As both startups and VCs hoard Nvidia GPUs like countries count nuclear stockpiles, tweets about GPU shortages have become increasingly common. But what if we could run LLMs with AMD cards, or without a GPU at all? There's just one weird trick: compilation. And there's one person uniquely qualified to do it.
We had the pleasure to sit down with Tianqi Chen, who's an Assistant Professor at CMU, where he both teaches the MLC course and runs the MLC group. You might also know him as the creator of XGBoost, Apache TVM, and MXNet, as well as the co-founder of OctoML. The MLC (short for Machine Learning Compilation) group has released a lot of interesting projects:
* MLC Chat: an iPhone app that lets you run models like RedPajama-3B and Vicuna-7B on-device. It gets up to 30 tok/s!
* Web LLM: Run models like LLaMA-70B in your browser (!!) to offer local inference in your product.
* MLC LLM: a framework that allows any language model to be deployed natively on different hardware and software stacks.
The MLC group has just announced new support for AMD cards; we previously talked about the shortcomings of ROCm, but using MLC you can get performance very close to NVIDIA's counterparts. This is great news for founders and builders, as AMD cards are more readily available. Here are their latest results on AMD's 7900s vs some of the top NVIDIA consumer cards.
If you just can't get a GPU at all, MLC LLM also supports ARM and x86 CPU architectures as targets by leveraging LLVM. 
While speed performance isn't comparable, it allows for non-time-sensitive inference to be run on commodity hardware. We also enjoyed getting a peek into TQ's process, which involves a lot of sketching. With all the other work going on in this space with projects like ggml and Ollama, we're excited to see GPUs becoming less and less of an issue to get models in the hands of more people, and innovative software solutions to hardware problems!
Show Notes
* TQ's Projects:
* XGBoost
* Apache TVM
* MXNet
* MLC
* OctoML
* CMU Catalyst
* ONNX
* GGML
* Mojo
* WebLLM
* RWKV
* HiPPO
* Tri Dao's Episode
* George Hotz Episode
People:
* Carlos Guestrin
* Albert Gu
Timestamps
* [00:00:00] Intros
* [00:03:41] The creation of XGBoost and its surprising popularity
* [00:06:01] Comparing tree-based models vs deep learning
* [00:10:33] Overview of TVM and how it works with ONNX
* [00:17:18] MLC deep dive
* [00:28:10] Using int4 quantization for inference of language models
* [00:30:32] Comparison of MLC to other model optimization projects
* [00:35:02] Running large language models in the browser with WebLLM
* [00:37:47] Integrating browser models into applications
* [00:41:15] OctoAI and self-optimizing compute
* [00:45:45] Lightning Round
Transcript
Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]Swyx: Okay, and we are here with Tianqi Chen, or TQ as people call him, who is assistant professor in ML computer science at CMU, Carnegie Mellon University, also helping to run Catalyst Group, also chief technologist of OctoML. You wear many hats. Are those, you know, your primary identities these days? Of course, of course. [00:00:42]Tianqi: I'm also, you know, very enthusiastic open source. So I'm also a VP and PRC member of the Apache TVM project and so on. But yeah, these are the things I've been up to so far. [00:00:53]Swyx: Yeah. 
So you did Apache TVM, XGBoost, and MXNet, and we can cover any of those in any amount of detail. But maybe what's one thing about you that people might not learn from your official bio or LinkedIn, you know, on the personal side? [00:01:08]Tianqi: Let me say, yeah, so normally when I do, I really love coding, even though like I'm trying to run all those things. So one thing that I keep a habit on is I try to do sketchbooks. I have a book, like real sketchbooks to draw down the design diagrams and the sketchbooks I keep sketching over the years, and now I have like three or four of them. And it's kind of a usually a fun experience of thinking the design through and also seeing how open source project evolves and also looking back at the sketches that we had in the past to say, you know, all these ideas really turn into code nowadays. [00:01:43]Alessio: How many sketchbooks did you get through to build all this stuff? I mean, if one person alone built one of those projects, he'll be a very accomplished engineer. Like you built like three of these. What's that process like for you? Like it's the sketchbook, like the start, and then you think about the code or like. [00:01:59]Swyx: Yeah. [00:02:00]Tianqi: So, so usually I start sketching on high level architectures and also in a project that works for over years, we also start to think about, you know, new directions, like of course generative AI language model comes in, how it's going to evolve. So normally I would say it takes like one book a year, roughly at that rate. It's usually fun to, I find it's much easier to sketch things out and then gives a more like a high level architectural guide for some of the future items. Yeah. [00:02:28]Swyx: Have you ever published this sketchbooks? Cause I think people would be very interested on, at least on a historical basis. Like this is the time where XGBoost was born, you know? Yeah, not really. [00:02:37]Tianqi: I started sketching like after XGBoost. 
So that's a kind of missing piece, but a lot of design details in TVM are actually part of the books that I try to keep a record of. [00:02:48]Swyx: Yeah, we'll try to publish them and publish something in the journals. Maybe you can grab a little snapshot for visual aid. Sounds good. [00:02:57]Alessio: Yeah. And yeah, talking about XGBoost, so a lot of people in the audience might know it's a gradient boosting library, probably the most popular out there. And it became super popular because many people started using them in like a machine learning competitions. And I think there's like a whole Wikipedia page of like all state-of-the-art models. They use XGBoost and like, it's a really long list. When you were working on it, so we just had Tri Dao, who's the creator of FlashAttention on the podcast. And I asked him this question, it's like, when you were building FlashAttention, did you know that like almost any transform race model will use it? And so I asked the same question to you when you were coming up with XGBoost, like, could you predict it would be so popular or like, what was the creation process? And when you published it, what did you expect? We have no idea. [00:03:41]Tianqi: Like, actually, the original reason that we built that library is that at that time, deep learning just came out. Like that was the time where AlexNet just came out. And one of the ambitious mission that myself and my advisor, Carlos Guestrin, then is we want to think about, you know, try to test the hypothesis. Can we find alternatives to deep learning models? Because then, you know, there are other alternatives like, you know, support vector machines, linear models, and of course, tree-based models. And our question was, if you build those models and feed them with big enough data, because usually like one of the key characteristics of deep learning is that it's taking a lot [00:04:22]Swyx: of data, right? 
[00:04:23]Tianqi: So we will be able to get the same amount of performance. That's a hypothesis we're setting out to test. Of course, if you look at now, right, that's a wrong hypothesis, but as a byproduct, what we find out is that, you know, most of the gradient boosting library out there is not efficient enough for us to test that hypothesis. So I happen to have quite a bit of experience in the past of building gradient boosting trees and their variants. So XGBoost was kind of like a byproduct of that hypothesis testing. At that time, I'm also competing a bit in data science challenges, like I worked on KDDCup and then Kaggle kind of become bigger, right? So I kind of think maybe it's becoming useful to others. One of my friends convinced me to try to do a Python binding of it. That tends to be like a very good decision, right, to be effective. Usually when I build it, we feel like maybe a command line interface is okay. And now we have a Python binding, we have R bindings. And then it realized, you know, it started getting interesting. People started contributing different perspectives, like visualization and so on. So we started to push a bit more on to building distributed support to make sure it works on any platform and so on. And even at that time point, when I talked to Carlos, my advisor, later, he said he never anticipated that we'll get to that level of success. And actually, why I pushed for gradient boosting trees, interestingly, at that time, he also disagreed. He thinks that maybe we should go for kernel machines then. And it turns out, you know, actually, we are both wrong in some sense, and Deep Neural Network was the king of the hill. But at least the gradient boosting direction got into something fruitful. [00:06:01]Swyx: Interesting. 
And how much of it is collaborative with other people that you're working with, versus, you know, obviously, in academia, it's very paper-driven, research-driven. [00:06:19]Tianqi: I would say the XGBoost improvement at that time point was more on like, you know, I'm trying to figure out, right. But it's combining lessons. Before that, I did work on some of the other libraries on matrix factorization. That was like my first open source experience. Nobody knew about it, because you'll find, likely, if you go and try to search for the package SVDFeature, you'll find some SVN repo somewhere. But it's actually being used for some of the recommender system packages. So I'm trying to apply some of the previous lessons there and trying to combine them. The later projects like MXNet and then TVM is much, much more collaborative in a sense that... But, of course, XGBoost has become bigger, right? So when we started that project myself, and then we have, it's really amazing to see people come in. Michael, who was a lawyer, and now he works on the AI space as well, on contributing visualizations. Now we have people from our community contributing different things. So XGBoost even today, right, it's a community of committers driving the project. So it's definitely something collaborative and moving forward on getting some of the things continuously improved for our community. [00:07:37]Alessio: Let's talk a bit about TVM too, because we got a lot of things to run through in this episode. [00:07:42]Swyx: I would say that at some point, I'd love to talk about this comparison between XGBoost or tree-based type AI or machine learning compared to deep learning, because I think there is a lot of interest around, I guess, merging the two disciplines, right? And we can talk more about that. I don't know where to insert that, by the way, so we can come back to it later. Yeah. 
[00:08:04]Tianqi: Actually, what I said, when we test the hypothesis, the hypothesis is kind of, I would say it's partially wrong, because the hypothesis we want to test now is, can you run tree-based models on image classification tasks, where deep learning is certainly a no-brainer right [00:08:17]Swyx: now today, right? [00:08:18]Tianqi: But if you try to run it on tabular data, still, you'll find that most people opt for tree-based models. And there's a reason for that, in the sense that when you are looking at tree-based models, the decision boundaries are naturally rules that you're looking at, right? And they also have nice properties, like being able to be agnostic to scale of input and be able to automatically compose features together. And I know there are attempts on building neural network models that work for tabular data, and I also sometimes follow them. I do feel like it's good to have a bit of diversity in the modeling space. Actually, when we're building TVM, we build cost models for the programs, and actually we are using XGBoost for that as well. I still think tree-based models are going to be quite relevant, because first of all, it's really easy to get it to work out of the box. And also, you will be able to get a bit of interpretability and control monotonicity [00:09:18]Swyx: and so on. [00:09:19]Tianqi: So yes, it's still going to be relevant. I also sometimes keep coming back to think about, are there possible improvements that we can build on top of these models? And definitely, I feel like it's a space that can have some potential in the future. [00:09:34]Swyx: Are there any current projects that you would call out as promising in terms of merging the two directions? [00:09:41]Tianqi: I think there are projects that try to bring a transformer-type model for tabular data. I don't remember specifics of them, but I think even nowadays, if you look at what people are using, tree-based models are still one of their toolkits. 
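The scale-agnostic property Chen describes here is easy to see concretely: a tree split depends only on the ordering of a feature, so any strictly monotonic rescaling yields the same rule. A minimal sketch with made-up data (plain Python, a single decision stump; this is not XGBoost's actual implementation):

```python
import math

def best_stump(xs, ys):
    """Find the split of one feature that best separates binary labels.

    A stump predicts 0 below a threshold and 1 above it. Because only the
    *ordering* of xs matters, any strictly monotonic transform of xs
    (log, z-score, min-max scaling, ...) produces the same partition.
    """
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best_acc, best_k = -1.0, None
    for k in range(1, len(xs)):
        left = [ys[order[i]] for i in range(k)]
        right = [ys[order[i]] for i in range(k, len(xs))]
        # Accuracy of predicting 0 on the left branch, 1 on the right branch.
        acc = (left.count(0) + right.count(1)) / len(xs)
        if acc > best_acc:
            best_acc, best_k = acc, k
    # Report the partition as the set of indices sent to the right branch.
    return best_acc, frozenset(order[best_k:])

# Hypothetical tabular feature with wildly varying magnitudes.
xs = [3.0, 150.0, 7.0, 900.0, 42.0, 2.5]
ys = [0, 1, 0, 1, 1, 0]
acc_raw, split_raw = best_stump(xs, ys)
acc_log, split_log = best_stump([math.log(x) for x in xs], ys)
assert split_raw == split_log  # identical rule after a monotonic rescale
```

Gradient-boosted trees stack many such splits and fit residuals on top, but every split inherits this ordering-only dependence, which is part of why no feature normalization is needed for tabular data.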
So I think maybe eventually it's not even a replacement, it will be just an ensemble of models that you can call. Perfect. [00:10:07]Alessio: Next up, about three years after XGBoost, you built this thing called TVM, which is now a very popular compiler framework for models. Let's talk about, so this came out about at the same time as ONNX. So I think it would be great if you could maybe give a little bit of an overview of how the two things work together. Because it's kind of like the model, then goes to ONNX, then goes to the TVM. But I think a lot of people don't understand the nuances. I can get a bit of a backstory on that. [00:10:33]Tianqi: So actually, that's kind of an ancient history. Before XGBoost, I worked on deep learning for two years or three years. I got a master's before I started my PhD. And during my master's, my thesis focused on applying convolutional restricted Boltzmann machine for ImageNet classification. That is the thing I'm working on. And that was before the AlexNet moment. So effectively, I had to handcraft NVIDIA CUDA kernels on, I think, a GTX 2070 card. It took me about six months to get one model working. And eventually, that model is not so good, and we should have picked a better model. But that was like an ancient history that really got me into this deep learning field. And of course, eventually, we find it didn't work out. So in my master's, I ended up working on recommender system, which got me a paper, and I applied and got a PhD. But I always want to come back to work on the deep learning field. So after XGBoost, I think I started to work with some folks on this particular MXNet. At that time, it was like the frameworks Caffe, Theano, PyTorch hadn't yet come out. And we're really working hard to optimize for performance on GPUs. At that time, I found it's really hard, even for NVIDIA GPU. It took me six months. 
And then it's amazing to see on different hardwares how hard it is to go and optimize code for the platforms that are interesting. So that gets me thinking, can we build something more generic and automatic? So that I don't need an entire team of so many people to go and build those frameworks. So that's the motivation of starting working on TVM. There is really too little about machine learning engineering needed to support deep learning models on the platforms that we're interested in. I think it started a bit earlier than ONNX, but once it got announced, I think it's in a similar time period at that time. So overall, how it works is that TVM, you will be able to take a subset of machine learning programs that are represented in what we call a computational graph. Nowadays, we can also represent a loop-level program ingest from your machine learning models. Usually, you have model formats ONNX, or in PyTorch, they have FX Tracer that allows you to trace the FX graph. And then it goes through TVM. We also realized that, well, yes, it needs to be more customizable, so it will be able to perform some of the compilation optimizations like fusion operator together, doing smart memory planning, and more importantly, generate low-level code. So that works for NVIDIA and also is portable to other GPU backends, even non-GPU backends [00:13:36]Swyx: out there. [00:13:37]Tianqi: So that's a project that actually has been my primary focus over the past few years. And it's great to see how it started from where I think we are the very early initiator of machine learning compilation. I remember there was a visit one day, one of the students asked me, are you still working on deep learning frameworks? I tell them that I'm working on ML compilation. And they said, okay, compilation, that sounds very ancient. It sounds like a very old field. And why are you working on this? And now it's starting to get more traction, like if you say Torch Compile and other things. 
I'm really glad to see this field starting to pick up. And also we have to continue innovating here. [00:14:17]Alessio: I think the other thing that I noticed is, it's kind of like a big jump in terms of area of focus to go from XGBoost to TVM, it's kind of like a different part of the stack. Why did you decide to do that? And I think the other thing about compiling to different GPUs and eventually CPUs too, did you already see some of the strain that models could have just being focused on one runtime, only being on CUDA and that, and how much of that went into it? [00:14:50]Tianqi: I think it's less about trying to get impact, more about wanting to have fun. I like to hack code, I had great fun hacking CUDA code. Of course, being able to generate CUDA code is cool, right? But now, after being able to generate CUDA code, okay, by the way, you can do it on other platforms, isn't that amazing? So it's more of that attitude to get me started on this. And also, I think when we look at different researchers, myself is more like a problem solver type. So I like to look at a problem and say, okay, what kind of tools we need to solve that problem? So regardless, it could be building better models. For example, while we built XGBoost, we built certain regularizations into it so that it's more robust. It also means building system optimizations, writing low-level code, maybe trying to write assembly and build compilers and so on. So as long as they solve the problem, definitely go and try to do them together. And I also see it's a common trend right now. Like if you want to be able to solve machine learning problems, it's no longer at the algorithm layer, right? You kind of need to solve it from both the algorithm, data, and systems angles. And this entire field of machine learning system, I think it's kind of emerging. And there's now a conference around it. And it's really good to see a lot more people are starting to look into this. [00:16:10]Swyx: Yeah. 
Are you talking about ICML or something else? [00:16:13]Tianqi: So machine learning and systems, right? So not only machine learning, but machine learning and system. So there's a conference called MLsys. It's definitely a smaller community than ICML, but I think it's also an emerging and growing community where people are talking about what are the implications of building systems for machine learning, right? And how do you go and optimize things around that and co-design models and systems together? [00:16:37]Swyx: Yeah. And you were area chair for ICML and NeurIPS as well. So you've just had a lot of conference and community organization experience. Is that also an important part of your work? Well, it's kind of expected for academic. [00:16:48]Tianqi: If I hold an academic job, I need to do services for the community. Okay, great. [00:16:53]Swyx: Your most recent venture in MLsys is going to the phone with MLCLLM. You announced this in April. I have it on my phone. It's great. I'm running Lama 2, Vicuña. I don't know what other models that you offer. But maybe just kind of describe your journey into MLC. And I don't know how this coincides with your work at CMU. Is that some kind of outgrowth? [00:17:18]Tianqi: I think it's more like a focused effort that we want in the area of machine learning compilation. So it's kind of related to what we built in TVM. So when we built TVM was five years ago, right? And a lot of things happened. We built the end-to-end machine learning compiler that works, the first one that works. But then we captured a lot of lessons there. So then we are building a second iteration called TVM Unity. That allows us to be able to allow ML engineers to be able to quickly capture the new model and how we demand building optimizations for them. And MLCLLM is kind of like an MLC. It's more like a vertical driven organization that we go and build tutorials and go and build projects like LLM to solutions. 
So that to really show like, okay, you can take machine learning compilation technology and apply it and bring something fun forward. Yeah. So yes, it runs on phones, which is really cool. But the goal here is not only making it run on phones, right? The goal is making it deploy universally. So we do run on Apple M2 Macs, the 70 billion models. Actually, on a single batch inference, more recently on CUDA, we get, I think, the best performance you can get out there already on the 4-bit inference. Actually, as I alluded earlier before the podcast, we just had a result on AMD. And on a single batch, actually, we can get the latest AMD GPU. This is a consumer card. It can get to about 80% of the 4090, so NVIDIA's best consumer card out there. So it's not yet on par, but thinking about how diversity and what you can enable and the previous things you can get on that card, it's really amazing that what you can do with this kind of technology. [00:19:10]Swyx: So one thing I'm a little bit confused by is that most of these models are in PyTorch, but you're running this inside a TVM. I don't know. Was there any fundamental change that you needed to do, or was this basically the fundamental design of TVM? [00:19:25]Tianqi: So the idea is that, of course, it comes back to program representation, right? So effectively, TVM has this program representation called TVM script that contains more like computational graph and operational representation. So yes, initially, we do need to take a bit of effort of bringing those models onto the program representation that TVM supports. Usually, there are a mix of ways, depending on the kind of model you're looking at. For example, for vision models and stable diffusion models, usually we can just do tracing that takes PyTorch model onto TVM. That part is still being robustified so that we can bring more models in. 
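The 4-bit inference mentioned here rests on weight quantization: storing each weight as a small integer plus a shared scale, and dequantizing on the fly. A rough sketch of a symmetric int4 round-trip in plain Python (a simplified illustration; MLC's actual grouped quantization scheme differs):

```python
# Illustrative sketch of symmetric int4 weight quantization, not MLC's scheme.

def quantize_int4(weights):
    """Map floats to integers in [-7, 7] with one shared scale, so each
    weight needs only 4 bits plus the scale factor."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # 1.0 guards all-zero input
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [qi * scale for qi in q]

# Hypothetical weights; real models quantize millions of these per layer.
weights = [0.12, -0.55, 0.98, -0.07, 0.31]
q, scale = quantize_int4(weights)
restored = dequantize_int4(q, scale)
# Each restored weight is within half a quantization step of the original.
assert all(abs(w - r) <= scale / 2 + 1e-12 for w, r in zip(weights, restored))
```

The payoff is memory: 4 bits per weight instead of 16 or 32 shrinks a model roughly 4-8x, which is what makes a large model fit on a phone or a consumer GPU, at the cost of the small rounding error bounded above.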
On language model tasks, actually what we do is we directly build some of the model constructors and try to directly map from Hugging Face models. The goal is if you have a Hugging Face configuration, we will be able to bring that in and apply optimizations on it. So one fun thing about model compilation is that your optimization doesn't happen only at the source language level, right? For example, if you're writing PyTorch code, you just go and try to use a better fused operator at a source code level. torch.compile might help you do a bit of things there. In most model compilation, it not only happens at the beginning stage, but we also apply generic transformations in between, also through a Python API. So you can tweak some of that. So that part of optimization helps a lot in uplifting both performance and portability across environments. And another thing that we do have is what we call universal deployment. So if you get the ML program into this TVMScript format, where there are functions that take in tensors and output tensors, we will be able to have a way to compile it. So you will be able to load the function in any of the language runtimes that TVM supports. So you could load it in JavaScript, and that's a JavaScript function that takes in tensors and outputs tensors. You can load it in Python, of course, and C++ and Java. So the goal there is really to bring the ML model to the language that people care about and be able to run it on a platform they like. [00:21:37]Swyx: It strikes me that I've talked to a lot of compiler people, but you don't have a traditional compiler background. You're inventing your own discipline called machine learning compilation, or MLC. Do you think that this will be a bigger field going forward? [00:21:52]Tianqi: First of all, I do work with people working on compilation as well. So we're also taking inspirations from a lot of early innovations in the field.
Like for example, TVM initially, we took a lot of inspiration from Halide, which is an image processing compiler. And of course, since then, we have evolved quite a bit to focus on machine learning related compilation. If you look at some of our conference publications, you'll find that machine learning compilation is already kind of a subfield. So if you look at papers in both machine learning venues, the MLSys conference, of course, and also system venues, every year there will be papers around machine learning compilation. And in the compiler conference called CGO, there's a C4ML workshop that is also kind of trying to focus on this area. So definitely it's already starting to gain traction and becoming a field. I wouldn't claim that I invented this field, but definitely I helped to work with a lot of folks there. And I try to bring a perspective, of course, trying to learn a lot from the compiler optimizations as well as trying to bring knowledge in machine learning and systems together. [00:23:07]Alessio: So we had George Hotz on the podcast a few episodes ago, and he had a lot to say about AMD and their software. So when you think about TVM, are you still restricted in a way by the performance of the underlying kernel, so to speak? So if your target is like a CUDA runtime, you still get better performance, no matter like TVM kind of helps you get there, but then that level you don't take care of, right? [00:23:34]Swyx: There are two parts in here, right? [00:23:35]Tianqi: So first of all, there is the lower level runtime, like the CUDA runtime. And then actually for NVIDIA, a lot of the moat came from their libraries, like CUTLASS, cuDNN, right? Those library optimizations. And also for specialized workloads, actually you can specialize them. Because in a lot of cases you'll find that if you go and do benchmarks, it's very interesting.
Like two years ago, if you try to benchmark ResNet, for example, usually the NVIDIA library [00:24:04]Swyx: gives you the best performance. [00:24:06]Tianqi: It's really hard to beat them. But as soon as you start to change the model to something, maybe a bit of a variation of ResNet, not for the traditional ImageNet detections, but for latent detection and so on, there will be some room for optimization because people sometimes overfit to benchmarks. These are people who go and optimize things, right? So people overfit the benchmarks. So that's the largest barrier, like being able to get low-level kernel libraries, right? In that sense, the goal of TVM is actually that we try to have a generic layer to both, of course, leverage libraries when available, but also be able to automatically generate [00:24:45]Swyx: libraries when possible. [00:24:46]Tianqi: So in that sense, we are not restricted by the libraries that they have to offer. That's why we will be able to run Apple M2 or WebGPU where there's no library available, because we are kind of like automatically generating libraries. That makes it easier to support less well-supported hardware, right? For example, WebGPU is one example. From a runtime perspective, AMD, I think before, their ROCm driver was not very well supported. Recently, they are getting good. But even before that, we were able to support AMD through this GPU graphics backend called Vulkan, which is not as performant, but it gives you decent portability across those [00:25:29]Swyx: hardware. [00:25:29]Alessio: And I know we got other MLC stuff to talk about, like WebLLM, but I want to wrap up on the optimization that you're doing. So there's kind of four core things, right? Kernel fusion, which we talked a bit about in the flash attention episode and the tinygrad one, memory planning, and loop optimization. I think those are like pretty, you know, self-explanatory.
I think the one that people have the most questions about, can you quickly explain [00:25:53]Swyx: those? [00:25:54]Tianqi: So there are kind of different things, right? Kernel fusion means that, you know, if you have an operator like convolution or, in the case of a transformer, like an MLP, you have other operators that follow it, right? You don't want to launch two GPU kernels. You want to be able to put them together in a smart way, right? And as for memory planning, it's more about, you know, hey, if you run like Python code, every time you generate a new array, you are effectively allocating a new piece of memory, right? Of course, PyTorch and other frameworks try to optimize for you. So there is a smart memory allocator behind the scenes. But actually, in a lot of cases, it's much better to statically allocate and plan everything ahead of time. And that's where like a compiler can come in. We need to, first of all, actually for language models, it's much harder because of dynamic shapes. So you need to be able to do what we call symbolic shape tracing. So we have like a symbolic variable that tells you like the shape of the first tensor is n by 12. And the shape of the third tensor is also n by 12. Or maybe it's n times 2 by 12. Although you don't know what n is, right? But you will be able to know that relation and be able to use that to reason about fusion and other decisions. So besides this, I think loop transformation is quite important. And it's actually non-traditional. Originally, if you simply write code and you want to get performance, it's very hard. For example, you know, if you write a matrix multiply, the simplest thing you can do is for i, j, k: C[i][j] += A[i][k] * B[k][j]. But that code is 100 times slower than the best available code that you can get.
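The triple loop Tianqi describes can be sketched in plain Python (a toy illustration, not TVM code; on real hardware, the loop transformations he goes on to describe, like tiling and shared-memory staging, are what close the 100x gap):

```python
# Naive matrix multiply: the "for i, j, k" loop nest from the transcript.
# This is the slow baseline that a compiler rewrites into tiled,
# vectorized, shared-memory code without changing the result.
def matmul_naive(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
    return C
```

The point of a schedule-based compiler like TVM is that this loop nest and its optimized variants compute the same function; only the loop structure and memory placement change.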
So we do a lot of transformation, like being able to take the original code, trying to put things into shared memory, and making use of tensor cores, making use of memory copies, and all this. Actually, all these things, we also realize that, you know, we cannot do all of them. So we also make the ML compilation framework available as a Python package, so that people will be able to continuously improve that part of the engineering in a more transparent way. So we find that's very useful, actually, for us to be able to get good performance very quickly on some of the new models. Like when Llama 2 came out, we were able to go and look at the whole thing, see here's the bottleneck, and go and optimize those. [00:28:10]Alessio: And then the fourth one being weight quantization. So everybody wants to know about that. And just to give people an idea of the memory saving, if you're doing FP32, it's like four bytes per parameter. Int8 is like one byte per parameter. So you can really shrink down the memory footprint. What are some of the trade-offs there? How do you figure out what the right target is? And what are the precision trade-offs, too? [00:28:37]Tianqi: Right now, a lot of people also mostly use int4 now for language models. So that really shrinks things down a lot. And more recently, actually, we started to think that, at least in MLC, we don't want to have a strong opinion on what kind of quantization we want to bring, because there are so many researchers in the field. So what we can do is we can allow developers to customize the quantization they want, but we still bring the optimized code for them. So we are working on this item called bring your own quantization. In fact, hopefully MLC will be able to support more quantization formats. And definitely, I think there's an open field that's being explored. Can you bring more sparsity? Can you quantize activations as much as possible, and so on? And it's going to be something that's going to be relevant for quite a while.
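The memory arithmetic behind these precision choices is easy to sketch (a back-of-the-envelope helper; the 7B example model size is illustrative, and this counts weights only, not MLC's actual packing scheme):

```python
# Rough weight-memory footprint: bytes-per-parameter times parameter
# count. int4 packs two weights per byte, hence 0.5 bytes/param.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(n_params, dtype):
    """Weights-only footprint in GB (excludes activations, KV cache,
    and runtime overhead)."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

# For a 7B-parameter model: fp32 ~ 28 GB, int8 ~ 7 GB, int4 ~ 3.5 GB,
# which is why int4 is what makes consumer-device inference feasible.
```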
[00:29:27]Swyx: You mentioned something I wanted to double back on, which is most people use int4 for language models. This is actually not obvious to me. Are you talking about the GGML-type people, or even the researchers who are training the models also using int4? [00:29:40]Tianqi: Sorry, so I'm mainly talking about inference, not training, right? So when you're doing training, of course, int4 is harder, right? Maybe you could do some form of mixed precision for inference. I think int4 is kind of like, in a lot of cases, you will be able to get away with int4. And actually, that does bring a lot of savings in terms of the memory overhead, and so on. [00:30:09]Alessio: Yeah, that's great. Let's talk a bit about maybe the GGML, then there's Mojo. How should people think about MLC? How do all these things play together? I think GGML is focused on model level re-implementation and improvements. Mojo is a language, a superset of Python. You're more at the compiler level. Do you all work together? Do people choose between them? [00:30:32]Tianqi: So I think in this case, I think it's great to say the ecosystem becomes so rich with so many different ways. So in our case, GGML is more like you're implementing something from scratch in C, right? So that gives you the ability to go and customize each particular hardware backend. But then you will need to write the CUDA kernels, and you write optimally for AMD, and so on. So the kind of engineering effort is a bit more broadened in that sense. Mojo, I have not looked at specific details yet. I think it's good to start to say, it's a language, right? I believe there will also be machine learning compilation technologies behind it. So it's good to say, it's an interesting place in there. In the case of MLC, our case is that we do not want to have an opinion on how, where, which language people want to develop, deploy, and so on. And we also realize that actually there are two phases.
We want to be able to develop and optimize your model. By optimization, I mean, really bring in the best CUDA kernels and do some of the machine learning engineering in there. And then there's a phase where you want to deploy it as a part of the app. So if you look at the space, you'll find that GGML is more like, I'm going to develop and optimize in the C language, right? And then most of the low-level languages they have. And Mojo is that you want to develop and optimize in Mojo, right? And you deploy in Mojo. In fact, that's the philosophy they want to push for. In the MLC case, we find that if you want to develop models, the machine learning community likes Python. Python is a language that you should focus on. So in the case of MLC, we really want to enable not only defining your model in Python, that's very common, right? But also doing ML optimization, like engineering optimization, CUDA kernel optimization, memory planning, all those things in Python, which makes it customizable and so on. But when you do deployment, we realize that people want a bit of a universal flavor. If you are a web developer, you want JavaScript, right? If you're maybe an embedded system person, maybe you would prefer C++ or C or Rust. And people sometimes do like Python in a lot of cases. So in the case of MLC, we really want to have this vision of, you optimize, build a generic optimization in Python, then you deploy that universally onto the environments that people like. [00:32:54]Swyx: That's a great perspective and comparison, I guess. One thing I wanted to make sure that we cover is that I think you are one of these emerging set of academics that also very much focus on your artifacts of delivery. Of course. Something we talked about for three years, that he was very focused on his GitHub. And obviously you treated XGBoost like a product, you know? And then now you're publishing an iPhone app. Okay. Yeah. Yeah.
What is your thinking about academics getting involved in shipping products? [00:33:24]Tianqi: I think there are different ways of making impact, right? Definitely, you know, there are academics that are writing papers and building insights for people so that people can build products on top of them. In my case, I think the particular field I'm working on, machine learning systems, I feel like really we need to be able to get it to the hands of people so that really we see the problem, right? And we show that we can solve a problem. And it's a different way of making impact. And there are academics that are doing similar things. Like, you know, if you look at some of the people from Berkeley, right? Every few years, they will come up with big open source projects. Certainly, I think it's just a healthy ecosystem to have different ways of making impact. And I feel like really being able to do open source and work with the open source community is really rewarding, because we have a real problem to work on when we build our research. Actually, those research efforts come together and people will be able to make use of them. And we also start to see interesting research challenges that we wouldn't otherwise see, right, if you're just trying to do a prototype and so on. So I feel like it's one interesting way of making impact, making contributions. [00:34:40]Swyx: Yeah, you definitely have a lot of impact there. And having experience publishing Mac stuff before, the Apple App Store is no joke. It is the hardest compilation, human compilation effort. So one thing that we definitely wanted to cover is running in the browser. You have a 70 billion parameter model running in the browser. That's right. Can you just talk about how? Yeah, of course. [00:35:02]Tianqi: So I think that there are a few elements that need to come in, right? First of all, you know, we do need a MacBook, the latest one, like M2 Max, because you need the memory to be big enough to cover that.
So for a 70 billion model, it takes you about, I think, 50 gigabytes of RAM. So the M2 Max, the upper version, will be able to run it, right? And it also leverages machine learning compilation. Again, what we are doing is the same, whether it's running on iPhone, on server cloud GPUs, on AMDs, or on MacBook, we all go through that same MLC pipeline. Of course, in certain cases, maybe we'll do a bit of customization iteration for either one. And then it runs on the browser runtime, this package called WebLLM. So that will effectively... So what we do is we will take that original model and compile to what we call WebGPU. And then WebLLM will pick it up. And WebGPU is this latest GPU technology that major browsers are shipping right now. So you can get it in Chrome already. It allows you to be able to access your native GPUs from a browser. And then effectively, that language model is just invoking the WebGPU kernels through there. So actually, when Llama 2 came out, initially, we asked the question about, can you run 70 billion on a MacBook? That was the question we were asking. So first, we actually... Jin Lu, who is the engineer pushing this, he got 70 billion on a MacBook. We had a CLI version. So in MLC, you will be able to... That runs through the Metal accelerator. So effectively, you use the Metal programming language to get the GPU acceleration. So we find, okay, it works for the MacBook. Then we asked, we had a WebGPU backend. Why not try it there? So we just tried it out. And it's really amazing to see everything up and running. And actually, it runs smoothly in that case. So I do think there are some kind of interesting use cases already in this, because everybody has a browser. You don't need to install anything. I think it doesn't make sense yet to really run a 70 billion model in a browser, because you kind of need to be able to download the weights and so on. But I think we're getting there.
Effectively, the most powerful models you will be able to run on a consumer device. It's kind of really amazing. And also, in a lot of cases, there might be use cases. For example, if I'm going to build a chatbot that I talk to and that answers questions, maybe some of the components, like the voice to text, could run on the client side. And so there are a lot of possibilities of being able to have something hybrid that contains the edge component or something that runs on a server. [00:37:47]Alessio: Do these browser models have a way for applications to hook into them? So if I'm using, say, you can use OpenAI or you can use the local model. Of course. [00:37:56]Tianqi: Right now, actually, we are building... So there's an NPM package called WebLLM, right? So that you will be able to, if you want to embed it onto your web app, you will be able to directly depend on WebLLM and you will be able to use it. We are also having a REST API that's OpenAI compatible. So that REST API, I think, right now, it's actually running on a native backend. So a CUDA server is faster running on the native backend. But also we have a WebGPU version of it that you can go and run. So yeah, we do want to be able to have easier integrations with existing applications. And the OpenAI API is certainly one way to do that. Yeah, this is great. [00:38:37]Swyx: I actually did not know there's an NPM package that makes it very, very easy to try out and use. I want to actually... One thing I'm unclear about is the chronology. Because as far as I know, Chrome shipped WebGPU the same time that you shipped WebLLM. Okay, yeah. So did you have some kind of secret chat with Chrome? [00:38:57]Tianqi: The good news is that Chrome is doing a very good job of trying to have early releases. So although the official shipment of Chrome WebGPU is the same time as WebLLM, actually, you will be able to try out WebGPU technology in Chrome. There is an unstable version called Canary.
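Because the REST API is OpenAI-compatible, calling it looks like any OpenAI-style chat request. A minimal sketch, assuming a local server; the endpoint URL and model name below are illustrative placeholders, not values guaranteed by the project:

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model",
                       base_url="http://127.0.0.1:8000/v1"):
    """Build an OpenAI-style /chat/completions request for a local,
    OpenAI-compatible server. URL and model name are placeholders."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With a compatible server running, the same client code works whether
# the backend is a native CUDA runtime or a WebGPU one:
# resp = urllib.request.urlopen(build_chat_request("Hello!"))
```

The design point here is that swapping OpenAI for a local model becomes a one-line base-URL change in the application.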
I think as early as two years ago, there was a WebGPU version. Of course, it's getting better. So we had a TVM-based WebGPU backend two years ago. Of course, at that time, there were no language models. It was running on less interesting, well, still quite interesting models. And then this year, we really started to see it getting mature and performance keeping up. So we have a more serious push of bringing the language model compatible runtime onto WebGPU. [00:39:45]Swyx: I think you agree that the hardest part is the model download. Have there been conversations about a one-time model download and sharing between all the apps that might use this API? That is a great point. [00:39:58]Tianqi: I think it's already supported in some sense. When we download the model, WebLLM will cache it onto a special Chrome cache. So if a different web app uses the same WebLLM JavaScript package, you don't need to redownload the model again. So there is already something there. But of course, you have to download the model once at least to be able to use it. [00:40:19]Swyx: Okay. One more thing just in general before we're about to zoom out to OctoAI. Just the last question is, you're not the only project working on, I guess, local models. That's right. Alternative models. There's GPT4All, there's Ollama that just recently came out, and there's a bunch of these. What would be your advice to them on what's a valuable problem to work on? And what is just thin wrappers around ggml? Like, what are the interesting problems in this space, basically? [00:40:45]Tianqi: I think making APIs better is certainly something useful, right? In general, one thing that we do try to push very hard on is this idea of easier universal deployment. So we are also looking forward to actually having more integration with MLC. That's why we're trying to build APIs like WebLLM and other things.
So we're also looking forward to collaborating with all those ecosystems and working on support to bring in models more universally and be able to also keep up the best performance when possible in a more push-button way. [00:41:15]Alessio: So as we mentioned in the beginning, you're also the co-founder of OctoML. Recently, OctoML released OctoAI, which is a compute service that basically focuses on optimizing model runtimes and acceleration and compilation. What has been the evolution there? So Octo started as kind of like a traditional MLOps tool, where people were building their own models and you helped them on that side. And then it seems like now most of the market is shifting to starting from pre-trained generative models. Yeah, what has been that experience for you and how have you seen the market evolve? And how did you decide to release OctoAI? [00:41:52]Tianqi: One thing that we found out is that on one hand, it's really easy to go and get something up and running, right? But if you start to consider all the possible availability and scalability issues and even integration issues, things become kind of interesting and complicated. So we really want to make sure to help people get that part easy, right? And now a lot of things, if we look at the customers we talk to and the market, certainly generative AI is something that is very interesting. So that is something that we really hope to help elevate. And also building on top of technology we build to enable things like portability across hardware. And you will be able to not worry about the specific details, right? Just focus on getting the model out. We'll try to work on infrastructure and other things that help on the other end.
I think a few years ago it was like, well, we don't have a lot of machine learning talent. We cannot develop our own models. Versus now it's like, there are these great models you can use, but I don't know how to run them efficiently. [00:43:12]Tianqi: That depends on how you define running, right? On one hand, it's easy to download MLC, like you download it, you run it on a laptop, but then there are also different decisions, right? What if you are trying to serve larger user requests? What if those requests change? What if the availability of hardware changes? Right now it's really hard to get the latest NVIDIA hardware, unfortunately, because everybody's trying to work on things using the hardware that's out there. So I think when the definition of run changes, there are a lot more questions around things. And also in a lot of cases, it's not only about running models, it's also about being able to solve problems around them. How do you manage your model locations and how do you make sure that you get your model close to your execution environment more efficiently? So definitely a lot of engineering challenges out there. That we hope to elevate, yeah. And also, if you think about our future, definitely I feel like right now, given the technology and the kind of hardware availability we have today, we will need to make use of all the possible hardware available out there. That will include a mechanism for cutting down costs, bringing something to the edge and cloud in a more natural way. So I feel like this is still a very early stage of where we are, but it's already good to see a lot of interesting progress.
What was that like as an engineering challenge? [00:44:51]Tianqi: So I think that there are engineering challenges there. In fact, first of all, you will need to be able to support all the kinds of hardware backends you have, right? On one hand, if you look at the NVIDIA libraries, you'll find, not too surprisingly, most of the latest libraries work well on the latest GPU. But there are other GPUs out there in the cloud as well. So certainly being able to have the know-how and being able to do model optimization is one thing, right? Also infrastructure for being able to scale things up, locate models. And in a lot of cases, we do find that on typical models, it also requires kind of vertical iterations. So it's not about, you know, building a silver bullet and that silver bullet is going to solve all the problems. It's more about, you know, we're building a product, we'll work with the users and we find out there are interesting opportunities at a certain point. And then our engineers will go and solve that, and it will automatically be reflected in the service. [00:45:45]Swyx: Awesome. [00:45:46]Alessio: We can jump into the lightning round until, I don't know, Sean, if you have more questions or TQ, if you have more stuff you wanted to talk about that we didn't get a chance to [00:45:54]Swyx: touch on. [00:45:54]Alessio: Yeah, we have talked a lot. [00:45:55]Swyx: So, yeah. We always would like to ask, you know, do you have a commentary on other parts of AI and ML that is interesting to you? [00:46:03]Tianqi: So right now, I think one thing that we are really pushing hard for is this question about how far can we bring open source, right? I'm kind of like a hacker and I really like to put things together. So I think it's unclear what the future of AI looks like. On one hand, it could be possible that, you know, you just have a few big players, you just try to talk to those bigger language models and that can do everything, right?
On the other hand, one of the things that we in academia are really excited about and pushing for, that's one reason why I'm pushing for MLC, is: can we build something where you have different models? You have personal models that know the best movie you like, but you also have bigger models that maybe know more, and you get those models to interact with each other, right? And be able to have a wide ecosystem of AI agents that helps each person while still being able to do things like personalization. Some of them can run locally, some of them, of course, running on a cloud, and how do they interact with each other? So I think that is a very exciting time where the future is yet undecided, but I feel like there is something we can do to shape that future as well. [00:47:18]Swyx: One more thing, which is something I'm also pursuing, which is, and this kind of goes back into predictions, but also back in your history, do you have any idea, or are you looking out for anything post-transformers as far as architecture is concerned? [00:47:32]Tianqi: I think, you know, in a lot of these cases, you can find there are already promising models for long contexts, right? There are state space models, where, you know, some of our colleagues, like Albert Gu, worked on the HiPPO models, right? And then there is an open source version called RWKV. It's like a recurrent model that allows you to summarize things. Actually, we are bringing RWKV to MLC as well, so maybe you will be able to see one of the models. [00:48:00]Swyx: We actually recorded an episode with one of the RWKV core members. It's unclear because there's no academic backing. It's just open source people. Oh, I see. So you like the merging of recurrent networks and transformers? [00:48:13]Tianqi: I do love to see this model space continue growing, right? And I feel like in a lot of cases, it's just that the attention mechanism is getting changed in some sense.
So I feel like definitely there are still a lot of things to be explored here. And that is also one reason why we want to keep pushing machine learning compilation, because one of the things we are trying to push on is productivity for machine learning engineering, so that as soon as some of these models come out, we will be able to, you know, empower them on those environments that are out there. [00:48:43]Swyx: Yeah, it's a really good mission. Okay. Very excited to see that RWKV and state space model stuff. I'm hearing increasing chatter about that stuff. Okay. Lightning round, as always fun. I'll take the first one. Acceleration. What has already happened in AI that you thought would take much longer? [00:48:59]Tianqi: The emergence of more like a conversational chatbot ability is something that kind of surprised me before it came out. This is like one piece that I feel originally I thought would take much longer, but yeah, [00:49:11]Swyx: it happens. And it's funny because like the original, like Eliza chatbot was something that goes all the way back in time. Right. And then we just suddenly came back again. Yeah. [00:49:21]Tianqi: It's always interesting to think about, but with a kind of a different technology [00:49:25]Swyx: in some sense. [00:49:25]Alessio: What about the most interesting unsolved question in AI? [00:49:31]Swyx: That's a hard one, right? [00:49:32]Tianqi: So I can tell you what I'm excited about. So, so I think that I have always been excited about this idea of continuous learning and lifelong learning in some sense. So how AI continues to evolve with the knowledge that has been there. It seems that we're getting much closer with all those recent technologies. So being able to develop systems, support, and be able to think about how AI continues to evolve is something that I'm really excited about. [00:50:01]Swyx: So specifically, just to double click on this, are you talking about continuous training?
That's like training. [00:50:06]Tianqi: I feel like, you know, training, adaptation, it's all similar things, right? You want to think about the entire life cycle, right? The life cycle of collecting data, training, fine-tuning, and maybe having your local context that gets continuously curated and fed into models. So I think all these things are interesting and relevant here. [00:50:29]Swyx: Yeah. I think this is something that people are really asking, you know, right now we have moved a lot into the sort of pre-training phase and off the shelf, you know, the model downloads and stuff like that, which seems very counterintuitive compared to the continuous training paradigm that people want. So I guess the last question would be for takeaways. What's basically one message that you want every listener, every person to remember today? [00:50:54]Tianqi: I think it's getting more obvious now, but I think one of the things that I always want to mention in my talks is that, you know, when you're thinking about AI applications, originally people thought about algorithms a lot more, right? Our algorithms, our models, they are still very important. But usually when you build AI applications, it takes, you know, both the algorithm side, the system optimizations, and the data curation, right? So it takes a connection of so many facets to be able to bring together an AI system, and being able to look at it from that holistic perspective is really useful when we start to build modern applications. I think it's going to continue to be more important in the future. [00:51:35]Swyx: Yeah. Thank you for showing the way on this. And honestly, just making things possible that I thought would take a lot longer. So thanks for everything you've done. [00:51:46]Tianqi: Thank you for having me. [00:51:47]Swyx: Yeah. [00:51:47]Alessio: Thanks for coming on TQ. [00:51:49]Swyx: Have a good one. [00:51:49] Get full access to Latent Space at www.latent.space/subscribe

Junk Filter
137: Shatner in the Seventies (with Jessica Ritchey)

Junk Filter

Play Episode Listen Later Jul 7, 2023 85:52


The writer and critic Jessica Ritchey returns to the show for a look at the strange body of work William Shatner put together in the 1970s, the wilderness years between the end of the Star Trek TV series and the start of the Star Trek film series. We focus on four of his films, all currently available to watch on YouTube: The Horror at 37,000 Feet (1973), a TV movie that combines The Exorcist with the Airport series, with Shatner as a defrocked priest fighting against the demonic possession of a transatlantic flight. Pray for the Wildcats (1974), also made for TV, with Shatner as a depressed ad executive forced to go with his partners on a motorcycle ride through Mexico with a rich client (Andy Griffith) who turns out to be a psychotic monster looking for consequence-free thrills. Impulse (1974), a deranged regional grindhouse feature by schlockmeister William Grefé, starring Shatner as a murderous gigolo, the ultimate Florida Man depicted on screen. Disaster on the Coastliner (1979), an all-star TVM starring Shatner as a charming conman who gets swept up in the takeover of a train by a deranged engineer determined to smash it into an oncoming train (with the Vice President's wife on board!). We also discuss the notorious video of Shatner's spoken-word version of “Rocket Man” at the 1978 Saturn Awards for science fiction, and other highlights from this very strange and adventurous period in his long career. Become a patron of the podcast to access exclusive episodes every month, including this summer's entire Miami Vice sidebar series. Sign up at https://www.patreon.com/junkfilter Follow Jessica Ritchey on Twitter, and support her work on Patreon. Jessica's YouTube mixtape “Shatner's Wilderness Years” with links to the programming we discuss in this show. You can order the Grindhouse Releasing limited edition Blu-ray restoration of Impulse here! 
Trailer for Impulse (William Grefé, 1974) William Shatner Loblaws commercial for Canadian TV, 1972 The “Rocket Man” clip from the 1978 Saturn Awards, AI-remastered

Creative
Marc Galea Session Guitarist, Teacher, TV Presenter.

Creative

Play Episode Listen Later Apr 29, 2023 55:58


Marc Galea, session guitarist, teacher, TV presenter. In this episode I am talking to the wonderful Marc Galea. Marc has achieved amazing things through his tenacity, and we talk about being able to think outside the box. This is a great chat. Marc is a session guitarist and RGT Registered Tutor with 13 years of teaching experience; he has published his own method books and released two EPs and two albums. In 2008 and 2009 Marc was a finalist for the internationally acclaimed RGT Guitar Tutor of the Year Award. Marc started learning classical guitar at age 11 but immediately exchanged it for an electric guitar after hearing Brian May play. Following a trip to Cardiff to see Joe Satriani in concert when he was 16 years old, he decided to take up guitar professionally and set about studying with intent and practicing with his first serious band, Juicy Affair. During 2001 Marc held a teaching post in a state school in Malta, and in that same year he produced and presented the innovative, successful TV series B'Sitt Kordi (With Six Strings) on local television, where he taught basic guitar and interviewed established guitarists in Malta. In August 2007, while working as a session guitarist with Ivan Filletti, Marc shared the stage with Italian singer-songwriter Claudio Baglioni, performing during the widely publicized O'SCIA concert on the Valletta waterfront. In December 2008 and 2009 Marc worked with guitarist Phil Hilborne and bassist Neil Murray (Black Sabbath, Whitesnake, Brian May, Gary Moore) on two guitar masterclasses and a concert at the Euro Institute. Marc was also featured teaching guitar in a TV series called Bands, which aired every week on TVM between October and December 2008. Together with guitarist Steve Delia, Marc opened for legendary acoustic guitar player Gordon Giltrap during the second half of his concert, organized by the Masquerade theatre company at the Manoel Theatre on 6 May 2008. 
In September 2009, Tech Music Schools became partners with the Euro Institute of Music; they organized a masterclass at the Euro Institute where Marc worked with legendary guitarist John Wheatcroft. Marc has played in many events and musicals, has performed with the Malta Philharmonic Orchestra, and was recently part of Rockestra. He also works as a session guitarist in the studio for various local artists and currently plays with the Versatile Brass band under the direction of Mro. Paul Borg. Together with his band he was part of the Malta Jazz Festival 2011, and in August of the same year he played a jazz festival in Sicily. In 2012 Marc published a guitar method book titled A Step by Step Approach for the Modern Guitar Player. Spread over 30 lessons and divided into 6 chapters, the book is unique in that it covers four years' tuition, from beginner to advanced level. Visit https://www.youtube.com/watch?v=0UzjDqZJjJw www.marcgalea.com To support the podcast and get access to features about guitar playing and songwriting visit https://www.patreon.com/vichyland and for news about all the creative music that we do at Bluescamp UK and France visit www.bluescampuk.co.uk For details of the Ikaro music charity visit www.ikaromusic.com Big thanks to Josh Ferrara for the music

Eternal Expansion with Erica Ellle
29. Embrace the Hurricane, Nervous System Regulation & life as a Highly Sensitive Person with Liz Mccormack

Eternal Expansion with Erica Ellle

Play Episode Listen Later Feb 20, 2023 74:38


Liz McCormack has gone from a completely disempowered state, heavily co-dependent on others to make decisions for her (sitting in the passenger seat of life, "checking out" when things got difficult, unable to make decisions for herself, people-pleasing and completely dysregulated), to someone who is at peace, empowered in her own decision-making, and running a business in a country she always wanted to go to. She has helped clients through cancer diagnoses, reprogramming their belief systems, psychic trance mediumship healing, and has mentored clients in connecting with their guides, communication, boundaries, self-love, connecting to their seasons, cycles and nature, and energy sessions. She just released a new 6-week group "Inner Intimacy" course that you can get at a discount during its first week of release (this week, if you are listening to this when published, 2/20/2023). Highlights in this episode: - What it felt like to be LITERALLY in the eye of a hurricane - Finding Reiki and Seichem healing - Tools to regulate your nervous system - Living as a highly sensitive person - Plant medicine experience and her path with the Tuam babies of Ireland - Clearing energetic grief from the land - Deep surrender to the path and showing up in your full expression Connect with Liz: Follow Liz on Instagram: https://www.instagram.com/levelupwith_liz/ Email Liz: hello@levelupwithliz.co Website: under construction, coming soon Podcast: The "Titty Talks" Podcast Offers: Online and in-person Reiki / Seichem / Alchemy sessions. Alchemy session: an intuitive coaching session; picking from all of the modalities I have, I bring to the client what's needed (nervous system regulation, energy work, EFT, etc.). TVM therapy in person: controlled release of oxytocin, which allows clients to come out of the trauma cycle and move into an expansive way of being. 
3-month coaching programs where we review every facet of life and reflect back to the client their innate power, working through shadow work and limitations. 6-week group "Inner Intimacy" course launching at the end of March. "Yinspiration": a monthly yin journaling and energy work session. In-person retreats and circles in Bali. Cacao ceremonies, one-on-one and in groups. Connect with Erica: Follow Erica on Instagram: https://www.instagram.com/erica.eternalexpansion/ Follow Erica on TikTok: https://www.tiktok.com/@erica.eternalexpansion Follow Erica on YouTube: https://www.youtube.com/@erica.eternalexpansion Follow Erica on Pinterest: https://www.pinterest.com/erica_eternalexpansion/ Spice it UP Mastery Mentorship: https://www.ericaellle.com/mentorship Level UP Course: https://www.ericaellle.com/course Soul FIRE Session: https://www.ericaellle.com/services --- Send in a voice message: https://podcasters.spotify.com/pod/show/eternalexpansion/message

Trap One: A Doctor Who Podcast

Thank you for downloading the Trap One Podcast. On this episode Keith (@50dw50) and Mark (@QuarkMcMalus) discuss the documentary film Doctor Who Am I, which covers TVM writer Matthew Jacobs' return to the world of Doctor Who fandom after many years. The documentary is available to watch for subscribers to Britbox here. You can also purchase on DVD and Blu-Ray. Please consider supporting the podcast by ordering here. Episode 239 of Trap One features US Jason's interviews from LI Who with Matthew Jacobs, Vanessa Yuille, Bhavnisha Parmar, Jon Davey and Wendy Padbury and you can find it here.

Eventually Supertrain
Lucan TVM Discussion

Eventually Supertrain

Play Episode Listen Later Jan 11, 2023 65:51


In this standalone episode, Dan and a returning guest (AR, anyone?) discuss the 1977 TVM about a boy raised by wolves. It's LUCAN! Plus, construction workers. Please, listen and enjoy.

Forhjulslir
Portræt: Claus Michael Møller

Forhjulslir

Play Episode Listen Later Dec 26, 2022 225:57


Forhjulslir is presented in partnership with Continental Dæk Danmark, main sponsor of the Tour de France start in Denmark, standard-bearer for safety on the roads and the podcast's domestique on the front.  Claus Michael Møller. The watchmaker's son from Hjørring. The Spanish/Portuguese North Jutlander who climbed shoulder to shoulder with the best climbers of his generation in the late 1990s and early 2000s, among them Juan Carlos Dominguez, José Jiménez, Oscar Sevilla, Joseba Beloki, Santiago Blanco, Marco Pantani and Roberto Heras. The only mountain he could climb in his early years was the harsh westerly wind of Vendsyssel. It shaped him as a rider, put hair on his chest and wind in his hair, and made him Danish time trial champion. Møller later won the first edition of G.P. Herning and was selected for the 1992 Olympics in Barcelona. His talent then took him south, to the southern sun and the Spanish mountains: first to the Basque Country and Pamplona, later to Jávea. There Møller won the amateur edition of La Vuelta a España, and then he turned professional. He remained so from 1994 to 2007, riding 11 Grand Tours over his career, winning the Volta a Portugal overall, the Volta ao Algarve overall, and the queen stage of the 2001 Vuelta a España. But who is Claus Michael Møller really? The North Jutland climber who went his own way and made a big name for himself in Portugal and Spain. How did he end up in Spanish cycling in 1993? Why did it never quite become a success at the Dutch TVM team?  What exactly was his 1999/2000 suspension about? How did he end up in Portuguese cycling? How did he win the queen stage of the 2001 Vuelta? How does he remember the victories and the defeats? Who was the rider Claus Michael Møller, the legend of Portuguese cycling, who has perhaps never received the recognition he deserves in Denmark? He tells all in this extended portrait, taking us back down the roads of his memories, from his first pedal strokes in Hjørring in the 1980s to his final race in Portugal in 2007. 
Name: Claus Michael Møller Born: 3 October 1968 Teams: 1980-1990 - Cycle Clubben Hjørring 1991-1993 - Ordrup CC 1994 - Construcciones ACR - M.R.A. (Spain) until 31-08 1994 - TVM - Bison Kit (Netherlands) from 01-09 (stagiaire) 1995 - Construcciones ACR - MRA (Spain) until 31-07 1995 - Castellblanch (Spain) from 01-08 1996 - MX Onda - Eurosport (Spain) 1997 - Cafés Toscaf - Macario (Spain) 1998-1999 - TVM - Farm Frites (Netherlands) until 31-07-1999 2000-2003 - Maia - Milaneza - MSS (Portugal) 2004 - Alessio - Bianchi (Italy) 2005-2007 - Barbot - Halcon (Portugal) Host: Anders Mielke

Starting Now
Making your dreams reality with Trevor Van Meter (HeyTVM)

Starting Now

Play Episode Listen Later Dec 22, 2022 82:10


This week on the podcast, I talk to TVM. He is a phenomenal illustrator whose witty illustrations have a surprising philosophical twist. In this episode, we dive deep into life philosophy and how it impacts art. We talk about stoicism, what it means to develop your lesser self vs. your best self, and the school of New Thought in regard to spirituality. We also talk about one of the most difficult things for a creative: making your dreams a reality. HeyTVM Nah Fungible Bones MonsterBuds ——My Podcasting Gear: Cameras, Mics, and Lights——Do you need help developing your brand and business? Work with me at SPYR! Mint or collect NFTs from projects that I've worked on: SkullKids: Generations (Mint | OpenSea) The Spoopies (Mint) Frootlings (Mint | OpenSea) The Ooglies (OpenSea) ——Enjoying the show? Let me know on Twitter! I'm @jeffSARRIS. Watch Starting Now on YouTube or listen and subscribe on Apple Podcasts, Spotify, or wherever you get your podcasts.——A huge thanks goes out to Amara Andrew for handling the live video production on Starting Now. Follow what she's up to or hire her for your video production needs at mavenbyamara.com!——Some of the links above may be affiliate links, which means that I earn a small commission from qualifying purchases at no additional cost to you. Thanks for your support!

The DotCom Magazine Entrepreneur Spotlight
Ron Lasorsa, Managing General Partner, Victory Litigation Fund, A DotCom Magazine Interview

The DotCom Magazine Entrepreneur Spotlight

Play Episode Listen Later Oct 13, 2022 39:25


About Ron Lasorsa and Victory Litigation Fund: As the CEO of the General Partner, Ron is responsible for creating the Company's growth strategy, executing its business model, and directing operations. Ron brings an experienced yet non-traditional view on the market demands currently reshaping the 21st-century delivery of legal services. His almost 40 years of managerial and financial experience spans the military, legal, financial, and direct response advertising industries. From 1994 to 2001, Ron was responsible for corporate stock buybacks and equity derivative sales at JPMorgan, managing over $50MM in trading commissions per year. In 2001, he left JPMorgan for ABN AMRO Bank to run equity prime brokerage trading, where he managed over 400 prime brokerage trading accounts and was responsible for 30 traders generating over $100MM in trading commissions per year. Ron left Wall Street in 2006 to pursue new ventures in the legal services industry, focusing on online lead generation. In 2014, American Medical Systems and Endo International made headlines when they paid out a staggering $830 million to settle more than 20,000 claims that their transvaginal mesh caused severe harm to patients. In 2015, Ron founded and sold a law firm (Alpha Law) for $40.5 million in a single transaction as the minority non-attorney equity owner who had originated over 15,000 transvaginal mesh cases. Lasorsa and his partners deployed $7.3 million and $8.9 million in collateralized third-party debt from a hedge fund to acquire the cases. The final docket of cases included 6,343 women injured by TVM and retained by law firms from July 2014 through May 2015. 
Founded in 2022, the Victory Litigation Fund is a blockchain-built tokenized venture fund that raises capital by selling security tokens. We then work with carefully selected law firms to develop veteran-related litigation cases. Victory's mission is to advocate for the veterans and their families harmed during service to our country and win the most financial compensation possible for their injuries. To fulfill this mandate, Victory has developed several proprietary strategies to help veterans seek the justice they deserve, defend taxpayers from waste, fraud, and abuse and maximize economic value for investors.

The ALL ME® Podcast
Episode 82: Urine Testing and Hydration Status – Dr. Floris Wardenaar

The ALL ME® Podcast

Play Episode Listen Later Oct 4, 2022 47:28


The ALL ME® Podcast Urine Testing and Hydration Status – Dr. Floris Wardenaar When we think about monitoring hydration status, the average consumer may measure it by how many ounces of fluid they drink in a given day. A common general recommendation is half your body weight in ounces of water or fluid. But is this enough, and are there other ways we should be measuring whether we're adequately hydrated? Did you know that the color of your urine can be a very effective way to determine hydration status? In this podcast, we speak with Dr. Floris Wardenaar, Director of the Athlete Lab at Arizona State University, to discuss the role urine color plays in assessing hydration status. We discuss his career path and why he chose the field of sports nutrition, his current research and the methods he uses to assess urine color, why urine color is important, the urine color chart, and key things you need to know to stay hydrated. About Floris Wardenaar, PhD Dr. Wardenaar studied nutrition and dietetics at the Hogeschool van Amsterdam (HvA, Amsterdam University of Applied Sciences), with a specific interest in sports nutrition. During this bachelor's program he completed an internship at NOC*NSF, writing the brochure What to Know About Nutrition and Sydney in preparation for the Sydney 2000 Olympic Games. Alongside his bachelor's degree he received a post-bachelor qualification in sports dietetics granted by NOC*NSF. Wardenaar then started a master's program in human nutrition and physiology at Wageningen University. During his second year at Wageningen he founded his own consultancy firm offering sports nutrition advice, and during the third year he was full-time vice-president of the Dutch Chamber of Student Associations (LKvV). 
He subsequently completed an internship at the Department of Kinesiology at the University of Texas at Austin and wrote his master's thesis on the interaction between alcohol consumption, exercise and blood glucose levels at SENECA, the expert centre of HAN Sports and Exercise Studies at HAN University of Applied Sciences in Nijmegen. He graduated in 2005 in both nutritional physiology and nutrigenomics. At the start of 2006, Wardenaar took up a post as lecturer at the Institute of Paramedic Studies at HAN. From that moment he also covered the sports nutrition position for the professional TVM speed skating team, as part of an agreement between the team and HAN Sports and Exercise Studies. In 2007 he joined the nutrition team of the Dutch Olympic Committee (NOC*NSF), and from 2010 he was a member of the research group of the professorate (in Dutch: lectoraat) Sports, Nutrition and Health. At the beginning of 2011 he moved fully from paramedic studies to Sports and Exercise Studies. In 2012 he took on a team leader role as senior lecturer of the expert team Sports and Exercise Nutrition, with responsibility for education, research and consultancy within the Institute of Sport and Exercise. During this period he was also president of the Dutch Association of Sports Dieticians. In September 2012 he commenced his doctoral project in cooperation with Wageningen University, partly financed by a regional grant, Eat2Move. At the beginning of 2013 he became program manager of work package 3 within Eat2Move, and from 2014 to 2017 he was team leader of Team Nutrition of the Dutch Olympic Committee. Resource Definitions and Links: Follow Us: Twitter: @theTHF Instagram: @theTHF Facebook: Taylor Hooton Foundation #ALLMEPEDFREE Contact Us:  Email:  Phone: 214-449-1990 ALL ME Assembly Programs:

Talking Security for news about security, attacks, vulnerabilities and tools.
#23 - about Microsoft Defender for Endpoint Threat and Vulnerability Management

Talking Security for news about security, attacks, vulnerabilities and tools.

Play Episode Listen Later Sep 7, 2022 44:11


An episode of Talking Security about Microsoft Defender for Endpoint. This is the third part of the Defender for Endpoint series, focusing specifically on TVM, or Threat and Vulnerability Management. What is TVM, how can it be used, and how can it be configured? This time I am joined by my friend Dennis van Doorn, who has extensive knowledge of this part of MDE.

MLOps.community
Bringing DevOps Agility to ML// Luis Ceze // Coffee Sessions #121

MLOps.community

Play Episode Listen Later Sep 6, 2022 64:35


MLOps Coffee Sessions #121 with Luis Ceze, CEO and Co-founder of OctoML: Bringing DevOps Agility to ML, co-hosted by Mihail Eric. // Abstract There's something about this idea where people see a future where you don't need to think about infrastructure. You should just be able to do what you do, and infrastructure happens. People understand that there is a lot of complexity under the hood, and most data scientists or machine learning engineers who start deploying things shouldn't have to worry about the most efficient way of doing this. // Bio Luis Ceze is Co-Founder and CEO of OctoML, which enables businesses to seamlessly deploy ML models to production, making the most of the hardware. OctoML is backed by Tiger Global, Addition, Amplify Partners, and Madrona Venture Group. Ceze is the Lazowska Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, where he has taught for 15 years. Luis co-directs the Systems and Architectures for Machine Learning lab (sampl.ai), which co-authored Apache TVM, a leading open-source ML stack for performance and portability that is used in widely deployed AI applications. Luis is also co-director of the Molecular Information Systems Lab (misl.bio), which led pioneering research at the intersection of computing and biology for IT applications such as DNA data storage. His research has been featured prominently in the media, including the New York Times, Popular Science, MIT Technology Review, and the Wall Street Journal. Ceze is a Venture Partner at Madrona Venture Group and leads their technical advisory board. 
// MLOps Jobs board https://mlops.pallet.xyz/jobs MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Landing page: https://octoml.ai/ The Boys in the Boat: Nine Americans and Their Epic Quest for Gold at the 1936 Berlin Olympics by Daniel James Brown: https://www.amazon.com/Boys-Boat-Americans-Berlin-Olympics/dp/0143125478 --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Mihail on LinkedIn: https://www.linkedin.com/in/mihaileric/ Connect with Luis on LinkedIn: https://www.linkedin.com/in/luis-ceze-50b2314/ Timestamps: [00:00] Introduction to Luis Ceze [06:28] MLOps does not exist [10:41] Semantics argument [16:25] Parallel programming standpoint [18:09] TVM [22:51] Optimizations [24:18] TVM in the ecosystem [27:10] OctoML's further step [30:42] Value chain [33:58] Mature players [35:48] Talking to SRE's and Machine Learning Engineers [36:32] Building OctoML [40:20] My Octopus Teacher [42:15] Environmental effects of Sustainable Machine Learning [44:50] Bridging the gap from OctoML to biological mechanisms [50:02] Programmability [57:13] Academia making the impact [59:40] Rapid fire questions [1:03:39] Wrap up

Argos
Luizen in de Pels: Misha Wessel en Thomas Blom

Argos

Play Episode Listen Later Aug 6, 2022 52:30


In Argos's summer series 'Luizen in de pels' ('fleas in the fur'), fellow investigative journalists talk about their work, their scoops and their methods. Today's guests are Misha Wessel and Thomas Blom, who together make investigative television documentaries. The duo has made revealing portraits of former prime ministers Ruud Lubbers and Wim Kok, as well as a documentary about doping in the 1998 Tour de France, when Cees Priem's TVM team was pulled out of the race. One of their latest documentaries is about the hunt for the vanished billions of Muammar Gaddafi, the Libyan dictator who fell in 2011. Shortly before his fall he shipped off no less than 12.5 billion in cash that was never recovered. In the film, rival teams of bounty hunters search for the money. 'The Hunt for Gaddafi's Billions' was nominated for an Emmy Award and won them a prestigious Rockie Award for investigative work.

DE GROTE PLAAT
Tourkoorts met Bart Voskamp

DE GROTE PLAAT

Play Episode Listen Later Jul 6, 2022 104:14


The Danish opening weekend was an amuse-bouche for the books, with the world's two best sprinters in the leading roles. As a first course, Tuesday added the impressive victory of a flying Fleming. The 2022 Tour looks set to be a surprisingly delicious eight-course dinner. Santé! Our table guest is Bart Voskamp, who won at least one Tour stage but perhaps had a right to two. In the storied TVM jersey he experienced the dark cycling pages of the 1990s up close, and he is the inventor of the 'klak cabriolet'. With the Wageningen native we talk about his adventures in La Grande Boucle, about the zipper on the front of Ganna's speed suit, about his EPO confession and the specter of doping: is it gone now, or invisibly present? Naturally we look back on the majestic Danish Grand Départ, muse about how things should proceed for Dylan Groenewegen and Fabio Jakobsen, and look ahead to the coming stages. With what tactics, and when, will Jumbo-Visma go to battle to beat Pogaçar? Is MVDP ready for the cobblestone stage and the coming puncheur finales? And is Roglic already having nightmares about La Planche des Belles Filles? There is also plenty of summery music from, among others, Rotterdam's Tramhaus, Utrecht's Cavolo Negro and the French artist Carla Blanc. And once again there are fine prizes to be won, including a nutrition package from Enervit and a voucher for a Shimano Service Center. On y va! Check out the new Isaac Element that John rides #ChallengeTheElements https://www.isaac-cycle.com/nl_NL/cycling-gear/397/ Find a Shimano Service Center near you https://www.shimanoservicecenter.com/nl/dealers/ All the music we play on De Grote Plaat can be heard here https://open.spotify.com/playlist/2SAjRIVrHIOCKqCeFPBr15?si=8c6b69946eff40b4 See acast.com/privacy for privacy and opt-out information.

Forhjulslir
Forhjulslir live med venner: Jesper Skibby, Per Bausager & Peter Meinert

Forhjulslir

Play Episode Listen Later Apr 24, 2022 103:14


Forhjulslir is presented in partnership with Continental Dæk Danmark, main sponsor of the Tour de France start in Denmark, standard-bearer for safety on the roads and the podcast's domestique on the front.  This stage was recorded on Saturday 23 April as a live podcast in front of an audience at the Munkebjerg Hotel, where Forhjulslir with friends had been invited on stage before Stark's cycling network. It is an episode packed with high spirits and tall tales from the road. Jesper Skibby, Per Bausager and Peter Meinert share their many experiences and memories as professional riders from the 1970s through the 1990s. Hear, among other things, the story of the German cycling star Dietrich Thurau, Meinert's 1999 Tour de France on Armstrong's US Postal squad, the time Bausager went after his Belgian manager in 1979, Skibby's stage win in the 1993 Tour de France, the cowboy days of the TVM team, and much, much more.  Featuring: Per Bausager, Peter Meinert and Jesper Skibby. Host: Anders Mielke

Mass Tort News LegalCast
The Expert Whisperer with Vicki Maniatis

Mass Tort News LegalCast

Play Episode Listen Later Apr 5, 2022 38:46


Vicki J. Maniatis is a partner at Milberg Coleman Bryson Phillips Grossman who has worked on mass tort cases involving pharmaceuticals and medical devices for seventeen years. She is a frequently invited lecturer and moderator on a wide variety of pharmaceutical and mass tort cases, including Opioids, Transvaginal Mesh, Fosamax, Ortho Evra, Risperdal, Propecia, Avandia, and Onglyza, as well as several medical devices. Vicki has been appointed by state and federal judges to serve as lead counsel and on plaintiffs' steering committees. She has significant experience performing all levels of bellwether trial case-specific work-up, including plaintiff, spouse, and family member depositions; implanting, explanting, and treating physician depositions; and sales representative and expert depositions, for over 30 cases in several mass torts, including the TVM, Mirena, and Propecia cases. Vicki is a founding member of Mass Tort Med School, an annual medical seminar for plaintiffs' attorneys that offers numerous physician speakers and cutting-edge medical issues. In May 2022, along with the Trial Lawyers of Puerto Rico, Mass Tort Med School is hosting Mass Torts Puerto Rico, a first-of-its-kind program where attorneys will have the opportunity to learn from and connect with world-class trial lawyers and experts; the Mass Tort Med School program will be bigger and better than ever. Remember to subscribe and follow us on social media… LinkedIn: https://www.linkedin.com/company/mass-tort-news Twitter: https://www.twitter.com/masstortnewsorg Facebook: https://www.facebook.com/masstortnews.org

Task Force 7 Cyber Security Radio
Ep. 202: Steps to Mitigate the Risk of the Log4J Vulnerability

Task Force 7 Cyber Security Radio

Play Episode Listen Later Dec 20, 2021 42:57


Financial Sector CISO Raj Badhwar joins co-host Andy Bonillo on Episode #202 of Task Force 7 Radio to discuss how to mitigate exposure of the Log4J vulnerability. Raj discusses the importance of zero trust implementations, API security, and good security hygiene to help your organization manage the risk of ransomware and vulnerabilities like Log4J. We ended the show with Raj talking about his recently authored books, the one book he is currently authoring, and his advice to security executives to manage up and down. All this and much more on Episode #202 of Task Force 7 Radio.

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis
OctoML scores $28M to go to market with open source Apache TVM, a de facto standard for MLOps. Backstage chat with CEO Luis Ceze

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis

Play Episode Listen Later Mar 17, 2021 32:57


Machine learning operations, or MLOps, is the art and science of taking machine learning models from the data science lab to production. It's been a hot topic for the last couple of years, and for good reason: going from innovation to scalability and repeatability is the hallmark of generating business value, and MLOps represents precisely that for machine learning. Apache TVM has become a de facto standard in MLOps, and OctoML is the company driving its commercialization and scale-up. As OctoML secured a $28 million Series B funding round, we caught up with its CEO and co-founder Luis Ceze to discuss TVM, OctoML, and MLOps. Article published on ZDNet

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis
AI chips in the real world: interoperability, constraints, cost, energy efficiency, models

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis

Play Episode Listen Later Feb 3, 2021 31:05


As it turns out, the answer to the question of how to make the best of AI hardware may not be solely, or even primarily, related to hardware. Today's episode features Determined AI CEO and Founder Evan Sparks. Sparks is a PhD veteran of Berkeley's AMPLab with a long track record of accurate predictions in the chip market. We talk about an interoperability layer for disparate hardware stacks; ONNX and TVM, two ways to solve similar problems; AI constraints and energy efficiency; and infusing knowledge into models. Article published on ZDNet

Wave On | Misty Marcum
Ep6 - The Vegan Mary

Wave On | Misty Marcum

Play Episode Listen Later Jan 14, 2021 52:48


In this episode, Misty talks with The Vegan Mary about plant-based diets. Mary holds a certificate in Plant-Based Nutrition from Cornell University, as well as a Master's degree in Business. Her continuing education includes multiple food and nutrition courses at Harvard and Stanford Universities. She spent the last two decades as a marketing executive before turning her sights and her experience to vegan consultancy. Based in Oakland County, Michigan, TVM serves clients across the U.S. She shares her business story and the services in place to guide clients toward their personal goals. Her team shares a wealth of knowledge on Instagram, with recommendations for anyone new to a vegan diet. TVM approaches her coaching from a seat of love and inspiration. She also shares a special message for any entrepreneur out there seeking their passion. Local vegan restaurants in the area also get a shout out: Detroit Vegan Soul and Ale Mary's! Visit: https://www.theveganmary.co Instagram @the.veganmary Luna Moon Retail: at checkout, use luna10 for 10% off vegan, all-natural lip balms. Small batch made with only natural ingredients. --- Support this podcast: https://podcasters.spotify.com/pod/show/misty-marcum/support

The Chasing Joy Podcast
Natural Skincare & Living in the Grey(lady) with Julie Connor

The Chasing Joy Podcast

Play Episode Listen Later Jul 16, 2019 61:16


On this episode, I talk with Julie Connor about business, skincare, womanhood & so much more.   About Julie Originally from San Antonio, Texas, Nantucket has been my home for 17 years now. I just celebrated my one year wedding anniversary with my husband back in April, and I am the mother of one teen and three doodles. The idea for TVM stemmed from a bit of a crisis in life. Feeling a little lost, burnt out and overwhelmed. It actually kind of found me... I decided to open TVM to bring awareness and availability of natural, organic, and wildcrafted skin, body, and herbal wellness products that are small-batch made by small businesses owned by women.    We Talk About: How Julie came to Nantucket island How she switched gears from wedding photography to starting her own retail store on Nantucket Being a business owner and living in the grey (at the same time) Why your skin's health comes from within The importance of listening to your body Making big decisions Having a meandering path It's ok to be more than one thing Why it's hard to communicate as a woman Creating community with women Asking for help Skincare Why being packaged in glass is so important How Julie sources her products The importance of a nighttime ritual   Themes: living in the grey It's ok to be more than one thing following your feminine intuition making big decisions   Connect with Julie On instagram: https://www.instagram.com/theverdantmaiden/   Connect with Me On instagram: instagram.com/init4thelongrunblog On the blog: http://init4thelongrun.com   Join the Joy Squad Joy Squad Private Facebook Group https://www.facebook.com/groups/thejoysquad/ Chasing Joy Podcast Instagram https://www.instagram.com/chasingjoypodcast/   Beekeepers Naturals Propolis: https://beekeepersnaturals.com/collections/all/products/propolis-spray Use code CHASINGJOY15