Podcasts about neox


  • 44 podcasts
  • 79 episodes
  • 1h 10m average duration
  • Infrequent episodes
  • Latest episode: Dec 8, 2024
Popularity of "neox" by year, 2017–2024 (chart)


Best podcasts about neox

Latest podcast episodes about neox

Chiste Interno
Episodio 59 - Galder Varas

Chiste Interno

Dec 8, 2024 · 95:34


Access full episodes and exclusive content at chisteinterno.com and patreon.com/chisteinterno. Episode 59 - Galder Varas. Galder Varas is a Spanish comedian and writer known for his skill at crowd work, that is, improvising with the audience. Born in Bilbao and raised in Alicante, Galder has worked as a writer for production companies and TV channels such as "Comedy Central", "La Resistencia", and "Neox", among others, and, thanks to the success of his comedy on social media, he is touring with his show "Esto no es un show", which combines stand-up with improvisation and audience interaction. In our conversation we talk about his "Esto no es un show" tour, artificial intelligence, the housing crisis in the United States, using social media to boost his career, his experiences as a writer, and how comedy served as a refuge during a difficult childhood. Thanks to Galder for visiting Chiste Interno! Become a member and enjoy full episodes and extra content on Patreon: patreon.com/chisteinterno/membership Chiste Interno is: Oswaldo Graziani / Creation, Hosting and Executive Production Adrián Salas / Production, Editing and Music Pedro Graterol / Community and Content Katherine Miranda / Production Assistance Yamn Milán / Long-form Editor Ricardo Carmona / Short-form Editor Yxa Fuentes / Writing Astro Studio / Recording Studio chisteinterno.com TIMESTAMPS 0:00 | Galder's tour and his experiences with international audiences 30:00 | Galder's childhood and his beginnings as a comedian 59:00 | The importance of making friends in comedy 1:10:00 | Galder's social media career and the challenges of a comedy career and mental health

The Struggle Climbing Show
Carlo Traversi: Mastering Movement, Getting Better Without Being Stronger, V16 and Beyond, and Seeking Struggle

The Struggle Climbing Show

Jun 19, 2024 · 115:56


Elite climber Carlo Traversi shares his struggles and breakthroughs in Training, Nutrition, Tactics, and Mental Game - Bonus Eps and Full Videos (FREE TRIAL!): patreon.com/thestruggleclimbingshow - CHAPTERS: Struggle: 0:15:16 Training: 0:25:58 Nutrition: 0:44:50 Tactics: 0:55:32 Mental Game: 1:16:54 Purpose: 1:29:47 - BIG THANKS TO THE AMAZING SPONSORS OF THE STRUGGLE WHO LOVE ROCK CLIMBING AS MUCH AS YOU DO: Petzl: Check out the new NEOX belay device at your local gear shop, and learn more at Petzl.com SCARPA: Whether you're a climber, trail runner, skier, or hiker, SCARPA offers an array of adventure footwear for the adventure seeker in you, with a commitment to sustainability. Shop the whole collection at SCARPA.com. SCARPA, No Place Too Far. Boulder Bears: Taste like candy, kick like coffee! Each caffeinated gummy bear contains collagen and 20mg of caffeine so you can take care of your tendons while dialing in the perfect level of boost and focus for your sesh. Plus, they're crazy delicious. Score a free travel pack plus 15% off using code STRUGGLE! Crimpd: The absolute best tool for self-coached climbers to stay on track with training. Visit Crimpd to download the app for FREE and take your training to the next level. And check out ALL the show's awesome sponsors and exclusive deals at thestruggleclimbingshow.com/deals - Follow along on Instagram @thestruggleclimbingshow and YouTube /@thestruggleclimbingshow - The Struggle is carbon-neutral in partnership with The Honnold Foundation, whose mission is to promote solar energy for a more equitable world. - This show is produced and hosted by Ryan Devlin, and edited by Glen Walker. The Struggle is a proud member of the Plug Tone Audio Collective, a diverse group of the best, most impactful podcasts in the outdoor industry. - The struggle makes us stronger! I hope your training and climbing are going great. - And now here are some buzzwords to help the almighty algorithm get this show in front of people who love to climb: rock climbing, rock climber, climbing, climber, bouldering, sport climbing, gym climbing, how to rock climb, donuts are amazing. Okay, whew, that's done. But hey, if you're a human that's actually reading this, and if you love this show (and love to climb), would you think about sharing this episode with a climber friend of yours? And shout it out on your socials? I'll send you a sticker for doing it. Just shoot me a message on IG – thanks so much!

Horny Report
Horny Report 333

Horny Report

Jan 12, 2024 · 137:02


Recuerdo Aeroestático, Salmancito V-2, Mafia RobaSillas, Furros nigerianos, UkroBulldozer, Camel Year, “El Canicón”, Maestros Modernos , Futuro subnormal, Titan Trust, Tandori Hoholchuvk, Coge-pingüinos, Chencho Lavezzi, Naked Attraction, Modi snorkell, ChatGPNein Nein y mucho más. ENLACES Recuerdo Aeroestático https://www.larazon.es/internacional/china-envia-taiwan-5-globos-aerostaticos-como-recuerdo-que-puede-haber-guerra_2024011265a0d837872b8200012b6a28.html Salmancito V2 https://apnews.com/article/germany-saudi-weapons-exports-149e12a6599187a8d7498f6d7d63ffff AlfonsoXIII-Eipsteinismo https://www.dailymail.co.uk/news/article-12939665/Paedophile-financier-Jeffrey-Epstein-secretly-recorded-sex-tapes-Prince-Andrew-Richard-Branson-Bill-Clinton-latest-unsealed-documents-claim-Donald-Trump-accused-having-sex-girls.html PiesTruchos https://www.msn.com/es-es/noticias/virales/ins%C3%B3lito-en-argentina-los-memes-se-burlan-de-los-pies-de-milei/ss-AA1mJM6b?ocid=msedgdhp&pc=EDGEDB&cvid=ee57a882a28d4751a7673aa839e674aa&ei=47#interstitial=2 Enemigo Principal https://borneobulletin.com.bn/kim-calls-south-korea-principal-enemy/ Icono Stalin https://esrt.press/actualidad/494972-video-icono-stalin-catedral-georgia Hipocresia LevantaZarpas https://www.msn.com/es-mx/noticias/mundo/australia-dar%C3%A1-penas-de-c%C3%A1rcel-a-quien-haga-el-saludo-nazi-en-p%C3%BAblico/ar-AA1mBxrL Futbol irlandés https://www.tipperarylive.ie/news/home/1389380/tipperary-football-club-totally-shocked-after-player-shot-in-arm-during-match.html Grandes leyendas del deporte judio https://aurora-israel.co.il/se-inauguraron-los-juegos-panamericanos-macabeos-con-presencial-del-presidente-argentino/ Guerra de pandillas https://spanish.almanar.com.lb/900605 Hunter Inocente https://www.infobae.com/america/agencias/2024/01/12/hunter-biden-se-declara-no-culpable-de-nueve-cargos-federales-por-evasion-de-impuestos/ Gas Nitrogeno https://www.japantimes.co.jp/news/2024/01/11/world/crime-legal/us-nitrogen-gas-execution/ Snaprrushchat https://www.thestar.com.my/tech/tech-news/2024/01/11/man-used-snapchat-to-sexually-abuse-11-year-old-and-coerce-her-into-his-home-us-feds-say Patrullas clonadas https://www.infobae.com/mexico/2024/01/10/asi-son-las-camionetas-con-las-que-el-cjng-se-hace-pasar-por-elementos-de-la-guardia-nacional/ Microplasticos polukros https://www.publico.es/sociedad/alerta-medioambiental-costa-gallega-aparicion-toneladas-microplasticos.html#analytics-noticia:contenido-enlace Diplomáticos pobres https://www.premiumtimesng.com/news/657697-naira-depreciation-ridicules-increased-budgetary-allocation-for-nigerias-foreign-missions.html Hosteleros Proxenetas https://www.20minutos.es/noticia/5208411/0/detienen-un-hombre-su-madre-acusados-prostituir-una-mujer-un-negocio-hosteleria/ Canal Seco https://www.20minutos.es/noticia/5208188/0/maersk-recurre-ferrocarril-ante-las-restricciones-por-sequia-canal-panama/ Retornators Juzgadas https://www.lavozdegalicia.es/noticia/espana/2024/01/11/pedraz-abre-juicio-dos-espanolas-unieron-estado-islamico/00031704990166733850761.htm Peluditos no comestibles https://www.africanews.com/2024/01/10/dog-meat-production-and-sales-will-soon-become-illegal-in-south-korea/ Ministro jovencito https://aurora-israel.co.il/macron-escoge-a-gabriel-attal-de-origen-judio-como-su-nuevo-primer-ministro/ Ronda rutinaria de despidos 
https://www.elespanol.com/invertia/mis-finanzas/fondos-de-inversion/20240109/fondo-inversion-blackrock-despedira-trabajadores-proximos-dias/823668110_0.html#:~:text=El%20fondo%20de%20inversiones%20BlackRock,fuentes%20conocedoras%20de%20los%20planes. Furros nigerianos https://www.africanews.com/2023/12/11/nigeria-dogs-walk-the-runway-in-traditional-costumes-at-lagos-carnival/ Patrocinadores de guerra https://es.news-front.su/2024/01/12/ucrania-incluye-a-subway-en-su-lista-de-patrocinadores-de-la-guerra/ Problemas educativos https://dailytrust.com/concerns-over-poor-learning-in-nigerian-public-shools Divorcio Real https://www.msn.com/es-es/estilo/familia/al-descubierto-el-pastizal-que-felipe-vi-le-ofrece-a-letizia-para-su-divorcio/ar-AA1mBTc5?ocid=msedgdhp&pc=EDGEDB&cvid=d05f2ee5350d4c3bb7d4a3124cc6d3c6&ei=39 Cerbero Necky https://www.madridiario.es/detenidas-siete-personas-40-robos-region Mafia RobaSillas https://www.elmundo.es/madrid/2024/01/11/659e9584fdddff69278b45a7.html Skulls & Bones lunar https://www.foxweather.com/earth-space/human-remains-launching-to-moon Rey Mono (furro chino) https://metro.co.uk/2024/01/11/bloke-paid-1-000-a-month-dress-monkey-king-fed-bananas-20101691/?ico=mosaic_news NarcoRégimen https://www.msn.com/es-ar/noticias/other/para-guillermo-moreno-hay-un-v%C3%ADnculo-entre-el-narcotr%C3%A1fico-y-las-torres-de-puerto-norte-de-rosario/ar-AA1mPMgv Vetos noruegos https://cronicaglobal.elespanol.com/business/20240108/el-mayor-fondo-soberano-mundo-socio-telefonica/822917712_0.html Chencho Lavezzi https://www.20minutos.es/deportes/noticia/5207341/0/lavezzi-episodio-tijeras-escuchar-voces/ Roi du Trocadero https://www.leparisien.fr/faits-divers/paris-sami-le-roi-du-trocadero-a-t-il-mis-des-enfants-des-rues-sous-sa-coupe-pour-les-forcer-a-voler-11-01-2024-B2A4XDKLEFGOJPLD6752L3Q6QU.php Saludos Romanos https://www.elespanol.com/mundo/europa/20240111/apologia-nazi-historia-italia-meloni-escuda-vacio-legal-no-condenar-saludo-fascista/823918122_0.html Espia anglo https://spanish.almanar.com.lb/899131 Gotham City research https://as.com/actualidad/economia/que-es-gotham-city-research-el-hedge-fund-que-ha-senalado-a-grifols-por-sus-acciones-en-el-ibex-35-n/ VOXrosos https://www.infobae.com/espana/2024/01/12/vox-crece-en-morosos-la-mitad-de-sus-afiliados-ha-dejado-de-pagar-la-cuota-mensual/ ChatGPNein Nein https://www.volkswagen-group.com/en/articles/world-premiere-at-ces-volkswagen-integrates-chatgpt-into-its-vehicles-18052https://www.volkswagen-group.com/en/articles/world-premiere-at-ces-volkswagen-integrates-chatgpt-into-its-vehicles-18052 BenzoDesigualdad https://efe.com/salud/2024-01-11/mujeres-ansioliticos-estudio/ Trafico de Residuos https://efe.com/medio-ambiente/2024-01-11/trafico-residuos-basura-ilegal-francia/ Punami https://iharare.com/woman-flashes-punani-to-stop-brothers-arrest-in-western-cape/ Neox escritor https://www.goodmorningamerica.com/culture/story/keanu-reeves-announces-new-book-106237321 EcuatoRisk https://www.infobae.com/mexico/2024/01/11/cartel-de-sinaloa-vs-cjng-quien-tiene-mayor-presencia-en-ecuador/ Talibos antivacunas https://borneobulletin.com.bn/pakistani-taliban-claims-responsibility-for-anti-polio-campaign-bombing/ Modelo Bukele https://www.infobae.com/america/america-latina/2024/01/12/noboa-presento-sus-carceles-con-el-modelo-bukele-mientras-siguen-los-motines-en-ecuador/ Paul McKenzie https://www.africanews.com/2024/01/10/kenyan-court-charge-doomsday-cult-leader-within-2-weeks-or-we-release-him-on-our-terms/ MK Taylor 
https://actualidad-rt.com/actualidad/495359-pentagono-taylor-swift-operaciones-psicologicas Coge-pingüinos https://tn.com.ar/politica/2024/01/06/javier-milei-viajo-a-la-antartida-para-participar-de-un-estudio-de-impacto-ambiental-en-base-marambio/#:~:text=Javier%20Milei%20viaj%C3%B3%20a%20la%20Ant%C3%A1rtida%20para%20participar,en%20Base%20Marambio.%20Tambi%C3%A9n%20visitar%C3%A1%20la%20Base%20Esperanza. Gasoducto agujereado https://mpr21.info/nuevo-sabotaje-contra-un-gasoducto-en-alemania/ Inflacion nigeriana https://allafrica.com/stories/202401110019.html Peor que Al Jazeera https://aurora-israel.co.il/indignacion-de-organizaciones-judias-y-proisraelies-con-la-television-catalana/ Salmancito Presidente https://www.elcorreogallego.es/deportes/2024/01/09/fondo-inversion-arabia-saudi-compra-96687717.html CryptoFondos Bro https://www.elperiodico.com/es/economia/20240111/eeuu-autoriza-primer-fondo-asociado-96757012 El Toba https://www.moncloa.com/2024/01/09/frutero-valdeavero-corrupcion-menores-2372116/ Chatarra ruso-ucraniana https://actualidad-rt.com/actualidad/495310-ecuador-entregar-eeuu-equipo-militar 5000 pavitos https://iharare.com/man-wins-us5000-in-damages-after-wife-married-two-husbands/ Titan Trust https://www.thisdaylive.com/index.php/2024/01/10/breaking-cbn-sacks-boards-of-titan-trust-union-keystone-polaris-banks Pesos truchos https://es.news-front.su/2024/01/12/el-banco-central-de-argentina-aprueba-la-emision-de-billetes-de-10-000-y-20-000-pesos/ Huachicoleo chatarrero https://allafrica.com/stories/202401110005.html Pitillismo infantil https://www.20minutos.es/noticia/5207193/0/pueblo-portugues-donde-los-ninos-fuman-cigarros-que-les-traen-los-reyes-magos/ Hijo de puta mentiroso https://actualidad-rt.com/actualidad/495376-fmi-sugiere-hacer-vocero-milei-acuerdo Subnormalidad viral https://www.msn.com/es-es/noticias/virales/se-hace-viral-un-cuadro-de-hace-361-a%C3%B1os-con-un-ni%C3%B1o-vistiendo-zapatillas-nike-logo-incluido/ar-AA1ez0ai?ocid=msedgntp&pc=EDGEDB&cvid=363c4832dc994f7ca3cef97870c862ff&ei=44 Corrido policial https://www.infobae.com/mexico/2024/01/11/al-estilo-de-los-grandes-narcos-titular-de-la-ssp-en-aguascalientes-crea-su-propio-corrido-tumbado/ Tomer Beer https://aurora-israel.co.il/la-nueva-cerveza-tomer-creada-por-su-familia-en-honor-a-un-soldado-israeli-que-murio-en-el-7-de-octubre/ Magos kenianos https://iharare.com/gullible-harare-man-loses-usd100000-to-kenyan-magicians/ Roble Vlad https://www.newsweek.com/russian-doctor-putin-health-diagnosis-chukotka-visit-1859506 Poceros Ukros https://avia-es.com/news/kiev-masshtabno-zalilo-fekaliyami-iz-za-proryva-kanalizacii Sardá salido https://www.mundodeportivo.com/elotromundo/television/20240110/1002166797/cristina-tarrega-explota-xavier-sarda-tardear-te-crees-tienes-3-anos-dct.html Jehovista sacacuartos https://www.moncloa.com/2024/01/10/tedh-testigo-jehova-transfusiones-2374336/ Titulos Trucha https://allafrica.com/stories/202401110003.html Diplomacia sueca https://mpr21.info/suecia-ataca-a-mali-por-una-pequena-gran-venganza/ Cancion del año https://zoomsouthafrica.com/murder-over-beats-kzn-man-killed-over-ukhozi-fms-song-of-the-year/ OjosFraga https://canarias-semanal.org/art/35504/un-nieto-de-fraga-iribarne-da-el-chaquetazo-y-se-presenta-por-vox-en-galicia El Cóndor , El Bravo, El Pollo y El Cholo Iván https://www.infobae.com/mexico/2024/01/11/quienes-eran-los-escoltas-de-el-chapo-guzman-y-que-paso-con-ellos/ BlackHawk de la ONU https://allafrica.com/stories/202401110004.html Furia Trans 
https://www.moncloa.com/2024/01/10/colectivos-trans-ana-redondo-2373527/ Futuro subnormal https://www.20minutos.es/noticia/5208497/0/detenidos-22-activistas-futuro-vegetal-por-lanzar-pintura-congreso-los-diputados-causar-danos-obras-museo-prado/ Escasez Pirata https://www.larazon.es/internacional/sigue-ocaso-royal-navy-reino-unido-obligado-dar-baja-dos-fragatas-falta-reclutas_20240107659aff8f67d53e0001d4e6e1.html?outputType=amp Asedio terrorista https://aurora-israel.co.il/argentina-detienen-a-tres-sospechosos-de-integrar-una-celula-terrorista/ Rey FollaAnglos https://www.newzimbabwe.com/watch-zbc-presenter-blames-colonial-era-abuse-on-ndebele-king-lobengula-who-loved-sugar-live-on-tv/ “El Canicón” https://www.infobae.com/mexico/2024/01/10/cae-un-hombre-ligado-al-cjng-en-edomex/ Emisiones clasistas https://mpr21.info/el-proletario-seat-ibiza-del-2002-contamina-menos-que-el-lujoso-bmw-g05-del-2023-y-uno-de-los-dos-no-puede-entrar-a-madrid/ Naked Attraction https://www.20minutos.es/television/naked-attraction-programa-citas-marta-flich-que-cuerpo-perfecto-consigue-un-encuentro-con-los-concursantes-5208519/ Tolerancia Cero https://www.thehindu.com/news/national/zero-tolerance-policy-on-sexual-offences-against-students-in-schools-says-odisha-govt/article67730354.ece Maestros Modernos https://indianexpress.com/article/cities/lucknow/teachers-at-up-madrasas-set-to-intensify-protest-9104216/ Pareja Interreligiosa https://indianexpress.com/article/cities/bangalore/six-men-barge-into-karnataka-hotel-attack-interfaith-couple-9104892/ Satelite Espia https://www.thehindu.com/news/international/japan-launches-an-intelligence-gathering-satellite-to-watch-for-north-korean-missiles/article67733567.ece Bhogi sin Humo https://www.thehindu.com/news/cities/Coimbatore/coimbatore-collector-appeals-to-residents-to-celebrate-a-smokeless-bhogi-this-year/article67733555.ece Politica Belica https://www.koreaherald.com/view.php?ud=20240110000629 Niño Buda https://www.dawn.com/news/1804800/nepal-police-arrest-buddha-boy-over-disappearances-rape Victoria Absoluta https://borneobulletin.com.bn/bangladeshs-hasina-celebrates-absolute-victory-after-polls-without-opposition/ Huelga Policial https://es.euronews.com/2024/01/11/al-menos-15-muertos-en-disturbios-en-papua-nueva-guinea-durante-una-huelga-de-fuerzas-de-s Tension Nuclear https://english.kyodonews.net/news/2024/01/79c47d437001-japan-quake-stressed-nuclear-plant-beyond-design-limit-panel.html El clon de Macron https://www.libertaddigital.com/internacional/europa/2024-01-10/gabriel-attal-el-clon-de-macron-con-el-que-el-presidente-quiere-revitalizar-su-mandato-y-senalar-a-un-posible-sucesor-7085647/ Meloni Gramsciana https://www.elespanol.com/mundo/europa/20240104/gobierno-meloni-colocara-placa-recordar-pensador-comunista-gramsci/822418028_0.html Bloqueo Tractoril https://www.elespanol.com/mundo/europa/20240109/agricultores-pie-guerra-recortes-scholz-bloquean-alemania-tractores/823418117_0.html Plan Maestro https://www.lavozdegalicia.es/noticia/internacional/2024/01/12/afd-debate-plan-expulsar-millones-extranjeros-alemania/0003_202401G12P18992.htm Sarcofago Real https://www.vanidades.com/realeza/como-es-el-excentrico-sarcofago-de-la-reina-margarita-ii-de-dinamarca-donde-sera-sepultada-sin-su-esposo Embajadora Trans https://www.libertaddigital.com/internacional/europa/2024-01-09/feministas-estallan-nombramiento-munroe-bergdorf-polemico-varon-trans-como-embajadora-de-las-mujeres-ante-la-onu-7085358/ Tandori Hohol 
https://www.standard.co.uk/news/politics/rishi-sunak-ukraine-volodymyr-zelensky-military-aid-russian-putin-b1131874.html Principales Amenazas https://www.larazon.es/internacional/america/mayor-amenaza-que-sufrira-estados-unidos-2024-segun-expertos_2024011165a0077dcf867300018ad634.html Salman Preocupado https://www.arabnews.com/node/2440271/saudi-arabia Camel Year https://www.arabnews.com/node/2438741/saudi-arabia AleXiandria https://www.thestar.com.my/news/world/2024/01/11/china-built-container-terminal-starts-operation-in-egypt039s-alexandria Candidato Baleado https://www.thestar.com.my/news/world/2024/01/11/pakistan-election-candidate-shot-dead-while-campaigning Hijab Incorrecto https://www.dawn.com/news/1804806/afghan-women-detained-over-improper-hijab First Quantum https://www.thestar.com.my/news/world/2024/01/10/hundreds-protest-at-first-quantum039s-panama-copper-mine Ataque Simulado https://www.koreaherald.com/view.php?ud=20240111000613 Mineria Submarina https://www.eleconomista.es/economia/noticias/12617039/01/24/noruega-se-lanza-a-minar-el-oceano-el-plan-de-oslo-para-independizarse-de-china-y-asegurar-una-nueva-era.html Kimfluencer https://www.thescottishsun.co.uk/news/11781061/kim-jong-mystery-glam-influencer/ Doritoflacion https://www.elperiodico.com/es/cata-mayor/20240110/patatas-fritas-gusanitos-doritos-subida-precio-96725603 Tronco Martinez https://www.marca.com/tiramillas/musica/2024/01/10/659ef54ce2704e5c578b45ca.html TruchoYemenismo https://esrt.site/actualidad/495257-dura-respuesta-lider-huties-yemenies Desempleo juvenil https://es.head-post.com/index.php/2024/01/09/el-desempleo-juvenil-en-espana-roza-el-28-la-tasa-mas-alta-de-los-paises-europeos-eurostat/ Hoholitos por el mundo https://avia-es.com/news/nikto-ne-uydyot-ot-tck-ukrainskim-bezhencev-nachali-otlavlivat-na-ulicah-varshavy-i-vruchat Batallon Africano https://mpr21.info/el-ejercito-ruso-crea-un-batallon-africano-para-reemplazar-a-wagner/ La Cortinas sola https://www.washingtonexaminer.com/news/2789694/winter-weather-havoc-gop-campaigns-before-iowa/ Tornillos sueltos https://theaircurrent.com/feed/dispatches/united-finds-loose-bolts-on-plug-doors-during-737-max-9-inspections/ Modi snorkell https://twitter.com/narendramodi/status/1742831497776951361?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1742831501740544264%7Ctwgr%5E8e510c45e8a1e8825d4300699ef145db33ee0ceb%7Ctwcon%5Es2_&ref_url=https%3A%2F%2Fwww.rt.com%2Findia%2F590274-india-maldives-diplomatic-row%2F Bulldozer Hohol https://esrt.press/actualidad/495048-asaltan-excavadora-iglesia-ortodoxa-ucrania Brasuca TierraHuequista https://oglobo.globo.com/brasil/noticia/2024/01/07/o-que-se-sabe-sobre-o-caso-do-idoso-que-morreu-ao-cair-em-buraco-de-40-metros-escavado-em-caca-ao-tesouro.ghtml FreddyMercurismo despenalizado https://www.congresocdmx.gob.mx/comsoc-congreso-cdmx-deroga-delito-peligro-contagio-codigo-penal-5065-1.html Periodista incisivo https://esrt.press/actualidad/495277-video-periodista-pregunta-hunter-biden-que-tipo-crack-favorito Frauderrush https://twitter.com/nawadapolice/status/1741024881368846534 Machu Picchu privatizado https://actualidad.rt.com/actualidad/493278-seguir-protestas-venta-entradas-machu-picchu Baterias vampiricas https://www.uco.es/ucci/es/noticias-ingles/item/4470-the-first-battery-prototype-using-hemoglobin-is-developed AngloTiktoker subnormal https://esrt.press/actualidad/495406-roba-12-autos-atrapan-exhibirlos-tiktok Bibi disfrazado 
https://www.tasnimnews.com/en/news/2024/01/05/3017034/daesh-claims-kerman-terror-attack-clues-point-to-israel Esclavismo Starbucks https://nclnet.org/national-consumers-league-sues-starbucks-alleging-coffee-giant-deceives-customers-with-claims-of-100-ethical-coffee-tea/?p=27030

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Invites are going out for AI Engineer Summit! In the meantime, we have just announced our first Actually Open AI event with Brev.dev and Langchain, Aug 26 in our SF HQ (we'll record talks for those remote). See you soon (and join the Discord)!

Special thanks to @nearcyan for helping us arrange this with the Eleuther team.

This post was on the HN frontpage for 15 hours.

As startups and even VCs hoard GPUs to attract talent, the one thing more valuable than GPUs is knowing how to use them (aka, make GPUs go brrrr). There is an incredible amount of tacit knowledge in the NLP community around training, and until Eleuther.ai came along you pretty much had to work at Google or Meta to gain that knowledge. This makes it hard for non-insiders to even do simple estimations around costing out projects - it is well known how to trade $ for GPU hours, but trading “$ for size of model” or “$ for quality of model” is less known and more valuable and full of opaque “it depends”. This is why rules of thumb for training are incredibly useful, because they cut through the noise and give you the simple 20% of knowledge that determines 80% of the outcome derived from hard earned experience.

Today's guest, Quentin Anthony from EleutherAI, is one of the top researchers in high-performance deep learning. He's one of the co-authors of Transformers Math 101, which was one of the clearest articulations of training rules of thumb. We can think of no better way to dive into training math than to have Quentin run us through a masterclass on model weights, optimizer states, gradients, activations, and how they all impact memory requirements.

The core equation you will need to know is the following:

`C ≈ τT = 6PD`

where C is the compute requirements to train a model, P is the number of parameters, and D is the size of the training dataset in tokens. This is also equal to τ, the throughput of your machine measured in FLOPs (Actual FLOPs/GPU * # of GPUs), multiplied by T, the amount of time spent training the model. Taking Chinchilla scaling at face value, you can simplify this equation to be `C = 120(P^2)`.

These laws are only true when 1000 GPUs for 1 hour costs the same as 1 GPU for 1000 hours, so it's not always that easy to make these assumptions especially when it comes to communication overhead. There's a lot more math to dive into here between training and inference, which you can listen to in the episode or read in the articles. The other interesting concept we covered is distributed training and strategies such as ZeRO and 3D parallelism. As these models have scaled, it's become impossible to fit everything in a single GPU for training and inference. We leave these advanced concepts to the end, but there's a lot of innovation happening around sharding of params, gradients, and optimizer states that you must know is happening in modern LLM training. If you have questions, you can join the Eleuther AI Discord or follow Quentin on Twitter.
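To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch of how `C ≈ τT = 6PD` gets used in practice. Every number in it (model size, GPU count, achieved throughput, hourly price) is an illustrative assumption, not a figure from the post or the episode.

```python
# Back-of-the-envelope training estimate from C = 6*P*D and C = tau*T.
# All inputs are illustrative assumptions, not numbers from the article.

P = 70e9                       # parameters
D = 20 * P                     # tokens, using a ~20 tokens-per-parameter rule of thumb
C = 6 * P * D                  # total training compute, in FLOPs

n_gpus = 512
achieved_flops_per_gpu = 150e12          # assumed *achieved* (not theoretical) FLOP/s per GPU
tau = n_gpus * achieved_flops_per_gpu    # cluster throughput, FLOP/s

T_seconds = C / tau                      # training time implied by that throughput
gpu_hours = n_gpus * T_seconds / 3600
cost = gpu_hours * 2.00                  # assumed $2 per GPU-hour

print(f"C = {C:.2e} FLOPs -> ~{T_seconds / 86400:.0f} days, ~{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f}")
```

Solving the same relation for τ instead of T (fix a calendar budget, back out the cluster you need) is the other direction these rules of thumb are meant to support.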
Show Notes
  • Transformers Math 101 Article
  • Eleuther.ai
  • GPT-NeoX 20B
  • BLOOM
  • Turing NLG
  • Mosaic
  • Oak Ridge & Frontier Supercomputer
  • Summit Supercomputer
  • Lawrence Livermore Lab
  • RWKV
  • Flash Attention
  • Stas Bekman

Timestamps
  • [00:00:00] Quentin's background and work at Eleuther.ai
  • [00:03:14] Motivation behind writing the Transformers Math 101 article
  • [00:05:58] Key equation for calculating compute requirements (tau x T = 6 x P x D)
  • [00:10:00] Difference between theoretical and actual FLOPs
  • [00:12:42] Applying the equation to estimate compute for GPT-3 training
  • [00:14:08] Expecting 115+ teraflops/sec per A100 GPU as a baseline
  • [00:15:10] Tradeoffs between Nvidia and AMD GPUs for training
  • [00:18:50] Model precision (FP32, FP16, BF16 etc.) and impact on memory
  • [00:22:00] Benefits of model quantization even with unlimited memory
  • [00:23:44] KV cache memory overhead during inference
  • [00:26:08] How optimizer memory usage is calculated
  • [00:32:03] Components of total training memory (model, optimizer, gradients, activations)
  • [00:33:47] Activation recomputation to reduce memory overhead
  • [00:38:25] Sharded optimizers like ZeRO to distribute across GPUs
  • [00:40:23] Communication operations like scatter and gather in ZeRO
  • [00:41:33] Advanced 3D parallelism techniques (data, tensor, pipeline)
  • [00:43:55] Combining 3D parallelism and sharded optimizers
  • [00:45:43] Challenges with heterogeneous clusters for distribution
  • [00:47:58] Lightning Round

Transcription

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, writer and editor of Latent Space. [00:00:20]Swyx: Hey, today we have a very special guest, Quentin Anthony from Eleuther.ai. The context for this episode is that we've been looking to cover Transformers math for a long time. And then one day in April, there's this blog post that comes out that literally is called Transformers Math 101 from Eleuther. And this is one of the most authoritative posts that I've ever seen. And I think basically on this podcast, we're trying to give people an intuition around what are the rules of thumb that are important in thinking about AI and reasoning by AI. And I don't think there's anyone more credible than the people at Eleuther or the people training actual large language models, especially on limited resources. So welcome, Quentin. [00:00:59]Quentin: Thank you. A little bit about myself is that I'm a PhD student at Ohio State University, starting my fifth year now, almost done. I started with Eleuther during the GPT-NeoX20B model. So they were getting started training that, they were having some problems scaling it. As we'll talk about, I'm sure today a lot, is that communication costs and synchronization and how do you scale up a model to hundreds of GPUs and make sure that things progress quickly is really difficult. That was really similar to my PhD work. So I jumped in and helped them on the 20B, getting that running smoothly. And then ever since then, just as new systems challenges arise, and as they move to high performance computing systems and distributed systems, I just sort of kept finding myself falling into projects and helping out there. So I've been at Eleuther for a little bit now, head engineer there now, and then finishing up my PhD and then, well, who knows where I'll go next. [00:01:48]Alessio: Awesome. What was the inspiration behind writing the article? Was it taking some of those learnings?
Obviously Eleuther is one of the most open research places out there. Is it just part of the DNA there or any fun stories there? [00:02:00]Quentin: For the motivation for writing, you very frequently see in like the DL training space, like these Twitter posts by like, for example, like Stas Bekman at Hugging Face, you'll see like a Twitter post that's like, oh, we just found this magic number and everything is like 20% faster. He's super excited, but doesn't really understand what's going on. And the same thing for us, we very frequently find that a lot of people understand the theory or maybe the fundamentals of why like AI training or inference works, but no one knows like the nitty gritty details of like, how do you get inference to actually run correctly on your machine split across two GPUs or something like that. So we sort of had all of these notes that we had accumulated and we're sort of sharing among engineers within Eleuther and we thought, well, this would really help a lot of other people. It's not really maybe appropriate for like a paper, but for something like a blog post or technical report, this would actually maybe squeeze a lot of performance out of people's hardware they're already running on. So I guess there are a lot of projects in Eleuther that we're sort of trying to share notes with people in a way that typical institutions don't. They sort of live within that institution and then you go to a different institution and they do something very similar, but without the lessons of the previous. And it's because everyone's trying to do their own special sauce with their own stack. Whereas Eleuther, we don't really have that constraint and we can just share everything to everybody. [00:03:14]Swyx: Yeah, this is a level of openness that basically very few people actually embrace. One, it's an extra effort to write things down, of course, but two, it is secret sauce and so that not many people do it. And therefore, oftentimes the only way to learn this stuff is to actually work in one of the large model labs. And so you guys are doing a lot. The only other instance where I can think of where people actually open sourced their process was Facebook's OPT. What else is similar, like sort of trade knowledge, but not formal research knowledge? [00:03:45]Quentin: I would say Bloom. So the Hugging Face Bloom project in big science and all of that, that was very open. I'd say it's the same caliber, if not more detailed than OPT. Other than that, I think there was like a doc from Microsoft on like their Turing NLG. Their paper is pretty relaxed in that it did talk about some of those challenges. Other than like OPT and Bloom and us, I can't think of any. It's a new thing. [00:04:10]Swyx: It matters that you are going for the sort of good enough rules of thumb, because I think a lot of people try to go for precision and being overly precise actually is not helpful. Right. Yes. [00:04:20]Quentin: You'll see some like statements in the blog posts that are just like, we think this is about 1.2 in our experience. And, you know, we don't go any further into detail and it would take maybe an extra month for us to chase down every single little piece of memory. But instead, like getting good enough is still helpful to people. [00:04:36]Alessio: Let's jump into it. The first part of the article, and we'll put this in the show notes so people will be following along with the post. So we don't need to read every single equation and every footnote for it. [00:04:46]Swyx: Okay. 
[00:04:46]Alessio: But the core equation here is that not the cost of compute, but the compute required to train a transformer model is roughly equal to tau times T, where like T is the, where tau is the hardware setup throughput that you have. So number of GPUs times the actual flops per GPU. And then T is the time spent. I think people can visualize that pretty easily. It's basically like how many GPUs do you have and how much do you let them run for? And the things that come to it that people have read before in the Chinchilla paper in a way, and the OpenAI scaling law is that you can then equal this to 6PD, where P is the number of parameters in the model and D is the size of the, of the dataset in tokens. So talk a little bit about how people should think about the two. I think a lot of times the focus is on tokens parameter ratio in the training dataset and people don't think as much about the actual flops per GPU, which you're going to mention later in the blog post too, in terms of how much you can get out. So how should people think about this when they're building a model and where should they go to this equation as they're starting to think about training their own transformer-based [00:05:58]Swyx: model? [00:05:58]Quentin: You touched a little bit on the fact that people usually start with the dataset. So you have some dataset that you want to train a model on. And then from there, from the 6PD, you should see, okay, I should have about six tokens per parameter. So that determines my model size thereabouts for Chinchilla Optimal. So since then we've seen that need more something like 20 or more than that to get a good quality model. But the next question that should be on your mind in terms of a systems perspective is how long is it going to take for this model to train and what kind of budget should I expect? So let's say I want some cloud instance for some amount of time and each of them will have some price attached to it. So that's where the throughput comes in. So now that you have this model, this number of parameters, you should map that to a transformer architecture and you should benchmark what throughput you get on your software stack for that type of model. So now you have your flops per second on a single GPU. And then given whatever parallelism scheme, which I'm sure we'll get into, like data parallelism or tensor parallelism or whatever else, how is that flops number going to scale to whatever number of GPUs? And then from there, you're going to get a time. And if you have a time, you have a cost. Those are like the business answers that you'll be able to get using this formula. That's why we sort of split it into the T and the throughput terms so that you can solve for one of them, which is usually get throughput, need time, and from time you get cost. In a nutshell, that's the answer. [00:07:19]Alessio: One thing that I noticed, you mentioned some of these laws are only true when a thousand GPUs for one hour cost the same as one GPU for a thousand hours, given that we have a shortage of the biggest GPUs out there. Any thoughts there on how people should prioritize this? [00:07:36]Quentin: Yeah, so I would say you should find what the minimum number of GPUs is to just fit your model first. The memory bottleneck is your biggest problem if you have a sizable model. If it's a small model, nobody cares. But most models that people care about will need to be split across multiple GPUs.
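A quick sketch of the dataset-first sizing step Quentin describes here, before the conversation turns to GPU counts. The token count is an arbitrary example, and the tokens-per-parameter ratio is just the post-Chinchilla rule of thumb mentioned above, not a fixed constant.

```python
# Dataset-first sizing, following the workflow described above (illustrative numbers).
D = 1.0e12                     # suppose we have ~1T tokens of training data
tokens_per_param = 20          # rough rule of thumb quoted in the conversation

P = D / tokens_per_param       # implied model size: ~50B parameters
C = 6 * P * D                  # compute needed to train it, via 6PD

print(f"~{P / 1e9:.0f}B parameters, ~{C:.2e} FLOPs before benchmarking throughput")
```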
So find the minimum number of GPUs to just fit your one instance of your model and then calculate how long that's going to take. If it's a reasonable amount of time, then you're done. If it takes too long, then you need to start worrying about having multiple instances of that model. I always feel like you should go with the minimum number of GPUs because the more number of GPUs that you have, the more likely it is for things to break. So I would say just find out what time is reasonable for you and then fit the number of GPUs to that and no more. Because people get greedy and they say, if I have twice the GPUs, I can get this done in half the time. And then you end up taking three times the time because everything is breaking every day. And that's when I am up at midnight trying to fix your model that's broken. [00:08:34]Swyx: We had a previous guest which has invested a lot in their framework for training these things. Would there not be an equivalent open source framework you guys would have made that would help with scaling up GPUs linearly like that? Or is this an oversimplification? [00:08:50]Quentin: Okay, yeah. So maybe I should step back. Both Mosaic and us have our own sort of software stack recipe that scales well, theoretically. But I'll get to that in a minute. Mosaic is all based off optimizer sharding. So it's based off ZeRO. So you basically perfectly split your model optimizer and your parameters and your gradients across all of the different GPUs. So your aggregate memory is number of parameters divided by number of GPUs. Same thing for optimizer and so on. Whereas we at Eleuther use a Megatron deep speed based library. And for that, it's a bit more complex. So the efficiency can be a little higher, but it's more prone to failure at the same [00:09:30]Swyx: time. [00:09:30]Quentin: So you kind of have to tune it. In both cases, getting back to like the practical case, you should be able to get linear speed up by adding more GPUs. The problem is that there are hardware failures. You tend to have problems with like maybe loss will overflow if you have too many GPUs or maybe one GPU will hang. You might have software issues. You might have synchronization issues. And that's why I'm saying practically that you should take the minimum number of GPUs that you have because those are the easier cases to debug. That make sense? [00:10:00]Swyx: Yeah. [00:10:00]Quentin: Any more detail on any specific point? [00:10:02]Swyx: Not particularly, just because we haven't actually had to debug those things. But I imagine basically there's a lot of return towards encoding these knowledge into software and not repeating it again. So it makes a ton of sense. I think Alessio had more questions before we move too far into high level, more questions on just the equation itself. I think we want to spend time on essentially, this is the central equation of figuring out compute requirements. Yeah. [00:10:25]Alessio: Another thing in it is that the computer is like the forward pass and like the backwards pass and forward is 2PD, backward is 4PD. Why it's to the ratio between the two? Can you explain that? Why is it two and four? [00:10:39]Quentin: Yeah. [00:10:40]Alessio: Why is it twice the amount? [00:10:42]Quentin: Oh, okay. Intuitively for forward pass, you're just moving, you're propagating forward the inputs through the layer. And then in the backward pass, you're doing something a little more complex than that. You're doing back propagation. 
And I don't think I can explain it intuitively enough to go into more detail on the exact [00:10:58]Swyx: numbers. Yeah. [00:10:58]Quentin: That's okay. [00:10:59]Swyx: I feel like you want to get out a whiteboard and start drawing like, you know. [00:11:02]Quentin: That's what I would normally do. [00:11:03]Swyx: Tangents and gradients. It's actually surprisingly low to do the back propagation. Honestly, that's one of the fundamental things I love about the math of deep learning so far that as I've explored it, which is, it's surprisingly efficient as compared to other, I guess, numerical methods you might be exposed to and, you know, college calculus. Yeah. [00:11:22]Alessio: And I think the other thing is that things sound simple, you know, when people go on Twitter and say, Oh, 20 is like the optimal ratio. And it's like, then it's like, well, why is that the number? And the answer is usually much, much harder, like what we're seeing right now. So I think it's a, it's a good reminder that the numbers are simple, like all the best and most popular, like math equations are like, so elegant. Obviously the proof behind that is, it's not that easy. That's always a good reminder. [00:11:52]Swyx: I want to put this equation to the test a little bit. We can do this from either GPT-3's perspective or GPT-NeoX, whatever you're more comfortable with. You have this distinction of actual flops versus theoretical flops. And a lot of times when people report the flops it took to train a model, like we just saw one in Lama 2 where the estimate is something that the amount of flops and that's, that's what we go with. So GPT-3 took a 3.14 times 10 to the power 23 flops. That is the theoretical flops. I want to get to a point where I can sort of work out if a number passes the smell test. And I wonder how to do that because I should be able to plug in this equation, right? I know that GPT-3 was trained on 300 billion tokens. I know the parameter size of 175. Is it, is it just like a 6 times 175 times 300? Like I haven't done the math, but what are the nuances here that you might want to call out? [00:12:42]Quentin: Theoretical flops is usually given from, you have a given set of hardware and this is what you expect your hardware to get. The problem is that in practice, full utilization, that's the key word, right? Because in practice, there are a lot of cases where like you're spending time waiting on data movement from like the GPU to CPU. Or for example, you might be waiting to synchronize across the different GPUs. So there's a lot of idle time basically that you're going to be spending during training. [00:13:05]Swyx: Smell tests. [00:13:06]Quentin: I don't know if I have a smell test myself, to be honest, like maybe I'll look at like what sort of flops, what you would expect on like an A100. There's sort of just an expected flops for a given GPU that everyone sort of knows what you should expect. So like for an A100, that number is somewhere between 100 and 180. T flops is what you would expect to see on an A100. For a V100, like an older GPU, it's something more like 40 to 30. So people sort of know, given the kernels that we're running for a deep learning, what sort of flops you expect. And then you sort of compare that to the theory, to the theoretical flops that people are reporting and see if that matches your expectations. [00:13:47]Swyx: Yeah. 
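Running the smell test on the GPT-3 numbers quoted just above takes one line; the per-GPU utilization check below uses an assumed theoretical peak, so treat both as a rough sketch rather than a definitive benchmark.

```python
# Smell test: does 6*P*D land near the reported GPT-3 training compute?
P = 175e9                   # GPT-3 parameters, as quoted above
D = 300e9                   # GPT-3 training tokens, as quoted above
print(f"6PD = {6 * P * D:.2e} FLOPs vs. the reported 3.14e23")   # ~3.15e23, a close match

# Utilization smell test: achieved vs. theoretical throughput on a single GPU.
achieved_tflops = 140       # whatever your framework's FLOP counter reports (assumed here)
theoretical_tflops = 312    # assumed peak for the card; check your GPU's datasheet
print(f"utilization = {achieved_tflops / theoretical_tflops:.0%}")
```

If the first number is off by an order of magnitude, or the achieved figure falls well below the ~115 TFLOP/s floor discussed next, something in the setup deserves a closer look.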
[00:13:47]Alessio: And in the article you mentioned for the A100, like if you're seeing below 115 teraflops a second, there's something wrong with your model or hardware. How did you get to 115? Is it just, you know, production observability and like you've seen over months and months and months that like that's the baseline or how do you come up with the numbers like that? Yeah. [00:14:08]Quentin: For a number like that, we basically, we compared a lot of different frameworks. So like I mentioned before, Mosaic has their own framework and we have our own framework. They all have their own flop counters too, right? And we saw across a bunch of different hardware configurations that if you tune things correctly, you should be getting above 115 in pretty much all cases. So like there are some cases where things are tuned poorly or your system is a little weird, but we've never been able to get a new system and not been able to get above [00:14:35]Swyx: 115. [00:14:35]Quentin: If something is below 115, you have something really wrong in your software. But that's really all it is, is just comparing across software stacks and hardware systems. [00:14:44]Alessio: What about different GPUs? We had George Hotz on the podcast and he talked about AMD cards and how in theory their flops should be much better than some Nvidia cards, but the reality is like the CUDA runtime makes up for it. How should people think about improving that? You know, like do you see, okay, the A100 is like 115 teraflops. I'd rather just stick with this than try and figure out all the kinks of like a better AMD card or any thoughts there? [00:15:10]Swyx: Right. [00:15:10]Quentin: Well, that's sort of touching on developer time, right? And which ends up being more expensive because at the end of the day, the AMD and ROCm software stack has a long way to go. I would say most things run there, not particularly efficiently, but you're going to have weird bugs that no one has encountered before. One of the big pluses of going with the Nvidia and PyTorch stack is that there are thousands of GitHub issues with everyone facing the same problem as you and resolving them quickly and in an open source way is probably the biggest benefit of going with the Nvidia software stack right now. AMD has about the same hardware, software, not so much. And they haven't quite got the momentum in the open source realm, for example, to get close. Like something, for example, like Flash Attention, it's spread to more Nvidia GPU types than it has like to AMD at all. And waiting on those latest and greatest features to reach AMD is something that's prohibitive to a lot of people, but it's getting there. I'm running a lot of experiments on AMD right now because it's sort of reached the government lab supercomputers now. And so a lot of experiments are going there and it will catch up, I'd say within a few [00:16:14]Swyx: years. [00:16:14]Quentin: Awesome. [00:16:15]Swyx: Maybe just talk about what's available from the government labs and I heard the original, the origin of Eleuther started with a grant for TPUs. Is that right? [00:16:24]Quentin: Yes, that was a little before me, but there was a lot of just like getting a grabbing a Google Cloud or TPU pod or something like that is a lot of the original TPU work on Mesh TensorFlow, which is like now like an ancient distributed deep learning library. [00:16:36]Quentin: Eleuther got a grant, an INCITE grant with Oak Ridge last year, and we got quite a bit of Summit Compute.
So Summit is a V100 based supercomputer. It's got some weirdness to it. So there's six V100 GPUs per node. And we did a lot of experiments there. It's a challenging system to scale to because your interconnect across nodes is kind of slow in comparison to within a node, which I think we'll get to later. But now Oak Ridge has moved to AMD. So the next grant that we're trying to work towards is on Frontier, which has four AMD GPUs per node and again has a slower interconnect across nodes. So we get all of those new challenges again to try and overlap things. But that's just like you have Oak Ridge, you have Lawrence Livermore. There's a lot of government supercomputers that you can apply for compute towards like open researchers too. It's sort of a new thing. I think we're one of the first like us and like LAION, for example, is another organization that's getting compute from government providers and such. They're all moving to AMD as well. And we look forward to exploring that with them. [00:17:42]Swyx: Yeah. [00:17:43]Alessio: The computing is definitely, it used to be easy to find the GPU. Now, not as much. So you got to find them anywhere. [00:17:49]Swyx: Yes. [00:17:49]Alessio: Let's talk about memory requirements a little bit. So you touched on this a little bit before and just before this, we had Tri Dao on the podcast for FlashAttention and memory speed was one of our main focuses, but this time we're being bound by actually memory size, like the VRAM itself, when it comes to model weights and parameters and optimizer states and all that fun stuff. Let's go through this and Sean, we can, we can take turns. There's a lot to cover here, but maybe we can start from model weights. So one topic we covered a lot in the past is precision and quantization. That's one of the obviously main driver of memory. You mentioned most of, in the article, most transformers are mixed precision, like FP16 plus FP32 or BF16 FP32, and they can be cast down. And you mentioned up to like INT8 without a lot of performance hit. So let's start there and maybe run people through some of the maths and like the byte per parameter ratio and different precision. [00:18:50]Swyx: Sure. [00:18:51]Quentin: So when I started deep learning, it was all FP32. You have 32 bits, four bytes per parameter. Things were pretty simple. You didn't have to do any loss scaling at all. But the problem was that you didn't get a whole lot of flops once NVIDIA moved to V100s and introduced Tensor cores. So Tensor cores do all of their computation at FP16 precision. So you're kind of throwing all of those away if you're doing things in FP32. So once the hardware moved to V100, the software moved to like mixed precision and APEX and AMP and such. And one counterintuitive part of mixed precision is that you actually require more memory when you're trained because you need an FP16 copy of the weights and an FP32 copy of the weights. The FP16 copy is where you're doing like your actual computation on the Tensor cores. So you get maybe it's not uncommon to get double the throughput that you would see before in FP32. And then you at each step update that FP32 copy with the FP16 update. So both need to be stored in memory. The problem with that is that FP16 is very precise but doesn't have a whole lot of range, [00:19:55]Swyx: dynamic range. [00:19:55]Quentin: So you have a really big mantissa if you're thinking in terms of like floating point representations, not a whole lot of exponent.
So BF16 puts more of the bits from the mantissa back to the exponent. So you have a much higher range and a lower precision. And that gets rid of all of this instability problem and loss scaling and such that anyone familiar with debugging knows how unstable it can be, especially for large scale training. And BF16 does away with a lot of that, but it's only supported on A100s. So you see the back and forth between hardware and software. So every time NVIDIA introduces some new Tensor cores or BF16 support or something like that, the software adapts to support it and then training adapts. And then now you mentioned like INT8 and such. Now we're seeing that you have some model that's been trained in FP16, FP32, whatever else. And then now you want to, with minimal loss and accuracy, quantize that model into a smaller representation like INT8 and now like INT4 and things like that and see what you can get away with. And then since deep learning is such like a stochastic problem that a lot of those last bits of precision don't really matter is what we're finding. And I expect that to continue. [00:21:06]Alessio: And so just to put some numbers to it, when you have an FP32, you need four bytes per parameter at inference time to load it in memory. If you have an 8-bit model quantized down, you need one byte per parameter. So for example, in an H100, which is 80 gigabyte of memory, you could fit a 70 billion parameter model in INT8, you cannot fit an FP32 because you will need like 280 gigabytes of memory. So how much does that play into it? Like you mentioned it was all FP32 when you first started. Is it just like a development complexity thing, like going down to FP16 and then INT8? Or if they could get a GPU with like a terabyte of VRAM, will people just load this memory as like FP32 weights or would they still want to quantize them to make them more efficient? Right. [00:22:00]Quentin: I would say even if you had infinite VRAM, you would still want a quantized model, just a bigger model that's quantized is what I would say. And that's because like I was mentioning there at the end, how like deep learning is very stochastic and a lot, you could have all the precision in the world, but ultimately it's meaningless when you still depend so much like on what the input is. And you depend so much on little variations and maybe a few more samples of training data would matter more. A lot of that precision in a nutshell doesn't really matter in deep learning. All that matters is the big picture. What is that neuron actually saying? And not the tiny details of what it might be thinking. Oh, I also wanted to mention that even if you have an A100, the actual model size is quite a bit smaller that you could load than what you mentioned. That's because of the KV cache. So the KV cache intuitively during inference, it only matters during inference and think intuitively if you're writing a paragraph, you want to remember every single previous word that you've written before you write the next word. So like what is autoregressive language modeling? It's filling in the next word, the next token. So if I say like the dog went to the, and I need to write the next word, I would say park or something. Before I write the next word, my memory is wiped and I have to read the whole thing again. That is life without a KV cache. And a KV cache says, remember everything that I've generated before, as well as all the context before what I've generated.
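The byte-per-parameter figures in this exchange turn into a tiny calculator. The 70B model and 80 GB card are the same example used above, and the table deliberately ignores the KV cache and other inference overheads that Quentin gets to next.

```python
# Rough model-weight memory by precision, ignoring KV cache and other overheads.
P = 70e9           # parameters, matching the example above
vram = 80e9        # bytes of GPU memory, e.g. an 80 GB card as above

bytes_per_param = {"FP32": 4, "FP16/BF16": 2, "INT8": 1, "INT4": 0.5}
for fmt, b in bytes_per_param.items():
    size_gb = P * b / 1e9
    print(f"{fmt:>10}: {size_gb:6.0f} GB -> {'fits' if size_gb * 1e9 <= vram else 'does not fit'}")
```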
But the memory overhead for a KV cache commonly is either comparable or larger than the model in some cases, if you have a really long context. And I think the exact equation is something like, oh, it's like two times the number of layers, times the number of heads, times the dimension of each head. And then there's two of those. You have one for K, one for V. But that was just a quick aside. Yeah. [00:23:44]Alessio: I know this is Transformers math, but do you think one of the interesting things about RNNs too, it's like moving away from this, like KV cache, that scales with the sequence length and having like a fixed sequence pass. I know those are some of the things that people are working on. [00:24:00]Swyx: Yeah. [00:24:00]Quentin: So there's a paper that I was involved with called RWKV that I would recommend people read. It is answering this exact question. So how do you get Transformers quality without this quadratic attention overhead that Transformers requires? So it is interesting. I don't know if I can really dive too deep into the technical details there. I'd recommend people read the paper. But yeah. [00:24:23]Swyx: Yeah. [00:24:23]Alessio: It's interesting to see if attention is all you need, or maybe attention is all we need, but we need better ways to make it infer in a good way. [00:24:33]Swyx: We've actually done an unreleased episode with one of the RWKV core members and they call it soft attention or light attention. I forget what they call it, but yeah, just ways to approximate it such that it's linear and not quadratic. That's great. Yeah. [00:24:47]Quentin: I didn't know that you were involved. [00:24:48]Swyx: That's great. How did you get involved? Is it just because like everyone just hangs out in Discord and talks about the future of Transformers? Oh yeah. [00:24:55]Quentin: I mean, the RWKV people specifically are in Eleuther all the time. Like they're very close collaboration with us. And my contribution was we have all of these experiments done by all of these people on RNNs and how they relate to Transformers and how do we turn that into a paper and disseminate that digestibly so that people don't have to read through like a Discord log from a year ago to understand what's going on. [00:25:16]Swyx: Oh my God. [00:25:16]Quentin: Just read this paper. So that took some work, but I wasn't a core contributor. So that's why I don't want to go into like the technical details. But yeah, that's how I did. [00:25:24]Swyx: We'll try to get that RWKV episode out. It seems like there's increasing mentions of it and they are doing pretty important work as far as scaling these models are concerned. Okay. So we discussed inference type quantization and memory requirements. And then you also had a section on training with a lot of stuff I think mentioned. I think we probably want to spend the most of our time on optimizer states and the Adam optimizer. Yeah. What are your takes on it and what should people keep in mind when they deal with these optimizers? Okay. [00:25:57]Quentin: I would say the Adam optimizer is good at what it does. It's sort of a broad question. So let me think. You have the copy of the weights and then you have your momentum and your variance that [00:26:08]Swyx: you store. [00:26:08]Quentin: And like, okay, maybe an intuitive explanation for momentum is that like, let's say you have a canyon and you're trying to get to the bottom. And if you're just doing basic SGD, then every step is going to be an equal size.
Whereas if you're using something like Adam with the momentum term, then your steps should be progressively larger because you can see, oh, the general trend is we're heading downwards very quickly. But stepping back from that, since you have all of these extra terms in Adam, you require a lot more memory to store it. Like three times as much memory as SGD. And if you have all of this memory being spent on your optimizer states, then how do you distribute it across GPUs? Because you'll find that what ends up being your bottleneck, more than just raw compute, raw flops on a given GPU, is your parallelism. And that falls back onto how much model you can fit on a single GPU before you need to split it up across a bunch of GPUs. And then you end up spending more time with them talking to each other than actually making progress. So that's why all of this time in the blog post is spent on how do you distribute your model? What do all those different distributed strategies look like? Which ones are more efficient? And given that a lot of your memory is being spent on optimizers, how do you distribute that optimizer specifically? Because a lot of people, when they talk about parallelism, they talk about model parallelism, the parameters themselves. In actuality, when you're training, a good portion of your memory is actually spent on optimizer states. So what specific part of that would you like to go into? Would you like to go into ZeRO or sharded optimizers? [00:27:36]Swyx: I think the sharded optimizer stuff is really interesting, but I think we're kind of leaving that towards the end, right? Because that's maybe the more advanced distributed section. Here, I think we're just going for rough intuition for people who maybe are familiar with the ideas of these optimizers, but haven't actually had to implement them yet. They read your code, but they don't really understand the intuition behind the code. I see. [00:28:00]Alessio: And Quentin, when you say in the blog post that Adam is magic, how much of it is actual magic, even to people like you that are pretty close to the metal, so to speak? Do some of these things just come as gospel? It's like, I know this works, I'm not touching it. I'm just leveraging it. How much of it are you actually thinking about improving on in your day-to-day work? I see. [00:28:22]Quentin: So I'm a systems guy. I'm an engineer. And a lot of these things come to me as magic. Adam comes to me as magic. I see it from the gods. I say, this is how a deep learning model is trained. And this is how the next step is calculated. And then I say, okay, how do I make that fast? I would say I do look at ways to improve upon it using things like second order optimizers. So there's a lot of research on there because they're hard to distribute. But the core contribution for me always comes down to: someone else has done some deep learning optimization and I need to make it run fast. So I can't really speak to the motivation of why Adam came about, other than simple, intuitive things like I mentioned with the momentum. But what matters to me is that Adam takes more memory than SGD, specifically three times. And all of that memory needs to go somewhere and it needs to be split efficiently. [00:29:14]Swyx: Yeah. [00:29:14]Alessio: So when you add them all up, you got 12 bytes per parameter with vanilla Adam. [00:29:20]Swyx: Yeah. [00:29:20]Alessio: And then you still have the model parameters in memory too.
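For readers who want to see where Adam's extra memory physically lives, here is a bare-bones NumPy version of the textbook Adam update. It is a sketch of the standard algorithm, not EleutherAI's implementation: the point is simply that m (momentum) and v (variance) are full extra buffers with the same shape as the parameters, which is where the roughly three-times-SGD figure comes from.

```python
import numpy as np

def adam_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. p: params, g: gradient, m/v: persistent state, t: step count."""
    m = b1 * m + (1 - b1) * g          # running mean of gradients (momentum)
    v = b2 * v + (1 - b2) * g * g      # running mean of squared gradients (variance)
    m_hat = m / (1 - b1 ** t)          # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (np.sqrt(v_hat) + eps)
    return p, m, v                     # m and v must be kept around between steps

# Plain SGD would only need p and g; Adam carries m and v alongside them.
p = np.zeros(4); m = np.zeros_like(p); v = np.zeros_like(p)
p, m, v = adam_step(p, np.array([0.1, -0.2, 0.3, 0.0]), m, v, t=1)
```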
So as you mentioned, for a mixed FP32/FP16 setup you need to keep a copy of the weights at both precision levels. So it's six bytes per parameter. Right. [00:29:36]Quentin: Taking a step back again: most people think of your model getting big, so you need to split it with model parallelism purely, something like tensor parallelism. But we can see that the model only takes like two bytes per parameter if we're doing FP16. Whereas the optimizer itself requires four bytes per parameter for the FP32 model copy, four bytes for momentum, four bytes for variance. So what matters more is how do you split your optimizer efficiently and how do you store it efficiently? And something like bitsandbytes, where you've got eight-bit Adam, where those optimizer states are only one byte per parameter instead of four or something like that — that is going to give you a much better return on your model training and on your memory overhead required than if you were to, for example, quantize your pure FP16 model weights down to int8 or something. So for training specifically, your optimizer memory matters a lot. The most, in most cases. [00:30:31]Swyx: Well, yeah. [00:30:31]Alessio: And before we dive into ZeRO, just to wrap up the items that you're going to shard later. So you have the parameters, you have the optimizer states, and then you have the gradients. Just maybe touch a little bit on that. And then we can talk about how to efficiently load them onto GPUs. [00:30:48]Quentin: So the parameters are the FP32 copies of the parameters. We include them in the optimizer discussion. Some people don't, but just for clarity, it's 12 bytes per param for the optimizer states, and four of them are for that FP32 copy of the weights. Four of them are for the momentum. I already went into why it's important to store momentum, but that's also per parameter. You need to store where that parameter is going and where it's been going in the past. You also need to know, okay, we know where it's going, but there are going to be bumps on this canyon that we're going down. So we need to store its variance. How often are those bumps? Should we be focusing more on the momentum? Or is this parameter just kind of jumping around everywhere? Those are all important answers that we need the optimizer to store, and it's per parameter. So that's where all three of those terms come from. And we also include some comparisons — bitsandbytes, for example, and SGD — to show that depending on your optimizer, you may store all or none of these, and in different representations. [00:31:50]Alessio: I'm looking at the total training memory. You essentially have model memory, optimizer memory, gradient memory, and activation memory. I think that's one of the last discussed things. So maybe just give people a little bit of a view. [00:32:03]Swyx: Yeah, this is completely new to me. [00:32:05]Alessio: Activation recomputation, checkpointing, and all of that. [00:32:08]Swyx: Right. [00:32:09]Quentin: So, okay. So to summarize, before activation checkpointing, which will be complicated: you have your model params. Like I mentioned before, they used to be FP32; now they're probably BF16, maybe FP16 if it's an older GPU. Then you have your optimizer. That's where a lot of the memory is going. And it's your high precision, usually FP32, copy of the weights. So that's four bytes per param.
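Putting those per-parameter byte counts side by side makes the argument concrete. This is a hedged back-of-the-envelope table: the 12-byte figure for vanilla mixed-precision Adam follows the breakdown above, while the 8-bit Adam and SGD rows depend on the exact setup (for instance, whether a full-precision master copy of the weights is kept).

```python
# Optimizer state per parameter, in bytes (FP32 master copy + momentum + variance).
OPTIMIZER_BYTES_PER_PARAM = {
    "vanilla Adam":      4 + 4 + 4,  # 12: master weights, momentum, variance in FP32
    "8-bit Adam (bnb)":  4 + 1 + 1,  #  6: momentum and variance quantized to 1 byte
    "SGD, no momentum":  4,          #  4: master weights only
}

for name, b in OPTIMIZER_BYTES_PER_PARAM.items():
    print(f"{name:>18}: {b:>2} bytes/param -> {20e9 * b / 1e9:.0f} GB for a 20B model")
# Compare that against the ~40 GB the BF16 working weights themselves take at 2 bytes/param.
```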
And then you have, optionally, a couple more terms like we just discussed, like momentum or variance or whatever else, depending on what your optimizer is. Then you have your gradients. Your gradients are the gradient updates that you get after running the forward and backward pass on the model. And they're stored in whatever your low precision copy of the weights is. So like two bytes per param, if you're using FP16 or BF16. And all of those are sort of set in stone. And that overhead is not going to go away for the duration of training. Your gradients might get cleared after you backpropagate them, but your optimizer states and your model states aren't going away. That memory overhead will be there. Activation recomputation and activation memory is dynamic. So some people will come and have this problem where the model loads fine for training, but then when you actually run your first iteration, or you run some future iteration or something like that, you run out of memory, seemingly at random. And it's because of these activations that you're computing on the fly. Good summary, or do you want to get into activation recomputation now, or do you want me to touch on anything else? [00:33:35]Alessio: Yeah, I was going to say, when is the recomputation happening? How does it decide between recomputing versus storing? And talk a bit more about that, maybe. [00:33:47]Quentin: Yeah, okay. So there are a lot of different ways to do this, but I would say there are a few main ones. First is a very simple scheme: you recompute everything. Every single activation that you calculate is used and then thrown away, all the way until the end. So in that case, you care very much about memory. You care very little about compute. Maybe this would be a case where you have to distribute across a lot of different GPUs, for example, and your communication speed is really low. Then that might be a good case for you to just recompute everything. It happens rarely, but it happens. Next up would be something like selective recomputation. So in selective recomputation, which Megatron has a good paper on, and which I believe the figure that we have in our blog post is from, you sort of do a weighted decision for each activation. So for really big activation tensors, you decide, is this going to be more expensive to save in terms of memory or to recompute in terms of compute? So that's sort of the smart scheme that Megatron implements. And there are a lot of different heuristics they use. It's probably not worth reading off this super long equation on a pod, but you should go and read that paper if you're interested in selective recomputation. And then a really stupid scheme that most people go with, including NeoX, would be something like: instead of doing all of these heuristics, you just say, if my tensor is bigger than X, I throw it away. And you set X to some static number, and that's it. And that is good enough for a lot of cases. [00:35:18]Swyx: Why is it good enough? [00:35:20]Quentin: You don't want to store more than an X-sized tensor. Some fall above that, some fall below it. And you're not trying to squeeze out every last byte. You care more about getting something close enough to what the actual heuristic should be, without actually computing the heuristic, because you don't want to spend the time writing that heuristic code. [00:35:37]Swyx: Cool. I think that does take us on a grand tour of the memory math. Is there any sort of high-level takeaway before we go into the distributed stuff?
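The "if the tensor is bigger than X, recompute it" policy Quentin describes can be sketched in a few lines of PyTorch. This is a toy illustration, not GPT-NeoX's actual code: maybe_checkpoint and THRESHOLD_ELEMENTS are made-up names, and a real implementation would weigh the activations a module stores rather than just its input size.

```python
import torch
from torch.utils.checkpoint import checkpoint

THRESHOLD_ELEMENTS = 8 * 1024 * 1024  # the static "X" — tuned by hand per model/GPU

def maybe_checkpoint(module, x):
    """Run module(x); recompute its activations on backward if they'd be large."""
    if x.numel() >= THRESHOLD_ELEMENTS:
        # Big tensor: don't keep intermediate activations, recompute them on backward.
        return checkpoint(module, x, use_reentrant=False)
    return module(x)  # Small tensor: cheaper to just keep it in memory.
```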
Zero and all that. Perhaps more detail than most people have ever encountered. And so I'll repeat the equation that Alessio mentioned again, which is total training memory now has all these components that you've mapped out for the first time as far as we're concerned. Model memory, optimizer memory, activation memory, gradient memory. We covered quite a few algorithms as to the choices you can make there. Anything else that you want to mention about just memory math? I don't think so. [00:36:11]Quentin: I think that about covers it. I will say that it's a very different scheme for training and inference. It's common for people to say, oh, BF16 is the best. Done. Whereas a more correct take is that during training, precision matters a bit more. So BF16 will be around longer for training than it will for inference, in which case your model is sort of already baked. And it definitely doesn't need some of those last bits of precision so you can get away much easier with going to int8 for inference rather than training. So everything that you learn for training has to be relearned for inference and vice versa. [00:36:44]Swyx: There's a third category. You're talking about training versus inference. This third category is emerging with regards to fine-tuning and perhaps parameter-efficient methods of fine-tuning. The naive way to implement fine-tuning is just to do more training. But I don't know if you've developed any intuitions over fine-tuning that's worth inserting here. Any intuitions? If you were to write fine-tuning math, what would go in there? That might be an interesting diff to training math. [00:37:10]Quentin: I think there's a lot of questions that are unanswered for fine-tuning. For example, we know scaling laws for training. And some people have done scaling laws for fine-tuning. But how does a model that's already been trained on one domain transfer to another in terms of fine-tuning size? How many tokens per parameter should you have for your fine-tuning dataset? Maybe I'm ignorant, but I feel like a lot of those sort of practical questions on how a model can transfer and how a model can learn or grok some new ability that wasn't in its original training dataset is something that I would definitely put inside a fine-tuning blog post. [00:37:45]Swyx: Something related to perplexity and, I guess, diversity of the tokens that you get. [00:37:49]Quentin: Yeah, sort of dataset transfer is something that I would be curious in. Learning rate transfer is another one. So your model has some decayed learning rate over the course of training. How does that change for fine-tuning? Things like that. [00:38:00]Swyx: All right, cool. Thanks for indulging that stuff. Sure. Yeah. [00:38:03]Alessio: I think after all of this, you can quickly do the math and see that training needs to be distributed to actually work because we just don't have hardware that can easily run this. So let's talk a bit about that. So zero is one of the first things that you mentioned here, which is focused on sharded optimizers. Maybe run people through that and how to think about it. [00:38:25]Swyx: Sure. [00:38:25]Quentin: So zero is centered around two communication operations. And the first is scatter. And people should be looking at the zero figure that I think we have. [00:38:35]Swyx: Yeah. [00:38:36]Quentin: So there's a figure in the paper with parameters, gradients, and optimizer states that people should be looking at when I'm talking about this. Every GPU is going to get its own equal portion of the slice. 
And if we're doing... There are different stages of ZeRO, but let's just start off with assuming that it's an equal slice of the optimizer states, gradients, and parameters. That would be ZeRO stage three in that case. And we do that with a scatter. And the scatter takes, say, one over N GPUs, plus this offset, and that slice goes to that GPU. Now all of the GPUs have an equal slice, in rank order. And then during each training step, that GPU is going to wait for all of the other slices to communicate, so that we now have a whole pie on that GPU, that single GPU. Once we have that whole pie, we do the forward pass on it. And then we distribute that forward pass to all of the others using a gather. So it's a scatter — reduce-scatter specifically — and then a gather back to all the others. And you do that each step. So the point of it is that you're sharding these states across GPUs. And with the different stages, you'll see in that figure that the optimizer state is taking the most proportion, which is because of what I mentioned before: we're including the FP32 copy and we're doing Adam. So we need those four bytes per param for momentum and for variance. And then ZeRO stage one, which is the most common one, is just optimizer. ZeRO stage two is optimizer plus gradients. And ZeRO stage three is optimizer, gradients, and model parameters. But it all comes back to this splitting up and then gathering together, back and forth, over and over. So you get a lot of communication overhead from ZeRO. But the plus part of that is that you can overlap a lot of that movement with computation. [00:40:23]Alessio: How do you get the optimal number of GPUs to do this on? Is there a way to shard too much as well and put too much overhead? [00:40:31]Quentin: It depends more on what your interconnect is. Taking a step back, there is synchronization that's required, a lot of it, across all of these GPUs. And those tend to be cumulative. So if you go to too many GPUs on an interconnect that's too slow, then you're going to end up spending more time synchronizing. And that magic number where you spend more time synchronizing is going to be different depending on what your fabric is and what your GPU memory is — specifically, just how small of a slice each GPU is getting. For example, for Summit, that number comes out to be about 20 billion parameters. Now you have 20 billion parameters, and then your magic number of GPUs for that is going to be something like 100 to 200 scale. Beyond that, you're just going to end up spending more time communicating. And where the actual flops dip below some number you've predetermined is going to be wherever your sweet spot ends up being. [00:41:24]Alessio: And then, so this one was hard for me to go through, so I'm excited to have you run through it, which is 3D parallelism. [00:41:33]Swyx: It's fancy, it's cutting edge. [00:41:35]Alessio: Yeah, let's talk a bit more about that and some of the work. [00:41:38]Quentin: Okay, 3D parallelism. So what is each dimension? First is the really basic one. That's data parallelism. And data parallelism is you have a copy of the model. Let's say for simplicity, one copy fits on one GPU perfectly. Data parallelism is that now you have two GPUs, so you have one copy on GPU one, one copy on GPU two. Both of them do the forward and backward pass and then synchronize and average the gradients. And then that's a step. The data parallel dimension in 3D parallelism is actually ZeRO.
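A toy accounting of what each ZeRO stage buys, following the description above: stage 1 shards the optimizer states, stage 2 also shards the gradients, stage 3 also shards the parameters. The byte sizes match the earlier mixed-precision-Adam numbers; the communication cost, which is ZeRO's real price, is deliberately left out of this sketch.

```python
def zero_per_gpu_gb(n_params, n_gpus, stage, param_bytes=2, grad_bytes=2, opt_bytes=12):
    """Approximate static memory per GPU under ZeRO stages 1-3."""
    opt = n_params * opt_bytes / n_gpus                           # sharded in all stages
    grad = n_params * grad_bytes / (n_gpus if stage >= 2 else 1)  # sharded from stage 2
    params = n_params * param_bytes / (n_gpus if stage >= 3 else 1)  # sharded in stage 3
    return (opt + grad + params) / 1e9

for stage in (1, 2, 3):
    print(f"ZeRO-{stage}: {zero_per_gpu_gb(20e9, n_gpus=64, stage=stage):.1f} GB/GPU")
# ZeRO-1: ~83.8, ZeRO-2: ~44.4, ZeRO-3: ~5.0 GB/GPU for a 20B model on 64 GPUs.
```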
So you're sharding the optimizer states across all of your different GPUs. Next up is tensor parallelism. Tensor parallelism is you split your model. Say, if you have two GPUs, you split your model down the middle, and each GPU does its forward or backward operation on its own tensor specifically. And then only when necessary will it synchronize that tensor operation with the other GPU. It's a bit more complex than something like pipeline parallelism, which is the third dimension. In pipeline parallelism, let's say you have four layers in your model and you have four GPUs. You put one layer on each GPU, and then GPU one does the forward pass and sends the output of its activations to GPU two. GPU two does its forward pass, sends activations to GPU three, and you're just moving down a line. That is a naive scheme, in that all of the other GPUs are doing nothing while a single GPU is doing its forward or backward pass. So the reason it's called pipeline parallelism is because you're splitting your mini batch into micro batches. So GPU one will do the forward pass on micro batch one and then send it to GPU two. And then while GPU two is running on that first micro batch, GPU one is working on the next micro batch. And so you're sort of pipelining the movement and computation of each micro batch. The problem with that is that you need a really big batch size in order to split it up into both mini batches and micro batches. So combining all three of those together, you get a 3D mesh of where each parameter and optimizer state and so on maps to each GPU. And that's 3D parallelism. So before diving into details — did that make sense, and what should I jump into more? [00:43:55]Alessio: I think the main question is, do you need all of the GPUs to be the same to do this? Or can you have mismatching GPUs as well? [00:44:03]Quentin: Okay, two things matter. If there's a difference in VRAM for the two different kinds of GPUs, then you're going to be bottlenecked by whichever GPU has the lower amount of VRAM, because it's going to run out of memory. And then whatever's left on the larger GPUs is just going to sit empty. As far as I'm aware, there's no per-GPU-aware memory overhead scheme that would account for that. The second problem is, let's say all of your GPUs have the same amount of VRAM, but half of them are really slow. And the problem with that is that those synchronizations that I mentioned earlier are going to kill you. So you're going to move as quickly as your slowest GPU in that case. So in both cases, you end up regressing to your slowest or smallest GPU. So you might as well have the same GPUs for all of them. Otherwise, you're wasting the nicer ones. And that also goes for your CPUs and your interconnect. So going back to the 20 billion parameter model that Eleuther was training, that was on a cluster that was sort of Frankensteined together during COVID, when there was all of that shortage of network switches and such. So every node had a different network switch. And so you ended up moving at the speed of the slowest switch, and getting everything tuned properly so that it's not worse than the slowest switch was challenging, and is a real world problem that sometimes comes up. [00:45:28]Alessio: Is this work widely accepted? Like I hadn't learned about this before studying for this episode. Is this something that people are still trying and researching? Or is everybody just aware of this and running this in production?
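As a quick aside before the next answer, here is the mesh arithmetic from the 3D parallelism walkthrough above in toy form: the world size is the product of the data, tensor, and pipeline dimensions, the weights actually resident on a GPU shrink with the tensor and pipeline dimensions, and the data-parallel dimension is where a ZeRO-style optimizer shard would live. The 20B model and the 4×2×4 layout are illustrative choices, not figures from the episode.

```python
def resident_params_per_gpu(n_params, dp, tp, pp):
    """GPUs used and weights resident per GPU for a (dp, tp, pp) 3D layout."""
    world_size = dp * tp * pp
    resident = n_params / (tp * pp)  # each GPU holds 1/tp of the weights in its 1/pp stage
    return world_size, resident

world, resident = resident_params_per_gpu(20e9, dp=4, tp=2, pp=4)
print(f"{world} GPUs, ~{resident / 1e9:.1f}B weights resident per GPU "
      f"(~{resident * 2 / 1e9:.0f} GB at BF16)")
# 32 GPUs, ~2.5B weights per GPU (~5 GB at BF16), before optimizer states and activations.
```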
[00:45:43]Quentin: What is this specifically? [00:45:44]Alessio: Like the sharded optimizers plus the 3D parallelism, bringing the two things together and having this kind of mesh strategy. [00:45:51]Quentin: I would say that a lot of major GPT-based models use this scheme. A lot of them now are sort of going with just a pure ZeRO scheme. So just pure sharding: you just shard everything. And then since that's so easy, everyone gets an equal slice. There's no such thing as a pipeline stage. There's no such thing as what tensor should go on which GPU. Instead, we shard everything equally and treat everything equally. It's a much easier problem to debug, to checkpoint, to run training on than it is with this 3D parallel scheme. I'd say 3D parallelism gives you the most control and also the most ways to go wrong. And depending on whether you have more engineers or whether you have more GPUs, that should decide which of these you go with. [00:46:35]Swyx: It's also not too hard, right? You've basically outlined the five or six different numbers that you need to keep in your head. And it doesn't feel impossible that if you need to achieve that level of control, you've given everybody the main levers to do it with. And that's wonderful. Definitely. [00:46:51]Quentin: The problem that comes up is like, say, okay, GPT-4 came out. Now we have VLMs. [00:46:57]Swyx: Whoa, what are VLMs? Oh, okay. Virtual LLMs, like the Mixture of Experts things? No, like visual. [00:47:03]Quentin: So now you have multimodal models and such. How do you distribute that? Do you distribute it in a pipeline stage? Do you just shard it? Do you split the tensor and make it tensor parallel? It's sort of hard to change your model and add new features and such when you have this 3D parallel scheme. That's what I mean when I say hard. I mean, it's hard to sort of adapt and modify it to new features. [00:47:26]Alessio: I know we're at the hour mark, and I think we put our listeners through a very intense class today. So this was great, Quentin. And we're going to definitely link the article so that people can read it and follow along. Any other research that you're working on in this space that you want to shout out? I know one of our usual lightning round questions is, what's the most interesting unsolved question in AI? So curious to hear if you think it's still on the training and inference math optimization side, or whether there are more areas that people should pay attention to. [00:47:58]Quentin: I think in my area of research, there are two things that I think people should really care about. And the first is multimodal parallelism and RLHF. We're seeing more and more reinforcement learning coming into the training loop. And so how do you split that, where some GPUs are working on inference and some GPUs are working on training? And like I mentioned before, you have to relearn everything, and they have very unique challenges. How do you split up a KV cache during training, for example? Those are challenges that are not well studied, I don't think. And then multimodal: you have maybe a vision transformer and a text transformer. How do you split those up? Do you split them up equally? Do you put them on separate GPUs, or do you just shard everything, and then maybe one GPU will have some vision and some text parameters? And then the second case I would say is that communication is very often a bottleneck. So we talk about 3D parallelism, but for a lot of those — tensor parallelism, for example — you can't go across nodes.
You'll just get killed in communication. So what I'm getting to is, how should you compress your communication before it happens? So on-the-fly compression: you have some buffer that needs to be communicated, you compress it with a GPU kernel, then you send it across the network, and then you decompress it, something like that. Making people spend less money on communication fabrics and more on GPUs, as intended, is sort of a thing that people need to explore. I think those are my two. [00:49:26]Alessio: Sean, you want to go over the other half of the lightning round before we wrap it up? [00:49:30]Swyx: That's a good brain dump. Cool. Yeah, I have so many more questions on the multimodal stuff, but that should be for another time. Acceleration: what has already happened in AI that you thought would take much longer? [00:49:42]Quentin: I would say flash attention. Guys, just talk to Tri [Dao]. Flash attention is just a really great set of kernels that I thought would take a while to get to us. [00:49:51]Alessio: Well, Quentin, thank you very much, man. This was super informative and I think it hopefully helps demystify the blog post a little bit. I think people open it and it's like a lot of math in it. And I think you walking them through it was super helpful. So thank you so much for coming on. [00:50:07]Swyx: Of course. [00:50:08]Quentin: And I'm happy to answer any questions that people have offline if they have them. I do read my email. [00:50:13]Swyx: Email and Discord. Of course, yeah. [00:50:15]Quentin: Discord I'm even faster on. [00:50:16]Alessio: Thank you, everyone. [00:50:18]Swyx: Thanks, Quentin. [00:50:19] Get full access to Latent Space at www.latent.space/subscribe

La Diez Capital Radio
Informativa (19-04-2023)

La Diez Capital Radio

Play Episode Listen Later Apr 19, 2023 19:57


Informativo de primera hora de la mañana, en el programa El Remate de la Diez Capital Radio. Hoy se cumplen un año y 55 días del cruel ataque e invasión de Rusia a Ucrania. Hoy es Miércoles 19 de abril de 2023. Buenos días Ucrania. Día mundial de los Simpsons. Hace más de 35 años, un 19 de abril, se emitió el primer episodio de Los Simpson y desde entonces, los miembros de la familia más famosa de América y sus amigos, conocidos, vecinos y vecinillos, se han convertido ya en parte del imaginario colectivo. Precisamente por su treinta aniversario, se inauguró en 2017 el Día Mundial de Los Simpson, que se celebra cada año el 19 de abril. Fue la agencia de comunicación española PR Garage la que decidió lanzar la propuesta en Change.org, consiguiendo el objetivo de firmas propuestas. Atresmedia se sumó a la celebración con una programación especial en Neox y el uso del hashtag #TheSimpsonsDay #DíaMundiadeLosSimpson, y también FOX España, que suele emitir un maratón de la serie. Curiosidades infinitas en los Simpsons -En 1998, la revista Time eligió a Bart Simpson como una de las personas más influyentes del siglo XX. -El mismísimo Michel Jackson puso voz a un personaje que se hacía pasar por él en un manicomio. -Los Simpson tiene un Récord Guinness como el show televisivo con más estrellas invitadas de la historia, con más de 600. 1528 En las Cortes de Madrid, el príncipe Felipe II es jurado heredero de los reinos de España. 1692 en Salem (Massachusetts) comienza el juicio inquisitorio de brujas. 1775 En Estados Unidos comienza la Guerra de la Independencia contra el Imperio británico. 1898 El Gobierno de Estados Unidos envía al de España un ultimátum para que abandone en 48 horas la isla de Cuba. 1904 En Canadá, la ciudad de Toronto es destruida por un incendio. 1928 La aviación militar española adquiere su primer avión de bombardeo. 1949 Estados Unidos destina 5430 millones de dólares al programa de ayuda a Europa. 1971 La Unión Soviética lanza la Salyut 1, primera estación espacial controlada por el hombre. Santos Rufo, Expedito, Dionisio, Cayo, Vicente, Timón y Galacio. Rusia-Ucrania | El presidente Putin viajó a Jersón y Lugansk para entrevistarse con los militares. El G7 refuerza su apoyo a Ucrania y lanza advertencias a Rusia, China y Sudán. Mayor presencia en la costa y solo el 15% en capitales: así se reparten las 50.000 viviendas para alquiler del banco malo. Feijóo propone una ayuda de emancipación de 1.000 euros y avalar el 15% de la compra de una primera vivienda. El TC avala la 'ley Celaá' y señala que la Constitución no fija una proporción del castellano en el sistema educativo. La vacuna contra el cáncer estará en breve y abrirá una nueva era en el tratamiento de enfermedades”. La Fiscalía Europea investigará al exgeneral del caso Mediador por cuatro contratos en el Sahel por 263.093 euros. Francisco Espinosa, el único investigado de la causa que permanece en prisión, podría haber recibido “dádivas, regalos o pagos” cuando era responsable del Proyecto GAR-SI Sahel, financiado por la Comisión Europea. El turismo sigue despegando en Canarias y crece un 15% en marzo. Más de 1,3 millones de pasajeros internacionales eligieron nuestras Islas para descansar el pasado mes. Canarias, en emergencia burocrática. Fepeco inicia una batalla contra el caos de la Administración pública y exige acabar con el teletrabajo de los funcionarios e implantar el silencio administrativo positivo para “salvar” al sector. 
Ocho de cada diez empresas en Canarias tienen problemas para encontrar a los profesionales que necesitan. Supone un incremento de 7 puntos porcentuales frente a 2022; el mayor crecimiento del conjunto de España. Tras 35 años ininterrumpidos de presentaciones en Broadway, El Fantasma de la Ópera anunció que cerró sus puertas este pasado domingo. Según un reporte de The New York Post, el icónico musical de Andrew Lloyd Webber tuvo su última presentación este 16 de abril de 2023, al menos en Nueva York.

La Diez Capital Radio
El Remate desde Madrid (19-04-2023)

La Diez Capital Radio

Play Episode Listen Later Apr 19, 2023 146:59


Programa de actualidad con información, formación y entretenimiento conectando directamente con los oyentes, presentado y dirigido por Miguel Ángel González Suárez. www.ladiez.es - Informativo de primera hora de la mañana, en el programa El Remate de la Diez Capital Radio. Hoy se cumplen un año y 55 días del cruel ataque e invasión de Rusia a Ucrania. Hoy es Miércoles 19 de abril de 2023. Buenos días Ucrania. Día mundial de los Simpsons. Hace más de 35 años, un 19 de abril, se emitió el primer episodio de Los Simpson y desde entonces, los miembros de la familia más famosa de América y sus amigos, conocidos, vecinos y vecinillos, se han convertido ya en parte del imaginario colectivo. Precisamente por su treinta aniversario, se inauguró en 2017 el Día Mundial de Los Simpson, que se celebra cada año el 19 de abril. Fue la agencia de comunicación española PR Garage la que decidió lanzar la propuesta en Change.org, consiguiendo el objetivo de firmas propuestas. Atresmedia se sumó a la celebración con una programación especial en Neox y el uso del hashtag #TheSimpsonsDay #DíaMundiadeLosSimpson, y también FOX España, que suele emitir un maratón de la serie. Curiosidades infinitas en los Simpsons -En 1998, la revista Time eligió a Bart Simpson como una de las personas más influyentes del siglo XX. -El mismísimo Michel Jackson puso voz a un personaje que se hacía pasar por él en un manicomio. -Los Simpson tiene un Récord Guinness como el show televisivo con más estrellas invitadas de la historia, con más de 600. 1528 En las Cortes de Madrid, el príncipe Felipe II es jurado heredero de los reinos de España. 1692 en Salem (Massachusetts) comienza el juicio inquisitorio de brujas. 1775 En Estados Unidos comienza la Guerra de la Independencia contra el Imperio británico. 1898 El Gobierno de Estados Unidos envía al de España un ultimátum para que abandone en 48 horas la isla de Cuba. 1904 En Canadá, la ciudad de Toronto es destruida por un incendio. 1928 La aviación militar española adquiere su primer avión de bombardeo. 1949 Estados Unidos destina 5430 millones de dólares al programa de ayuda a Europa. 1971 La Unión Soviética lanza la Salyut 1, primera estación espacial controlada por el hombre. Santos Rufo, Expedito, Dionisio, Cayo, Vicente, Timón y Galacio. Rusia-Ucrania | El presidente Putin viajó a Jersón y Lugansk para entrevistarse con los militares. El G7 refuerza su apoyo a Ucrania y lanza advertencias a Rusia, China y Sudán. Mayor presencia en la costa y solo el 15% en capitales: así se reparten las 50.000 viviendas para alquiler del banco malo. Feijóo propone una ayuda de emancipación de 1.000 euros y avalar el 15% de la compra de una primera vivienda. El TC avala la 'ley Celaá' y señala que la Constitución no fija una proporción del castellano en el sistema educativo. La vacuna contra el cáncer estará en breve y abrirá una nueva era en el tratamiento de enfermedades”. La Fiscalía Europea investigará al exgeneral del caso Mediador por cuatro contratos en el Sahel por 263.093 euros. Francisco Espinosa, el único investigado de la causa que permanece en prisión, podría haber recibido “dádivas, regalos o pagos” cuando era responsable del Proyecto GAR-SI Sahel, financiado por la Comisión Europea. El turismo sigue despegando en Canarias y crece un 15% en marzo. Más de 1,3 millones de pasajeros internacionales eligieron nuestras Islas para descansar el pasado mes. Canarias, en emergencia burocrática. 
Fepeco inicia una batalla contra el caos de la Administración pública y exige acabar con el teletrabajo de los funcionarios e implantar el silencio administrativo positivo para “salvar” al sector. Ocho de cada diez empresas en Canarias tienen problemas para encontrar a los profesionales que necesitan. Supone un incremento de 7 puntos porcentuales frente a 2022; el mayor crecimiento del conjunto de España. Tras 35 años ininterrumpidos de presentaciones en Broadway, El Fantasma de la Ópera anunció que cerró sus puertas este pasado domingo. Según un reporte de The New York Post, el icónico musical de Andrew Lloyd Webber tuvo su última presentación este 16 de abril de 2023, al menos en Nueva York. - Sección de actualidad informativa con Humor inteligente en el programa El Remate de La Diez Capital radio con el periodista socarrón y palmero, José Juan Pérez Capote, El Nº 1. - Sección de información de aquella otra manera… en el programa El Remate de La Diez capital radio, con los hermanos Pinzones: Antonio Molano y Francisco Pallero. - Entrevista en el programa El Remate de La diez Capital radio al candidato de 2Ahora Canarias” a la presidencia del gobierno de Canarias; Matias Hernández.

Papers Read on AI
GPT-NeoX-20B: An Open-Source Autoregressive Language Model

Papers Read on AI

Play Episode Listen Later Jun 10, 2022 25:25


In this work, we describe GPT-NeoX-20B's architecture and training, and evaluate its performance. We open-source the training and evaluation code, as well as the model weights. A 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. 2022: Sid Black, Stella Rose Biderman, Eric Hallahan, Quentin G. Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, M. Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, J. Tow, Ben Wang, Samuel Weinbach Ranked #7 on Multi-task Language Understanding on MMLU https://arxiv.org/pdf/2204.06745v1.pdf

El Chiringuito de Jugones
El Chiringuito de Jugones (20/03/2022) en Mega

El Chiringuito de Jugones

Play Episode Listen Later Mar 21, 2022 150:22


'El Chiringuito de Jugones' es un programa de televisión que nace tras la salida de Josep PEDREROL de la cadena Intereconomía y su anterior programa nocturno 'Punto Pelota', concretamente el 06 Enero 2014 en la cadena Nitro del grupo Atresmedia de Domingo a Jueves de 00:00 a 02:45 h, pero debido al cierre de esta cadena el programa pasa a emitirse en laSexta, posteriormente en Neox y desde el 10 Agosto 2015 a través del nuevo canal MEGA con el mismo horario. En el programa se informa y se debate principalmente acerca de temas relacionados con los equipos del fútbol español, especialmente del binomio "Madrid-Barça". El presentador conduce el programa y el debate, acompañado de 6 tertulianos que van variando a lo largo de la semana. Como colaboradores habituales cuenta con gente de la talla de: Tomás RONCERO, Alfredo DURO, José Damián GONZÁLEZ, Eduardo INDA, José Félix DÍAZ, Iñaki CANO, Hugo GATTI, José Antonio Martín Otín 'PETÓN', Paco GARCÍA CARIDAD, Pipi ESTRADA, Roberto MORALES, José Luis SÁNCHEZ, Manu SAINZ, Jorge D'ALESSANDRO, Lobo CARRASCO, Paco BUYO, Álvaro BENITO, Jose Mª Gutierrez 'GUTI', Quim DOMENECH, Carme BARCELÓ, Rafa GUERRERO, Rafa ALMANSA, Cristóbal SORIA..

A.I. Police Department
First day with NovelAI's Krake! (NeoX 20B)

A.I. Police Department

Play Episode Listen Later Mar 14, 2022 64:55


A live adventure story written by AI and humans — and YOU! GPT Model used this episode: NovelAI's Krake V1 (NeoX 20B) Join our Discord and submit your own story prompts that might be used in a future episode: https://discord.gg/7r7sgKZ ★ Support AIPDcast: https://anchor.fm/aipd/support ♦ Follow on Twitch: https://twitch.tv/aipd ♦ Subscribe on YouTube: https://youtube.com/aipd69 ♦ Join our Discord: https://discord.gg/7r7sgKZ ♦ Like us on Facebook: https://facebook.com/aipd69 Music ◇ J

Lo que tú digas, con Álex Fidalgo

Listen to the full episode for FREE on Podimo! Jordi Cruz is a Spanish TV host best known and remembered for fronting the legendary show 'Art Attack'. Before that he hosted another programme that is part of national television history, 'Club Disney'. He has worked as a voice actor and as a presenter on Cadena 100, and he currently hosts 'Top Gamers Academy' on Neox and the podcast '¿Sigues ahí?' for Netflix. He has just published his first book, 'Mejor no te lo creas'.

Yannic Kilcher Videos (Audio Only)
GPT-NeoX-20B - Open-Source huge language model by EleutherAI (Interview w/ co-founder Connor Leahy)

Yannic Kilcher Videos (Audio Only)

Play Episode Listen Later Feb 16, 2022 20:05


#eleuther #gptneo #gptj EleutherAI announces GPT-NeoX-20B, a 20 billion parameter open-source language model, inspired by GPT-3. Connor joins me to discuss the process of training, how the group got their hands on the necessary hardware, what the new model can do, and how anyone can try it out! OUTLINE: 0:00 - Intro 1:00 - Start of interview 2:00 - How did you get all the hardware? 3:50 - What's the scale of this model? 6:00 - A look into the experimental results 11:15 - Why are there GPT-Neo, GPT-J, and GPT-NeoX? 14:15 - How difficult is training these big models? 17:00 - Try out the model on GooseAI 19:00 - Final thoughts Read the announcement: https://blog.eleuther.ai/announcing-20b/ Try out the model: https://goose.ai/ Check out EleutherAI: https://www.eleuther.ai/ Read the code: https://github.com/EleutherAI/gpt-neox Hardware sponsor: https://www.coreweave.com/ Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannick... Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Let's Talk AI
IBM Watson's and ZIllow's AI Failures, DeepMind's Alphacode, GPT-NeoX-20B, AI Valentine's Cards

Let's Talk AI

Play Episode Listen Later Feb 10, 2022 30:51


Our 85th episode with a summary and discussion of last week's big AI news! Subscribe: RSS | Apple Podcasts | Spotify | YouTube Outline: (00:00) Intro (01:20) Machine learning the hard way: IBM Watson's fatal misdiagnosis (05:20) How homeowners defeated Zillow's AI, which led to Zillow Offers' demise (09:00) DeepMind claims its new code-generating system is competitive with human programmers (14:00) Announcing GPT-NeoX-20B (16:35) AI Insurance Company Faces Class Action for Use of Biometric Data (19:24) Democratic lawmakers take another stab at AI bias legislation (22:55) AI-generated Valentine's Cards (26:10) Tell this AI your story's themes, and it'll write the first paragraph for you (28:50) Outro

Humor
El peligro de cenar con los 'cuñaos'

Humor

Play Episode Listen Later Dec 30, 2021 47:32


Our comedians Carlos Latre, Goyo Jiménez and Leo Harlem perform their parodies and impressions. We also talk with Bertín Osborne, Putin and a doctor from the University of Sevastopol about the danger of having dinner with your brothers-in-law, since they cause a lot of discomfort. We also talk with Miki Nadal about the tenth anniversary of Neox.

Más de uno
El peligro de cenar con los 'cuñaos'

Más de uno

Play Episode Listen Later Dec 30, 2021 47:32


Our comedians Carlos Latre, Goyo Jiménez and Leo Harlem perform their parodies and impressions. We also talk with Bertín Osborne, Putin and a doctor from the University of Sevastopol about the danger of having dinner with your brothers-in-law, since they cause a lot of discomfort. We also talk with Miki Nadal about the tenth anniversary of Neox.

Stevy Vee's Podcast
I Wanna (Radio Edit) - Stevy Vee x NeoX

Stevy Vee's Podcast

Play Episode Listen Later Aug 29, 2021 2:53


Released on Juiced Music Records. Collaboration by Stevy Vee and NeoX. Pumpy energy house track - Main room

Los Mediatizados Xtra
El Medioinformativo PLAS 3x02 - Bloqueados por la audiencia

Los Mediatizados Xtra

Play Episode Listen Later Jul 15, 2021 64:27


Vuelve la edición veraniega de El Medioinformativo recordando los siguientes programas: - Mediatizados 237: (sinto: Tracky Birthday - Balla) AUDIOS: denuncia en Madrid Directo que no puede 'espatarrarse para lavarse el chichi', anuncio de Canal Almería 'Taberna flamenca Er Chi-Chi' La semana pasada creíamos que no tendríamos contenidos para esta semana, y lo que pasó a continuación os sorprenderá DAZN presenta su nuevo documental HEXCLUSIBA: Tras Canal Orange, Vodafone también lanzará un canal propio Nuevos horarios de Masterchef (otra vez) Audiencias de mierda: Neox con la película 'Batman la Lego película' y 'Top Gamers Academy' (sinto: bensound.com - Psychedelic) - Mediatizados 238: La edad de oro del periodismo (sinto: bensound.com - Funky suspense) AUDIOS: Ana Rosa consigue mezclar la aparición de vida en Venus con el pacto con Bildu, dice "estoy negra con los chinos" y recordamos el mítico "¿es que somos negros?" Titular Classic: Mundo Deportivo en los años 70, "Dos conguitos para el Askatuak" Recopilación de errores de geografía en los informativos Reporteros en riesgo por los mosquitos del virus del Nilo - Mediatizados 239: el verdadero significado del "LTA" de la nueva campaña de Antena 3 - Mediatizados 240: "Bloqueados por la audiencia" - Mediatizados 241: los nuevos dispositivos inteligentes de Vodafone y un formato innovador en Televisión La Roda - Mediatizados 242: protestas en la sede de Ten, el black friday de las plataformas, Cadena Dial ficha a Cruz y Raya

Spoiler CUAC FM
Spoiler S08E08 – Schitt’s Creek

Spoiler CUAC FM

Play Episode Listen Later May 29, 2021 57:00


Schitt's Creek is a Canadian comedy produced by Not a Real Company Productions that premiered on January 13, 2015 on CBC. In Spain, it will soon begin airing free-to-air on Neox. Over its 6 seasons on the air, it follows the life of the wealthy Rose family – the magnate of […]

Los mejores momentos de Anda Ya!
Anda Ya - Nacho G. Hermosura se sumerge en las aguas de Love Island

Los mejores momentos de Anda Ya!

Play Episode Listen Later May 14, 2021 5:53


Nacho G. Hermosura dives into the depths of Neox's new reality show.

Los Mediatizados: emisiones regulares
Mediatizados 264 - Audiencias de abril 2021: la decadencia de La 1 y Neox

Los Mediatizados: emisiones regulares

Play Episode Listen Later May 6, 2021 66:58


Los Mediatizados nº264 (06/05/2021) 0:00 Saludo y noticias (bensound.com - Funky element) 7:25 Audiencias de abril: el público se concentra en Telecinco y Antena 3. Mínimo histórico de La 1. 23:32 Agenda de Neeo (bensound.com - Jazz comedy) 26:22 Más audiencias: la decadencia de Neox 39:02 Pausa 39:25 Nuevo capítulo del "dazonazo": ahora adquirir derechos no significa explotarlos. Movistar renueva la Bundesliga, pero ¿la puede explotar DAZN? 49:07 Agenda deportiva (bensound.com - Extreme action sport) 51:45 Esto NO es el Medioinformativo: Blinding Lights sale de la lista de Los40, la coña del "quiebra Mediapro" está cerca (Tracky Birthday - Balla) 54:33 El Pifia-informativo: a Los40 Classic se le cuela una intervención grabada de otro día, cortes de cuajo en Energy y FDF 1:00:00 Notas de prensa de mierda feat. La edad de oro del periodismo: La Sexta con las elecciones catalanas de Madrid (bensound.com - Funky suspense) 1:03:44 Carta de Radiochips: la política es un show más como Sálvame o el fútbol (De Pinnic - Canción de cuna para Samuel) 1:05:40 Despedida

Por fin no es lunes
Cristina Pedroche antes del estreno de Love Island

Por fin no es lunes

Play Episode Listen Later Apr 11, 2021 6:48


Cristina Pedroche joined Por Fin no es Lunes to talk about Love Island, tonight's eagerly awaited premiere at 21:00 on Neox.

RforReview
Vol.06x2 - Un viaje por la nostalgia

RforReview

Play Episode Listen Later Mar 21, 2021 85:27


In today's podcast we talk about our anecdotes at the movies and the films and series that have left a mark on us, and we go back to our childhood to remember the films and shows that have been with us since we were kids (Club Super 3, Disney Channel, Nickelodeon, Neox...).

PipeBomb Podcast
Episodio 1x19: Sebastián Martínez

PipeBomb Podcast

Play Episode Listen Later Dec 5, 2020 96:13


In the new episode of PipeBomb Podcast, we talk with Sebastián Martínez (@sebasdrop), founder of Solowrestling.com and former WWE commentator on Neox and GOL. We discuss his early years with the SoloWrestling project, as well as his time on television and the current state of Spanish wrestling and of wrestling journalism in Spain. We also discuss Sting's debut in AEW, run through the NXT TakeOver: WarGames card, and pay tribute to the life and career of the late Pat Patterson.

Popap
Popap, de 13 a 14 h - 04/12/2020

Popap

Play Episode Listen Later Dec 4, 2020 46:03


Al "Popap", Mariola Dinar

Audiovisualeros
Audiovisualeros 4x02 (Parte 1) - Mom

Audiovisualeros

Play Episode Listen Later Nov 22, 2020 32:52


In this episode Dekard tells us about Mom, the series from Chuck Lorre (Big Bang Theory, Two and a Half Men), a sitcom "of the Neox kind" but with a counterpoint of dark humour. Don't forget to follow us on our networks: Twitter - @podcast_av Instagram - @audiovisualeros Youtube - Audivisualeros Podcast Spotify - Audiovisualeros Personal Twitter accounts: Cuouz: @TarsInTime Sergio: @sergitomaracas

Los Mediatizados: emisiones regulares
Mediatizados 237 - Llega Pluto TV; Kantar y trocear, todo es trampear; Movistar renueva la F1

Los Mediatizados: emisiones regulares

Play Episode Listen Later Oct 15, 2020 59:45


Los Mediatizados nº237 (15/10/2020) 0:00 Saludo y noticias (sinto: bensound.com - Funky element) 4:51 Llega Pluto TV a España (sinto: Italian Dub Community - Television) 14:45 Agenda de Neeo (sinto: bensound.com - Jazz comedy) 19:07 Kantar y trocear, todo es trampear: las audiencias y los programas 'previo', 'post', etc. 30:24 'Movistar Kirby': aspira todos los derechos deportivos, ahora la Fórmula 1 hasta 2023 (sinto: Italian Dub Community - Television) 38:48 Agenda deportiva (sinto: bensound.com - Extreme action sport) 42:14 El Medioinformativo (sinto: Tracky Birthday - Balla) - AUDIOS: denuncia en Madrid Directo que no puede 'espatarrarse para lavarse el chichi', anuncio de Canal Almería 'Taberna flamenca Er Chi-Chi' - 44:53 La semana pasada creíamos que no tendríamos contenidos para esta semana, y lo que pasó a continuación os sorprenderá - 46:02 DAZN presenta su nuevo documental - 48:00 HEXCLUSIBA: Tras Canal Orange, Vodafone también lanzará un canal propio - 49:20 Nuevos horarios de Masterchef (otra vez) 52:15 Audiencias de mierda: Neox con la película 'Batman la Lego película' y 'Top Gamers Academy' (sinto: bensound.com - Psychedelic) 54:45 Audiencias de mierda con Radiochips: La Pr1mera Pregunta 58:23 Despedida (sinto: De Pinnic - Canción de cuna para Samuel)

Darrer vol a Formentera
Entrevista Top Gamers Academy - Darrer vol a Formentera IB3 Ràdio

Darrer vol a Formentera

Play Episode Listen Later Jul 18, 2020 12:56


We interview Núria Fonollà, head of programmes at Gestmusic and director of the show 'Top Gamers Academy', which will premiere on Neox this autumn.

Darrer vol a Formentera
Darrer vol a Formentera 18/07/20

Darrer vol a Formentera

Play Episode Listen Later Jul 17, 2020 57:59


Programme for Saturday 18 July. Geek night on Darrer vol with the latest news from the world of comics, film and TV series. We interview Núria Fonollà, head of programmes at Gestmusic and director of the show 'Top Gamers Academy', which will premiere on Neox this autumn. On 'Cine de mitjanit' we talk about the release of '¡Scooby!', the latest Scooby-Doo film, and run through the character's best films. Maitane Páez returns with the 'Otaku Nights' section to talk about K-Pop and the group Dreamcatcher. And we finish with the 'Ready Player 3' section, where Pablo Morganti and Sergi Torres tell us about the latest news and releases from the world of video games.

El Otro Día
El Otro Día x31 | Disculpa Bandersnatch - con Xavi Daura y Jairo Huemes

El Otro Día

Play Episode Listen Later Apr 8, 2020 81:48


Hoy conectamos con Xavi Daura, de Venga Monjas, que nos cuenta su daily routine que comienza con 100 flexiones y termina en el mundo neblinosos de Neox. También hablamos un ratito con Jairo, tu cómico y tete de referencia en Valencia que nos cuenta sus peleas con ancianos en el Consum por llevarse los mejores aguacates. Además, comentamos el nuevo especial de Louis C.K., Urko Vázquez te hace volar con su sección, damos consejos para la vida y ganar discusiones y bla, bla, bla. ¿Qué pasa? ¿Que no te enteras de nada? Eso es porque estás en 'El otro día', el otro programa de comedia. Presentado por Galder Varas.

Los Mediatizados: emisiones regulares
Mediatizados 214 - En cuarentena por Coronavirus

Los Mediatizados: emisiones regulares

Play Episode Listen Later Mar 5, 2020 60:00


Los Mediatizados 214 - 5/3/2020 - 0' Saludo y noticias (sinto: bensound.com - Funky element) - 6' Tertulia: tratamiento informativo del Coronavirus y su influencia en eventos deportivos - 14' Agenda Deportiva (sinto: Kevin Mac Leod - Presenterator) - 19' Audiencias de febrero: subida del pago, Cuatro se acerca a la Sexta, y Neox en crisis - 26' Agenda de Neeo.es (sinto: bensound.com - Ukulele) - 32' Sonido Histórico: 30 años de Telecinco (sinto: Imperial Tiger Orchestra - Che Belew) - 39' La edad de oro del periodismo (sinto: bensound.com - Funky suspense) - 49' Si van en chándal, ES DEPORTE -Antena 3 Deportes- (sinto: bensound.com - Extreme action) - 55' Audiencias de mierda de caballo (yasta aquí la feria edition) - 58' Carta de Radiochips y despedida (sinto: De Pinnic - Canción de cuna para Samuel)

Broderskab & Friends Radio
67 - Ado Woodz [Ghetto & Tech House]

Broderskab & Friends Radio

Play Episode Listen Later Feb 12, 2020 59:23


© Skapes, Kyle Watson, Shift K3Y, Bor & Mar, LALZIN, Westend, Jaxx da Fishworks, Kyle Walker, Mouth Water, Pando G, ONE&TWO, Soxx, Funky Codes, Example, Jay Robinson, NuKid, Choomba, Botnek, MNNR, GIANT, Tzafu, D:Tune, Shdws, Phil Gonzo, AC Slater, NuBass, Bassboy, THRILL, Salkin, Yyvng, Neox, Sammy Legs, Incognet, Contaktz, Taim, LØ, Axel Boy

Los Mediatizados: emisiones regulares
Mediatizados 209 - El futuro de las series en abierto y del deporte en pago

Los Mediatizados: emisiones regulares

Play Episode Listen Later Jan 30, 2020 59:18


Los Mediatizados 209 - 30/1/2020 - 0' Sumario y noticias (sinto: bensound.com - Funky element) - 7' El futuro de las series en abierto, ahora que las cadenas las estrenan antes en pago - 15' Agenda de Neeo.es (sinto: bensound.com - Ukulele) - 19' Disney+ y Movistar cerca del acuerdo para paquetizar la nueva plataforma dentro de los Fusión - 26' Especulaciones sobre los derechos de emisión de la NBA y la Fórmula 1 - 32' Agenda deportiva (sinto: Kevin Mac Leod - Presenterator) - 35' El criterio de TVE para pasar (o no) las finales deportivas de Tdp a La 1 - 41' El Medioinformativo (sinto: Tracky Birthday - Balla) - 50' Audiencias de mierda: Pokemon en Neox, los resúmenes de OT en Clan, y varias de Be Mad - 57' Despedida (sinto: De Pinnic - Canción de cuna para Samuel)

WE DO COKE
WE DO COKE #13 [VOWED]

WE DO COKE

Play Episode Listen Later Jan 3, 2020 57:21


WE DO COKE #13 [VOWED GUEST MIX] 1. NEOX & Kaux - Labyrinth 2. KEELD - Down & House 3. Soxx - Lapdance 4. Dr. Meaker - You & I ft. Lorna King (Hot Goods Remix) 5. LALZIN - ID 6. FineRefined - Dinner Time 7. VOWED - What Is 8. Junk That - ID 9. LALZIN & Lodgerz - ID 10. Neox - ID 11. Kaux - ID 12. BADMOOD & Green Ketchup - Telephone (ID & ID Remix) 13. VOWED - ID 14. VOWED & KnightBlock - ID 15. CastNowski & VOWED - ID 16. Mozar - ID 17. Kyle Walker - Panic 18. PAX - Boom 19. VOWED - ID 20. VOWED & Benwah - Slavic Shake (CastNowski Edit) 21. VOWED - ID 22. VOWED - Lazerbeamz 23. Seelo & Thomas Anthony - I Want To Know 24. Nostalgix - Mind Your Biz ft. SophieGrophy 25. Gorgon City - Lick Shot 26. Ekonovah - Out Of Range 27. Steji x Pomboo - Turn Off The Bass 28. DOGMA - Love & Money 29. VOWED - Meet Me (ft. Owen Danoff)

WE DO COKE
WE DO COKE #10 [NEOX]

WE DO COKE

Play Episode Listen Later Nov 16, 2019 59:30


WE DO COKE #10 [NEOX GUEST MIX] 00:00 Relique x DAAV - ID 03:00 NEOX - ID 04:15 Kaux - ID 07:34 Nostalgix & DANNY TIME - Locked & Loaded 10:06 one day one coke, Phil Gonzo - bad molly 13:25 Rique & Jc Ordonez - 305 16:42 NEOX - ID 19:45 GODAMN & THYKIER - Amped 21:31 Cloverdale - Rio Bravo 24:50 Frents - ID 27:38 Banza - ID 30:55 NEOX - ID 33:57 PEACE MAKER! & Anomon - Rime to This 37:04 13th Zodiac - Blaka 40:06 Das Kapital & MNNR - XTC 43:32 Kaux & NEOX - Jackyn G 46:43 LALZIN - ID 50:31 Kage - Mind 53:32 Keeld - Gilles 56:07 ID - ID

No Soy Freak Podcast
E4T2 - Nacho Requena y Miguel Campos Galán

No Soy Freak Podcast

Play Episode Listen Later Aug 2, 2019 102:03


Episode 4, Season 2: With Nacho Requena and Miguel Campos Galán. [04:30] Detrás del arte: Nacho Requena (@nachoMol) sits down with us to talk about his career in journalism, freelancing for various outlets including Meristation, and his latest venture: publishing the print magazine Manual (@revistamanual). [1:14:00] Cultura Camina Conmigo: To close, Miguel Campos Galán (@mcamposgalan), writer on La Resistencia (Movistar), contributor to Problemas del Primer Mundo (Flooxer, Neox and YouTube) and to the podcast Comedia Perpetua (SER and YouTube), shares his cultural preferences so you have more options when you feel like consuming culture. If you enjoyed the show, don't forget to hit 'like' and/or share it so it reaches more ears! And if you want to find out more about No Soy Freak, follow me at @nosoyfreak. See you in two weeks.

ERA Magazine
#432 alison DARWIN, guitarras indie rock

ERA Magazine

Play Episode Listen Later May 13, 2019 26:15


Welcome to ERA Magazine, the podcast of independent Spanish music. In today's episode we get to know the indie rock coming out of Barcelona from alison DARWIN. Good morning; before anything else, let me talk a little about ERA Magazine and how this podcast is funded. We have no company behind us and no sponsor. We do it because we love the independent music of our country: its bands, its labels, its festivals, its concert venues... And how do we intend to keep going? Thanks to you, the listeners of ERA Magazine. Visit eramagazine.fm/mecenas, click the blue button that says Apoyar, and from just 1.49 euros a month you help us keep discovering very interesting acts. Become a patron of ERA Magazine and take part in this podcast network, which is gradually adding many more shows. Nineties sounds: Laura, Aleix and Josep are alison DARWIN, a band that has just presented its first EP, titled Find Your Freedom: five indie rock songs, nineties sounds with a powerful female voice. One of their songs has already been played on the Neox channel and Antena 3 Internacional, and last November they won the IMB Artist Talent award. Fresh, direct songs that we explore in depth today. «Disconnected From Reality». «Hero». «Find Your Freedom». «Fear». With this song we say goodbye for today. And remember: if you want to support this podcast and keep enjoying the music of many more bands, visit eramagazine.fm/mecenas, click the Apoyar button, and from 1.49 euros a month you help us keep discovering emerging acts. Become a patron of ERA Magazine. Because remember: people love indie music, they just don't know it yet. alison DARWIN, Find Your Freedom (self-released, 2019). Facebook | Twitter | YouTube | Soundcloud | Instagram

Broderskab & Friends Radio
25 - Daav [Bass & Ghetto House]

Broderskab & Friends Radio

Play Episode Listen Later Apr 18, 2019 60:31


© Ibranovski, ToMix, JC Ordonez, Cashew, Fulbset, Peace Maker!, Sqwad, Steji, Pomboo, Jay Dunham, JDVL, CastNowski, Qlank, Blak Trash, Deed, Kage, Keeld, MNNR, Lozz, Zahia, Junk That, Murdr, NVLA, JOYRYDE, NEOX, Bright Lights, Habstrakt, KnightBlock, Gotlucky, Relique, Tremille, GODAMN, FVCKDIVMONDS, REHK

Los Mediatizados Xtra
Mediatizados Xtra - Sonido histórico - Héctor del Mar, "el hombre del gol"

Los Mediatizados Xtra

Play Episode Listen Later Apr 15, 2019 4:11


Héctor del Mar, a benchmark in sports commentary, died on April 8, 2019 at the age of 76 from a heart attack, sources close to him told the EFE news agency. The legendary broadcaster, born in Buenos Aires, was one of the great figures of sports journalism in Spain in the 1980s. He also left his mark in Spain with his Telecinco commentary on the famous 'Pressing Catch' wrestling broadcasts in the 90s, a sport he remained tied to years later with his commentary on 'WWE Raw' and 'WWE Smackdown' for the Cuatro, Marca TV and Neox channels. CREATIVE COMMONS THEME: Imperial Tiger Orchestra - Che Belew.

El Chiringuito de Jugones
El Chiringuito de Jugones (28/03/2019) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later Mar 29, 2019 148:59


'El Chiringuito de Jugones' is a television program created after Josep PEDREROL left the Intereconomía network and his previous late-night show 'Punto Pelota'. It launched on January 6, 2014 on Nitro, an Atresmedia channel, airing Sunday to Thursday from 00:00 to 02:45; when that channel shut down, the program moved to laSexta, then to Neox, and since August 10, 2015 it has aired in the same slot on the new MEGA channel. The program reports on and debates, above all, topics related to Spanish football clubs, especially the Madrid-Barça pairing. The host leads the show and the debate, accompanied by six panelists who rotate over the course of the week. Its regular contributors include names such as: Tomás RONCERO, José Damián GONZÁLEZ, Iñaki CANO, Jose Antonio LUQUE, Alfredo DURO, Jorge D'ALESSANDRO, Lobo CARRASCO, Frédéric HERMEL, Eduardo INDA, Hugo GATTI, Jose Antonio Martín Otín 'PETÓN', Paco GARCÍA CARIDAD, Carme BARCELÓ, Quim DOMÈNECH, Cristina CUBERO, Pipi ESTRADA, Rafa GUERRERO, Roberto MORALES, Paco BUYO, Álvaro BENITO, Jose Mª Gutierrez 'GUTI', José Félix DÍAZ, José Luis SÁNCHEZ, Manu SAINZ, Oscar PEREIRO, Rafa ALMANSA, Cristóbal SORIA...

Travelling
Els ninots de Lego, la reina d'Escòcia...

Travelling

Play Episode Listen Later Feb 8, 2019 5:57


A week packed with premieres, headlined by 'La Lego pel·lícula' (The Lego Movie)...

El Chiringuito de Jugones
El Chiringuito de Jugones (16/07/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later Jul 17, 2018 143:32



El Chiringuito de Jugones
El Chiringuito de Jugones (17/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 18, 2018 156:20



El Chiringuito de Jugones
El Chiringuito de Jugones (16/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 17, 2018 154:23


Europa League final: Olympique de Marseille 0 - 3 Atlético de Madrid

El Chiringuito de Jugones
El Chiringuito de Jugones (15/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 16, 2018 153:31



El Chiringuito de Jugones
El Chiringuito de Jugones (14/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 15, 2018 156:40



El Chiringuito de Jugones
El Chiringuito de Jugones (13/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 12, 2018 141:32



El Chiringuito de Jugones
El Chiringuito de Jugones (10/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 11, 2018 155:35



El Chiringuito de Jugones
El Chiringuito de Jugones (09/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 10, 2018 158:45



El Chiringuito de Jugones
El Chiringuito de Jugones (08/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 9, 2018 149:04



El Chiringuito de Jugones
El Chiringuito de Jugones (07/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 8, 2018 148:19



El Chiringuito de Jugones
El Chiringuito de Jugones (06/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 6, 2018 156:47



El Chiringuito de Jugones
El Chiringuito de Jugones (03/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 4, 2018 143:00



El Chiringuito de Jugones
El Chiringuito de Jugones (02/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 3, 2018 165:29



El Chiringuito de Jugones
El Chiringuito de Jugones (01/05/2018) en MEGA

El Chiringuito de Jugones

Play Episode Listen Later May 2, 2018 166:29



InterPodcast
Neox Especial 7-7: Seres Especiales! / Por EngelCast Alive! Imita a neox.fm

InterPodcast

Play Episode Listen Later Apr 23, 2018 7:00


COMPILED PODCAST, ORIGINALLY BROADCAST BY: EngelCast Alive! AT: http://mx.ivoox.com/es/interpodcast-neox-especial-7-7-seres-especiales-audios-mp3_rf_25347620_1.html Donald Trump wants to start a war, Elon Musk wants more humans and fewer machines on his assembly lines, giraffes are going extinct, Apple will no longer tolerate leaks, and Uber tells us a new joke. SUBSCRIBE ON ITUNES AND LEAVE 5 STARS - https://itunes.apple.com/mx/podcast/engelcast-alive/id341007951?mt=2 Follow us on Twitter at @EngelCast. Follow us on Facebook at http://facebook.com/EngelCastOficial Listen to EngelCast Alive! at: http://mx.ivoox.com/es/podcast-engelcast-alive_sq_f131646_1.html Listen to neox.fm at: https://neox.fm/

InterPodcast
Neox Especial 6-7: Sobrepoblemos el Planeta! / Por EngelCast Alive! Imita a neox.fm

InterPodcast

Play Episode Listen Later Apr 23, 2018 5:49


COMPILED PODCAST, ORIGINALLY BROADCAST BY: EngelCast Alive! AT: http://mx.ivoox.com/es/interpodcast-neox-especial-6-7-sobrepoblemos-planeta-audios-mp3_rf_25322852_1.html In Dubai they are making smart license plates, Emirates Airlines lets you buy tickets in a VR environment, a new method of human reproduction is being developed, iOS 11.3 turns your iPhone 8 into a paperweight, and Uber's new invention for renting cars arrives. SUBSCRIBE ON ITUNES AND LEAVE 5 STARS - https://itunes.apple.com/mx/podcast/engelcast-alive/id341007951?mt=2 Follow us on Twitter at @EngelCast. Follow us on Facebook at http://facebook.com/EngelCastOficial Listen to EngelCast Alive! at: http://mx.ivoox.com/es/podcast-engelcast-alive_sq_f131646_1.html Listen to neox.fm at: https://neox.fm/

InterPodcast
Neox Especial 5-7: Conectando con la Luna! / Por EngelCast Alive! Imita a neox.fm

InterPodcast

Play Episode Listen Later Apr 13, 2018 6:09


COMPILED PODCAST, ORIGINALLY BROADCAST BY: EngelCast Alive! AT: http://mx.ivoox.com/es/interpodcast-neox-especial-5-7-conectando-luna-audios-mp3_rf_25303065_1.html The US government didn't know it was already being spied on; Sophi, the marine drone you were dreaming of; a 4G network on the Moon; and Spotify is going to change its free tier. SUBSCRIBE ON ITUNES AND LEAVE 5 STARS - https://itunes.apple.com/mx/podcast/engelcast-alive/id341007951?mt=2 Follow us on Twitter at @EngelCast. Follow us on Facebook at http://facebook.com/EngelCastOficial Listen to EngelCast Alive! at: http://mx.ivoox.com/es/podcast-engelcast-alive_sq_f131646_1.html Listen to neox.fm at: https://neox.fm/

InterPodcast
Neox Especial 4-7: Qué pasó señor Facebook? / Por EngelCast Alive! Imita a neox.fm

InterPodcast

Play Episode Listen Later Apr 13, 2018 6:32


COMPILED PODCAST, ORIGINALLY BROADCAST BY: EngelCast Alive! AT: http://mx.ivoox.com/es/interpodcast-neox-especial-4-7-que-paso-senor-facebook-audios-mp3_rf_25268878_1.html Zuckerberg apologizes for not knowing what his monster eats, electricity can now be harvested from rain, an app to detect dyslexia is being developed, some idiots with nothing better to do get the 'Despacito' video removed from YouTube, and PayPal wants to be a real bank. SUBSCRIBE ON ITUNES AND LEAVE 5 STARS - https://itunes.apple.com/mx/podcast/engelcast-alive/id341007951?mt=2 Follow us on Twitter at @EngelCast. Follow us on Facebook at http://facebook.com/EngelCastOficial Listen to EngelCast Alive! at: http://mx.ivoox.com/es/podcast-engelcast-alive_sq_f131646_1.html Listen to neox.fm at: https://neox.fm/

InterPodcast
Neox Especial 3-7: China Conquistará el Mundo! / Por EngelCast Alive! Imita a neox.fm

InterPodcast

Play Episode Listen Later Apr 11, 2018 6:04


COMPILED PODCAST, ORIGINALLY BROADCAST BY: EngelCast Alive! AT: http://mx.ivoox.com/es/interpodcast-neox-especial-3-7-china-conquistara-mundo-audios-mp3_rf_25197236_1.html Valve kills off the Steam Machines, Sense Time gets rich with its Viper system, China will control the rain, and Google is sued by a community of parents. SUBSCRIBE ON ITUNES AND LEAVE 5 STARS - https://itunes.apple.com/mx/podcast/engelcast-alive/id341007951?mt=2 Follow us on Twitter at @EngelCast. Follow us on Facebook at http://facebook.com/EngelCastOficial Listen to EngelCast Alive! at: http://mx.ivoox.com/es/podcast-engelcast-alive_sq_f131646_1.html Listen to neox.fm at: https://neox.fm/

InterPodcast
Neox Especial 2-7: Agua y Anime para Vivir por Siempre! / Por EngelCast Alive! imitando a: neox.fm

InterPodcast

Play Episode Listen Later Apr 10, 2018 6:40


COMPILED PODCAST, ORIGINALLY BROADCAST BY: EngelCast Alive! AT: http://mx.ivoox.com/es/interpodcast-neox-especial-2-7-agua-anime-para-audios-mp3_rf_25168843_1.html CALICO in search of the holy grail, Google versus Skynet, the drinking straw of tomorrow, cryptocurrency for drunks, and get to know the Konnichiwa Festival! SUBSCRIBE ON ITUNES AND LEAVE 5 STARS - https://itunes.apple.com/mx/podcast/engelcast-alive/id341007951?mt=2 Follow us on Twitter at @EngelCast. Follow us on Facebook at http://facebook.com/EngelCastOficial Listen to EngelCast Alive! at: http://mx.ivoox.com/es/podcast-engelcast-alive_sq_f131646_1.html Listen to neox.fm at: https://neox.fm/

InterPodcast
Neox Especial 1-7: Cuando el fraude y los robots vengan por nosotros! / Por: EngelCast Alive! imita: neox.fm

InterPodcast

Play Episode Listen Later Apr 6, 2018 7:23


COMPILED PODCAST, ORIGINALLY BROADCAST BY: EngelCast Alive! AT: http://mx.ivoox.com/es/interpodcast-neox-especial-1-7-cuando-fraude-y-audios-mp3_rf_25150736_1.html Cambridge Analytica may be operating in Mexico and nobody cares, a robot from the K-pop capital is going to destroy us, who was behind the attack on YouTube's offices and what were they trying to achieve?, and Spotify condemned to ruin. SUBSCRIBE ON ITUNES AND LEAVE 5 STARS - https://itunes.apple.com/mx/podcast/engelcast-alive/id341007951?mt=2 Follow us on Twitter at @EngelCast. Follow us on Facebook at http://facebook.com/EngelCastOficial Listen to EngelCast Alive! at: http://mx.ivoox.com/es/podcast-engelcast-alive_sq_f131646_1.html Listen to neox.fm at: https://neox.fm/

Via Podcast
VP086 Cómo Sanshiro Cabañas decidió el tono de su podcast

Via Podcast

Play Episode Listen Later Mar 26, 2018 42:00


When we start a podcast we often define the subject matter based on our passion or goal, and then the audience or the person who will listen to us. How do we define the tone of our podcast? What is the value, and the challenge, of being authentic? Sanshiro Cabañas, a Mexican podcaster and entrepreneur, tells us his story. Sanshiro began with an online radio station, with a studio and staff, supported entirely by AdSense ads. Today he leads 'Aloha', a digital agency, and Hittco, which helps entrepreneurs develop their businesses on technological and scientific foundations. He also produces the Neox.FM podcast, where he sums up tech news and adds humorous commentary with strong language. Learn how to take your podcast to the next level with the podcasting tips we offer and the experiences of other podcasters. In this episode you will learn: How the digital marketing agency Aloha got started. Why they founded Hittco, a non-profit organization aimed at helping entrepreneurs. What abandoned dogs and alcoholics have to do with a tech event. How another podcast inspired him to return to podcasting. How he decided on the tone of his podcast by trying out three models. His experience publishing a podcast on Medium and emphasizing subscription to the podcast across multiple channels. Why having a good microphone and being yourself is key to starting a good podcast. Follow Sanshiro Cabañas: Hittco (https://twitter.com/neoxfm) Stay informed about the changing world of podcasting. Get a daily email newsletter with resources and information that will help you make a better podcast. On Mondays you'll receive podcasting tips and interviews with podcasters who share their experiences; you'll learn what works and how to take your podcast to a new level. From Tuesday to Friday you'll find out about trends, new resources, and useful tools to adjust the strategy or production of your podcast. Don't miss a thing. Subscribe here! If you liked this episode: Subscribe via Ivoox (http://viapodcast.fm/category/podcast/rss) to receive it the moment we publish it. Follow Vía Podcast on social media. | Twitter (https://www.facebook.com/viapodcast) Join the "Preguntas sobre Podcasting" group, where we podcasters answer your questions and discuss new trends.

Unión Podcastera
Sanshiro y su perra Frida

Unión Podcastera

Play Episode Listen Later Oct 9, 2017 46:42


Sanshiro runs a marketing company, @alohacreativos, and is part of an NGO called @hittcomx. He also owns the podcast @neoxfm, where he comments on the latest tech headlines and adds his opinions and doses of humor, which ends up being a very entertaining product. He lives in Yucatán and has more than one dog, and they too leave their audio testimony. He recalls being a podcast listener since Olayo Rubio's very first episodes, and also following the Podfather himself, Adam Curry. He tells us about the metamorphosis and reinvention of Neox.FM, which started out tied to rock music on public radio and later turned into tech news. Thanks for listening. You can now follow us on @Spotify!! Leave us a few stars on Apple Podcasts. All the info and much more at unionpodcastera.com

Plan 42 Podcast
[P42 - 130] The goldbergs y remembranzas de la infancia.

Plan 42 Podcast

Play Episode Listen Later Oct 23, 2016 137:45


This week we got together to talk about a series we discovered this summer and that has hooked us: The Goldbergs, which you can catch at almost any hour on Neox and which follows Adam Goldberg and his family. Set in the 80s, it brought back plenty of memories of our own childhood; we hope listening to this podcast does the same for you. And, of course, the geekiest news from this side of Jenkintown.

Twisted's Darkside Podcast
Twisted's Darkside Podcast 256 - NEOX

Twisted's Darkside Podcast

Play Episode Listen Later Jun 8, 2016 52:23


NEOX Twisted's Darkside Podcast 256 Enzyme Records Country: Spain 01. The Melodyst - Iron Planet 02. NeoX - ID 03. TommyKnocker - Nobody Stopping This 04. GTA - Red Lip`s (NeoX Remix) 05. NeoX - Extinction 06. Miss K8 - Magnet 07. Broken Minds - Apocalyptic (NeoX Remix) 08. Yellow Claw ft. Beenie Man - Bun It Up 09. The Melodyst - Kill Mode 10. Angerfist - Choices (NeoX Refix) 11. NeoX - Insanity 12. NeoX - UnPlug 13. Angerfist - Circus Circus 14. NeoX - Lets Do This 15. Ophidian ft. Ej Grob and William F Devault - Nightfall Angel 16. Miss K8 & Radical Redemption ft Mc Noiz - Scream 17. Kasparov & Amada ft. Diesel - Ik Wil Stampen ~ARTIST LINKZ~ https://www.facebook.com/NeoX.Hardcore/ ~OTHER TWISTED SITEZ~ www.twisted.fm www.impactscotland.co.uk www.infexious.tv www.motormouthrecordz.com www.twistedartists.com www.ibizahard.com

Spanish Hardcore Armada
SPANISH HC ARMADA EPISODE5: NeoX

Spanish Hardcore Armada

Play Episode Listen Later Jul 25, 2015 37:57


Spanish Hardcore Armada, presented by The Empire, is a radio show dedicated to preserving the real meaning of Hardcore through diversity, with a special focus on the Spanish producers you can find on the different Hardcore labels around the world. REAL HARDCORE ONLY.

Frecuencia Digital Radio
FD Radio: Programa especial sobre el cierre de nueve canales de TDT

Frecuencia Digital Radio

Play Episode Listen Later May 5, 2014 89:38


A special Frecuencia Digital program on RFC Radio devoted to the switch-off of nine DTT channels following a Supreme Court ruling. On the program we interviewed José Antonio Antón, head of Atresmedia Televisión's DTT channels, the group hit hardest by the closures. The panel also went through all the technical details of the switch-off, as well as the reshuffling of content onto Neox, Nova and Divinity. Songs on the program: - Soulbox - Tomorrow - Italian Dub Community - Television (Dub edit) - Teleidofusion - Summer Mood - Kevin MacLeod - Presenterator - Broke For Free - Night Owl - Imperial Tiger Orchestra - Che Belew - De Pinnic - Canción de Cuna para Samuel

GX Podcast
Wrestling: Vuelve la WWE y la afición del público español al Pressing Catch - GX Podcast (Cap. 9)

GX Podcast

Play Episode Listen Later Nov 12, 2013 26:47


Welcome to GX Podcast, a podcast where we talk about what's happening in the geek world. This week we speak with Carlos Gascó, who runs the KGB Wrestling website, promotes for the independent wrestling federation RCW, and is one of the people who provide content for the WWE page on Neox's website, about the return of the legendary WWE RAW and WWE Smackdown to free-to-air television in Spain. Remember to follow us every week in either video or audio format. --------------- Find us online: Store: http://gamexploitation.es/ayuda-a-gamexploitation/ Social networks: http://gamexploitation.es/redes-sociales-y-rss/ Web: http://www.gamexploitation.es This week's Twitter accounts: http://www.twitter.com/GameXploitation http://www.twitter.com/IsaacVianaT http://www.twitter.com/kgb1380 --------------- Series: GX Podcast Episode: 9 Season: 1 Music: There It Is and Funkorama (by Kevin MacLeod).

Detras de los micros
Detras de los micros 29.11.2012 con Ruben

Detras de los micros

Play Episode Listen Later Nov 29, 2012 63:39


In this program we talk about the cancellation of 'Alguien tenía que decirlo', the appearance of Jose Coronado and Antonio Resines on 'Aída', 'Pesadilla en la cocina', Neox signing Cecilia, creator of the Ecce Homo, for its New Year's Eve special, Marwan's concert at the La Buena Ventura venue, new material from My Chemical Romance, the false news of a New Found Glory split, Silverstein's new CD 'This Is How the Wind Shifts', The Black Keys' concert in Madrid, the Tribute Rock Festival, the 'Las paranoias de J Kool' section, the premiere of the film 'Invasor', and finally the usual madness with Rubén and his little guitar!! We hope you all enjoy it!! Remember we're on Twitter as @detrasmicros and on our blog www.detrasdelosmicros.blogspot.com!! See you next time!!

Dj XAVIREG OFFICIAL PODCAST
XAVIЯEG 10/12 Hardcore Podcast

Dj XAVIREG OFFICIAL PODCAST

Play Episode Listen Later Oct 31, 2012 51:56


Tracklist : 1) Roland & Sherman - "Somewhere Down The Lane" 2) System Overload - "Rise Of the God" 3) Placid K - "Lose it" 4) Trypsin & Dj Only - "Re-Activate" 5) Re-Style - "Give Ya House (T-Junction & Rudeboy Remix)" 6) Dj Mad Dog & Amnesys - "Game Over" 7) Amnesys - "Elevation" 8) Tensor & Re-Direction - "Fear (Embrionyc's Bound By Blood Remix)" 9) Mr. Sinister - "Heart Of Darkness" 10) Tha Playah - "Mastah Of Shock (Angerfist Remix)" 11) The Executer & Ofearia Vs Human Resource - "Lifebinder" 12) Javi Boss & Dj Juanma - "The Prophecy (T-Junction Remix)" 13) Tommyknocker - "T-2012" 14) Evil Activities - "It's Ok" 15) Dyprax Feat Mc Syco - "Culture Of Chaos" 16) Hellsystem - "Salvation" 17) Juanma - "Living changes" 18) Nosferatu Feat Evil Activities - "Sick Of It All" 19) NeoX - "Artz" 20) Dj Kristof & X-clusive - "Hardcore Universe Anthem" 21) Dyprax - "Dead Presidents" 22) Javi Boss - "Faka two" 23) Endymion - "To Claim The Future" 24) Anime - "A-Bomb" 25) Quitara - "Poisonous" 26) Rob Da Rhythm - "Chainsaw" Total time : 51:56

Dj XAVIREG OFFICIAL PODCAST
XAVIЯEG 09/12 Hardcore Podcast

Dj XAVIREG OFFICIAL PODCAST

Play Episode Listen Later Sep 26, 2012 51:10


Tracklist : 1) Angerfist - "The Depths Of Despair" 2) Nexes - "Playing The Cards" 3) Tommyknocker Feat. The Wishmaster - "Supernatural" 4) Brennan Heart & The Prophet - "Wake Up" 5) The Stunned Guys & Amnesys - "Symphony Of Sins" 6) Traxtorm Gangstaz Allied - "Hardcore Italia" 7) Decipher and Shinra - "Hard Attack" 8) Dyprax & Predator - "Blood Cycle" 9) Rayden - "Crucified" 10) System Shock Feat Mc Jeff - "The Adrenaline" 11) The Melodyst & NeoX - "The End Of The Road" 12) Chosen Few - "Name Of The DJ (Neophyte & Tha Playah Remix)" 13) Re-style Feat. Mercenary - "Infecting Subcultures" 14) Miss K8 - "No More Jokes" 15) Important Records Allstars feat. Mc Axys - "Moments Of Memories" 16) T-Junction & Prowler - "The Deal (Decipher & Shinra Remix)" 17) The Executer & Ofearia Vs Human Resource - "No surrender" 18) Wasted Mind - "Paradox 2.0" 19) D-Ceptor Feat Fallout – “Pain Bringer” 20) Master Of Noise - "Wreck It All" 21) Neophyte & The Viper - "Peace" 22) Masters Elite - "Tied By Sound" 23) Juanma - "Crawl" 24) Nitrogenetics - "Pledge Of Resistance (Angerfist remix)" 25) Re-style Feat. Mercenary - "We Go Back" 26) Dj Predator & Re-Style - "Broken Machine" Total Time : 51:09

Xabi Rain
Xabi Rain- Megamix dedicado a Otra Movida

Xabi Rain

Play Episode Listen Later Mar 18, 2012 10:39


A megamix dedicated to the Neox program 'Otra Movida'.

Teleadictos
Teleadictos 067 - Suits, Hill Street Blues y otra ración de series nuevas

Teleadictos

Play Episode Listen Later Oct 2, 2011


PODCAST CONTENTS: 1. News section: Nitro will air 'Justified' free-to-air starting September 29. [See source] Neox will bring the US version of 'Being Human' to Spain. [See source] Terry O'Quinn will stay on Hawaii 5-0 for more episodes than originally planned. [See source] FOX orders 13 episodes of 'Touch', the new series from the creator of Heroes starring Kiefer Sutherland. [See source] 'Boss', the new Kelsey Grammer series, has already been renewed by Starz for a second season. [See source] 2. What can I watch tonight?: Suits [See trailer] 3. Fresh out of the oven: Comments on 'Pan Am' [See trailer], 'Person of Interest' [See trailer], 'Terra Nova' [See trailer], 'Hart of Dixie' [See trailer] and 'Revenge' [See trailer]. 4. Negrillo's notebook: A ten-point guide to escaping from prison, 'Prison Break' style. 5. 1,2,3, Nebot once again: 'Hill Street Blues' ('Canción triste de Hill Street') [See trailer] 6. How to reach us and sign-off 7. Deleted scenes. Download mp3: if it doesn't download automatically, right-click on the link and select "save target/link as..."