Podcasts about GNI

  • 71 PODCASTS
  • 230 EPISODES
  • 22m AVG DURATION
  • 1 WEEKLY EPISODE
  • Apr 25, 2025 LATEST

POPULARITY

(Popularity trend chart, 2017–2024)


Best podcasts about GNI

Latest podcast episodes about GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
416 - Self-Confidence and the GNIs (Natural Intelligence Groups) - Closing Episode

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Apr 25, 2025 2:32


Series: Self-Confidence and the GNIs (Natural Intelligence Groups), 416 - Closing Episode, by José Fernando M. Araújo, 25.04.2025

Fossil vs Future
WHAT ABOUT WAR? Essential for security or a dangerous distraction from climate action?

Fossil vs Future

Play Episode Listen Later Apr 22, 2025 37:44


War engages our fight-or-flight instincts. When immediate threats like conflict arise, they often overshadow slower-burning, long-term crises like climate change. In this episode, James and Daisy talk about war. How does climate change fuel conflict? How does war, in turn, hinder efforts to combat the climate crisis? How do we avoid trading one existential threat for another?

SOME RECOMMENDATIONS:

  • Conflict and Environment Observatory – CEOBS was launched in 2018 with the primary goal of increasing awareness and understanding of the environmental and derived humanitarian consequences of conflicts and military activities.
  • The Military Emissions Gap – This site is dedicated to tracking, analysing and closing the military emissions gap, bringing together the data that governments report into one place.

OTHER ADVOCATES, FACTS, AND RESOURCES:

  • NATO (2023) – Remarks by NATO Secretary General Jens Stoltenberg at the UN Climate Change Conference (COP28) in Dubai.
  • ND-GAIN Country Index – Summarises a country's vulnerability to climate change and other global challenges in combination with its readiness to improve resilience.
  • United Nations – Today, of the 15 countries most vulnerable to climate change, 13 are struggling with violent conflicts.
  • Sir Christopher John Greenwood – After being called to the Bar by Middle Temple, he became a Fellow of Magdalene in 1978 and later Professor of International Law at the London School of Economics, specialising in international humanitarian law. He was appointed Queen's Counsel in 1999 and elected by the United Nations as a Judge of the International Court of Justice in 2008. That same year, Magdalene named him an Honorary Fellow.
  • The Third Man – A classic thriller written by Graham Greene and starring Orson Welles, in which a writer sets about investigating the death of a friend in post-World War II Vienna.
  • Stop Ecocide International – Ecocide law provides a route to justice for the worst harms inflicted upon the living world in times of both peace and conflict, whenever and wherever they are committed.
  • CND (Campaign for Nuclear Disarmament) – CND campaigns to rid the world of nuclear weapons - the most powerful and toxic weapons ever created, threatening all forms of life.
  • Stop the War Coalition – Stop the War was founded in September 2001 in the weeks following 9/11, when George W. Bush announced the "war on terror". It has since been dedicated to preventing and ending the wars in Afghanistan, Iraq, Libya and elsewhere.
  • UK Parliament (2024) – In the 2023/24 financial year, the UK spent £53.9 billion on defence.
  • UK Parliament (2025) – The Prime Minister has committed to spend 2.5% of the UK's gross domestic product (GDP) on defence by 2027.
  • UK Parliament (2025) – The Prime Minister said the government would "fully fund our increased investment in defence" by reducing aid spending from 0.5% of gross national income (GNI) to 0.3% in 2027.
  • Ministry of Defence (2024) – In 2022, total military expenditure of NATO members was $1,195bn and total worldwide military expenditure was $2,240bn, as estimated by SIPRI. The USA was the world's largest spender, accounting for 39% of total global spending.
  • The Week (2025) – Only 11% of people aged 18-27 say they would fight for the UK.
  • Reuters (2025) – Poland wants to spend 5% of gross domestic product (GDP) on defence in 2026. Poland now spends a higher proportion of GDP on defence than any other NATO member, including the United States. It plans for this year's spending to hit 4.7% of GDP.
  • Institute for Security Studies – The global military carbon footprint currently accounts for around 5.5% of global emissions – more than Africa's entire footprint.

Listen to "War" by Edwin Starr here! Thank you for listening! Please follow us on social media to join the conversation: LinkedIn | Instagram | TikTok. You can also now watch us on YouTube.

Music: "Just Because Some Bad Wind Blows" by Nick Nuttall, Reptiphon Records. Available at https://nicknuttallmusic.bandcamp.com/album/just-because-some-bad-wind-blows-3

Producer: Podshop Studios

Huge thanks to Siobhán Foster, a vital member of the team offering design advice, critical review and organisation that we depend upon. Stay tuned for more insightful discussions on navigating the transition away from fossil fuels to a sustainable future.

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
415 - Self-Confidence and the GNIs (Natural Intelligence Groups) - The Different GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Apr 11, 2025 3:30


Series: Self-Confidence and the GNIs (Natural Intelligence Groups), 415 - The Different GNI, by José Fernando M. Araújo, 11.04.2025

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
414 - Self-Confidence and the GNIs (Natural Intelligence Groups) - The Distant GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Apr 3, 2025 2:59


Series: Self-Confidence and the GNIs (Natural Intelligence Groups), 414 - The Distant GNI, by José Fernando M. Araújo, 03.04.2025

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
413 - Self-Confidence and the GNIs (Natural Intelligence Groups) - The Rational Neutral GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Mar 21, 2025 2:33


Series: Self-Confidence and the GNIs (Natural Intelligence Groups), 413 - The Rational Neutral GNI, by José Fernando M. Araújo, 21.03.2025

Irish Tech News Audio Articles
AI Expected to Add €250bn to Ireland's Economy by 2035, according to a report by Microsoft and Trinity College Dublin

Irish Tech News Audio Articles

Play Episode Listen Later Mar 13, 2025 12:49


AI adoption in Ireland has surged to 91%, nearly doubling from 49% in 2024, a significant leap that now puts Ireland ahead of many of its EU counterparts after previously trailing behind. Drawing on insights from 300 senior leaders across the island of Ireland, the AI Economy in Ireland 2025 report, produced by Trinity College Dublin in collaboration with Microsoft Ireland, provides a comprehensive view of the current rate of AI adoption. The report reveals AI's potential to contribute at least €250 billion to Ireland's GDP by 2035. However, this could increase by a further €60bn depending on how businesses, government, and industry leaders harness AI's capabilities and implement policies that foster responsible innovation. The report provides an index comparing adoption rates against the 2024 findings, highlighting the acceleration of AI integration, along with insights into the evolving opportunities and challenges for AI advancement in Ireland.

The report highlights AI's projected economic contribution to Ireland:

  • AI adoption is projected to add at least €250 billion to Ireland's economy (GDP) by 2035.
  • Supportive AI policies and an enabling business environment could add an additional €60 billion by 2035.
  • AI adoption is expected to increase Ireland's Gross National Income (GNI) by at least €130 billion by 2035. With the right policies and widespread AI adoption, GNI could be up to €86 billion higher than in a baseline scenario.
  • On a per capita basis, Ireland's GNI could rise to €160,000 per person with optimal AI adoption and policies - €30,000 higher than a non-AI baseline scenario.

"Increasingly recognised as a general-purpose technology, similar to electricity and the internet, AI is becoming a fundamental driver of economic growth, and this new report highlights its transformational impact on Ireland," said Catherine Doyle, General Manager, Microsoft Ireland. "Ireland is uniquely positioned to capitalise on AI's capabilities, thanks to its thriving tech ecosystem, skilled workforce, and forward-thinking government initiatives. With a collaborative approach across government, academia, and industry, Ireland can play a leading role in the era of AI, driving sustainable economic growth across sectors and setting the stage for global competitiveness as AI adoption continues to surge."

AI Adoption & Governance Challenges

While the potential for AI to drive economic growth is clear, organisations still face significant challenges in adopting AI effectively. Despite growing recognition of AI's value, with 50% of organisations (an 18% increase on 2024) believing AI will enhance productivity, only 8% of organisations have adopted an AI-first approach - integrating AI across all divisions. A key issue appears to be the lack of formal strategy and governance frameworks, creating gaps in secure and responsible AI implementation. Despite the vision of a thriving, competitive AI ecosystem, about half of organisations do not yet have clear AI policies, hindering their ability to manage AI usage effectively. This challenge is compounded by the persistence of a "Shadow AI Culture," where employees independently adopt AI tools without the organisation's oversight. Key findings that underscore this challenge include:

  • 80% of organisations report employees using free AI tools without built-in enterprise security controls (45% in 2024), while enterprise-grade AI tool usage more than doubled (18% in 2024 to 42% in 2025).
  • 61% of managers acknowledge AI usage even in workplaces where it is officially restricted (30% in 2024).

AI Adoption Across the Island of Ireland

The report also reveals significant differences in AI adoption across the island of Ireland, particularly in the public sector. These differences highlight both the challenges and opportunities that each region faces as they work toward more integrated AI systems. In Northern Ireland, 24% of public sector organisations use AI in all or most data-driven decision-making, compared to just 13% ...
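The headline per-person figure is consistent with the aggregate numbers if the €130bn baseline uplift and the €86bn policy upside are combined and spread over roughly the island of Ireland's population. A quick check; the 7.2 million population base is an assumption for illustration, not a figure from the article:

```python
# Sanity check of the per-capita uplift implied by the article's GNI figures.
# The population base is an assumption (roughly the island of Ireland).
gni_uplift_eur = 130e9 + 86e9  # baseline AI uplift plus policy upside, from the article
population = 7.2e6             # assumed population base
print(f"per-person GNI uplift: EUR {gni_uplift_eur / population:,.0f}")  # ~EUR 30,000
```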

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
412 - Self-Confidence and the GNIs (Natural Intelligence Groups) - The Optimistic GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Mar 13, 2025 3:24


Series: Self-Confidence and the GNIs (Natural Intelligence Groups), 412 - The Optimistic GNI, by José Fernando M. Araújo, 13.03.2025

36氪·8点1氪
Original-quality images in WeChat chat history can be cleared down to standard quality | US court denies Musk's request for a preliminary injunction against OpenAI

36氪·8点1氪

Play Episode Listen Later Mar 6, 2025 2:51


According to Kuai Technology, both the iOS and Android versions of WeChat recently received hot updates. The page now shows a line of small print above the option: clearing original-quality images and videos can free up storage space, and after clearing, standard-quality versions of the images and videos remain visible in chats. Users' tests confirm that cleared content still appears in the chat history, but at reduced quality.

36Kr has learned that the National Development and Reform Commission announced that, in light of recent international oil price movements and under the current pricing mechanism for refined oil products, domestic gasoline and diesel prices (standard grade) would be cut by 135 yuan and 130 yuan per tonne respectively from 24:00 on March 5, 2025.

China Economic Net reports recent rumors that Mao Jingbo is about to step down as president of Lotus China and move to an overseas-markets role. A Lotus insider confirmed the matter: "Ms. Mao hasn't been in the office for several days."

According to Xinhua, the government work report states that in 2025 the per capita fiscal subsidy standards for residents' medical insurance and basic public health services will be raised by a further 30 yuan and 5 yuan respectively. It also calls for steadily advancing provincial-level pooling of basic medical insurance, improving the funding and benefit-adjustment mechanisms for basic medical insurance, deepening reform of medical insurance payment methods, promoting tiered diagnosis and treatment, fully establishing a traceability mechanism for drugs and medical consumables, and tightening supervision of medical insurance funds so that every cent goes toward improving people's health and wellbeing.

CCTV News reports that on March 4 local time, a US federal court denied the preliminary injunction Musk had requested, which sought to block ChatGPT maker OpenAI from converting into a for-profit company. Judge Yvonne Gonzalez Rogers of Oakland, California, reportedly said Musk had failed to meet the "high bar" of evidence required for a preliminary injunction to stop OpenAI's transition from a nonprofit to a for-profit entity.

Jiemian News reports that preliminary data released by the Bank of Korea on March 5 show South Korea's 2024 per capita gross national income was $36,624, up 1.2% year on year and higher than Japan's for the second consecutive year. Among countries with populations over 50 million, South Korea's per capita GNI trails only the United States, Germany, the United Kingdom, France, and Italy.

Parliament Matters
International aid cuts: What is Parliament's role?

Parliament Matters

Play Episode Listen Later Feb 28, 2025 45:59


Parliament passed a law requiring the Government to spend 0.7% of Gross National Income on international aid. So, should Ministers be able to bypass that legal obligation through a ministerial statement? We also discuss Labour MP Mike Amesbury's suspended jail sentence and how a recall petition will be called if he doesn't voluntarily step down. Plus, we explore the controversy surrounding the Product Safety and Metrology Bill, which Brexiteers warn could stealthily realign Britain with the EU while handing Ministers sweeping legislative powers.

Should MPs have a say on the Government's decision to cut yet more from the UK's international aid budget to fund increased defence spending? By law, the UK is committed to spending 0.7% of Gross National Income (GNI) on international aid. Yet this latest reduction does not have to be put to a vote in Parliament. With aid spending now slashed to just 0.3% of GNI, could an upcoming Estimates Day debate on Foreign Office funding give MPs a chance to raise concerns about the decision? And with the aid budget shrinking, is it time to reconsider the role of the International Development Select Committee?

Meanwhile, Labour MP Mike Amesbury has had his 10-week jail sentence for assault suspended on appeal — but that may not be enough to save his Commons seat. As Ruth explains, an MP sentenced to jail — even with a suspended sentence — faces a recall petition. If 10% of voters in Runcorn and Helsby back his removal, the Government will be forced into a by-election, unless he voluntarily resigns his seat first.

Also in the spotlight: the Product Safety and Metrology Bill. Ministers are keen to reassure MPs about this seemingly technical legislation, but Brexiteers suspect it's a Trojan Horse for creeping EU alignment. The bill contains sweeping "Henry VIII powers," allowing ministers to rewrite laws with minimal parliamentary oversight. Ruth and Mark ponder why governments keep reaching for these controversial powers — and what it means for democracy.
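The scale of the cut follows directly from the two GNI percentages. A sketch of the arithmetic; the GNI figure is an assumed round number for illustration, as the episode doesn't quote one:

```python
# Rough scale of the aid cut. The GNI value is an assumed round figure,
# not from the episode; the 0.7% and 0.3% targets are.
uk_gni_gbp_bn = 2700  # assumed UK Gross National Income, in £bn
for share in (0.007, 0.003):
    print(f"{share:.1%} of GNI ≈ £{uk_gni_gbp_bn * share:.1f}bn per year")
# On this assumption, the difference is worth roughly £11bn a year.
```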

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
411 - Self-Confidence and the GNIs (Natural Intelligence Groups) - The Rational Futurist GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Feb 20, 2025 3:02


Series: Self-Confidence and the GNIs (Natural Intelligence Groups), 411 - The Rational Futurist GNI, by José Fernando M. Araújo, 20.02.2025

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
410 - Self-Confidence and the GNIs (Natural Intelligence Groups) - Introductory Episode (The 3 Intelligences)

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Feb 13, 2025 3:57


Series: Self-Confidence and the GNIs (Natural Intelligence Groups), 410 - Introductory Episode (The 3 Intelligences), by José Fernando M. Araújo, 13.02.2025

The Generations Radio Program
California Fires - Who's to Blame?

The Generations Radio Program

Play Episode Listen Later Jan 21, 2025


Who's to blame for these fires and hurricanes that are damaging our country? Reparations are running close to 2% of the GNI, just enough to tip us into recession. God brings it, but God can save us. Shall we trust insurance agencies, FEMA, princes and horses? Or better yet, shall we trust God? This program includes: 1. The World View in 5 Minutes with Adam McManus (Trump: "The golden age of America begins right now!", Biden pardons own family, Trump released his own cryptocurrency) 2. Generations with Kevin Swanson

Generations Radio
California Fires - Who's to Blame? - God-centered vs. Man-centered Worldviews

Generations Radio

Play Episode Listen Later Jan 21, 2025 29:24


Who's to blame for these fires and hurricanes that are damaging our country? Reparations are running close to 2% of the GNI, just enough to tip us into recession. God brings it, but God can save us. Shall we trust insurance agencies, FEMA, princes and horses? Or better yet, shall we trust God? This program includes: 1. The World View in 5 Minutes with Adam McManus (Trump: "The golden age of America begins right now!", Biden pardons own family, Trump released his own cryptocurrency) 2. Generations with Kevin Swanson

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
409 - How the GNIs Are Influenced and What the Consequences Are - Final Episode

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Jan 2, 2025 5:19


Series: Book: How the GNIs Are Influenced and What the Consequences Are, 409 - Final Episode, by José Fernando M. Araújo, 02.01.2025

華視三國演議
The 10 Major Signals of the CCP's Decline in 2025 | #宋國誠 #矢板明夫 #汪浩 | @華視三國演議 | 20241228

華視三國演議

Play Episode Listen Later Dec 28, 2024 49:32


From 2025 onward, the CCP's crisis can be summed up in a single word: decline, with the overall deteriorating situation following a downward spiral toward the bottom. The CCP's crisis is a compounded political and economic crisis under "Xi-governed China" and "decaying party power": Xi's weakening standing within the party, a collapsing external strategy, shrinking national wealth, fragmenting social strata, widening fiscal deficits and wealth gaps, regime legitimacy turning to quicksand, deteriorating geopolitics, sharpening US-China relations, and a fracturing red supply chain. For the full interview, stay tuned to @華視三國演議! Guests this episode: #宋國誠 #矢板明夫. Host: #汪浩. The views expressed do not represent the position of this station. #中國經濟 #共產黨 #戰狼外交 #躺平 TV broadcast times

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
408 - How the GNIs Are Influenced and What the Consequences Are - The Different GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Dec 27, 2024 3:17


Series: Book: How the GNIs Are Influenced and What the Consequences Are, 408 - The Different GNI, by José Fernando M. Araújo, 26.12.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
407 - How the GNIs Are Influenced and What the Consequences Are - The Emotional Neutral GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Dec 19, 2024 3:42


Series: Book: How the GNIs Are Influenced and What the Consequences Are, 407 - The Emotional Neutral GNI, by José Fernando M. Araújo, 19.12.2024

IIEA Talks
Policy to Pasture: Bridging the Gap Between Climate Targets and Irish Agricultural Realities

IIEA Talks

Play Episode Listen Later Dec 18, 2024 42:51


Ireland faces a unique challenge in reconciling its position as a major agricultural producer with increasingly ambitious climate targets at national and EU level. The agri-food sector contributes significantly to Ireland's economy, generating €17.3 billion in gross value added (6% of GNI*) and employing 173,400 people. However, it also accounts for 37.8% of national greenhouse gas emissions, creating a distinctive challenge.

In this first event of a new IIEA project entitled Pathways: Ireland's Agricultural Future, Prof. Alan Matthews and Dr. Matthew O'Neill present for discussion the findings of their working paper, ahead of its publication in early 2025. The event was chaired by Dr Karen Keaveney, Head of Subject for Rural Development in the School of Agriculture and Food Science, University College Dublin. The IIEA is grateful to the European Climate Foundation for its support in establishing this project.

About the Speakers: Prof Alan Matthews is Professor Emeritus of European Agricultural Policy at Trinity College Dublin, Ireland, and a former President of the European Association of Agricultural Economists. His research interests include the behaviour of the Irish farm and food system, the EU's Common Agricultural Policy, the relationships between trade and food security, and WTO trade norms and disciplines. Dr Matthew O'Neill is Climate Project Lead at the IIEA, in which role he leads the Pathways: Ireland's Agricultural Future project. His research focuses on the intersection of climate policy and agricultural systems.

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
406 - How the GNIs Are Influenced and What the Consequences Are - The Emotional Continuer GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Dec 12, 2024 2:48


Series: Book: How the GNIs Are Influenced and What the Consequences Are, 406 - The Emotional Continuer GNI, by José Fernando M. Araújo, 12.12.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
405 - How the GNIs Are Influenced and What the Consequences Are - The Available GNI

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Dec 5, 2024 5:45


Series: Book: How the GNIs Are Influenced and What the Consequences Are, 405 - The Available GNI, by José Fernando M. Araújo, 05.12.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
404 - How the GNIs Are Influenced and What the Consequences Are - Emotional Intelligence

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Nov 28, 2024 3:04


Series: Book: How the GNIs Are Influenced and What the Consequences Are, 404 - Emotional Intelligence, by José Fernando M. Araújo, 28.11.2024

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We have a full slate of upcoming events: AI Engineer London, AWS Re:Invent in Las Vegas, and now Latent Space LIVE! at NeurIPS in Vancouver and online. Sign up to join and speak! We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

We try to stay close to the inference providers as part of our coverage, as our podcasts with Together AI and Replicate will attest. However, one of the most notable pull quotes from our very well received Braintrust episode was his opinion that open source model adoption has NOT gone very well and is actually declining in relative market share terms (it is of course increasing in absolute terms).

Today's guest, Lin Qiao, would wholly disagree. Her team of PyTorch/GPU experts is wholly dedicated to helping you serve and finetune the full stack of open source models from Meta and others, across all modalities (text, audio, image, embedding, vision understanding), helping customers like Cursor and HubSpot scale up open source model inference both rapidly and affordably. Fireworks has emerged after its successive funding rounds with top-tier VCs as one of the leaders of the Compound AI movement, a term first coined by the Databricks/Mosaic gang at Berkeley AI and adapted as "Composite AI" by Gartner.

Replicating o1

We are the first podcast to discuss Fireworks' f1, their proprietary replication of OpenAI's o1. This has become a surprisingly hot area of competition in the past week as both Nous Forge and DeepSeek r1 have launched competitive models.

Full Video Podcast

Like and subscribe!

Timestamps

* 00:00:00 Introductions
* 00:02:08 Pre-history of Fireworks and PyTorch at Meta
* 00:09:49 Product Strategy: From Framework to Model Library
* 00:13:01 Compound AI Concept and Industry Dynamics
* 00:20:07 Fireworks' Distributed Inference Engine
* 00:22:58 OSS Model Support and Competitive Strategy
* 00:29:46 Declarative System Approach in AI
* 00:31:00 Can OSS replicate o1?
* 00:36:51 Fireworks f1
* 00:41:03 Collaboration with Cursor and Speculative Decoding
* 00:46:44 Fireworks quantization (and drama around it)
* 00:49:38 Pricing Strategy
* 00:51:51 Underrated Features of Fireworks Platform
* 00:55:17 Hiring

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:11]: Hey, and today we're in a very special studio inside the Fireworks office with Lin Qiao, CEO of Fireworks. Welcome. Yeah.

Lin [00:00:20]: Oh, you should welcome us.

Swyx [00:00:21]: Yeah, welcome. Yeah, thanks for having us. It's unusual to be in the home of a startup, but it's also, I think our relationship is a bit unusual compared to all our normal guests. Definitely.

Lin [00:00:34]: Yeah. I'm super excited to talk about very interesting topics in that space with both of you.

Swyx [00:00:41]: You just celebrated your two-year anniversary yesterday.

Lin [00:00:43]: Yeah, it's quite a crazy journey. We circle around and share all the crazy stories across these two years, and it has been super fun. All the way from we experienced the Silicon Valley Bank run, to we deleted some data that shouldn't be deleted operationally. We went through massive scale, where we actually were busy getting capacity, to, yeah, we learned to kind of work with it as a team with a lot of brilliant people across different places to join a company.
It has really been a fun journey.

Alessio [00:01:24]: When you started, did you think the technical stuff would be harder or the bank run and then the people side? I think there are a lot of amazing researchers that want to do companies, and it's like the hardest thing is going to be building the product, and then you have all these different other things. So, were you surprised by what has been your experience the most?

Lin [00:01:42]: Yeah, to be honest with you, my focus has always been on the product side and then after the product goes to market. And I didn't realize the rest has been so complicated, operating a company and so on. But because I don't think about it, I just kind of manage it. So it's done. I think I just somehow don't think about it too much and solve whatever problem comes our way, and it worked.

Swyx [00:02:08]: So let's, I guess, start at the pre-history, the initial history of Fireworks. You ran the PyTorch team at Meta for a number of years, and we previously had Soumith Chintala on, and I think we were just all very interested in the history of Gen AI. Maybe not that many people know how deeply involved FAIR and Meta were prior to the current Gen AI revolution.

Lin [00:02:35]: My background is deep in distributed systems and database management systems. I joined Meta from the data side, and I saw this tremendous amount of data growth, which cost a lot of money, and we were analyzing what's going on. And it's clear that AI is driving all this data generation. So it's a very interesting time, because when I joined Meta, Meta was going through ramping down mobile-first, finishing the mobile-first transition and then starting AI-first. And there's a fundamental reason for that sequence, because mobile-first gave a full range of user engagement that had never existed before. And all this user engagement generated a lot of data, and this data powers AI. So then the whole entire industry is also going through, following through, this same transition. When I see, oh, okay, this AI is powering all this data generation, and look at where's our AI stack — there's no software, there's no hardware, there's no people, there's no team — I want to dive up there and help this movement. So when I started, it was a very interesting industry landscape. There are a lot of AI frameworks. It's a kind of proliferation of AI frameworks happening in the industry. But all the AI frameworks focus on production, and they use a very certain way of defining the graph of a neural network and then use that to drive the model iteration and productionization. And PyTorch is completely different. Soumith could also assume that he was the user of his own product. And he basically says, researchers face so much pain using existing AI frameworks, this is really hard to use, and I'm going to do something different for myself. And that's the origin story of PyTorch. PyTorch actually started as the framework for researchers. They don't care about production at all. And as it grew in terms of adoption — so the interesting part of AI is that research is upstream of normal production. There are so many researchers across academia, across industry; they innovate and they put their results out there in open source, and that powers the downstream productionization. So it's brilliant for Meta to establish PyTorch as a strategy to drive massive adoption in open source, because Meta internally is a PyTorch shop. So it creates a flywheel effect. So that's kind of the strategy behind PyTorch.
But when I took on PyTorch, it was kind of at the cusp where Meta established PyTorch as the framework for both research and production. No one had done that before. And we had to kind of rethink how to architect PyTorch so we could really sustain production workloads: the stability, reliability, low latency. All these production concerns were never a concern before; now they're a concern. And we actually had to adjust its design and make it work for both sides. And that took us five years, because Meta has so many AI use cases, all the way from ranking and recommendation powering the business top line, ranking newsfeed, video ranking, to site integrity detecting bad content automatically using AI, to all kinds of effects, translation, image classification, object detection, all this. And also across AI running on the server side, on mobile phones, on AR/VR devices, the wide spectrum. So by that time, we actually basically managed to support AI ubiquitously, everywhere across Meta. But interestingly, through open source engagement, we worked with a lot of companies. It was clear to us that this industry is starting to take on the AI-first transition. And of course, Meta's hyperscale always goes ahead of industry. And it felt like, when we started this AI journey at Meta, there was no software, no hardware, no team. For many companies we engaged with through PyTorch, we felt the pain. That's the genesis of why we felt like, hey, if we create Fireworks and support industry going through this transition, it will be a huge amount of impact. Of course, the problems the industry is facing will not be the same as Meta's. Meta is so big, right? So it's kind of skewed towards extreme scale and extreme optimization, and the industry will be different. But we felt like we had the technical chops and we'd seen a lot, and we'd look to kind of drive that. So yeah, so that's how we started.

Swyx [00:06:58]: When you and I chatted about the origins of Fireworks, it was originally envisioned more as a PyTorch platform, and then later became much more focused on generative AI. Is that fair to say? What was the customer discovery here?

Lin [00:07:13]: Right. So I would say our initial blueprint was that we should build a PyTorch cloud, because there's the PyTorch library and there's no SaaS platform to enable AI workloads.

Swyx [00:07:26]: Even in 2022, it's interesting.

Lin [00:07:28]: I would not say absolutely none, but cloud providers have some of those; it's just not a first-class citizen, right? In 2022, TensorFlow was still massively in production. And this is all pre-Gen AI, and PyTorch was kind of getting more and more adoption. But there was no PyTorch-first SaaS platform existing. At the same time, we are also a very pragmatic set of people. We really wanted to make sure, from the get-go, we got really, really close to customers. We understand their use case, we understand their pain points, we understand the value we deliver to them. So we wanted to take a different approach: instead of building a horizontal PyTorch cloud, we wanted to build a verticalized platform first. And then we talked with many customers. And interestingly, we started the company in September 2022, and in October, November, OpenAI announced ChatGPT. And then boom, when we talked with many customers, they were like, can you help us work on the Gen AI aspect? So of course, there were some open source models. They were not as good at that time, but people were already putting a lot of attention there. Then we decided that if we're going to pick a vertical, we're going to pick Gen AI.
The other reason is that all Gen AI models are PyTorch models. So that's another reason. We believe that because of the nature of Gen AI, it's going to generate a lot of human-consumable content. It will drive a lot of consumer- and developer-facing application and product innovation. Guaranteed. We're just at the beginning of this. Our prediction is that for those kinds of applications, inference is much more important than training, because inference scale is proportional, at its upper limit, to the world population, while training scale is proportional to the number of researchers. Of course, each training round could be very expensive. Although PyTorch supports both inference and training, we decided to laser focus on inference. So yeah, so that's how we got started. And we launched our public platform in August last year. When we launched, it was a single product: a distributed inference engine with a simple API, an OpenAI-compatible API, with many models. We started with LLMs and then we added a lot of models. Fast forward to now, we are a full platform with multiple product lines. So we'd love to kind of dive deep into what we offer. But that's been a very fun journey in the past two years.

Alessio [00:09:49]: What was the transition from — you started focused on PyTorch and people wanting to understand the framework, get it live. And now, say, maybe most people that use you don't even really know much about PyTorch at all. You know, they're just trying to consume a model. From a product perspective, what were some of the decisions early on? Right in October, November, were you just like, hey, most people just care about the model, not about the framework, we're going to make it super easy? Or was it more a gradual transition to the model library

Swyx [00:10:16]: you have today?

Lin [00:10:17]: Yeah. So our product decision is all based on who our ICP is. And one thing I want to acknowledge here is that Gen AI technology is disruptive. It's very different from AI before Gen AI. So it's a clear leap forward. Because before Gen AI, the companies that wanted to invest in AI had to train from scratch. There's no other way. There's no foundation model. It doesn't exist. So that means, to start, first hire a team that is capable of crunching data. There's a lot of data to crunch, right? Because training from scratch, you have to prepare a lot of data. And then they need to have GPUs to train, and then you start to manage GPUs. So then it becomes a very complex project. It takes a long time, and not many companies can afford it, actually. And Gen AI is a very different game right now, because there are foundation models. So you don't have to train anymore. That makes AI much more accessible as a technology. An app developer or product manager — even, not a developer — can interact with Gen AI models directly. So our goal is to make AI accessible to all app developers and product engineers. That's our goal. So then getting them into building models doesn't make any sense anymore with this new technology, and building easy, accessible APIs is the most important thing. Early on, when we got started, we decided we're going to be OpenAI compatible. It's just kind of very easy for developers to adopt this new technology, and we will manage the underlying complexity of serving all these models.

Swyx [00:11:56]: Yeah, OpenAI has become the standard. Even as we're recording today, Gemini announced that they have OpenAI-compatible APIs. Interesting.
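Concretely, OpenAI compatibility means a client only swaps the base URL. A minimal sketch using the official openai Python client; the Fireworks-style endpoint and model id below are illustrative assumptions, not details confirmed in the conversation:

```python
# Minimal sketch of calling an OpenAI-compatible endpoint with the
# official openai client. Base URL and model id are assumed examples.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # assumed model id
    messages=[{"role": "user", "content": "Summarize speculative decoding in one sentence."}],
)
print(resp.choices[0].message.content)
```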
So we just need to drop it all in line, and then we have everyone falling in line.

Lin [00:12:09]: That's interesting, because we are working very closely with Meta as one of the partners. Meta, of course, is kind of very generous in donating many very, very strong open source models, expecting more to come. But also, they have announced Llama Stack, which basically standardizes the upper-level stack built on top of Llama models. So they don't just want to give out models and have you figure out what the upper stack is. They instead want to build a community around the stack and build a new standard. I think there's an interesting dynamic in play in the industry right now, whether it's more standardized across OpenAI, because they are kind of creating the top of the funnel, or standardized across Llama, because this is the most used open source model. So I think it's a lot of fun working at this time.

Swyx [00:13:01]: I've been a little bit more doubtful on Llama Stack; I think you've been more positive. Basically, it's just like the Meta version of whatever Hugging Face offers, you know, or TensorRT, or vLLM, or whatever the open source opportunity is. But to me, it's not clear that just because Meta open sources Llama, the rest of Llama Stack will be adopted. And it's not clear why I should adopt it. So I don't know if you agree.

Lin [00:13:27]: It's very early right now. That's why I kind of work very closely with them and give them feedback. The feedback to the Meta team is very important, so then they can use that to continue to improve the model and also improve the higher-level stack. I think the success of Llama Stack heavily depends on community adoption. And there's no way around it. And I know the Meta team would like to kind of work with a broader set of the community. But it's very early.

Swyx [00:13:52]: One thing — after your Series B, so you raised from Benchmark, and then Sequoia. I remember being close to you for at least your Series B announcements. You started betting heavily on this term of Compound AI. It's not a term that we've covered very much in the podcast, but I think it's definitely getting a lot of adoption from Databricks and Berkeley people and all that. What's your take on Compound AI? Why is it resonating with people?

Lin [00:14:16]: Right. So let me give a little bit of context on why we even consider that space.

Swyx [00:14:22]: Because like pre-Series B, there was no message, and now it's like on your landing page.

Lin [00:14:27]: So it's kind of a very organic evolution from when we first launched our public platform. We were a single product. We were a distributed inference engine, where we do a lot of innovation: customized CUDA kernels, running on different kinds of hardware, building distributed disaggregated inference execution, building all kinds of caching. So that is one. So that's kind of one product line: the fast, most cost-efficient inference platform. Because we wrote PyTorch code, we know — we basically have a special PyTorch build for that, together with custom kernels we wrote. And then we worked with many more customers and we realized, oh, the distributed inference engine — our design was one size fits all. We wanted to have this inference endpoint where everyone comes in, and no matter what kind of form and shape of workload they have, it will just work for them. So that's great. But the reality is, we realized all customers have different kinds of use cases. The use cases come in all different forms and shapes.
And the end result is that the data distribution in their inference workload doesn't align with the data distribution in the training data for the model. It's a given, actually. If you think about it, researchers have to guesstimate what is important and what's not important in preparing data for training. So because of that misalignment, we leave a lot of quality, latency, and cost improvement on the table. So then we said, OK, we want to heavily invest in a customization engine. And we actually announced it, called Fire Optimizer. So Fire Optimizer basically helps users navigate a three-dimensional optimization space across quality, latency, and cost. So it's a three-dimensional curve. And even for one company, for different use cases, they want to land in different spots. So we automate that process for our customers. It's very simple: you have your inference workload, you inject it into the optimizer along with the objective function, and then we spit out the inference deployment config and the model setup. So it's your customized setup. So that is a completely different product from the one-size-fits-all product thinking. And now, on top of that, we provide a huge variety of state-of-the-art models, hundreds of them, starting from text: the large state-of-the-art language models. That's where we started. And as we talked with many customers, we realized, oh, audio and text are very, very close. Many of our customers started to build assistants, all kinds of assistants, using text, and they immediately want to add audio, audio in, audio out. So we support transcription, translation, speech synthesis, text-audio alignment, all different kinds of audio features. It's a big announcement — you should have heard it by the time this is out. And the areas of vision and text are very close to each other, because a lot of information doesn't live in plain text. A lot of information lives in multimedia formats: images, PDFs, screenshots, and many other different formats. So oftentimes, to solve a problem, we need to put the vision model first to extract information, and then use a language model to process it and then send out the results. So vision is important. We also support vision models, various different kinds of vision models specialized in processing different kinds of sources and extraction. And we're also going to have another announcement of a new API endpoint we'll support, for people to upload various different kinds of multimedia content and then get very accurate information extracted out and feed that into the LLM. And of course, we support embedding, because embedding is very important for semantic search, for RAG, and all this. And in addition to that, we also support image generation models — text-to-image, image-to-image — and we're adding text-to-video as well to our portfolio. So it's a very comprehensive model catalog that's built on top of Fire Optimizer and the distributed inference engine. But then we talked with more customers solving business use cases, and we realized one model is not sufficient to solve their problem. And it's very clear, because, one, the model hallucinates. Many customers, when they onboard this Gen AI journey, they thought this was magical — Gen AI is going to solve all my problems magically. But then they realize, oh, this model hallucinates. It hallucinates because it's not deterministic, it's probabilistic. So it's designed to always give you an answer, but based on probabilities, so it hallucinates.
And that's actually sometimes a feature, for creative writing, for example. Sometimes it's a bug, because, hey, you don't want to give out misinformation. And different models also have different specialties. To solve a problem, you want to ask different special models to kind of decompose your task into multiple small, narrow tasks, and then have an expert model solve each task really well. And of course, the model doesn't have all the information. It has limited knowledge, because the training data is finite, not infinite. So the model oftentimes doesn't have real-time information. It doesn't know any proprietary information within the enterprise. It's clear that in order to really build a compelling application on top of Gen AI, we need a compound AI system. A compound AI system basically is going to have multiple models across modalities, along with APIs — whether public APIs or internal proprietary APIs — storage systems, database systems, knowledge, all working together to deliver the best answer.

Swyx [00:20:07]: Are you going to offer a vector database?

Lin [00:20:09]: We actually heavily partner with several big vector database providers.

Swyx: Which is your favorite?

Lin: They are all great in different ways. But it's public information: MongoDB is our investor, and we have been working closely with them for a while.

Alessio [00:20:26]: When you say distributed inference engine, what do you mean exactly? Because when I hear your explanation, it's almost like you're centralizing a lot of the decisions through the Fireworks platform, on the quality and whatnot. What do you mean by distributed? Is it that you have GPUs in a lot of different clusters, so you're sharding the inference across the same model?

Lin [00:20:45]: So first of all, we run across multiple GPUs. But the way we distribute across multiple GPUs is unique. We don't distribute the whole model monolithically across multiple GPUs. We chop it into pieces and scale them completely differently based on what the bottleneck is. We are also distributed across regions. We have been running in North America, EMEA, and Asia. We have regional affinity for applications, because latency is extremely important. We are also doing global load balancing, because a lot of applications quickly scale to a global population, and at that scale, different continents wake up at different times, and you want to load balance across them. And we also manage various different kinds of hardware SKUs from different hardware vendors. Different hardware designs are best for different types of workloads, whether it's long context, short context, or long generation. So all these different types of workloads are best fitted to different kinds of hardware SKUs, and then we can even distribute a workload across different hardware. So the distribution actually is all around, in the full stack.

Swyx [00:22:02]: At some point, we'll show on the YouTube the image that Ray, I think, has been working on, with all the different modalities that you offer. To me, it's basically: you offer the open source version of everything that OpenAI typically offers. Actually, if you do text-to-video, you will be a superset of what OpenAI offers, because they don't have Sora. Is that Mochi, by the way? Mochi, right?

Lin [00:22:27]: Mochi. And there are a few others. I will say, the interesting thing is, I think we're betting on the open source community proliferating. This is literally what we're seeing.
And there are amazing video generation companies. There are amazing audio companies. Across the board, the innovation is off the charts, and we are building on top of that. I think that's the advantage we have compared with a closed source company.

Swyx [00:22:58]: I think I want to restate the value proposition of Fireworks for people who are comparing you versus a raw GPU provider like RunPod or Lambda or anything like those: you create the developer experience layer, and you also make it easily scalable, or serverless, or as an endpoint. And then, I think for some models you have custom kernels, but not all models.

Lin [00:23:25]: Almost for all models. For all large language models and VLMs — almost for all models we serve.

Swyx [00:23:35]: And so that is called Fire Attention. I don't remember the speed numbers, but apparently much better than vLLM, especially on a concurrency basis.

Lin [00:23:44]: So Fire Attention is specific mostly to language models, but for other modalities, we'll also have customized kernels.

Swyx [00:23:51]: And I think the typical challenge for people is understanding that that has value, and then there are other people who are also offering open-source models. Your moat is your ability to offer a good experience for all these customers. But if your existence is entirely reliant on people releasing nice open-source models, other people can also do the same thing.

Lin [00:24:14]: So I would say we build on top of the open-source model foundation. So that's the kind of foundation we build on top of. But we look at the value prop from the lens of application developers and product engineers. So they want to create new UX. So what's happening in the industry right now is people are thinking about a completely new way of designing products. And I'm talking to so many founders; it's just mind-blowing. They help me understand that the existing way of doing PowerPoint, the existing way of coding, the existing way of managing customer service — it's actually putting a box around our heads. For example, PowerPoint. PowerPoint generation is: we always need to think about how to fit our storytelling into this format of one slide after another, and I'm going to juggle design together with what story to tell. But the most important thing is what our storytelling line is, right? So why don't we create a space that is not limited to any format? And those kinds of new product UX designs, combined with automated content generation through Gen AI, are the new thing that many founders are doing. What are the challenges they're facing? Let's go from there. One is, again, because a lot of products built on top of Gen AI are consumer- and developer-facing, they require an interactive experience. It's just the kind of product experience we're all used to, and our desire is to actually get faster and faster interaction. Otherwise, nobody wants to spend the time, right? And that requires low latency. And the other thing is, the nature of consumer- and developer-facing products is that your audience is very big. You want to scale up to product-market fit quickly, but if you lose money at a small scale, you're going to go bankrupt quickly. So it's actually a big contrast: I actually have product-market fit, but when I scale, I scale out of my business. So that's kind of a very funny way to think about it. So then having low latency and low cost is essential for those new applications and products to survive and really become a generational company.
So that's the design point for our distributed inference engine and Fire Optimizer. Fire Optimizer — you can think about it as a feedback loop. The more you feed your inference workload to our inference engine, the more we help you improve quality, lower latency further, and lower your cost. It basically becomes better. And we automate that, because we don't want you as an app developer or product engineer to think about how to figure out all these low-level details. It's impossible, because you're not trained to do that at all. You should kind of keep your focus on the product innovation. And then, for compound AI, we actually feel a lot of pain as app developers and engineers: there are so many models. Every week, there's at least a new model coming out.

Swyx [00:27:09]: Tencent had a giant model this week. Yeah, yeah.

Lin [00:27:13]: I saw that. I saw that.

Swyx [00:27:15]: It's like 500 billion parameters.

Lin [00:27:18]: So they're like, should I keep chasing this or should I forget about it? And which model should I pick to solve what kind of sub-problem? How do I even decompose my problem into those smaller problems and fit the model to it? I have no idea. And then there are two ways to think about this design. I think I talked about that in the past. One is imperative, as in you figure out how to do it. You give developer tools to dictate how to do it. The other is you build a declarative system, where a developer tells you what they want to do, not how. So these are two completely different designs. So the analogy I want to draw is, in the data world, the database management system is a declarative system, because people use databases via SQL. SQL is a way you say what you want to extract out of a database, what kind of result you want. But you don't figure out which node it's going to run on, how many nodes it's going to run on top of, how you lay out your disk, which index to use, which projection. You don't need to worry about any of those. The database management system will figure out, generate the best plan, and execute on that. So the database is declarative, and it makes it super easy. You just learn SQL, which is to learn the semantic meaning of SQL, and you can use it. On the imperative side, there are a lot of ETL pipelines, and people design these DAG systems with triggers, with actions, and you dictate exactly what to do — and if it fails, then how to recover. So that's an imperative system. We have seen a range of systems in the ecosystem go different ways. I think there's value in both. There's value in both. I don't think one is going to subsume the other. But we are leaning more into the philosophy of the declarative system, because from the lens of app developers and product engineers, that would be easiest for them to integrate.

Swyx [00:29:07]: I understand that's also why PyTorch won as well, right? This is one of the reasons. Ease of use.

Lin [00:29:14]: Focus on ease of use, and then let the system take on the hard challenges and complexities. So we follow and extend that thinking into the current system design. So another announcement is that we will also announce our next declarative system, which is going to appear as a model that has extremely high quality. And this model is inspired by OpenAI's o1 announcement. You should see that by the time we announce this, or soon.

Alessio [00:29:46]: Trained by you.

Lin [00:29:47]: Yes.

Alessio [00:29:48]: Is this the first model that you trained?

Lin [00:29:52]: It's not the first. We actually have trained a model called FireFunction. It's a function calling model.
It's our first step into the compound AI system, because a function calling model can dispatch a request into multiple APIs. We have a pre-baked set of APIs the model has learned. You can also add additional APIs through configuration to let the model dispatch accordingly. So we have a very high quality function calling model that's already released. We actually have three versions. The latest version is very high quality. But now we take a further step: you don't even need to use a function calling model. You use the new model we're going to release. It will solve a lot of problems, approaching very high OpenAI quality. So I'm very excited about that.

Swyx [00:30:41]: Do you have any benchmarks yet?

Lin [00:30:43]: We have a benchmark. We're going to release it hopefully next week. We just put our model on LMSYS and people are guessing: is this the next Gemini model? People are guessing. That's very interesting. We're watching the Reddit discussion right now.

Swyx [00:31:00]: I have to ask more questions about this. When OpenAI released o1, a lot of people asked about whether it's a single model or whether it's a chain of models. Noam and basically everyone on the Strawberry team was very insistent that what they did — the reinforcement learning, the chain of thought — cannot be replicated by a whole bunch of open source model calls. Do you think that that is wrong? Have you done the same amount of work on RL as they have, or was it a different direction?

Lin [00:31:29]: I think they take a very specific approach, where the caliber of the team is very high. So I do think they are the domain experts in doing what they are doing. I don't think there's only one way to achieve the same goal. We're in the same direction in the sense that the quality scaling law is shifting from training to inference. On that, I fully agree with them. But we're taking a completely different approach to the problem. All of that is because, of course, we didn't train the model from scratch, and all of that is because we build on the shoulders of giants. The current models we have access to are getting better and better. The future trend is that the gap between the open source models and the closed source models is just going to shrink to the point where there's not much difference. And then we're on the same level playing field. That's why I think our early investment in inference, and all the work we do around balancing across quality, latency, and cost, pays off, because we have accumulated a lot of experience, and that empowers us to release this new model that is approaching OpenAI quality.

Alessio [00:32:39]: I guess the question is, what do you think the gap to catch up will be? Because I think everybody agrees that open source models eventually will catch up. And I think with GPT-4, then with Llama 3.2, 3.1 405B, we closed the gap. And then o1 just reopened the gap so much, and it's unclear. Obviously, you're saying your model will have...

Swyx [00:32:57]: We're closing that gap.

Alessio [00:32:58]: But do you think in the future it's going to be months?

Lin [00:33:02]: So here's the thing that's happened. There's the public benchmark. It is what it is. But in reality, open source models in certain dimensions are already on par with, or beating, closed source models. So for example, in the coding space, open source models are really, really good. And in function calling, FireFunction is also really, really good.
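The dispatch mechanism Lin describes is what the tools interface in OpenAI-compatible APIs exposes: the model doesn't execute anything; it returns which function to call and with what arguments. A hedged sketch; the endpoint, model id, and the get_weather tool are illustrative assumptions, not details from the episode:

```python
# Sketch of function calling through an OpenAI-compatible API.
# Endpoint, model id, and the get_weather tool are assumed examples.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical downstream API
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",  # assumed model id
    messages=[{"role": "user", "content": "What's the weather in Dublin?"}],
    tools=tools,
)

# Assumes the model decided to dispatch rather than answer directly.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```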
So it's all a matter of whether you build one model to solve all the problems and you want to be the best at solving all the problems, or, in the open source domain, it's going to specialize. All these different model builders specialize in certain narrow areas. And it's logical that they can be really, really good in that very narrow area. And that's our prediction: with specialization, there will be a lot of expert models that are really, really good, and even better than one-size-fits-all closed source models.

Swyx [00:33:55]: I think this is the core debate that I am still not 100% either way on, in terms of compound AI versus normal AI. Because you're basically fighting the bitter lesson.

Lin [00:34:09]: Look at human society, right? We specialize. And you feel really good about someone specializing in doing something really well, right? And that's how we evolved from ancient times. We were all generalists. We did everything. Now we heavily specialize in different domains. So my prediction is that in the AI model space, it will happen also.

Swyx [00:34:30]: Except for the bitter lesson. You get short-term gains by having specialists, domain specialists, and then someone just needs to train a 10x bigger model on 10x more inference, 10x more data, 10x more model, perhaps, whatever the current scaling law is. And then it supersedes all the individual models because of some generalized intelligence slash world knowledge. I think that is the core insight of the GPTs, the GPT-1, 2, 3 networks. Right.

Lin [00:34:56]: But the training scaling law is because you have an increasing amount of data to train from, and you can do a lot of compute. So I think on the data side, we're approaching the limit, and the only way to increase that is synthetically generated data. And then there's the question of what the secret sauce is there, right? Because if you have a very good large model, you can generate very good synthetic data and then continue to improve quality. So that's why I think at OpenAI, they are shifting from the training scaling law into

Swyx [00:35:25]: the inference scaling law.

Lin [00:35:25]: And it's the test time and all this. So I definitely believe that's the future direction. And that's where we are really good at: doing inference.

Swyx [00:35:34]: A couple of questions on that. Are you planning to share your reasoning traces?

Lin [00:35:39]: That's a very good question. We are still debating.

Swyx [00:35:43]: Yeah.

Lin [00:35:45]: We're still debating.

Swyx [00:35:46]: I would say, for example, it's interesting that, for example, with SWE-Bench, if you want to be considered for ranking, you have to submit your reasoning traces. And that has actually disqualified some of our past guests. Cosine was doing well on SWE-Bench, but they didn't want to leak those results. So that's why you don't see o1-preview on SWE-Bench — because they don't submit their reasoning traces. And obviously, it's IP. But also, if you're going to be more open, then that's one way to be more open. So your model is not going to be open source, right? It's going to be an endpoint that you provide. Okay, cool. And then pricing, also the same as OpenAI, just kind of based on...

Lin [00:36:25]: Yeah, this is... I don't actually have information. Everything is going so fast, we haven't even thought about that yet. Yeah, I should be more prepared.

Swyx [00:36:33]: I mean, this is live. You know, it's nice to just talk about it as it goes live. Any other things that you want feedback on or you're thinking through?
It's kind of nice to just talk about something when it's not decided yet. About this new model. It's going to be exciting. It's going to generate a lot of buzz. Right.Lin [00:36:51]: I'm very excited to see how people are going to use this model. So there's already a Reddit discussion about it. And people are asking very deep, mathematical questions. And since the model got it right, surprising. And internally, we're also asking the model to generate what is AGI. And it generates a very complicated DAG thinking process. So we're having a lot of fun testing this internally. But I'm more curious, how will people use it? What kind of application they're going to try and test on it? And that's where we really like to hear feedback from the community. And also feedback to us. What works out well? What doesn't work out well? What works out well, but surprising them? And what kind of thing they think we should improve on? And those kind of feedback will be tremendously helpful.Swyx [00:37:44]: Yeah. So I've been a production user of Preview and Mini since launch. I would say they're very, very obvious jobs in quality. So much so that they made clods on it. And they made the previous state-of-the-art look bad. It's really that stark, that difference. The number one thing, just feedback or feature requests, is people want control on the budget. Because right now, in 0.1, it kind of decides its own thinking budget. But sometimes you know how hard the problem is. And you want to actually tell the model, spend two minutes on this. Or spend some dollar amount. Maybe it's time you miss dollars. I don't know what the budget is. That makes a lot of sense.Lin [00:38:27]: So we actually thought about that requirement. And it should be, at some point, we need to support that. Not initially. But that makes a lot of sense.Swyx [00:38:38]: Okay. So that was a fascinating overview of just the things that you're working on. First of all, I realized that... I don't know if I've ever given you this feedback. But I think you guys are one of the reasons I agreed to advise you. Because I think when you first met me, I was kind of dubious. I was like... Who are you? There's Replicate. There's Together. There's Laptop. There's a whole bunch of other players. You're in very, very competitive fields. Like, why will you win? And the reason I actually changed my mind was I saw you guys shipping. I think your surface area is very big. The team is not that big. No. We're only 40 people. Yeah. And now here you are trying to compete with OpenAI and everyone else. What is the secret?Lin [00:39:21]: I think the team. The team is the secret.Swyx [00:39:23]: Oh boy. So there's no thing I can just copy. You just... No.Lin [00:39:30]: I think we all come from a very aligned culture. Because most of our team came from meta.Swyx [00:39:38]: Yeah.Lin [00:39:38]: And many startups. So we really believe in results. One is result. And second is customer. We're very customer obsessed. And we don't want to drive adoption for the sake of adoption. We really want to make sure we understand we are delivering a lot of business values to the customer. And we really value their feedback. So we would wake up midnight and deploy some model for them. Shuffle some capacity for them. And yeah, over the weekend, no brainer.Swyx [00:40:15]: So yeah.Lin [00:40:15]: So that's just how we work as a team. And the caliber of the team is really, really high as well. So as plug-in, we're hiring. We're expanding very, very fast. 
So if we are passionate about working on the most cutting-edge technology in the general space, come talk with us. Yeah.Swyx [00:40:38]: Let's talk a little bit about that customer journey. I think one of your more famous customers is Cursor. We were the first podcast to have Cursor on. And then obviously since then, they have blown up. Cause and effect are not related. But you guys especially worked on a fast supply model where you were one of the first people to work on speculative decoding in a production setting. Maybe just talk about what was the behind the scenes of working with Cursor?Lin [00:41:03]: I will say Cursor is a very, very unique team. I think the unique part is the team has very high technical caliber. There's no question about it. But they have decided, although many companies building coding co-pilot, they will say, I'm going to build a whole entire stack because I can. And they are unique in the sense they seek partnership. Not because they cannot. They're fully capable, but they know where to focus. That to me is amazing. And of course, they want to find a bypass partner. So we spent some time working together. They are pushing us very aggressively because for them to deliver high caliber product experience, they need the latency. They need the interactive, but also high quality at the same time. So actually, we expanded our product feature quite a lot as we support Cursor. And they are growing so fast. And we massively scaled quickly across multiple regions. And we developed a pretty high intense inference stack, almost like similar to what we do for Meta. I think that's a very, very interesting engagement. And through that, there's a lot of trust being built. They realize, hey, this is a team they can really partner with. And they can go big with. That comes back to, hey, we're really customer obsessed. And all the engineers working with them, there's just enormous amount of time syncing together with them and discussing. And we're not big on meetings, but we are like stack channel always on. Yeah, so you almost feel like working as one team. So I think that's really highlighted.Swyx [00:42:38]: Yeah. For those who don't know, so basically Cursor is a VS Code fork. But most of the time, people will be using closed models. Like I actually use a lot of SONET. So you're not involved there, right? It's not like you host SONET or you have any partnership with it. You're involved where Cursor is small, or like their house brand models are concerned, right?Lin [00:42:58]: I don't know what I can say, but the things they haven't said.Swyx [00:43:04]: Very obviously, the drop down is 4.0, but in Cursor, right? So I assume that the Cursor side is the Fireworks side. And then the other side, they're calling out the other. Just kind of curious. And then, do you see any more opportunity on the... You know, I think you made a big splash with 1,000 tokens per second. That was because of speculative decoding. Is there more to push there?Lin [00:43:25]: We push a lot. Actually, when I mentioned Fire Optimizer, right? So as in, we have a unique automation stack that is one size fits one. We actually deployed to Cursor earlier on. Basically optimized for their specific workload. And that's a lot of juice to extract out of there. And we see success in that product. It actually can be widely adopted. So that's why we started a separate product line called Fire Optimizer. So speculative decoding is just one approach. And speculative decoding here is not static. 
We actually wrote a blog post about it. There's so many different ways to do speculative decoding. You can pair a small model with a large model in the same model family. Or you can have equal pads and so on. There are different trade-offs which approach you take. It really depends on your workload. And then with your workload, we can align the Eagle heads or Medusa heads or a small big model pair much better to extract the best latency reduction. So all of that is part of the Fire Optimizer offering.Alessio [00:44:23]: I know you mentioned some of the other inference providers. I think the other question that people always have is around benchmarks. So you get different performance on different platforms. How should people think about... People are like, hey, Lama 3.2 is X on MMLU. But maybe using speculative decoding, you go down a different path. Maybe some providers run a quantized model. How should people think about how much they should care about how you're actually running the model? What's the delta between all the magic that you do and what a raw model...Lin [00:44:57]: Okay, so there are two big development cycles. One is experimentation, where they need fast iteration. They don't want to think about quality, and they just want to experiment with product experience and so on. So that's one. And then it looks good, and they want to post-product market with scaling. And the quality is really important. And latency and all the other things are becoming important. During the experimentation phase, it's just pick a good model. Don't worry about anything else. Make sure you even generate the right solution to your product. And that's the focus. And then post-product market fit, then that's kind of the three-dimensional optimization curve start to kick in across quality, latency, cost, where you should land. And to me, it's purely a product decision. To many products, if you choose a lower quality, but better speed and lower cost, but it doesn't make a difference to the product experience, then you should do it. So that's why I think inference is part of the validation. The validation doesn't stop at offline eval. The validation will go through A-B testing, through inference. And that's where we offer various different configurations for you to test which is the best setting. So this is the traditional product evaluation. So product evaluation should also include your new model versions and different model setup into the consideration.Swyx [00:46:22]: I want to specifically talk about what happens a few months ago with some of your major competitors. I mean, all of this is public. What is your take on what happens? And maybe you want to set the record straight on how Fireworks does quantization because I think a lot of people may have outdated perceptions or they didn't read the clarification post on your approach to quantization.Lin [00:46:44]: First of all, it's always a surprise to us that without any notice, we got called out.Swyx [00:46:51]: Specifically by name, which is normally not what...Lin [00:46:54]: Yeah, in a public post. And have certain interpretation of our quality. So I was really surprised. And it's not a good way to compete, right? We want to compete fairly. And oftentimes when one vendor gives out results, the interpretation of another vendor is always extremely biased. So we actually refrain ourselves to do any of those. And we happily partner with third parties to do the most fair evaluation. So we're very surprised. 
And we don't think that's a good way to figure out the competition landscape. So then we react. I think when it comes to quantization, the interpretation, we wrote actually a very thorough blog post. Because again, no one says it's all. We have various different quantization schemes. We can quantize very different parts of the model from ways to activation to cross-TPU communication. They can use different quantization schemes or consistent across the board. And again, it's a trade-off. It's a trade-off across this three-dimensional quality, latency, and cost. And for our customer, we actually let them find the best optimized point. And we have a very thorough evaluation process to pick that point. But for self-serve, there's only one point to pick. There's no customization available. So of course, it depends on what we talk with many customers. We have to pick one point. And I think the end result, like AA published, later on AA published a quality measure. And we actually looked really good. So that's why what I mean is, I will leave the evaluation of quality or performance to third party and work with them to find the most fair benchmark. And I think that's a good approach, a methodology. But I'm not a part of an approach of calling out specific namesSwyx [00:48:55]: and critique other competitors in a very biased way. Databases happens as well. I think you're the more politically correct one. And then Dima is the more... Something like this. It's you on Twitter.Lin [00:49:11]: It's like the Russian... We partner. We play different roles.Swyx [00:49:20]: Another one that I wanted to... I'm just the last one on the competition side. There's a perception of price wars in hosting open source models. And we talked about the competitiveness in the market. Do you aim to make margin on open source models? Oh, absolutely, yes.Lin [00:49:38]: So, but I think it really... When we think about pricing, it's really need to coordinate with the value we're delivering. If the value is limited, or there are a lot of people delivering the same value, there's no differentiation. There's only one way to go. It's going down. So through competition. If I take a big step back, there is pricing from... We're more compared with close model providers, APIs, right? The close model provider, their cost structure is even more interesting because we don't bear any training costs. And we focus on inference optimization, and that's kind of where we continue to add a lot of product value. So that's how we think about product. But for the close source API provider, model provider, they bear a lot of training costs. And they need to amortize the training costs into the inference. So that created very interesting dynamics of, yeah, if we match pricing there, and I think how they are going to make money is very, very interesting.Swyx [00:50:37]: So for listeners, opening eyes 2024, $4 billion in revenue, $3 billion in compute training, $2 billion in compute inference, $1 billion in research compute amortization, and $700 million in salaries. So that is like...Swyx [00:50:59]: I mean, a lot of R&D.Lin [00:51:01]: Yeah, so I think matter is basically like, make it zero. So that's a very, very interesting dynamics we're operating within. But coming back to inference, so we are, again, as I mentioned, our product is, we are a platform. We're not just a single model as a service provider as many other inference providers, like they're providing a single model. 
We have our optimizer to highly customize towards your inference workload. We have a compound AI system where significantly simplify your interaction to high quality and low latency, low cost. So those are all very different from other providers.Alessio [00:51:38]: What do people not know about the work that you do? I guess like people are like, okay, Fireworks, you run model very quickly. You have the function model. Is there any kind of like underrated part of Fireworks that more people should try?Lin [00:51:51]: Yeah, actually, one user post on x.com, he mentioned, oh, actually, Fireworks can allow me to upload the LoRa adapter to the service model at the same cost and use it at same cost. Nobody has provided that. That's because we have a very special, like we rolled out multi-LoRa last year, actually. And we actually have this function for a long time. And many people has been using it, but it's not well known that, oh, if you find your model, you don't need to use on demand. If you find your model is LoRa, you can upload your LoRa adapter and we deploy it as if it's a new model. And then you use, you get your endpoint and you can use that directly, but at the same cost as the base model. So I'm happy that user is marketing it for us. He discovered that feature, but we have that for last year. So I think to feedback to me is, we have a lot of very, very good features, as Sean just mentioned. I'm the advisor to the company,Swyx [00:52:57]: and I didn't know that you had speculative decoding released.Lin [00:53:02]: We have prompt catching way back last year also. We have many, yeah. So I think that is one of the underrated feature. And if they're developers, you are using our self-serve platform, please try it out.Swyx [00:53:16]: The LoRa thing is interesting because I think you also, the reason people add additional costs to it, it's not because they feel like charging people. Normally in normal LoRa serving setups, there is a cost to dedicating, loading those weights and dedicating a machine to that inference. How come you can't avoid it?Lin [00:53:36]: Yeah, so this is kind of our technique called multi-LoRa. So we basically have many LoRa adapters share the same base model. And basically we significantly reduce the memory footprint of serving. And the one base model can sustain a hundred to a thousand LoRa adapters. And then basically all these different LoRa adapters can share the same, like direct the same traffic to the same base model where base model is dominating the cost. So that's how we advertise that way. And that's how we can manage the tokens per dollar, million token pricing, the same as base model.Swyx [00:54:13]: Awesome. Is there anything that you think you want to request from the community or you're looking for model-wise or tooling-wise that you think like someone should be working on in this?Lin [00:54:23]: Yeah, so we really want to get a lot of feedback from the application developers who are starting to build on JNN or on the already adopted or starting about thinking about new use cases and so on to try out Fireworks first. And let us know what works out really well for you and what is your wishlist and what sucks, right? So what is not working out for you and we would like to continue to improve. And for our new product launches, typically we want to launch to a small group of people. Usually we launch on our Discord first to have a set of people use that first. So please join our Discord channel. We have a lot of communication going on there. 
Again, you can also give us feedback. We'll have a starting office hour for you to directly talk with our DevRel and engineers to exchange more long notes.Alessio [00:55:17]: And you're hiring across the board?Lin [00:55:18]: We're hiring across the board. We're hiring front-end engineers, infrastructure cloud, infrastructure engineers, back-end system optimization engineers, applied researchers, like researchers who have done post-training, who have done a lot of fine-tuning and so on.Swyx [00:55:34]: That's it. Thank you. Thanks for having us. Get full access to Latent Space at www.latent.space/subscribe

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
403 - Como os GNI's são influenciados e quais as consequências - GNI Intimidador

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Nov 21, 2024 3:53


Série: Livro: Como os GNI's são influenciados e quais as consequências 403 - GNI Intimidador por José Fernando M. Araújo 21.11.2024

Como lo oyes
Como lo oyes - Canciones para que nos gusten los lunes. Vientos y Metales - 18/11/24

Como lo oyes

Play Episode Listen Later Nov 18, 2024 58:45


Aquí hay funk, swing y voz crooner, lounge-pop, coros zulúes y rumba congoleña o marfileña, soul magno, pop sesentero, aires jazz-blues de Nueva Orleans… Y todo con vientos y metales, incluido un bombardino o eufonio es un instrumento perteneciente a la familia del viento-metal, con tubería cónica y con voz en la extensión de barítono-tenor. Disfrutemos. DISCO 1 CARLO COUPÉ Madrid-París-BerlínDISCO 2 TERRY CALLIER Do It AgainDISCO 3 WONDER 45 Make It happenDISCO 4 ODYSSEY Don’t Tell Me Tell HerDISCO 5 SI CRANSTOUN Around The MidnightDISCO 6 GEORGE MICHAEL RoxanneDISCO 7 LOUIS JORDAN & HIS TYMPANY FIVE Barnyard BoogieDISCO 8 LINDA RONSTADT You Took Advantage Of MeDISCO 9 OTIS REDDING My GirlDISCO 10 DOBET GNAHORÉ GniDISCO 11 EN SEPIA VísperaDISCO 12 KARINA La FiestaDISCO 13 JOHNNIE TAYLOR Stop Doggin’ MeEscuchar audio

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
402 - Como os GNI's são influenciados e quais as consequências - GNI Fazedor

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Nov 14, 2024 2:53


Série: Livro: Como os GNI's são influenciados e quais as consequências 402 - GNI Fazedor por José Fernando M. Araújo 13.11.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
401 - Como os GNI's são influenciados e quais as consequências - GNI Futurista Ativo

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Nov 7, 2024 3:25


Série: Livro: Como os GNI's são influenciados e quais as consequências 401 - GNI Futurista Ativo por José Fernando M. Araújo 07.11.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
400 - Como os GNI's são influenciados e quais as consequências - GNI Continuador Ativo

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Oct 17, 2024 2:39


Série: Livro: Como os GNI's são influenciados e quais as consequências 400 - GNI Continuador Ativo por José Fernando M. Araújo 17.10.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
399 - Como os GNI's são influenciados e quais as consequências - Inteligência Ativa

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Oct 12, 2024 4:21


Série: Livro: Como os GNI's são influenciados e quais as consequências 399 - Inteligência Ativa por José Fernando M. Araújo 12.10.2024

CAST11 - Be curious.
Goettl's High Desert Mechanical Supports Community

CAST11 - Be curious.

Play Episode Listen Later Oct 9, 2024 2:25


Goettl's High Desert Mechanical (HDM) continues to support the local community through its Good Neighbor Initiative (GNI), a program designed to assist families facing financial or medical hardships by providing much-needed HVAC or plumbing solutions free of charge. Heather, the latest GNI nominee, experienced the heartwarming impact of this initiative when she returned home from a hospital stay with her son and found her house unbearably hot. The conditions were not conducive to her son's recovery, adding extra stress to their already challenging situation. Recognizing their need, Goettl's HDM stepped in to help. On September 23, 2024, Heather received two... For the written story, read here >> https://www.signalsaz.com/articles/goettls-high-desert-mechanical-supports-community/Check out the CAST11.com Website at: https://CAST11.com Follow the CAST11 Podcast Network on Facebook at: https://Facebook.com/CAST11AZFollow Cast11 Instagram at: https://www.instagram.com/cast11_podcast_network

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
398 - Como os GNI's são influenciados e quais as consequências - GNI Distante

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Oct 3, 2024 3:33


Série: Livro: Como os GNI's são influenciados e quais as consequências 398 - GNI Distante por José Fernando M. Araújo 03.10.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
397 - Como os GNI's são influenciados e quais as consequências - GNI Otimista

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Sep 26, 2024 4:24


Série: Livro: Como os GNI's são influenciados e quais as consequências 397 - GNI Otimista por José Fernando M. Araújo 26.09.2024

The Naturist Living Show
GNI – Gay Naturists International

The Naturist Living Show

Play Episode Listen Later Sep 22, 2024 72:32


Discussing how Gay Naturists International or GNI started, what is their purpose, and what they do for naturism.

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
396 - Como os GNI's são influenciados e quais as consequências - GNI Neutro Racional

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Sep 19, 2024 4:37


Série: Livro: Como os GNI's são influenciados e quais as consequências 396 - GNI Neutro Racional por José Fernando M. Araújo 19.09.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
395 - Como os GNI's são influenciados e quais as consequências - GNI Futurista Racional

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Sep 12, 2024 2:56


Série: Livro: Como os GNI's são influenciados e quais as consequências 395 - GNI Futurista Racional por José Fernando M. Araújo 12.09.2024

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo
394 - Como os GNI's são influenciados e quais as consequências - Episódio de Introdução (As 3 Inteligências)

Feliz no Trabalho, Feliz na Vida (INATHU) - na voz de José Fernando M. de Araújo

Play Episode Listen Later Sep 5, 2024 7:02


Série: Livro: Como os GNI's são influenciados e quais as consequências 394 - Episódio de Introdução (As 3 Inteligências) por José Fernando M. Araújo 05.09.2024

Girls Night In
Freaky Friday (2003) ft. Lisa & Jamieson

Girls Night In

Play Episode Listen Later Jul 21, 2024 53:40


Welcome to Season 3 of Girls Night In! Join Laney and Kira as they interview each other with lifelong best friends and fellow mother-daughter duo Lisa and Jamieson about Disney's Freaky Friday (2003)! Listen in as these two moms and daughters discuss what it would be like to swap bodies for a day, their biggest challenges, silliest punishments, and more. Follow us on Social Media!Instagram: www.instagram.com/girlsnightinthepod/GNI playlist: https://open.spotify.com/playlist/2X8lmI4ESRDpdpBbPMkme0?si=708f0c888a954a49&nd=1&dlsi=8e77443498e54c5d 

Girls Night In
Steel Magnolias (1989) ** Special Guests Episode!**

Girls Night In

Play Episode Listen Later May 10, 2024 54:46


Listen in as the women in our family give their review of Steel Magnolias! Join Kira, Laney, Lisa, Michaela and Lila while we have a soul chat about one of our all time favorite movies. Follow us on social media!GNI Instagram: https://www.instagram.com/girlsnightinthepod/GNI Spotify Playlist: https://open.spotify.com/playlist/2X8lmI4ESRDpdpBbPMkme0?si=533fe5855bd54363&nd=1

Girls Night In
BONUS EPISODE! Coffee Chats & Rabbit Trails

Girls Night In

Play Episode Listen Later Apr 20, 2024 26:26


Sit down, pour yourself a coffee, and chat with us on this episode where we catch up on life, missed movie mentions, books and more. Follow us on social media! GNI Instagram: https://www.instagram.com/girlsnightinthepod/

Keyword News
Keyword News 03/06/2024

Keyword News

Play Episode Listen Later Mar 6, 2024 16:33


This Morning's Headlines 1. License suspension 2. Youth support 3. Defense cost sharing 4. NK coordinator 5. GNI rebound

Girls Night In
Our Top 10 Movies You May Not Have Seen

Girls Night In

Play Episode Listen Later Feb 10, 2024 57:23


Think you know Kira & Laney's most random favorite movies? Listen in and find out as The Girls of Girls Night In share their favorite flicks you may have never seen before. Follow us on social media!GNI Instagram: https://www.instagram.com/girlsnightinthepod/GNI Spotify Playlist: https://open.spotify.com/playlist/2X8lmI4ESRDpdpBbPMkme0?si=533fe5855bd54363&nd=1

Keyword News
Keyword News 01/26/2024

Keyword News

Play Episode Listen Later Jan 26, 2024 14:04


1. 30-minute commute2. Moonlight railroad3. Agreement failed4. GNI rebounds5. Pay raise 

The Niall Boylan Podcast
#88 €900 Million + The Cost of Compassion: Ireland's Aid to Ukrainian Refugees

The Niall Boylan Podcast

Play Episode Listen Later Oct 5, 2023 97:56


In this episode, we delve into a pressing matter that has ignited passionate discussions across Ireland – the increasing expenditure on aid for Ukrainian refugees. This topic stems from a revealing article published in The Irish Times, and here are some essential details to set the context."Ireland spent more than €900 million last year helping Ukrainian refugees," states the article, revealing that Ireland's commitment to assisting those affected by the conflict in Ukraine reached substantial figures. The Irish Aid annual report for 2022 discloses that €880 million was expended on services for Ukrainian refugees within Ireland, while an additional €53 million was channeled into bilateral assistance, including vital medical equipment directly to Ukraine.By the week ending December 11th, 2022, around 67,448 people had arrived in Ireland from Ukraine, a number that had surged to 93,810 by September 10th, 2023, according to Central Statistics Office (CSO) figures. Ireland's dedication to providing humanitarian support was swift, with a significant aid package announced on the first day of the Ukrainian conflict, eventually increasing to €20 million.The Irish Aid annual report highlights the nation's collaboration with partner EU countries, providing substantial assistance through the Union Civil Protection Mechanism (UCPM) and humanitarian aid. It was the "largest ever operation under the UCPM," reflecting Ireland's proactive role in championing Ukraine's application for EU Candidate Country status, granted in June 2022.This commitment extended to the Health Service Executive (HSE), which established a Ukraine donations coordination group. By the end of 2022, this group had delivered 16 x 40ft containers and 19 HSE ambulances to Ukraine, filled with essential medical supplies valued at €5.46 million.In addition to these humanitarian efforts, Ireland witnessed record levels of investment in its Official Development Assistance (ODA) program in 2022, totaling €1.4 billion, a 40% increase from 2021. When factoring in the funds allocated to assist Ukrainian refugees, Ireland's ODA reached €2.3 billion, equivalent to 0.63% of gross national income (GNI).As Ireland grapples with rising living costs and critical shortages in public services like housing, healthcare, and utilities, the question arises: can the nation afford to sustain this level of financial commitment?Niall opens the lines to callers, sparking a spirited debate. Some argue that Ireland has a moral obligation to assist those in need, highlighting the humanitarian significance of this aid. Conversely, many express concerns about the strain on domestic resources, with citizens facing housing crises, healthcare challenges, and financial hardships.Join the conversation as we navigate the complex terrain of compassion, commitment, and financial responsibility.

Girls Night In
The Devil Wears Prada (2006)

Girls Night In

Play Episode Listen Later Sep 17, 2023 63:53


Listen in as Kira and Laney discuss the beloved 2006 flick The Devil Wears Prada, featuring Meryl Streep and Anne Hathaway. Snuggle up and have a Girls Night In with us!Follow us on social media!GNI Instagram: https://www.instagram.com/girlsnightinthepod/GNI Spotify Playlist: https://open.spotify.com/playlist/2X8lmI4ESRDpdpBbPMkme0?si=533fe5855bd54363&nd=1 

Girls Night In
Beaches (1988)

Girls Night In

Play Episode Listen Later Aug 27, 2023 59:46


Listen in as mother-daughter podcasting duo Laney and Kira discuss the mothership of all chick flicks - Beaches. Starring Bette Midler and Barbara Hershey, Beaches is about life, love, loss, and best friendship. Come snuggle up and have a Girls Night In with us!Follow us on social media!GNI Instagram: https://www.instagram.com/girlsnightinthepod/GNI Spotify Playlist: https://open.spotify.com/playlist/2X8lmI4ESRDpdpBbPMkme0?si=533fe5855bd54363&nd=1 

Girls Night In
Barbie (2023)

Girls Night In

Play Episode Listen Later Aug 6, 2023 67:36


Mother / daughter podcasting-duo Laney and Kira discuss the 2023 film, Barbie! Listen in as we break down our take on feminism, femininity, the patriarchy and the amazing Greta Gerwig. As women who live to celebrate women, we are so excited to bring this episode to you. Sit back, relax and have a #girlsnightin with us!Follow us on social media!GNI Instagram: https://www.instagram.com/girlsnightinthepod/GNI Spotify Playlist: https://open.spotify.com/playlist/2X8lmI4ESRDpdpBbPMkme0?si=533fe5855bd54363 

The Nonlinear Library
LW - UK PM: $125M for AI safety by Hauke Hillebrandt

The Nonlinear Library

Play Episode Listen Later Jun 12, 2023 1:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UK PM: $125M for AI safety, published by Hauke Hillebrandt on June 12, 2023 on LessWrong. The UK PM recently announced $125M for a foundation model task force. While the announcement stressed AI safety, it also stressed capabilities. But this morning the PM said 'It'll be about safety' and that the UK is spending more than other countries on this and one small media outlet had already coined this the 'safer AI taskforce'. Ian Hogarth is leading the task force who's on record saying that AGI could lead to “obsolescence or destruction of the human race” if there's no regulation on the technology's progress. Matt Clifford is also advising the task force - on record having said the same thing and knows a lot about AI safety. He had Jess Whittlestone & Jack Clark on his podcast. If mainstream AI safety is useful and doesn't increase capabilities, then the taskforce and the $125M seem valuable. We should use this window of opportunity to solidify this by quoting the PM and getting '$125M for AI safety research' and 'safer AI taskforce' locked in, by writing and promoting op-eds that commend spending on AI safety and urge other countries to follow (cf. the NSF has announced a $20M for empirical AI safety research). OpenAI, Anthropic, A16z and Palantir are all joining DeepMind in setting up offices in London. This might create an AI safety race to the top as a solution to the tragedy of the commons (cf. the US has criticized Germany for not spending 2% of GDP on defence; Germany's shot back saying the US should first meet the 0.7% of GNI on aid target). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

이진우의 손에 잡히는 경제
[손경제] 3/8(수) 올해 취업문 더 좁아진다.. 외

이진우의 손에 잡히는 경제

Play Episode Listen Later Mar 7, 2023


1. 작년 GNI(1인당 국민총소득) 8% 줄어 2. 정부, LH 주식 '산업은행'에 넘기고 있다 3. 올해 취업문 더 좁아진다 출연 : 양효걸 기자, 나수지 기자, 박세훈 작가

PODS by PEI
The Brief: Paras Kharel on Nepal's 2026 Graduation from the LDC status: Its Implications and Challenges for the Future

PODS by PEI

Play Episode Listen Later Jan 10, 2023 60:14


Ep. Br#010 In 2026, Nepal will be graduating from the LDC status after meeting the graduation criteria for three consecutive UN triennials (2015, 2018, 2021) reviews conducted by the Committee for Development Policy (CDP). Nepal has been granted an additional two years to the 3-year transition period generally given by the UN, therefore, making graduation effective from 2026. However, questions have been raised on this proposal as Nepal's GNI per capita is well below the LDC graduation threshold and also below the LDC average hence, the rising doubts on whether Nepal will be able to sustain the status. Notably, Nepal will have to relinquish the International Support Mechanisms it has been receiving as an LDC. That would mean the loss of preferential market access, stringent rules of origin requirements, and possible increases in tariffs on selected goods leading to significant losses in exports. In this episode of The Brief, PEI colleague Aslesh sits with Dr. Paras Kharel, where the two talk about the rationale behind categorizing countries as an LDC and discuss Nepal's graduation from the LDC status and its implications on trade, development assistance and policy space. They then examine the failings of the export sector and the policy changes required to boost the sector. They conclude with some key takeaways from the post-graduation experiences and strategies of a few countries which have graduated and competently sustained the graduation. Paras Kharel is Executive Director at South Asia Watch on Trade, Economics and Environment (SAWTEE), a Kathmandu-based think tank. He has over 15 years of research experience in trade and development. He has a PhD in Economics (University of Melbourne) with specialization in international trade and applied microeconometrics. His publications include two edited volumes on South Asian cooperation/integration, and articles in peer-reviewed journals such as Review of International Economics, International Economics, and East Asian Economic Review.

The Nonlinear Library
EA - EA Germany's Strategy for 2023 by Sarah Tegeler

The Nonlinear Library

Play Episode Listen Later Jan 8, 2023 25:39


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Germany's Strategy for 2023, published by Sarah Tegeler on January 8, 2023 on The Effective Altruism Forum. Based on interviews with stakeholders, feedback from German community members and other national community builders, the new co-directors of EA Germany (EAD) drafted this strategy for 2023. Summary Our Vision is a diverse and resilient community of ambitious people in and from Germany who are thinking carefully about the world's biggest problems and taking impactful action to solve them. Our Mission is to serve as a central point of contact for the German EA community and to continuously improve ways to guide people to take effective action, directly or by supporting local groups. Our Values are sustainable growth, a welcoming and nurturing culture and high professional standards. Our Focus Areas: EAD aims to guide people in Germany directly and indirectly to more impactful actions: Directly, e.g. through communications and events such as an EAGxBerlin 2023, career 1-1s, fellowships or retreats. Indirectly by training community builders, e.g. through regular calls, 1-1s and German-specific resources. EAD will offer efficiency services to save time and costs for committed EAs acting as an employer of record for individual grantees and providing fiscal sponsorship for local groups. A methodology of impact estimation based on a multi-touchpoint attribution model will serve as a basis for designing and prioritising exploratory programs using a lean startup approach. Background EA Germany (EAD) In the 2020 EA Survey, 7.4% of participants were from Germany, the third largest population behind the US and the UK. Apart from the US, Germany has the largest population and GNI of the ten largest countries in the survey. Germany has about 50 volunteer community builders in 25 local / university groups. 458 people have taken the GWWC pledge, and more than 400 Germans visited EAGxBerlin in September 2022. In 2021, Effektiv Spenden raised 18.86 Mio. Euros for effective charities. The registered association EA Germany was founded in 2019 as Netzwerk für Effektiven Altruismus Deutschland e.V. (NEAD) by EAs in Germany and has a board of volunteers. In parallel, one person on a national CEA Community Building Grant (CBG) worked independently from the association from 2020-22. The German website effektiveraltruismus.de was run by the national regranting organisation Effektiv Spenden. In late 2021, NEAD started offering employer-of-record services to grantees and EA organisations as well as fiscal sponsorship for local groups and hired a part-time operations associate on a CBG. A new board was elected in May 2022 and decided to apply for three CBGs – two co-directors and one project manager, in addition to the operations associate. The co-directors started in September and November 2022, and the project manager will start in January 2023. Funding for two other roles was promised but is not finalised as of December 2022. The association was renamed Effektiver Altruismus Deutschland (EAD) e.V. in 2022, and will now also run the website effektiveraltruismus.de. Epistemic Status Sarah Tegeler and Patrick Gruban drafted this document in November 2022 after having started working together as co-directors in the same month. Both have volunteered as local community builders, but this is their first role in an EA organisation. 
Most of the work on this document was influenced by interviews with stakeholders, other national EA community builders and reviews of different national strategies. About 150-200 hours went into discussing and writing the strategy. While the authors are confident the strategy will help foster a healthy community, their overall epistemic status is uncertain about the organisation's and its programs' counterfactual impact. Thus, many areas are not listed under founda...

The Nonlinear Library
EA - Introducing the Center for Effective Aid Policy (CEAP) by MathiasKB

The Nonlinear Library

Play Episode Listen Later Dec 23, 2022 8:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Center for Effective Aid Policy (CEAP), published by MathiasKB on December 23, 2022 on The Effective Altruism Forum. We are incredibly excited to announce the launch of the Center for Effective Aid Policy (CEAP), a new non-profit incubated by Charity Entrepreneurship. Our mission is to improve the effectiveness of development aid. We will do so through policy advocacy to governments and INGOs. If you are unfamiliar with development aid and want to learn more, we wrote an introduction to the field which was posted to the forum last week. In short: Development aid represents one fourth of all charitable giving worldwide and is not known for its immaculate efficiency and clandestine operations. The cost-effectiveness of many aid projects can be vastly improved and we believe there are many opportunities to do so. In this post we will go over: Our near term plans. Speculative plans for the long term. How you can help! We additionally hope sharing our tentative plans can be a step towards greater organizational transparency in the effective altruism community. Some, both inside and outside this community, will disagree that our organization is a good use of resources. Our funding would most likely have gone to highly effective charities counterfactually. Being held accountable and scrutinized for our decisions, might hurt us in the short run but benefits everyone in the long run. Many parliaments are surrounded by ‘think-tanks' who seek to influence policy in directions that just so happen to benefit the industries which are funding them. Decision makers should be free to evaluate our organization's priorities and decide for themselves if they agree. Interventions we are excited about We are entering a well-established field and stand on the shoulders of giants. There are many organizations with decades of experience doing great work to improve aid effectiveness - from research institutes doing academic research, to organizations solely focused on advocacy. Our work builds upon the collective research of thousands of academics, practitioners, and policy makers who have worked tirelessly for decades to improve the quality of aid. As a new organization we have the ability to move fast and break things. Taking risks on one's own behalf is all well and good, but mistakes we make might hurt the efforts of other organizations advocating for cost-effective aid spending. When we set out to prioritize between the many possible interventions, we looked for aid policies that experts in development and policymakers alike were excited about when interviewed. We have reached three of interventions that we think look especially promising: Interventions we want to address in our first year: Advocacy for cash-benchmarking of aid projects Cash-benchmarking advocacy was the intervention with the best reception among policymakers and academics we interviewed. Despite this, there is very little information available on cash-benchmarking. It doesn't even have a wikipedia page! Google's top result for cash-benchmarking is a one page report by USAID, describing a recent successful experiment they did. The discrepancy between the possible impact of improved benchmarking, the excitement of decision-makers, and lack of high quality public material is larger than for any other improvement to aid we reviewed. 
To change this state of affairs we are producing a comprehensive report, which can serve as a safe point of referral for policymakers and advocates. Advocacy to affect aid cuts A recent trend in western countries is for governments to cut aid spending. In 2020, the UK government made the decision to cut its aid spending from 0.7 to 0.5% of GNI. In 2022 the newly elected Swedish government cut future spending from 1% to 0.7%. Governments are also classifying previously non-aid bud...