Podcasts about cset

  • 47 podcasts
  • 104 episodes
  • 35m avg duration
  • 1 new episode monthly
  • Latest: May 2, 2025

POPULARITY (2017–2024)



Latest podcast episodes about cset

The Foresight Institute Podcast
Existential Hope Podcast: Helen Toner | Who gets to decide AI's future?

May 2, 2025 · 20:49


Who makes the rules for AI? Right now, a handful of companies and governments are shaping its trajectory, but what happens behind closed doors? Helen Toner, Director of Strategy at Georgetown's CSET and former OpenAI board member, has been inside some of the biggest AI governance conversations. In this conversation with Beatrice Erkers, she shares an insider's take on AI policy, US-China dynamics, and what's coming next in AI regulation. This interview is a guest lecture in our new online course about shaping positive futures with AI. The course is free and available here: https://www.udemy.com/course/worldbuilding-hopeful-futures-with-ai/

E137: AI Safety vs Speed: Helen Toner Discusses OpenAI Board Experience, Regulatory Approaches, and Military AI [The Cognitive Revolution]

Apr 24, 2025 · 83:35


This week on Upstream, we're releasing an episode of The Cognitive Revolution. Nathan Labenz interviews Helen Toner, director at CSET, about her experiences with OpenAI, the concept of adaptation buffers for AI integration, and AI's role in military decision-making. They discuss the implications of AI development, the need for regulatory policies, and the geopolitical dynamics of AI competition with China.

Clearer Thinking with Spencer Greenberg
AI, US-China relations, and lessons from the OpenAI board (with Helen Toner)

Feb 26, 2025 · 81:57


Is it useful to vote against a majority when you might lose political or social capital for doing so? What are the various perspectives on the US/China AI race? How close is the competition? How has AI been used in Ukraine? Should we work toward a global ban on autonomous weapons, and if so, how should we define "autonomous"? Is there any potential for the US and China to cooperate on AI? To what extent do government officials, especially senior policymakers, worry about AI? Which particular worries are on their minds? To what extent is the average person on the street worried about AI? What's going on with the semiconductor industry in Taiwan? How hard is it to get an AI model to "reason"? How could animal training be improved? Do most horses fear humans? How do we project ourselves onto the space around us?

Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne. Follow her on Twitter at @hlntnr.

Staff: Spencer Greenberg (Host / Director), Josh Castle (Producer), Ryan Kessler (Audio Engineer), Uri Bram (Factotum), WeAmplify (Transcriptionists). Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com. Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift

In AI We Trust?
AI Literacy Series Ep. 2 with Dewey Murdick (CSET): Centering People in AI's Progress

Feb 11, 2025 · 40:05


In this episode of EqualAI's AI Literacy Series, co-hosts Miriam Vogel and Rosalind Wiseman sit down with AI policy expert Dewey Murdick, Executive Director at Georgetown's Center for Security and Emerging Technology (CSET), who shares his hopes for AI's role in personal development and other key areas of society. From national security to education, Murdick unpacks the policies and international collaboration needed to ensure AI serves humanity first.

About the series: The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with insights and discussions on AI's impact on society, leading efforts in AI literacy, and how listeners can benefit from these experts and tools.

Squaring the Circle
AI and Future Warfare with Emmy Probasco, CSET

Feb 10, 2025 · 53:18


For more information:
  • https://cset.georgetown.edu/article/mwi-podcast-the-maven-smart-system-and-the-future-of-military-ai/
  • https://cset.georgetown.edu/publication/building-the-tech-coalition/
  • https://cset.georgetown.edu/staff/emelia-probasco/
To register for the webinar with Amos Fox: https://www.surveymonkey.com/r/YRRJVMF

ChinaTalk
AI Geopolitics in o3's Age with Chris Miller + Lennart Heim

Dec 23, 2024 · 89:35


Chris Miller of Chip War and Lennart Heim of RAND check in on the geopolitics of AI. We explore:
  • Chinese labs' algorithmic progress (surprising to everyone but regular ChinaTalk listeners!)
  • The geopolitical implications of scaling test-time compute
  • What is and isn't working with US export controls
  • And a whole lot more; this was a great episode!
The CSET report I referenced: https://cset.georgetown.edu/publication/chinas-sti-operations/
Chris and Lennart's ChinaTalk from early 2023: https://www.chinatalk.media/p/ai-compute-101-the-geopolitics-of
Outro music: Japanese city pop producers collaborating with Beijinger Cheng Fangyuan in the '80s (https://www.youtube.com/watch?v=403GCMhZ89Q&ab_channel=Heatwolves), itself a cover of this Japanese track, but better than the original: https://www.youtube.com/watch?v=SyjnkuhRfJA&ab_channel=PopBULL

China Global
Chinese Perspectives on Military Uses of AI

Dec 17, 2024 · 31:33


In China's 14th Five-Year Plan, which spans 2021 to 2025, priority was assigned to the development of emerging technologies that could be both disruptive and foundational for the future. China is now a global leader in AI technology and is poised to overtake the West and become the world leader in AI in the years ahead. Importantly, there is growing evidence that AI-enabled military capabilities are becoming increasingly central to Chinese military concepts for fighting future wars.

A recently released report provides insight into Chinese perspectives on military uses of AI. Published by Georgetown's Center for Security and Emerging Technology (CSET), the report illustrates some of the key challenges Chinese defense experts have identified in developing and fielding AI-related technologies and capabilities. Host Bonnie Glaser is joined by the report's author, Sam Bresnick, a Research Fellow at Georgetown's CSET focusing on AI applications and Chinese technology policy.

Timestamps:
[00:00] Start
[01:33] Impetus for the Georgetown CSET Report
[03:34] China's Assessment of the Impacts of AI and Emerging Technologies
[06:32] Areas of Debate Among Chinese Scholars
[09:39] Evidence of Progress in the Military Application of AI
[12:13] Lack of Trust Among Chinese Experts in Existing Technologies
[14:25] Constraints in the Development and Implementation of AI
[18:20] Chinese Expert Recommendations for Mitigating AI Risk
[23:01] Implications Taken from Discussions on AI Risk
[25:14] US-China Areas of Discussion on the Military Use of AI
[28:50] Unilateral Steps Toward Risk Mitigation

The Democracy Group
Navigating Election Facts in the AI Era | Democracy Decoded

Oct 30, 2024 · 33:16


When New Hampshire voters picked up the phone earlier this year and heard what sounded like the voice of President Joe Biden asking them not to vote in that state's primary election, the stage was set for an unprecedented election year. The call was a deepfake, and the first major instance of artificial intelligence being used in the 2024 election. With the rise of AI tools that can credibly synthesize voices, images, and videos, how are voters supposed to determine what they can trust as they prepare to cast their votes?

To find out how lawmakers and civil society are pushing back against harmful false narratives and content, we talked with experts engaging the problem on several fronts. Stephen Richer, an elected Republican in Phoenix, posts on X (formerly Twitter) to engage misinformation head-on and protect Arizona voters. Adav Noti, the executive director of CLC, explains how good-governance advocates are hurrying to catch up with a profusion of new digital tools that make the age-old practices of misinformation and disinformation faster and cheaper than ever. And Mia Hoffmann, a researcher who studies the effects of AI on democracies, reminds voters not to panic: bad information and malicious messaging don't always have the power to reach their audience, let alone sway people's opinions or actions.

Host and guests:

Simone Leeper litigates a wide range of redistricting-related cases at CLC, challenging gerrymanders and advocating for election systems that guarantee all voters an equal opportunity to influence our democracy. Prior to arriving at CLC, Simone was a law clerk in the office of Senator Ed Markey and at the Library of Congress, Office of General Counsel. She received her J.D. cum laude from Georgetown University Law Center in 2019 and a bachelor's degree in political science from Columbia University in 2016.

Stephen Richer is the 30th Recorder of Maricopa County. He was elected in November 2020 and took office in January 2021. His office of approximately 150 employees records hundreds of thousands of public documents every year, maintains a voter registration database of 2.4 million voters (the second-largest voting jurisdiction in the United States), and administers the mail voting component of all elections in Maricopa County. Prior to his time as Recorder, Stephen worked in various business sectors and, later, as an attorney at the law firms Steptoe & Johnson LLP and Lewis Roca LLP. He holds a J.D. and M.A. from The University of Chicago and a B.A. from Tulane University. He is completing his PhD at Arizona State University.

Adav Noti is Executive Director at Campaign Legal Center. He has conducted dozens of constitutional cases in trial and appellate courts and the United States Supreme Court. He also advises Members of Congress and other policymakers on advancing democracy through legislation. Prior to joining CLC, Adav served for more than 10 years in nonpartisan leadership capacities within the Office of General Counsel of the Federal Election Commission, and he served as a Special Assistant United States Attorney for the District of Columbia. Adav regularly provides expert analysis for television, radio, and print journalism. He has appeared on broadcasts such as The Rachel Maddow Show, Anderson Cooper 360, PBS NewsHour, and National Public Radio's Morning Edition, and he is regularly cited in publications nationwide, including the New York Times, Washington Post, USA Today, Politico, Slate, and Reuters.

Mia Hoffmann is a Research Fellow at Georgetown's Center for Security and Emerging Technology. Her research focuses on AI harm incidents, aiming to provide a deeper understanding of failure modes and the efficacy of risk mitigation practices. In recent work, she examined the uses of AI in US election administration and their risks to electoral integrity. Prior to joining CSET, Mia worked at the European Commission and as a researcher in Brussels, where she studied AI adoption and its implications. She holds an MS in Economics from Lund University and a BS in International Economics from the University of Tuebingen.

Links:
  • How Artificial Intelligence Influences Elections, and What We Can Do About It (Campaign Legal Center)
  • How 2024 presidential candidates are using AI inside their election campaigns (CNBC)
  • Nonprofit group plans ad campaign using AI misinfo to fight AI misinfo (Politico)
  • CLC Op-Ed Examines Artificial Intelligence Disinformation in Elections (Campaign Legal Center)
  • Congress should pass bipartisan bills to safeguard elections from AI (Campaign Legal Center)

Additional information: Democracy Decoded Podcast · More shows from The Democracy Group

The Lawfare Podcast
Lawfare Daily: Jonathan Zittrain on Controlling AI Agents

Oct 17, 2024 · 48:09


Jonathan Zittrain, Faculty Director of the Berkman Klein Center at Harvard Law, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to dive into his recent Atlantic article, "We Need to Control AI Agents Now." The pair discuss what distinguishes AI agents from current generative AI tools and explore the sources of Zittrain's concerns. They also talk about potential ways of realizing the control Zittrain desires. For those eager to dive further into the AI-agent weeds, Zittrain mentioned this CSET report, which provides a thorough exploration of the promises and perils of this new step in AI's development. You may also want to explore "Visibility into AI Agents," penned by Alan Chan et al.

Democracy Decoded
Navigating Election Facts in the AI Era

Oct 15, 2024 · 32:32


When New Hampshire voters picked up the phone earlier this year and heard what sounded like the voice of President Joe Biden asking them not to vote in that state's primary election, the stage was set for an unprecedented election year. The call was a deepfake, and the first major instance of artificial intelligence being used in the 2024 election. With the rise of AI tools that can credibly synthesize voices, images, and videos, how are voters supposed to determine what they can trust as they prepare to cast their votes?

To find out how lawmakers and civil society are pushing back against harmful false narratives and content, we talked with experts engaging the problem on several fronts. Stephen Richer, an elected Republican in Phoenix, posts on X (formerly Twitter) to engage misinformation head-on and protect Arizona voters. Adav Noti, the executive director of CLC, explains how good-governance advocates are hurrying to catch up with a profusion of new digital tools that make the age-old practices of misinformation and disinformation faster and cheaper than ever. And Mia Hoffmann, a researcher who studies the effects of AI on democracies, reminds voters not to panic: bad information and malicious messaging don't always have the power to reach their audience, let alone sway people's opinions or actions.

Host and guests:

Simone Leeper litigates a wide range of redistricting-related cases at CLC, challenging gerrymanders and advocating for election systems that guarantee all voters an equal opportunity to influence our democracy. Prior to arriving at CLC, Simone was a law clerk in the office of Senator Ed Markey and at the Library of Congress, Office of General Counsel. She received her J.D. cum laude from Georgetown University Law Center in 2019 and a bachelor's degree in political science from Columbia University in 2016.

Stephen Richer is the 30th Recorder of Maricopa County. He was elected in November 2020 and took office in January 2021. His office of approximately 150 employees records hundreds of thousands of public documents every year, maintains a voter registration database of 2.4 million voters (the second-largest voting jurisdiction in the United States), and administers the mail voting component of all elections in Maricopa County. Prior to his time as Recorder, Stephen worked in various business sectors and, later, as an attorney at the law firms Steptoe & Johnson LLP and Lewis Roca LLP. He holds a J.D. and M.A. from The University of Chicago and a B.A. from Tulane University. He is completing his PhD at Arizona State University.

Adav Noti is Executive Director at Campaign Legal Center. He has conducted dozens of constitutional cases in trial and appellate courts and the United States Supreme Court. He also advises Members of Congress and other policymakers on advancing democracy through legislation. Prior to joining CLC, Adav served for more than 10 years in nonpartisan leadership capacities within the Office of General Counsel of the Federal Election Commission, and he served as a Special Assistant United States Attorney for the District of Columbia. Adav regularly provides expert analysis for television, radio, and print journalism. He has appeared on broadcasts such as The Rachel Maddow Show, Anderson Cooper 360, PBS NewsHour, and National Public Radio's Morning Edition, and he is regularly cited in publications nationwide, including the New York Times, Washington Post, USA Today, Politico, Slate, and Reuters.

Mia Hoffmann is a Research Fellow at Georgetown's Center for Security and Emerging Technology. Her research focuses on AI harm incidents, aiming to provide a deeper understanding of failure modes and the efficacy of risk mitigation practices. In recent work, she examined the uses of AI in US election administration and their risks to electoral integrity. Prior to joining CSET, Mia worked at the European Commission and as a researcher in Brussels, where she studied AI adoption and its implications. She holds an MS in Economics from Lund University and a BS in International Economics from the University of Tuebingen.

Links:
  • How Artificial Intelligence Influences Elections, and What We Can Do About It (Campaign Legal Center)
  • How 2024 presidential candidates are using AI inside their election campaigns (CNBC)
  • Nonprofit group plans ad campaign using AI misinfo to fight AI misinfo (Politico)
  • CLC Op-Ed Examines Artificial Intelligence Disinformation in Elections (Campaign Legal Center)
  • Congress should pass bipartisan bills to safeguard elections from AI (Campaign Legal Center)

About CLC: Democracy Decoded is a production of Campaign Legal Center, a nonpartisan nonprofit organization that advances democracy through law at the federal, state, and local levels, fighting for every American's right to responsive government and a fair opportunity to participate in and affect the democratic process. Learn more about us. Democracy Decoded is part of The Democracy Group, a network of podcasts that examines what's broken in our democracy and how we can work together to fix it.

The Road to Accountable AI
Helen Toner: AI Safety in a World of Uncertainty

Sep 19, 2024 · 41:15 · Transcription available


Join Professor Kevin Werbach in his discussion with Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology. In this episode, Werbach and Toner discuss how the public views AI safety and ethics, along with both the positive and negative outcomes of advances in AI. They discuss Toner's lessons from the unsuccessful removal of Sam Altman as CEO of OpenAI, oversight structures to audit and approve the AI systems that companies deploy, and the role of the government in AI accountability. Finally, Toner explains how businesses can take charge of their responsible AI deployment.

Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. From 2021 to 2023, she served on the board of OpenAI, the creator of ChatGPT.

Helen Toner's TED Talk: How to Govern AI, Even if It's Hard to Predict
Helen Toner on the OpenAI coup: "It was about trust and accountability" (Financial Times)

Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.

ChinaTalk
Competition Policy 2025

Sep 4, 2024 · 75:32


To discuss the post-election future of US competition policy, ChinaTalk interviewed Peter Harrell and Nazak Nikakhtar. Nazak served in the Trump administration after a long career as a civil servant, where she was instrumental in shaping the Commerce Department's work on China, first at the International Trade Administration and later leading the Bureau of Industry and Security. Peter worked in the Biden administration on the National Economic Council and National Security Council, focusing on international economics, export controls, and investment restrictions. We discuss:
  • The role of the executive in setting the industrial policy agenda
  • Leadership shortcomings in the Biden and Trump administrations
  • Competition with China: bipartisan consensus, bureaucratic inertia, and strategies to stop wasting time
  • Advice for America's next president, from export controls to pharmaceutical decoupling and alliance management
  • Creative approaches to supply chain resilience
This is the 2023 CSET report Jordan referenced (see the "Understanding the Intangibles" section). Outro music: Jun Mayuzumi, "Black Room" (YouTube link).

The Nonlinear Library
EA - Cancelling GPT subscription by adekcz

May 20, 2024 · 4:34


Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio. This is "Cancelling GPT subscription," published by adekcz on May 20, 2024, on The Effective Altruism Forum.

If you are sending money to OpenAI, what would make you stop? For me, there is no ambiguity anymore. The line has been crossed. Many publicly available signals point toward OpenAI's lack of enthusiasm about AI safety while being quite enthusiastic about advancing AI capabilities as far as possible, even toward AGI. There have been several waves of people with AI safety views leaving OpenAI, culminating in the dissolution of the superalignment team, and it is hard to tell whether OpenAI conducts any alignment research at all. I worry about the risk of advanced AI and no longer trust OpenAI to behave well when needed. Therefore I am cancelling my subscription, and I think others should as well. I had basically written this post a few days ago but was waiting for Zvi to provide his summary of events in Open Exodus. Predictably, he just did, and it is a much better summary than I would be able to provide. In the end, he writes: My small update is cancelling my GPT subscription, and I think as many people as possible should do that. Paying for GPT while holding my beliefs was, as of last week, debatable. Now, I think it is unjustifiable. It is very simple: sending money to a company that is trying to achieve something I worry about is not a good idea. They don't even have any plausible deniability of caring about safety now. On the other hand, they are definitely still at the leading edge of AI development. I think AI risks are non-negligible and the plausible harm they could cause is tremendous (extinction or some s-risk scenario). Many people also believe that. But we arrive at really low probabilities when thinking about how much an individual can influence those large problems spanning large futures.

We get very small probabilities but very large positive values. In this case, there is a really small probability that our 20 bucks a month will lead to unsafe AGI. But it definitely helps OpenAI, even if marginally. By this logic, sending money to OpenAI might be by far the most horrific act many people, including many EAs, regularly commit. Normal people don't have assumptions that would make them care. But we have them, and we should act on them. Cancel your subscription and refrain from using the free version, as your data might be even more valuable to them than the subscription money. Please spread this sentiment.

My summary of recent events: For the sake of completeness, since I had written this before Zvi published his post, I am leaving it here in a slightly draft-ish state.

All the people: People who are leaving OpenAI seem to have feelings about AI safety quite similar to mine, and they seem to be very competent. They are insiders; they have hands-on experience with the technology and OpenAI's inner workings. Dario Amodei and around 10 other OpenAI employees left in 2021 to found Anthropic because they felt they could do better AI safety work together (https://youtu.be/5GtVrk00eck?si=FQSCraQ8UOtyxePW&t=39). Then we had the infamous board drama that led to Sam Altman's stronger hold over OpenAI, and the most safety-minded people are there no more: Helen Toner of CSET; Tasha McCauley, former Effective Ventures board member; and Sutskever, who initially just left the board and has now left OpenAI altogether. And finally, over the last few months, the remaining safety people have been pushed away, fired, or resigned: https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

Walk the walk: Founding the superalignment team (https://openai.com/index/introducing-superalignment/) led by Sutskever and Leike and promising 20% of compute to them sounded great. It still seems like it would much more likely happen in a world where O...

Hírstart Robot Podcast
An opposition candidate in Szolnok has withdrawn and is also leaving Jobbik

May 13, 2024 · 4:49


An opposition candidate in Szolnok has withdrawn and is also leaving Jobbik (Telex, 2024-05-13): No one can be nominated in place of Gábor Berényi. The local Jobbik chairman, Zoltán Szotyori-Lázár, has also quit.

"A mayfly," "sent by shadow powers," "empty": who is it? (24.hu, 2024-05-13): At Mi Hazánk's anti-globalist demonstration, we also asked the party's leaders and supporters about their relationship with Fidesz.

Covid was nothing compared to the danger humanity now faces (Portfolio, 2024-05-13): England's former chief medical officer, Dame Sally Davies, warned that, because of growing antibiotic resistance, common infections could kill millions in the future if the phenomenon is not curbed in time. This could pose a graver challenge to humanity than the Covid-19 pandemic, The Guardian reported.

Orbán has lost another lawsuit, this time against the government-friendly Index (Media1, 2024-05-13): On Monday, the Budapest-Capital Regional Court on Markó utca held the first, case-management hearing in the suit between Prime Minister Viktor Orbán and the Lőrinc Mészáros-linked Index.hu, which ended shortly afterwards with a verdict. Although the court had tried in recent days to conceal the date of the public hearing, ...

24.hu: There were mass layoffs at the State Audit Office (444.hu, 2024-05-13): 42 people were let go; László Windisch reviewed the existing posts and job roles one by one, then ordered a 10 percent reduction.

Puzsér: Is the DK brand worth about as much as a rotten orange? (SpiritFM, 2024-05-13): According to Róbert Puzsér, the fall of Fidesz is inconceivable without the fall of DK, and the fall of DK is inconceivable without the fall of Fidesz.

Despite a dream salary of one and a half million, Lidl has been unable to fill a position for four months (vg.hu, 2024-05-13): After three years, the attainable gross pay in one key Lidl position is more than 1.5 million forints. What's more, they are looking for the right person in four different parts of the country.

He made billions in profit, then fell out of favor with the government; guess what became of him (Forbes, 2024-05-13): The star of Csaba Csetényi, once a favorite media figure of the government, has been fading since he fell from grace in 2017.

A massive crackdown is starting on Hungarian roads: this is what the authorities are hunting for all week, so take care (Pénzcentrum, 2024-05-13): Between May 13 and 19, 2024, police will carry out intensified checks across the whole of Hungary, targeting heavy trucks and buses.

From July, compulsory insurance must also be taken out for e-scooters (Azenpenzem, 2024-05-13): From July 16, the range of vehicles requiring compulsory motor liability insurance (KGFB) will expand. Among so-called micromobility devices, net weight and design speed determine which ones need coverage.

The USA is imposing a 100 percent tariff on Chinese cars (autopro, 2024-05-13): The move is the Biden administration's latest effort to protect domestic industry from cheap competition.

England has also taken notice of the Hungarian coach who won a championship (Sportal, 2024-05-13): Krisztián Tímár won the NB II championship with the Nyíregyháza football team, and this had an echo in England as well.

The Magyar Kupa has prestige again, and another exciting final is in prospect (Büntető.com, 2024-05-13): The 2022 final will be repeated; the two green-and-white teams can contest not only the league but also the cup. It is a cliché but true: a one-off final is more of a chance for Paks, even if the teams have little new left to show each other.

El Niño dealt a heavy blow to Latin America and the Caribbean in 2023 (Kiderül, 2024-05-13): According to a new report from the World Meteorological Organization (WMO), El Niño and long-term climate change together battered Latin America and the Caribbean in 2023.

Find our other episodes at podcast.hirstart.hu.

Hírstart Robot Podcast - Friss hírek
One of the opposition candidates in Szolnok has withdrawn and is also leaving Jobbik

Hírstart Robot Podcast - Friss hírek

Play Episode Listen Later May 13, 2024 4:49


One of the opposition candidates in Szolnok has withdrawn and is also leaving Jobbik – Telex, 2024-05-13 12:56:19, Domestic, Jász-Nagykun-Szolnok, Szolnok, Jobbik. No one can be nominated in place of Gábor Berényi. Zoltán Szotyori-Lázár, head of the local Jobbik chapter, has also left the party.
Mayfly, sent by the shadow power, empty – who is that? – 24.hu, 2024-05-13 11:37:48, Domestic, Fidesz. At Mi Hazánk's anti-globalist demonstration, we also asked the party's leaders and supporters about their relationship to Fidesz.
Covid was nothing compared to the danger humanity now faces – Portfolio, 2024-05-13 11:22:00, Health, England, Coronavirus, Guardian, Antibiotics. England's former Chief Medical Officer, Dame Sally Davies, warned that because of growing resistance to antibiotics, common infections could cause millions of deaths in the future if this phenomenon is not contained in time. This could pose a graver challenge to humanity than the Covid-19 pandemic, The Guardian reported.
Orbán has lost another lawsuit, this time against the pro-government Index – Media1, 2024-05-13 12:41:05, Media, Viktor Orbán, Court, Index, Lőrinc Mészáros. On Monday, the Budapest-Capital Regional Court on Markó Street held the first, case-management hearing in the suit between Prime Minister Viktor Orbán and the Lőrinc Mészáros-linked Index.hu, which ended shortly afterwards with a verdict. Although in recent days the court had tried to conceal the date of the public hearing, ...
24.hu: Mass layoffs at the State Audit Office – 444.hu, 2024-05-13 10:11:14, Domestic, State Audit Office. 42 people were let go; László Windisch reviewed the existing posts and job roles one by one, then ordered a 10 percent reduction.
Puzsér: is DK as a brand worth about as much as a rotten orange? – SpiritFM, 2024-05-13 10:19:22, Domestic, Fidesz, DK, Róbert Puzsér. According to Róbert Puzsér, the fall of Fidesz is inconceivable without the fall of DK, and the fall of DK is inconceivable without the fall of Fidesz.
Despite a dream salary of 1.5 million forints, Lidl has been unable to fill a job for four months – vg.hu, 2024-05-13 11:38:42, Domestic, Lidl. After three years, the attainable gross pay in one of Lidl's key positions is more than 1.5 million forints. What's more, they are looking for the right person in four different parts of the country.
He made billions in profit, then fell out of favor with the government – guess what became of him – Forbes, 2024-05-13 13:24:01, Business. The star of Csaba Csetényi, once a favorite media figure of the government, has been fading since he fell from grace in 2017.
A massive crackdown begins on Hungarian roads: this is what the authorities will be hunting for all week, so take care – Pénzcentrum, 2024-05-13 11:27:00, Cars, Police. Between 13 and 19 May 2024, police will carry out intensified checks across the whole of Hungary, targeting heavy trucks and buses.
From July, compulsory insurance must also be taken out for e-scooters – Azenpenzem, 2024-05-13 15:02:00, Economy, KGFB. From 16 July, the range of vehicles requiring compulsory motor liability insurance will widen. Among so-called micromobility devices, net weight and design speed determine which ones must be insured.
The US imposes a 100 percent tariff on Chinese cars – autopro, 2024-05-13 15:25:00, Cars, USA, China. The move is the Biden administration's latest effort to protect domestic industry from cheap competition.
England has also taken notice of the championship-winning Hungarian coach – Sportal, 2024-05-13 10:35:55, Football, Szabolcs-Szatmár-Bereg, Nyíregyháza, NB II. Krisztián Tímár won the NB II championship with the Nyíregyháza football team, and the feat also drew a response in England.
The Hungarian Cup has prestige again, and another exciting final is in prospect – Büntető.com, 2024-05-13 15:08:56, Football, Paks. The 2022 final is repeated: the two green-and-white teams can contest a close race not only in the league but in the cup as well. A cliché, but true: the one-off final is more of a chance for Paks, even if the teams have little new left to show each other.
El Niño dealt a heavy blow to Latin America and the Caribbean in 2023 – Kiderül, 2024-05-13 16:54:38, Weather, USA, Climate change, El Niño, WMO. According to a new report from the World Meteorological Organization (WMO), El Niño and long-term climate change together battered Latin America and the Caribbean in 2023. Find our other episodes at podcast.hirstart.hu.

Inside China's AI Ecosystem: A View From Beijing

Play Episode Listen Later Apr 10, 2024 96:23


In this episode, we explore the Chinese AI ecosystem with 'L-squared,' an anonymous tech worker based in Beijing. We discuss major players, model quality, public engagement, regulation, and the US 'chip ban.' Discover the similarities and differences between US and Chinese AI landscapes, and gain a nuanced perspective on the current state of AI in China. USEFUL RESOURCES: Testing Chinese models: Yi-34B-Chat (made by Kai-Fu Lee's team 01.AI) can be tried out via Replicate (https://replicate.com/01-ai/yi-34b-chat) or Hugging Face. You can also use the ChatGLM playground (https://open.bigmodel.cn/trialcenter) and Baidu's ERNIE (https://yiyan.baidu.com/) without a Chinese SIM card. Benchmarking models: SuperCLUE is one of the most prominent benchmarks - the latest results are on GitHub (https://github.com/CLUEbenchmark/SuperCLUE) and the paper explaining the methodology is here (https://arxiv.org/abs/2307.15020). Regulation: Explainer (https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117) from Matt Sheehan; piece (https://www.chinatalk.media/p/how-tight-ai-regs-hurt-chinese-firms) on how genAI regs are affecting Chinese companies. US-China competition: Jeff Ding's work (https://www.tandfonline.com/doi/full/10.1080/09692290.2023.2173633) on the diffusion deficit in S&T; Bloomberg piece (https://www.bloomberg.com/graphics/2023-china-huawei-semiconductor/) on Huawei's semiconductor development efforts. Staying up to date: Sign up to alerts from CSET's Scout tool (https://scout.eto.tech/); subscribe to Concordia AI's AI safety (https://aisafetychina.substack.com/) in China newsletter (disclaimer: I used to work at Concordia!) A 2016 profile (https://chinai.substack.com/p/chinai-37-happy-20th-anniversary) on Microsoft Research Asia by Wang Jingjing, covered in Jeff Ding's ChinAI newsletter SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. 
OCI has four to eight times the bandwidth of other clouds, offers one consistent price instead of variable regional pricing, and of course nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off http://www.omneky.com/ The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Head to Squad to access global engineering without the headache and at a fraction of the cost: head to choosesquad.com and mention “Turpentine” to skip the waitlist. Plumb is a no-code AI app builder designed for product teams who care about quality and speed. What is taking you weeks to hand-code today can be done confidently in hours. Check out https://bit.ly/PlumbTCR for early access. TIMESTAMPS: (00:00) Introduction (07:24) China's AI Ecosystem (13:40) Public AI Engagement (17:33) Sponsors : OCI / Omneky (18:50) AI Tools Comparison (35:37) Sponsors : Brave / Squad / Plumb (39:14) AI Regulatory Maze (51:02) AI Performance, Censorship (55:28) Chinese AI Regulations (01:04:37) Tech, Research Role (01:12:11) Global AI Ecosystem (01:23:22) Cultural AI Perspectives (01:29:14) AI Safety, Cooperation

In AI We Trust?
Helen Toner (CSET): How to govern AI in the face of uncertainty?

In AI We Trust?

Play Episode Listen Later Mar 13, 2024 35:04


This week Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET), joins In AI We Trust? to discuss decoding China's AI policies, AI's role in warfare, the potential impact of AI agents, challenges around regulating changing technology, and how to approach AI evaluations.
Resources mentioned in this episode:
Regulating the AI Frontier: Design Choices and Constraints
Will China Set Global Tech Standards?
The rise of artificial intelligence raises serious concerns for national security

GovCast
CyberCast - What Government CIOs Should Know About US-China Security Threats

GovCast

Play Episode Listen Later Feb 13, 2024 20:25


Cyber threats from China have been the subject of recent high-profile Congressional hearings. Analyst Jack Corrigan from Georgetown University's Center for Security and Emerging Technology (CSET) spoke to the U.S.-China Economic and Security Review Commission on Feb. 1 about the risks that Chinese information and communications technology and services (ICTS) pose to U.S. national security and critical infrastructure. Corrigan joins CyberCast to break down these threats, what they mean for government agencies, what CIOs can do about them, and the role of procurement regulations in strengthening those defenses.

The Next Five
The Future of AI in Healthcare

The Next Five

Play Episode Listen Later Oct 28, 2023 29:38


The advent and speed of advancement in AI have far-reaching consequences for multiple industries. This five-part miniseries will spotlight various industry sectors where AI has a significant and growing impact, with this particular episode centering on AI's role in healthcare. AI in healthcare offers life-changing benefits as well as raising far-reaching concerns. In the medical arena, various AI programmes like Large Language Models and Foundation Models are being used in many specialities, both in research and clinically. AI's ability to rapidly process vast amounts of data and identify subtle patterns affords unrivalled potential within medicine. It can also help save money by streamlining processes. But there are risks. As the technology advances, so do concerns over inaccurate diagnoses that could exacerbate health inequalities that already exist in the system. Another area of particular focus is the transparency and trustworthiness of the AI models being built. This is where the importance of regulation comes in. We speak with Dr Alan Karthikesalingam, Senior Staff Clinician Scientist and Research Lead at Google, who offers his insight into the research and clinical applications of AI in healthcare. Greg Sorensen, Lead at Aidence, shows how AI is being used in the clinical screening of lung cancer, a key prevention tool that is already saving lives. Inma Martinez, chair of the multi-expert group at the Global Partnership on AI, addresses the importance of regulation and governance of AI in healthcare and beyond. Our sources for the show: FT Resources, CEPR, European Parliament research, CSET, BMJ, KCL. This content is paid for by Google and is produced in partnership with the Financial Times' Commercial Department. Hosted on Acast. See acast.com/privacy for more information.

CommonsCast
Episode 137: CommonsCast Episode 137-September 27, 2023

CommonsCast

Play Episode Listen Later Sep 26, 2023 23:37


On this episode of the CommonsCast, Lauren hosts Dean Gresalfi in the Q&A segment, Sariha provides the details you need about events on campus this week in the Commons Calendar segment, and Cynthia sits down with Lauren Lamson for the Human of the Commons interview. Lauren is a CSET major originally from Madison, Wisconsin.

Inside China
2. Follow the AI money: China, the Quad and Southeast Asia

Inside China

Play Episode Listen Later Aug 18, 2023 31:04


Andrew Collier, managing director of Orient Capital Research, analyses how the world of investment reacted to the latest US investment restrictions on China’s tech industry, and the options that are left for Beijing as it aims to become the world leader in artificial intelligence. Georgetown University’s CSET research analyst Ngor Luong knows exactly who has been investing in China, and explains why she expects more money to flow from China to Southeast Asia.  

ChinaTalk
How Can the Pentagon Trust AI?

ChinaTalk

Play Episode Listen Later Aug 1, 2023 63:13


How is the DoD thinking about deploying AI? What are the challenges and opportunities involved in building out AI assurance? To discuss, I brought on Dr. Jane Pinelis, Chief AI Engineer at The Johns Hopkins University Applied Physics Laboratory. She was previously the Chief of the Test, Evaluation, and Assessment branch at the Department of Defense Joint Artificial Intelligence Center (JAIC). Prior to joining the JAIC, Dr. Pinelis served as the Director of Test and Evaluation for USDI's Algorithmic Warfare Cross-Functional Team, better known as Project Maven. Cohosting is Karson Elmgren of CSET. Outro music: https://www.youtube.com/watch?v=HgzGwKwLmgM Learn more about your ad choices. Visit megaphone.fm/adchoices


Technology and Security (TS)
Synthetic biotech, DARPA for intelligence and AI regulation with RAND CEO Jason Matheny

Technology and Security (TS)

Play Episode Listen Later Jul 20, 2023 38:53


Dr Miah Hammond-Errey is joined by Jason Matheny, CEO of RAND Corporation and founder of CSET to delve into the complexities of regulating emerging technologies — from AI to biotechnology, what the United States can learn from Australia, the opportunity a current bottleneck in compute capacity offers democracies, and his work at IARPA — ‘the DARPA of the intelligence world' — using innovative methods to solve the hard problems of policy and national security. They also discuss the role of alliances such as Five Eyes in combatting AI-generated disinformation and why standards bodies need greater support.Jason is the President and CEO of RAND Corporation. He previously led technology and national security policy for the White House in the National Security Council and the Office of Science and Technology Policy. Jason founded the Center for Security and Emerging Technology (CSET) at Georgetown University, was a Commissioner on the National Security Commission on Artificial Intelligence and the director of the Intelligence Advanced Research Projects Activity (IARPA). He has also worked at the World Bank, Oxford University, the Applied Physics Laboratory and Princeton University.Technology and Security is hosted by Dr Miah Hammond-Errey, the inaugural director of the Emerging Technology program at the United States Studies Centre, based at the University of Sydney. 
Miah's Twitter: https://twitter.com/Miah_HE
Resources mentioned in the recording:
Supporting responsible AI: discussion paper (Department of Industry, Science and Resources)
The Illusion of China's AI Prowess (Helen Toner, Jenny Xiao, and Jeffrey Ding, Foreign Affairs)
Artificial Intelligence: Challenges and Opportunities for the Department of Defense (Jason Matheny, Senate testimony)
Challenges to US National Security and Competitiveness Posed by AI (Jason Matheny, Senate testimony)
Dealing with Disinformation: A Critical New Mission Area For AUSMIN (Dr Miah Hammond-Errey, USSC)
RAND Truth Decay original report (Michael Rich and Jennifer Kavanagh)
RAND Truth Decay
The future of digital health with federated learning (Andrew Trask et al.)
SILMARILS – Chemical residue detection (IARPA)
Making great content requires fabulous teams. Thanks to the great talents of the following.
Research support and assistance: Tom Barrett
Production: Elliott Brennan
Podcast Design: Susan Beale
Music: Dr Paul Mac
This podcast was recorded on the lands of the Ngunnawal people, and we pay our respects to their Elders past, present and emerging — here and wherever you are listening. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.

The Nonlinear Library
EA - AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms by stepanlos

The Nonlinear Library

Play Episode Listen Later Jun 29, 2023 8:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms, published by stepanlos on June 29, 2023 on The Effective Altruism Forum. Purpose of this post: The purpose of this post is three-fold: 1) highlight the importance of incident sharing and share best practices from adjacent fields to AI safety 2) collect tentative and existing ideas of implementing a widely used AI incident database and 3) serve as a comprehensive list of existing AI incident databases as of June 2023. Epistemic status: I have spent around 25+ hours researching this topic and this list is by no means meant to be exhaustive. It should give the reader an idea of relevant adjacent fields where incident databases are common practice and should highlight some of the more widely used AI incident databases which exist to date. Please feel encouraged to comment any relevant ideas or databases that I have missed, I will periodically update the list if I find anything new. Motivation for AI Incident Databases Sharing incidents, near misses and best practices in AI development decreases the likelihood of future malfunctions and large-scale risk. To mitigate risks from AI systems, it is vital to understand the causes and effects of their failures. Many AI governance organizations, including FLI and CSET, recommend creating a detailed database of AI incidents to enable information-sharing between developers, government and the public. Generally, information-sharing between different stakeholders 1) enables quicker identification of security issues and 2) boosts risk-mitigation by helping companies take appropriate actions against vulnerabilities. 
Best practices from other fields
The National Transportation Safety Board (NTSB) publishes and maintains a database of aviation accidents, including detailed reports evaluating technological and environmental factors as well as potential human errors causing the incident. The reports include descriptions of the aircraft, how it was operated by the flight crew, environmental conditions, consequences of the event, probable cause of the accident, etc. This meticulous record-keeping and the accompanying best-practice recommendations are among the key factors behind the steady decline in yearly aviation accidents, making air travel one of the safest forms of travel.
The National Highway Traffic Safety Administration (NHTSA) maintains a comprehensive database recording the number of crashes and fatal injuries caused by automobile and motor vehicle traffic, detailing information about the incidents such as specific driver behavior, atmospheric conditions, light conditions or road type. NHTSA also enforces safety standards for manufacturing and deploying vehicle parts and equipment.
Common Vulnerabilities and Exposures (CVE) is a cross-sector public database recording specific vulnerabilities and exposures in information-security systems, maintained by the MITRE Corporation. If a vulnerability is reported, it is examined by a CVE Numbering Authority (CNA) and entered into the database with a description and the identification of the information-security system and all its versions that it applies to.
Information Sharing and Analysis Centers (ISACs) are entities established by important stakeholders in critical infrastructure sectors which are responsible for collecting and sharing 1) actionable information about physical and cyber threats and 2) best threat-mitigation practices. ISACs have 24/7 threat warning and incident reporting services, providing relevant and prompt information to actors in various sectors including automotive, chemical, gas utility or healthcare.
National Council of Information Sharing and Analysis Centers (NCI) is a cross-sector forum designated for sharing and integrating information among sector-based ISACs (Information Sharing an...

The Nonlinear Library
EA - Upcoming speaker series on emerging tech, national security & US policy careers by kuhanj

The Nonlinear Library

Play Episode Listen Later Jun 21, 2023 3:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming speaker series on emerging tech, national security & US policy careers, published by kuhanj on June 21, 2023 on The Effective Altruism Forum. There is an upcoming virtual speaker series on emerging tech, national security, and US policy careers, and I wanted to share this opportunity. I've heard some of these speakers before and think the series could be really helpful for anyone interested in working on AI/bio policy in the US. I've pasted the announcement of the speaker series below.
Summer webinar series on emerging technology & national security policy careers
The Horizon Institute for Public Service, in collaboration with partners at the Scowcroft Center for International Affairs at the Texas A&M Bush School and SeedAI, is excited to announce an upcoming webinar series on US emerging technology policy careers to help individuals decide if they should pursue careers in this field. In line with Horizon's and our partners' focus areas, the series will focus primarily on policy opportunities related to AI and biosecurity and run from late June to early August. Sessions will not be recorded and individuals must sign up to receive event access — you can express interest in attending here. Horizon's mission is to help the US government navigate our era of rapid technological change by fostering the next generation of public servants with emerging technology expertise. The policy opportunities and challenges related to emerging technology are interdisciplinary and will require talent from a range of backgrounds and communities, including many that don't have ready access to information about what policy work is like and what a career transition might look like. As a result, this series will cover:
Examples of individuals with non-traditional (e.g. technical or legal) backgrounds transitioning into emerging technology policy
Examples of what a "day in a life" is like at a think tank, advocacy organization, executive agency, or in Congress
Examples of ongoing policy efforts and debates related to AI or biosecurity policy
All sessions will involve interactive conversations with experienced policy practitioners and opportunities for audience questions. Some of the sessions will be useful for individuals from all fields and career stages, while others are more focused — you may choose to attend all or only some of the sessions. Currently scheduled sessions include:
Q&A with Jason Matheny, CEO of the RAND Corporation
Q&A with Nikki Teran, Institute for Progress Fellow, on biosecurity challenges and policy careers to address them
Q&A with Helen Toner, Director of Strategy and Foundational Research Grants at CSET, on AI challenges and policy careers to address them
What's it like working in the executive branch?
What's it like working in a think tank?
What's it like working in Congress?
Choosing graduate schools for policy careers (master's, PhD, JD)
Transitioning from law to policy
Transitioning from science and tech to policy
Advancing in policy from underrepresented backgrounds
Horizon Fellowship info session
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Diplomatic Immunity
BONUS: The New Fire: War, Peace, and Democracy in the Age of AI

Diplomatic Immunity

Play Episode Listen Later Mar 29, 2023 35:55


Bonus: What, exactly, is AI? What are its applications? Why does it matter for national security and geopolitics? Will machines rise up and destroy us all?! Fellow Hoya Andrew Imbrie discussed these questions and more in a fascinating conversation on his new book, co-authored with Ben Buchanan, titled The New Fire: War, Peace, and Democracy in the Age of AI. Come for the Terminator and Matthew Broderick references, but stay for the essential information Imbrie provides on the future of AI and national security! Andrew Imbrie is an Associate Professor of the Practice and the Gracias Chair in Security and Emerging Technology at the School of Foreign Service at Georgetown University. He is also an Affiliate at Georgetown's Center for Security and Emerging Technology (CSET). Prior to his current role, he served as a senior advisor on cyber and emerging technology policy at the U.S. Mission to the United Nations. He worked previously as a Senior Fellow at CSET, where he focused on issues at the intersection of artificial intelligence and international security and served as an advisor to the National Security Commission on Artificial Intelligence. From 2013 to 2017, he served as a member of the policy planning staff and speechwriter to Secretary John Kerry at the U.S. Department of State. He has also worked as a professional staff member on the U.S. Senate Foreign Relations Committee and as a fellow at the Carnegie Endowment for International Peace. He received his B.A. in the humanities from Connecticut College and his M.A. from the Walsh School of Foreign Service. He holds a Ph.D. in international relations from Georgetown University. His writings have appeared in such outlets as Foreign Affairs, War on the Rocks, Lawfare, Survival, Defense One, and On Being. His first book is Power on the Precipice: The Six Choices America Faces in a Turbulent World (New Haven: Yale University Press, 2020). Andrew grew up as the son of a U.S.
Foreign Service officer and now resides in Maryland with his wife Teresa Eder, a foreign policy analyst, journalist, and producer.   Buy The New Fire: War, Peace, and Democracy in the Age of AI here. (https://mitpress.mit.edu/9780262046541/the-new-fire/)   Episode recorded: December 2, 2022   Produced by Daniel Henderson   Episode Image: The New Fire: War, Peace, and Democracy in the Age of AI cover [MIT Press]   Diplomatic Immunity: Frank and candid conversations about diplomacy and foreign affairs   Diplomatic Immunity, a podcast from the Institute for the Study of Diplomacy at Georgetown University, brings you frank and candid conversations with experts on the issues facing diplomats and national security decision-makers around the world.    Funding support from the Carnegie Corporation of New York. 

@BEERISAC: CPS/ICS Security Podcast Playlist
Emilio Salabarria: Building Organizational Resilience through Comprehensive Cybersecurity Assessments for Cyber Florida

@BEERISAC: CPS/ICS Security Podcast Playlist

Play Episode Listen Later Mar 24, 2023 49:51


Podcast: The PrOTect OT Cybersecurity Podcast (LS 28 · TOP 10%)
Episode: Emilio Salabarria: Building Organizational Resilience through Comprehensive Cybersecurity Assessments for Cyber Florida
Pub date: 2023-03-23
About Emilio Salabarria: Emilio Salabarria is a highly accomplished expert in emergency management and cybersecurity. He's been serving as the Deputy Senior Executive Advisor at Cyber Florida since July 2022. Emilio brings a wealth of knowledge and expertise to the table when it comes to cybersecurity education, research, training and development, public policies, cybersecurity-related technologies, and critical infrastructure support. He's got some serious experience under his belt too - having previously worked at Tampa Electric Company, the Tampa Port Authority, and The Depository Trust and Clearing Corporation. Emilio's career began in 1985 as a firefighter, and he worked his way up to Division Fire Chief of Special Operations at Tampa Fire Rescue. During his time there, Emilio played a key role in the planning of major events such as the Gasparilla Parades, the 2012 Republican National Convention, and Super Bowl 43.
Emilio's got a wealth of experience and education to draw on, and he's making a real impact in the fields of emergency management and cybersecurity.
In this episode, Aaron and Emilio Salabarria discuss:
Risk assessment programs for securing Florida's critical infrastructure
The importance of participating in cybersecurity risk assessments and having a plan for the implementation of recommendations
Helping small counties prevent a cyber 9/11 by training and assessing them through tabletop exercises and the CSET tool
The potential impact of a comprehensive cybersecurity assessment tool for improving organizational resilience and preparedness
Key Takeaways:
Florida is assessing the cybersecurity risks of its public and private entities in 16 sectors to identify weaknesses and provide solutions to enhance preparedness against cyberattacks.
Participating in cybersecurity risk assessments, such as CSET, is crucial for Florida's critical infrastructure to identify risks and develop effective cybersecurity strategies, and is a low-friction and easy process.
Tabletop exercises are useful for cybersecurity training, but small counties with understaffed IT departments need more support to participate in them and prevent cyber attacks.
Completing the CSET tool for cybersecurity assessments to at least 90% can lead to benefits for participating organizations, regardless of whether they answer all questions, and the results from the program in Florida could be applicable to other states and industries.
"What we're trying to do here at Cyber Florida, we're trying to prevent a cyber 9/11. That's what we want to avoid, and that's the reason for the risk assessment, the training, and the report to the state to see what they will do." — Emilio Salabarria
Connect with Emilio Salabarria:
Website: https://cyberflorida.org/
Email: esalabarria@cyberflorida.org
LinkedIn: https://www.linkedin.com/in/emilio-f-salabarria-ms-cim-1816334/ and https://www.linkedin.com/company/cyberflorida/
Twitter: https://twitter.com/CyberSecurityFL
Instagram: https://www.instagram.com/cybersecurityfl/
CyberSecureFlorida Initiative: https://cyberflorida.org/cybersecureflorida/
Florida Cybersecurity Grant Program: https://digital.fl.gov/cybersecurity/
Connect with Aaron:
LinkedIn: https://www.linkedin.com/in/aaronccrow
Learn more about Industrial Defender:
Website: https://www.industrialdefender.com/podcast
LinkedIn: https://www.linkedin.com/company/industrial-defender-inc/
Twitter: https://twitter.com/iDefend_ICS
YouTube: https://www.youtube.com/@industrialdefender7120
Audio production by Turnkey Podcast Productions. You're the expert. Your podcast will prove it.
The podcast and artwork embedded on this page are from Aaron Crow, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.

The PrOTect OT Cybersecurity Podcast
Emilio Salabarria: Building Organizational Resilience through Comprehensive Cybersecurity Assessments for Cyber Florida

The PrOTect OT Cybersecurity Podcast

Play Episode Listen Later Mar 23, 2023 49:51


About Emilio Salabarria: Emilio Salabarria is a highly accomplished expert in emergency management and cybersecurity. He's been serving as the Deputy Senior Executive Advisor at Cyber Florida since July 2022. Emilio brings a wealth of knowledge and expertise to the table when it comes to cybersecurity education, research, training and development, public policies, cybersecurity-related technologies, and critical infrastructure support. He's got some serious experience under his belt too - having previously worked at Tampa Electric Company, the Tampa Port Authority, and The Depository Trust and Clearing Corporation. Emilio's career began in 1985 as a firefighter, and he worked his way up to Division Fire Chief of Special Operations at Tampa Fire Rescue. During his time there, Emilio played a key role in the planning of major events such as the Gasparilla Parades, the 2012 Republican National Convention, and Super Bowl 43. Emilio's got a wealth of experience and education to draw on, and he's making a real impact in the fields of emergency management and cybersecurity.

In this episode, Aaron and Emilio Salabarria discuss:
- Risk assessment programs for securing Florida's critical infrastructure
- The importance of participating in cybersecurity risk assessments and having a plan for the implementation of recommendations
- Helping small counties prevent a cyber 9/11 by training and assessing them through tabletop exercises and the CSET tool
- The potential impact of a comprehensive cybersecurity assessment tool for improving organizational resilience and preparedness

Key Takeaways:
- Florida is assessing the cybersecurity risks of its public and private entities in 16 sectors to identify weaknesses and provide solutions to enhance preparedness against cyberattacks.
- Participating in cybersecurity risk assessments, such as CSET, is crucial for Florida's critical infrastructure to identify risks and develop effective cybersecurity strategies, and is a low-friction and easy process.
- Tabletop exercises are useful for cybersecurity training, but small counties with understaffed IT departments need more support to participate in them and prevent cyber attacks.
- Completing the CSET tool for cybersecurity assessments to at least 90% can lead to benefits for participating organizations, regardless of whether they answer all questions, and the results from the program in Florida could be applicable to other states and industries.

"What we're trying to do here at Cyber Florida, we're trying to prevent a cyber 9/11. That's what we want to avoid, and that's the reason for the risk assessment, the training, and the report to the state to see what they will do." — Emilio Salabarria

Connect with Emilio Salabarria:
Website: https://cyberflorida.org/
Email: esalabarria@cyberflorida.org
LinkedIn: https://www.linkedin.com/in/emilio-f-salabarria-ms-cim-1816334/ and https://www.linkedin.com/company/cyberflorida/
Twitter: https://twitter.com/CyberSecurityFL
Instagram: https://www.instagram.com/cybersecurityfl/
CyberSecureFlorida Initiative: https://cyberflorida.org/cybersecureflorida/
Florida Cybersecurity Grant Program: https://digital.fl.gov/cybersecurity/

Connect with Aaron:
LinkedIn: https://www.linkedin.com/in/aaronccrow

Learn more about Industrial Defender:
Website: https://www.industrialdefender.com/podcast
LinkedIn: https://www.linkedin.com/company/industrial-defender-inc/
Twitter: https://twitter.com/iDefend_ICS
YouTube: https://www.youtube.com/@industrialdefender7120

Audio production by Turnkey Podcast Productions. You're the expert. Your podcast will prove it.

ChinaTalk
Chips Act: A How To Guide

ChinaTalk

Play Episode Listen Later Jan 17, 2023 51:53


What can $52bn for semiconductors actually accomplish? To discuss the tensions and tradeoffs underlying the decisions that the US government is about to make on how to spend this money, I have on today Jacob Feldgoise, an analyst at CSET, and Vishnu Kannan, who works at the Carnegie Endowment. We'll be discussing their fantastic paper entitled "The Limits of Reshoring and Next Steps for U.S. Semiconductor Policy." Jacob and Vishnu's paper: https://carnegieendowment.org/2022/11/22/after-chips-act-limits-of-reshoring-and-next-steps-for-u.s.-semiconductor-policy-pub-88439 Subscribe to the ChinaTalk Newsletter! https://www.chinatalk.media/ Outro music: federal funding by Cake https://www.youtube.com/watch?v=phHe6aNcocQ Cover art: I fed Midjourney a Picasso portrait and told it 'semiconductor supply chain' Learn more about your ad choices. Visit megaphone.fm/adchoices


The Nonlinear Library
EA - Reminder (Sept 15th deadline): Apply for the Open Philanthropy Technology Policy Fellowship by JoanGass

The Nonlinear Library

Play Episode Listen Later Sep 10, 2022 1:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reminder (Sept 15th deadline): Apply for the Open Philanthropy Technology Policy Fellowship, published by JoanGass on September 9, 2022 on The Effective Altruism Forum. The deadline for the Open Philanthropy Technology Policy Fellowship (OPTPF) is September 15th. If you are interested in working in US policy on topics such as AI and biosecurity, we strongly encourage you to consider applying! The first cohort of 15 OPTPF fellows received 10 weeks of policy training sessions together, and 100% of them matched with a host organization. They have now started placements in the executive branch (in the Departments of Defense, Health and Human Services, and Homeland Security), congressional offices, and think tanks (CSET, NTI, CHS, CSIS, Brookings, CDT, and CEIP). To learn more about the program, you can check out the fellowship page, this past Forum post, and the fellowship FAQ. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

AI with AI
EPIC BLOOM

AI with AI

Play Episode Listen Later Aug 26, 2022 35:19


Andy and Dave discuss the latest in AI and autonomy news and research, including an announcement that the Federal Trade Commission is exploring rules for cracking down on harmful commercial surveillance and lax data security, with the public having an opportunity to share input during a virtual public forum on 8 September 2022. The Electronic Privacy Information Center (EPIC), with help from Caroline Kraczon, releases The State of State AI Policy, a catalog of AI-related bills that states and local governments have passed, introduced, or failed during the 2021-2022 legislative season. In robotics, Xiaomi introduces CyberOne, a 5-foot 9-inch robot that can identify "85 types of environmental sounds and 45 classifications of human emotions." Meanwhile, at a recent Russian arms fair, Army-2022, a developer showed off a robot dog with a rocket-propelled grenade strapped to its back. NIST updates its AI Risk Management Framework to the second draft, making it available for review and comment. DARPA launches the SocialCyber project, a hybrid-AI project aimed at helping to protect the integrity of open-source code. BigScience launches BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), a "bigger than GPT-3" multilingual (46 languages) model created by a group of over 1,000 AI researchers, which anyone can download and tinker with for free. Researchers at MIT develop artificial synapses that shuttle protons, resulting in synapses 10,000 times faster than biological ones. China's Comprehensive National Science Center claims that it has developed "mind-reading AI" capable of measuring loyalty to the Chinese Communist Party. Researchers at the University of Sydney demonstrate that human brains are better at identifying deepfakes than people's conscious judgments, by examining signals directly from neural activity. Researchers at the University of Glasgow combine AI with human vision to see around corners, reconstructing 16x16-pixel images of simple objects that the observer could not directly see. GoogleAI publishes research on Minerva, using language models to solve quantitative reasoning problems and dramatically improving the state of the art. Researchers from MIT, Columbia, Harvard, and Waterloo publish work on a neural network that solves, explains, and generates university math problems "at a human level." CSET makes available the Country Activity Tracker for AI, an interactive tool on tech competitiveness and collaboration. And a group of researchers at UC Merced's Cognitive and Information Sciences Program make available Neural Networks in Cognitive Science. https://www.cna.org/our-media/podcasts/ai-with-ai

The Nonlinear Library
EA - War Between the US and China: A case study for epistemic challenges around China-related catastrophic risk by Jordan Schneider

The Nonlinear Library

Play Episode Listen Later Aug 12, 2022 73:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: War Between the US and China: A case study for epistemic challenges around China-related catastrophic risk, published by Jordan Schneider on August 12, 2022 on The Effective Altruism Forum.

TL;DR

China has a critical role to play in addressing the 21st century's most pressing catastrophic risks. In particular, the risk of a US-China war in the coming decades is real (Metaculus gives 50/50 odds of a conflict with >100 deaths by 2050, and there's perhaps a 15% chance of a war of the scale we're considering for this post). A conventional conflict could cost over 2 billion life years in the combatant countries even before taking into account nuclear escalation. Even less horrific wartime scenarios would reduce global GDP by double digits and plunge perhaps 5% of the world's population back into extreme poverty. Analysis of modern China is very neglected relative to its scale. Only ~600 people in the United States conduct research on anything PRC-related for a living outside the US government. The US Intelligence Community does not have it covered, and a vanishing percentage of the 600 are oriented towards reducing catastrophic risk. Even more concerning is that the flow of researchers into the space has not increased even as US-China tensions have heightened (in fact, the early career talent pipeline is broken). Thankfully, there's a lot of low-hanging fruit! Funding ideas include:
- Money to increase the pool of early career jobs, including direct funding for early career positions at think tanks
- A survey that explores the state of research and talent challenges, culminating in a roadmap to improve the quality of US policy debate around China, more oriented around catastrophic risks
- Epistemic tools like a giant centralized repository of Chinese government documents and translation algorithms tuned for government documents
- Philanthropically-funded research organizations built off CSET's model that could inspire Congress to fund more impactful analysis

Along with reducing the chances of a US-China war, an improved understanding of the PRC could prove invaluable for working on other catastrophic risks like AI safety and biosecurity.

A Word from the Authors

Jordan Schneider: I was the lead author of this report. For the past five years, I've run the ChinaTalk podcast and newsletter. I've interviewed more than 250 China-focused journalists, academics, and policymakers based outside of the PRC in recorded conversations for podcast episodes, and had 150 more casual conversations. I have also worked for six years in the think tank field on China-focused work, spent two years in grad school in Beijing, and another two years in a macro-focused role at a hedge fund with significant investments in China. I have also interacted with over a hundred students who have reached out to me for career advice. The claims that follow build off those conversations, my work experience, and countless hours reading China-focused academic literature as well as think tank and government reports. I have scoped this paper to focus specifically on great power war in the next fifty years as it's the topic where I have the deepest expertise. However, practically every major short-termist and longtermist risk area (biorisk, global economic growth prospects, space governance, geoengineering, AI safety) is intimately tied to China, plagued by the same analytical deficiencies in the English-speaking world, and could be served by interventions analogous to the ones proposed below.

Pradyumna Prasad: I was the supporting author.
I run the Bretton Goods blog and podcast, helped write the sections on estimating the mortality and economic costs of a US-China war, and graduated high school last year.

Summary

What's the problem? In 1948, two years after Churchill made his Iron Curtain speech, the CIA had 12 Russian speakers on s...

The Nonlinear Library
EA - Some concerns about policy work funding and the Long Term Future Fund by weeatquince

The Nonlinear Library

Play Episode Listen Later Aug 12, 2022 9:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some concerns about policy work funding and the Long Term Future Fund, published by weeatquince on August 12, 2022 on The Effective Altruism Forum.

Situation

As far as I can tell The Long-Term Future Fund (LTFF) wants to fund work that will influence public policy. They say they are looking to fund "policy analysis, advocacy ..." work and post in policy Facebook groups asking for applications. However, as far as I can tell, in practice, the LTFF appears to never (or only very rarely) fund such projects that apply for funding. Especially new projects. My view that such funding is rare is based on the following pieces of evidence:
- Very few policy grants have been made. Looking at the payout reports of the LTFF, they funded a grant for policy influencing work in 2019 (to me, for a project that was not a new project). Upon asking them they say they have funded at least one more policy grant that has not been written up.
- Very many policy people have applied for LTFF grants. Without actively looking for it I know of: someone in an established think tank looking for funds for EA work, 3 groups wanting to start EA think tank type projects, a group wanting to do mass campaigning work. All these groups looked competent and were rejected. I am sure many others apply too. I know one of these has gone on to get funding elsewhere (FTX).
- Comments from staff at leading EA orgs. In January last year, a senior staff member at a leading EA institution mentioned, to my surprise, that EA funders tend not to fund any new longtermist policy projects (except perhaps with very senior trusted people like OpenPhil funding CSET). Recently I spoke to someone at CEA about this and asked if it matched their views too and they said they do think there is a problem here. Note this was about EA funders in general, not specifically the LTFF.
- Comments from EA policy folk looking for funding. There seems to be (at least there was in 2021) a general view from EAs working in the policy space that it has been very hard to find funding for policy work. Note this is about EA funders in general, not the LTFF.
- Odd lines of questioning. When I applied, the line of questioning was very odd. I faced an hour of: Why should we do any policy stuff? Isn't all policy work a waste of time? Didn't [random unconnected policy thing] not work? Etc. Of course it can be useful to check applicants have a good understanding of what they are doing, but it made me question whether they wanted to fund policy work at all.
- Odd feedback. Multiple applicants to the LTFF have reported receiving feedback along the lines of: We see high downside risks but cannot clarify what those risks are. Or: We want to fund you but an anonymous person vetoed you and we cannot say who or why. Or: Start-up policy projects are too risky. Or: We worry you might be successful and hard to shut down if we decided we don't like you in future. This makes me worry that the fund managers do not think through the risks and reasons for funding or not funding policy work in as much depth as I would like, and that they maybe do not fund any new/start-up policy projects.
- Acknowledgment that they do apply a higher bar for policy work. Staff at the LTFF have told me that they apply a higher bar for policy work than for other grants.

Of course, this is all circumstantial, and not necessarily a criticism of the LTFF. The fund managers might argue they never get any policy projects worth funding and that all the promising projects I happened to hear of were actually net negative and it was good not to fund them. It is also possible that things have improved in the last year (the notes that make up this post have been sitting in an email chain for a long while now).

Relevance and recommendation

That said I thought it was worth me writing this up publicly as the possibilit...

The Nonlinear Library
EA - EA Probably Shouldn't Try to Exercise Direct Political Power by iamasockpuppet

The Nonlinear Library

Play Episode Listen Later Jul 21, 2022 15:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Probably Shouldn't Try to Exercise Direct Political Power, published by iamasockpuppet on July 21, 2022 on The Effective Altruism Forum.

Last May, EA-aligned donors helped to make Carrick Flynn's campaign in OR-06 one of the best funded primary campaigns in US electoral history. Flynn, a researcher at FHI and CSET, lost the primary, receiving about half as many votes as the winner despite support from the EA community. This prompted several further analyses on the EA forum:
- Some potential lessons from Carrick's Congressional bid
- Early spending research and Carrick Flynn
- Yglesias on EA and politics

Virtually all of the initial analysis has focused on ways that EA can better win future political races. I believe that it would be harmful to try; that EA as a movement attempting to hold direct political power as elected officials would be somewhere between neutral and harmful; and that seeking to influence existing non-EA elected officials would be more effective.

The arguments on the EA Forum in favor of Flynn's election were wrong

> even with a small chance of success (

> The Biden administration released a fantastic $65 billion plan that aims to prevent future pandemics. Congress has funded practically none of it. Part of the problem is that nobody in congress has made pandemic preparedness a 'core issue.' Congressional members don't oppose the president's plan, and there are some standout champions, but none of them are trying to get it passed with the desperation that I think the issue warrants.

> My sense is if Carrick had won, he could have done a lot of good – in particular, advancing pandemic prevention (e.g., via participating in bill markups), with an outside chance of getting Biden's pandemic prevention plan enacted.
These comments are incorrect; Carrick Flynn's election would likely not have had much influence on advancing the pandemic prevention plan. 538 currently forecasts an 87% chance that Republicans control the House after the 2022 elections; this would likely leave Flynn more-or-less irrelevant for the next two years. The Democrats currently hold the House, yet have not passed their own President's pandemic plan. Nobody, in any of the comments above or elsewhere, seems to have any idea why; as a result, the arguments above are remarkably vague. The absence of domain knowledge from this conversation is really bad! (I don't claim to be an expert on politics, to be clear; it is entirely possible that the explanations I offer below are wrong. But EA extensively discussed the Flynn campaign and moved significant amounts of money, seemingly without a very basic public analysis of the facts on the ground.) Each fiscal year's federal budget is (supposedly) written and passed by April 15 of that year. There is extensive advanced planning for the budget. For FY 2022, the White House released its budget request April 9th, 2021, too soon for the White House pandemic plan, released September 2021, to be included. The recently released FY 2023 budget request does include funding for the pandemic plan. Carrick Flynn would be unlikely to have much influence over the FY 2023 budget, since he wouldn't even be in the House until most of the way through the negotiations, even assuming that he won the primary, won the general, and the GOP didn't hold the House. (New representatives take office January 3rd; the House passed its FY 2022 budget March 9th of last year.) The most likely way for the budget to exclude the pandemic plan if the Democrats hold ...

The New Teacher Success Network Podcast
16. Finding Success with your teacher competency exams

The New Teacher Success Network Podcast

Play Episode Listen Later Jun 28, 2022 10:25


Oh those new teacher competency exams: the RICA, edTPAs, CSET, PRAXIS, Foundations of Reading exam… the list goes on and on! I get it... the frustration of yet another hoop you have to jump through to finally get into the classroom. You are tired! You have taken all your courses, passed all the other testing requirements, and now you are looking down the barrel of a high stakes exam. Why is becoming a teacher this hard? We will jump head first into this topic next… To read the National Reading Panel Report (2000): https://files.eric.ed.gov/fulltext/ED489535.pdf Don't forget, if you are interested in learning more about the New Teacher Success Network membership, visit newtsn.com/webinar. Once a month I host live webinars showcasing what is inside the network. I hope you will join me! I would be honored if you would subscribe and leave a review. If you have suggestions to share with Emily, email readingdiva06@gmail.com

The Nonlinear Library
EA - What Is Most Important For Your Productivity? by lynettebye

The Nonlinear Library

Play Episode Listen Later Jun 13, 2022 14:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Is Most Important For Your Productivity?, published by lynettebye on June 13, 2022 on The Effective Altruism Forum.

The Peek behind the Curtain interview series includes interviews with eleven people I thought were particularly successful, relatable, or productive. We cover topics ranging from productivity to career exploration to self-care. This third post covers "What is most important for your productivity?" You can find bios of my guests at the end of this post, and view other posts in the series here. This post is cross posted on my blog.

What are the most important things you do for your productivity?

Keep checking in with your high-level goal
I think it really comes back down to writing down and rewriting down and thinking about the high-level goal of the project. Having something that's an easy handle, that's like a sentence or two, and just checking in with myself a lot about where I'm at with respect to the high-level goals. Writing something every month about like, "I did this last month and what was the goal? Have I moved toward the goal or not?" — Ajeya Cotra

Start by breaking the task down
One thing that's really helpful for me is just small-scale planning. When I'm starting a task, write down a list of the things that I would probably want to do. Just having that be clear helps a lot. — Daniel Ziegler

Checklists
I have a checklist and every day when I sign off, I will write every single one of my projects and email them to my boss letting them know status on all the projects. When I was in college, I lived on a checklist. Every day I had the checklist that needed to be completed that day. It makes it so that I don't drop the ball. — Abi Olvera

Decrease anxiety
Decreasing anxiety over things. Because I feel like there's this vicious cycle a lot of people get sucked up in, where if you're really anxious about something, you're focused on the anxiety and assuaging the anxiety rather than on the work that you need to do. — Eva Vivalt

Internalize that procrastinating only makes aversive tasks worse
I think I've gotten better over time at doing what needs to be done that's not easy or particularly enjoyable, and finding out again and again that it's all better if I tackle it head-on rather than trying to put it off for as long as humanly possible. Similar to being a recovering introvert who always thought going to a party was a terrible thing to do yet always, every time I went, both enjoying it and then on reflection going, "Wow. I definitely make the right call there." Regretting it when I decide not to do so. I guess over time I've learned that the sense of aversion usually fades after I actually start doing the aversive task. It's not fun, but it's okay. A sense of relief after you've done it and gotten it off your plate. It seems like worth doing rather than maybe like spending a week or two weeks or two months going, "Oh man, I need to do this. It's even worse since I haven't done it for this long." Getting stuck in that sort of cycle. — Greg Lewis

Financial penalties for work within your control
I use Beeminder. I had to put in the initials of the person that I emailed, and it kind of just, "Why don't I aim for five emails a week?" I was trying to hit more than five. It was only counting on what I could control, which was emailing. It doesn't matter if they don't respond, the fact is I emailed. I do it as well to cap my Facebook time. I do it as well to keep my unread inbox below 75. — Abi Olvera

Track if your week actually moved something forward
It's like, "Was there something that I was able to take the time for this week, and that moved forward significantly somehow," which can look like lots of different things. It could be strategic planning for CSET or it could be working on a specific research project and having time to write or other things like tha...

Medyascope.tv Podcast
CSET | The crackdown on the Furkan movement's Islamists causes embarrassment within the AKP.

Medyascope.tv Podcast

Play Episode Listen Later Mar 26, 2022 16:45


CSET | The crackdown on the Furkan movement's Islamists causes embarrassment within the AKP.

The #BruteCast
Dr. Rita Konaev, "Russia and Urban Warfare"

The #BruteCast

Play Episode Listen Later Mar 17, 2022 76:10


#Russia's invasion of #Ukraine is now in its third week, but it only took a few days of fighting for the world to see that Vladimir Putin's evident hope for rapid and generally bloodless victory had been shattered. Putin expected his soldiers to walk into Ukrainian cities and be welcomed with open arms. Instead, the Russian army is stalled along several fronts by stiff Ukrainian resistance, and many of the cities that were likely objectives for the first days of the war remain unconquered. But taking those cities remains a key goal for Putin, and the world is now watching as urban warfare takes center stage in this conflict. Cities like Mariupol and Kherson are either encircled or already at least partially occupied; others, like Kyiv and Kharkiv, are being battered in preparation for attempts to seize them; and those like Odesa are wondering how long it will be before Russian ground forces reach their outskirts. To talk us through the unique and bitter challenges of urban warfare, as well as Russia's approach to it, we are pleased to welcome Dr. Rita Konaev. Dr. Konaev is a Research Fellow at Georgetown's Center for Security and Emerging Technology (CSET) interested in military applications of AI and Russian military innovation. Previously, she was a Non-Resident Fellow with the Modern War Institute at West Point, a post-doctoral fellow at the Fletcher School of Law and Diplomacy and a post-doctoral fellow at the University of Pennsylvania's Perry World House. Before joining CSET, she worked as a Senior Principal in the Marketing and Communications practice at Gartner. 
Margarita's research on international security, armed conflict, non-state actors and urban warfare in the Middle East, Russia and Eurasia has been published by the Journal of Strategic Studies, the Journal of Global Security Studies, Conflict Management and Peace Science, the French Institute of International Relations, the Bulletin of the Atomic Scientists, Lawfare, War on the Rocks, Defense One, Modern War Institute, Foreign Policy Research Institute and a range of other outlets. She holds a Ph.D. in Political Science from the University of Notre Dame, an M.A. in Conflict Resolution from Georgetown University and a B.A. from Brandeis University. Intro/outro music is "Evolution" from BenSound.com (https://www.bensound.com) Follow the Krulak Center: Facebook: https://www.facebook.com/thekrulakcenter Instagram: https://www.instagram.com/thekrulakcenter/ Twitter: @TheKrulakCenter YouTube: https://www.youtube.com/channel/UCcIYZ84VMuP8bDw0T9K8S3g LinkedIn: https://www.linkedin.com/company/brute-krulak-center-for-innovation-and-future-warfare Krulak Center homepage on The Landing: https://unum.nsin.us/kcic

Medyascope.tv Podcast
CSET | Russia-Ukraine: A NATO member but close to Russia, Ankara clarifies its position

Medyascope.tv Podcast

Play Episode Listen Later Mar 13, 2022 21:02


In This Week in Turkey we discussed the gradual clarification of Turkey's position on the war in Ukraine. At the start of the aggression, Ankara abstained from voting on Russia's suspension from the Council of Europe while condemning Putin's regime. Ankara still imposes no airspace restrictions on Russia, yet delivers TB2 drones to Ukraine. We discussed this with former diplomat Sinan Ülgen. We then looked at inflation before returning to a proposal from part of the opposition: the return to a "Strengthened" Parliamentary System.

Medyascope.tv Podcast
CSET | Ukraine-Russia: Turkey's delicate position, with Aydın Selcen

Medyascope.tv Podcast

Play Episode Listen Later Feb 26, 2022 23:21


In this This Week in Turkey, we examined the position of Turkey, which maintains relations with both belligerents. Former diplomat Aydın Selcen answered our questions. We also looked at the symbolic importance of February 28, the date chosen by the opposition in general, and the CHP in particular, to unveil their "strengthened parliamentary system" project. We also covered the victorious strike by Migros workers.

Medyascope.tv Podcast
CSET - The opposition tries to unite its forces without the HDP

Medyascope.tv Podcast

Play Episode Listen Later Feb 18, 2022 17:21


In This Week in Turkey, we discussed the opposition's attempt to unite its forces, excluding the HDP, in order to return to a "strengthened" parliamentary system. Ruşen Çakır analyzed how the government has exploited the tensions between the HDP and the İYİ Party. We then turned to Tarkan's new song "Geççek", widely interpreted as an opposition anthem. We also covered the change in status of Alevi places of worship, reclassified from "commercial" to "residential", and concluded with Erdoğan's recent visit to the United Arab Emirates.

Medyascope.tv Podcast
CSET (112): Popular outrage over rising energy prices

Medyascope.tv Podcast

Play Episode Listen Later Feb 11, 2022 15:37


In This Week in Turkey, we discussed the dramatic rise in electricity bills and the protests it triggered, the four-day paralysis of daily life in Isparta, Ruşen Çakır's analysis of this episode, and the assassination of Halil Falyalı in Northern Cyprus. We also discussed Recep Tayyip Erdoğan's illness and its judicial repercussions.

Medyascope.tv Podcast
CSET - Power struggles at the top of the state: the Minister of Justice resigns

Medyascope.tv Podcast

Play Episode Listen Later Feb 5, 2022 20:24



Medyascope.tv Podcast
CSET - Snowstorm in Istanbul: opposition and central government blame each other

Medyascope.tv Podcast

Play Episode Listen Later Jan 28, 2022 17:31


In This Week in Turkey, we discussed: the disruptions caused by the snowstorm in Istanbul, where the mayor found himself at the center of a smear campaign; Erdoğan's remarkable climbdown in the face of Sezen Aksu; Ruşen Çakır's analysis of Erdoğan's retreat; the imprisonment of Sedef Kabaş for "insulting the president"; and Kemal Kılıçdaroğlu's latest video.

Medyascope.tv Podcast
CSET -- Singer Sezen Aksu targeted by nationalist and Islamist circles

Medyascope.tv Podcast

Play Episode Listen Later Jan 21, 2022 10:57


In This Week in Turkey, we discussed: the lynching campaign targeting Sezen Aksu; the attempt to change the country's name from Turkey to Türkiye; the commemoration of Hrant Dink; Osman Kavala's continued detention; and the decision to leave the key interest rate unchanged.

Medyascope.tv Podcast
CSET - Student dormitories run by religious brotherhoods: the debate reignited by a suicide

Medyascope.tv Podcast

Play Episode Listen Later Jan 16, 2022 16:49

