In this special interview episode of The Box of Oddities, Jethro sits down with acclaimed science journalist Becky Ferreira—author of the new book First Contact: The Story of Our Obsession with Aliens. Together they explore humanity's oldest question: Are we alone? Ferreira, whose work has appeared in The New York Times, WIRED, Popular Science, MIT Technology Review, and NPR's Science Friday, guides us through the deep history of alien speculation—from ancient Greek and Roman philosophers to Hopi star-people traditions to the modern UAP debate. Jethro taps into his inner UFO enthusiast as they dive into:
• Why ancient cultures believed the sky itself was communicating with them
• The earliest “alien life” theories from Christian and Muslim scholars
• The Fermi Paradox, Drake Equation, and what science gets wrong about “Where is everybody?”
• Water worlds like Europa and Enceladus, and why alien life may be hiding inside dark interior oceans
• Whether interdimensional phenomena at places like Skinwalker Ranch could explain UAP encounters
• How humans might emotionally—and chaotically—respond if we picked up an alien signal
• The surprising ways religion is preparing for extraterrestrial discovery
• Whether we'll make contact in our lifetime… and what form it might take
Ferreira's insights blend cutting-edge astronomy with anthropology, psychology, and the strange human tendency to project our own fears and hopes onto the stars. Equal parts science, myth, and cosmic mystery, this conversation asks why the idea of alien life has been with us since the beginning—and why we can't stop looking up. Becky Ferreira's book First Contact is available now in hardcover, ebook, and audiobook wherever books are sold. Learn more about your ad choices. Visit megaphone.fm/adchoices
Modern medicine is veering away from the traditional Hippocratic Oath that required physicians to do no harm and use their knowledge and skills solely for the purpose of healing the patient, says psychiatrist and bioethics expert Dr. Aaron Kheriaty. Now, physicians are euthanizing patients, removing healthy organs in certain transgender-related surgeries, and injecting drugs for late-term abortions even when the mother's life is not threatened. Hippocratic principles are being superseded by utilitarian ethics that prioritize the “greater good” over the well-being and rights of individual patients, Kheriaty says. That's fueling, for instance, the push to expand the dead-donor eligibility criteria for organ donations. It's also manifesting in the push to adopt technological advancements like germ-line gene editing that could be used to create “designer babies” or in vitro gametogenesis (IVG), a process that uses stem cells, such as those derived from skin cells, to create human eggs and sperm in a lab. Earlier this year, an op-ed in the MIT Technology Review argued for the creation of “spare” human bodies called “bodyoids.” These would essentially be human bodies created in laboratories from human stem cells, but without brains or consciousness. Proponents say they would revolutionize medical research and drug testing and create an unlimited supply of organs. It sounds like the stuff of science fiction. What are the true ethical implications? Is this really where we want medicine to go? Kheriaty is the director of the bioethics and American democracy program at the Ethics and Public Policy Center and former director of the medical ethics program at UCI Health. His latest book is titled “Making the Cut: How to Heal Modern Medicine.” “The biggest advance [that] medicine needs to make is to accept the limits of medicine,” he says. Views expressed in this video are opinions of the host and the guest, and do not necessarily reflect the views of The Epoch Times.
On this week's MIT Technology Review podcast, Rafael Coimbra and Carlos Aros discuss how the idea of artificial general intelligence has moved out of the technical realm and drifted toward conspiracy theory. The episode asks why we treat AGI as an imminent milestone when there is no clear definition, no scientific consensus, and no concrete sign that its promises or threats are near. Between inflated expectations and messianic narratives, the debate ends up diverting attention from what really matters today: the real impacts of existing AI, its limits, risks, and the challenges of adoption and regulation. The analysis shows how myths and projections shape business, policy, and social decisions, obscuring the technical understanding the present requires.
Gideon Rachman sits down with the FT's innovation editor John Thornhill and Caiwei Chen, China reporter for MIT Technology Review, to discuss the race between China and the US to become the 21st-century AI superpower. The west is used to hearing about the might of the Silicon Valley giants, US cutting-edge research and chip dominance. But China has a different approach. Will its use of a cheaper and more efficient open AI model allow China to overtake the US with this era-defining technology? Want more? Join John and the FT's Chinese technology correspondent Eleanor Olcott in a live Q&A on November 13 at 1pm GMT, where they will be answering your questions on the tech battle between Silicon Valley and Beijing. Submit your question: Will China win the AI race? And subscribe to a new six-part newsletter series, 'The State of AI', a collaboration between the FT and MIT Technology Review where writers from both publications debate the defining questions of the AI era. Sign up here.
More on this topic:
The State of AI: is China about to win the race?
China offers tech giants cheap power to boost domestic AI chips
AI pioneers claim human-level general intelligence is already here
The AI race
Who's right about AI: economists or technologists?
Follow Gideon on Bluesky or X @gideonrachman.bsky.social, @gideonrachman. Subscribe to the Rachman Review wherever you get your podcasts - please listen, rate and subscribe. The Rachman Review is presented by Gideon Rachman. Produced by Clare Williamson. The executive producer is Flo Phillips and the sound design is by Simon Panayi. Clip: Axios. Read a transcript of this episode on FT.com. Hosted on Acast. See acast.com/privacy for more information.
David Ewing Duncan has spent the last 25 years being poked and prodded in the name of science. He's signed up for hundreds of tests because, as a journalist, he writes about emerging health breakthroughs. He says one recent test contains more useful data than anything he's seen to date. He talks to host Emily Kwong about his score on the Immune Health Metric, which was developed by immunologist John Tsang. Together, David and John explain why immune health is so central to overall health and how a simple blood test could one day predict disease before it starts. Learn more about the Human Immunome Project. Read David's full article about his experience with the Immune Health Metric. The piece is a collaboration between MIT Technology Review and Aventine, a non-profit research foundation that creates and supports content about how technology and science are changing the way we live. Interested in more health science? Email us your question at shortwave@npr.org. Listen to every episode of Short Wave sponsor-free and support our work at NPR by signing up for Short Wave+ at plus.npr.org/shortwave. Learn more about sponsor message choices: podcastchoices.com/adchoices. NPR Privacy Policy
Journalist Mike Rothschild has for the last decade studied and written about the rise and spread of conspiracy theories, hoaxes and scams. But after he lost his home in Altadena to the Eaton fire, Rothschild witnessed firsthand how conspiracies take hold of people in the throes of a traumatic event. We talk to him about why his community was vulnerable to disinformation about the causes of and responses to the fire — and why conspiracy theories spread when major disasters strike. His new piece for MIT Technology Review is “What it's like to be in the middle of a conspiracy theory (according to a conspiracy theory expert).” Guests: Mike Rothschild, journalist and expert on conspiracy theories and disinformation; author of “The Storm Is Upon Us” and “Jewish Space Lasers”; his new article in MIT Technology Review is “What it's like to be in the middle of a conspiracy theory (according to a conspiracy theory expert).” Learn more about your ad choices. Visit megaphone.fm/adchoices
Artificial intelligence isn't just transforming industries—it's redefining freedom, opportunity, and the future of human work. This week on the Let People Prosper Show, I talk with Kevin Frazier, the inaugural AI Innovation and Law Fellow at the University of Texas School of Law, where he leads their groundbreaking new AI Innovation and Law Program. Kevin's at the center of the national conversation on how to balance innovation with accountability—and how to make sure regulation doesn't crush the technological progress that drives prosperity. With degrees from UC Berkeley Law, Harvard Kennedy School, and the University of Oregon, Kevin brings both a legal and policy lens to today's most pressing questions about AI, federalism, and the economy. Before joining UT, he served as an Assistant Professor at St. Thomas University College of Law and conducted research for the Institute for Law and AI. His scholarship has appeared in the Tennessee Law Review, MIT Technology Review, and Lawfare. He also co-hosts the Scaling Laws Podcast, bridging the gap between innovation and regulation. This episode goes deep into how we can harness AI to promote human flourishing, not government dependency—how we can regulate based on reality, not fear—and how federalism can help America remain the global leader in technological innovation. For more insights, visit vanceginn.com. You can also get even greater value by subscribing to my Substack newsletter at vanceginn.substack.com. Please share with your friends, family, and broader social media network.
Welcome, my ghoulish fiends! Join me as I sink my teeth into the dark allure of vampires and the eternal obsession with youth. From Countess Elizabeth Bathory's infamous legend to Roman and early modern rituals of blood and renewal, I explore how the vampire became the ultimate beauty icon and how our own thirst for ageless perfection mirrors their seductive, deadly world. *Listener Discretion is Strongly Advised*
Sources & References:
Groom, Nick. The Vampire: A New History (Oxford University Press, 2018)
Auerbach, Nina. Our Vampires, Ourselves (University of Chicago Press, 1995)
Sugg, Richard. Mummies, Cannibals and Vampires (Routledge, 2011)
Wilde, Oscar. The Picture of Dorian Gray (1890)
Polidori, John. The Vampyre (1819)
Le Fanu, Sheridan. Carmilla (1872)
Stoker, Bram. Dracula (1897)
Historical references: Pliny the Elder, Juvenal, Marsilio Ficino, and accounts from the Bathory trial (1609–1610)
Day, Doris. Commentary on PRP “vampire facial” (2015–2019)
Regalado, Antonio. “Young Blood Transfusions: Silicon Valley's Obsession with Youth.” MIT Technology Review (2019)
Lepore, Jill. “The Cult of Youth in Modern Science.” The New Yorker (2019)
GlobalData (2027 projection for anti-aging skincare market)
JAMA Facial Plastic Surgery (2018–2020) studies on “Snapchat dysmorphia”
Leave Us a 5* Rating, it really helps the show!
Apple Podcast: https://podcasts.apple.com/us/podcast/beauty-unlocked-the-podcast/id1522636282
Spotify Podcast: https://open.spotify.com/show/37MLxC8eRob1D0ZcgcCorA
Follow Us on Social Media & Subscribe to our YouTube Channel!
YouTube: @beautyunlockedspodcasthour
TikTok: tiktok.com/@beautyunlockedthepod
MUSIC & SOUND FX:
"Alleys of Darkness" - Phoenix Tail
"Rain Light 6" - SFX Producer
Epidemic Sound
Find the perfect track on Epidemic Sound for your content and take it to the next level! See what the hype is all about!
This is a free preview of a paid episode. To hear more, visit andrewsullivan.substack.com. Karen is a tech journalist and leads the Pulitzer Center's AI Spotlight Series — a program that trains journalists on how to cover AI. She was a senior editor for AI at MIT Technology Review and a reporter for the WSJ covering Chinese and US tech companies. Her first book is Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI — the most accessible and readable narrative of the rise of AI. For two clips of our convo — on the environmental impact of AI, and its threats to democracy — head to our YouTube page. Other topics: raised by two computer scientists; her mechanical engineering at MIT; the birth of AI at Dartmouth; IBM Watson on Jeopardy!; how the internet made data cheap to collect; the junk info swept into AI; massive data centers; ideology driving the AI industry more than science; ChatGPT; the networking and fundraising skills of Sam Altman; his family scandal; his near ouster at OpenAI; the AI bubble and propping up 401(k)s; the threat to white-collar jobs; the brutal conditions of AI work in developing countries; Chinese authoritarianism and DeepSeek; the illiberalizing effect of Silicon Valley; Musk and Thiel; how the IDF uses AI against Hamas; autonomous weapons; how AI has done wonders with Pharma; transhumanism; chatbot safety for kids; Pope Leo's tech warnings; and AI as the ultimate apple in the Garden of Eden. Browse the Dishcast archive for an episode you might enjoy. Coming up: David Ignatius on the Trump effect globally, Mark Halperin on the domestic front, Michel Paradis on Eisenhower, Fiona Hill on Putin's war, and Arthur Brooks on the science of happiness. As always, please send any guest recs, dissents, and other comments to dish@andrewsullivan.com.
It's World Space Week, and we're fueling up the rocket for a tour of some missions and projects that could provide insights into major space mysteries. Astrophysicist Hakeem Oluseyi joins Host Flora Lichtman to celebrate the wonders of space science, from the recently launched IMAP, which will study the solar environment, to the new Vera Rubin Observatory, and big physics projects like LIGO. Plus, the latest in climate tech: MIT Technology Review has published its annual list of climate tech companies that show great promise in work ranging from producing sodium ion batteries to recycling rare earth magnets. Host Ira Flatow talks with climate reporter Casey Crownhart about trends in climate tech and what companies she's excited about. Guests: Dr. Hakeem Oluseyi is an astrophysicist and author of the upcoming book, Why Do We Exist? The Nine Realms of the Universe That Make You Possible, and host of the video podcast “Particles of Thought.” Casey Crownhart is a senior climate reporter for MIT Technology Review based in New York City. Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
Arthur Kay is an entrepreneur and advisor to organisations building solutions for sustainable cities. He is the founder of several urban design and technology companies, including the clean technology company Bio-bean (acq. 2023), the design, technology, and development company Skyroom, and the £100m Key Worker Homes Fund. Arthur is an advisor to organisations including Innovo Group, and serves as a board member for Transport for London, The Royal Academy of Engineering, the Museum of the Home, and Fast Forward 2030. In addition, Arthur holds academic appointments, including Associate Professor (Hon.) at the UCL Institute for Global Prosperity. He has lectured on urban design at MIT, NYU, LSE and Imperial College London. He is co-author of the book Roadkill: Unveiling the True Cost of Our Toxic Relationship with Cars (Wiley, 2025), with Professor Dame Henrietta Moore. Arthur's work building solutions for sustainable cities has been recognised by the UN as a Sustainable Development Goals Pioneer, by The Guardian as Sustainable Business Leader of the Year, by MIT Technology Review as one of its 35-under-35, and by Forbes as a 30-under-30 all-star. His words and work have appeared in publications including The Times, The Financial Times, The New York Times, Bloomberg, National Geographic, The New Scientist, The Wall Street Journal, The Telegraph, The Guardian, The Washington Post, Al Jazeera, CNN, CNBC, CBS, Wired, Architect's Journal, Architectural Review, and on the BBC. Arthur studied architecture at the UCL Bartlett School of Architecture and entrepreneurship at Stanford's Graduate School of Business.
In the new podcast episode, we talk about a new AI model that is said to be able to predict diseases and look 20 years into the future.
In four years' time, how might a theoretical Democratic administration grapple with the expanding energy consumption and demand for AI? This is the question the second half of TRG Media and MIT Technology Review's AI and Energy Scenario Exercises seeks to explore. Leading experts come together to role-play as key actors in government, private industry, and more to simulate how public policy might take shape in the coming years. This episode contains the second and final phase of the game and a brief wrap-up from MIT Technology Review editor in chief Mat Honan and game designer Ed McGrady.
The Players:
US Federal
POTUS - Merici Vinton, Former Senior Advisor to IRS Commissioner Danny Werfel
Security (DoD, DHS, DOS) - Mark Dalton, Senior director of technology and innovation at R Street
Energy (DOE, EPA, Interior) - Wayne Brough, Former President of the Innovation Defense Foundation and senior fellow on R Street's Technology and Innovation team
Red State Leadership - Soren Dayton, Director of Governance at the Niskanen Center
Power generation industry
Fossil - David Sandalow, Inaugural Fellow at the Center on Global Energy Policy (CGEP) at Columbia University
Solar - Enock Ebban, host of “Sustainability Transformations Podcast”
Nuclear - Ashley Finan, Jay and Jill Bernstein Global Fellow at the Center on Global Energy Policy at Columbia University
Investors in AI
Domestic - Josiah Neeley, R Street Institute's Energy team advisor
International - Josh Felser, Co-Founder and Managing Partner at Climatic
International (Middle East, EU, Russia, China, etc.) - Shaolei Ren, Associate Professor of Electrical and Computer Engineering at the University of California
International (Middle East, EU, Russia, China, etc.) - Rachel Ziemba, Adjunct Senior Fellow at the Center for a New American Security (CNAS)
Blue State Leadership
POTUS - Adam Zurofsky, former Director of State Policy and Agency Management for the State of New York
Ari Peskoe - Director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program
Beth Garza - senior fellow with R Street's Energy & Environmental Policy Team
Public interest
Environmental - Brent Eubanks, founder of Eubanks Engineering Research
Domestic political - Meiyi Li, Ph.D. candidate at The University of Texas at Austin
Media - Jen Sidorova, policy analyst at Reason Foundation
AI and other Digital Industries
AI - Valerie Taylor, division director of Mathematics and Computer Science at Argonne National Laboratory
Blockchain - Erica Schoder, Executive Director and co-founder of the R Street Institute; Elliot David, Head of Climate Strategy at Sustainable Bitcoin Protocol
Other digital systems (chips, data center operations, online gaming, streaming, etc.) - Ken Briggs, Faculty Assistant at Harvard University
This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
On Digital de Tudo, Andre Miceli and Iago Ribeiro welcome Rafael Coimbra to talk about EmTech 2025, MIT Technology Review's global forum, which comes to Brazil on September 29 and 30. The episode shows why the event has become a reference point by bringing together MIT researchers, executives, and specialists to discuss AI, health, the energy transition, and new business models with real impact. Listen now to learn how EmTech connects academic innovation, the market, and society. Learn more about your ad choices. Visit megaphone.fm/adchoices
“My book is called Empire of AI because I'm trying to articulate this argument and illustrate that these companies operate exactly like empires of old. I highlight four features that essentially encapsulate the three things you read. However, I started talking about it in a different way after writing the book. The four features are: they lay claim to resources that are not their own, which is the centralization of resources; they exploit an extraordinary amount of labor, both in the development of the technology and the fact that they're producing labor-automating technologies that then suppress workers' ability to bargain for better rights; they monopolize knowledge production, which comes when they centralize talent.”
In this episode of the Speaking Out of Place podcast, Professor David Palumbo-Liu talks with investigative journalist Karen Hao. She explains that OpenAI is anything but “open”—very early on, it left behind that marketing tag to become increasingly closed and elitist. Her massive study, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, had a rather different subtitle in its UK edition: Inside the reckless race for total domination. She fleshes out the overlap between these two points of emphasis. Hao argues that in general, the AI mission “centralizes talent around a grand ambition” and “centralizes capital and other resources while eliminating roadblocks, regulation, and dissent.” All the while, “the mission remains so vague that it can be interpreted and reinterpreted to direct the centralization of talent, capital, resources, however the centralizer wants.” Karen explains that she chose the word “empire” precisely to indicate the colonial nature of AI's domination: the tremendous damage this enterprise does to the poor, to racial and ethnic minorities, and to the Global South in general in terms of minds, bodies, the environment, natural resources, and any notion of democracy. This is a discussion everyone should be part of.
Karen Hao is a bestselling author and award-winning reporter covering the impacts of artificial intelligence on society. She was the first journalist to profile OpenAI and wrote a book, Empire of AI, about the company and its global implications, which became an instant New York Times bestseller. She writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series, a program that trains thousands of journalists worldwide on how to cover AI. She was formerly a reporter for the Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments. She has received numerous accolades for her coverage, including an American Humanist Media Award, an American National Magazine Award for Journalists Under 30, and the TIME100 AI. She received her Bachelor of Science in mechanical engineering from MIT.
www.palumbo-liu.com
https://speakingoutofplace.com
Bluesky: @palumboliu.bsky.social
Instagram: @speaking_out_of_place
How might this administration and future administrations approach the critical issue of AI and energy demands? This is the question the second of TRG Media and MIT Technology Review's AI Scenario Exercises tries to answer. Leading experts come together to role-play as key actors in government, private industry, and more to simulate how public policy might take shape in the coming years. This first episode contains the first phase of the game and an introduction from MIT Technology Review editor in chief Mat Honan, as well as an overview of the game by designer Ed McGrady.
The Players:
US Federal
POTUS - Merici Vinton, Former Senior Advisor to IRS Commissioner Danny Werfel
Security (DoD, DHS, DOS) - Mark Dalton, Senior director of technology and innovation at R Street
Energy (DOE, EPA, Interior) - Wayne Brough, Former President of the Innovation Defense Foundation and senior fellow on R Street's Technology and Innovation team
Red State Leadership - Soren Dayton, Director of Governance at the Niskanen Center
Power generation industry
Fossil - David Sandalow, Inaugural Fellow at the Center on Global Energy Policy (CGEP) at Columbia University
Solar - Enock Ebban, host of “Sustainability Transformations Podcast”
Nuclear - Ashley Finan, Jay and Jill Bernstein Global Fellow at the Center on Global Energy Policy at Columbia University
Investors in AI
Domestic - Josiah Neeley, R Street Institute's Energy team advisor
International - Josh Felser, Co-Founder and Managing Partner at Climatic
International (Middle East, EU, Russia, China, etc.) - Shaolei Ren, Associate Professor of Electrical and Computer Engineering at the University of California
International (Middle East, EU, Russia, China, etc.) - Rachel Ziemba, Adjunct Senior Fellow at the Center for a New American Security (CNAS)
Blue State Leadership
POTUS - Adam Zurofsky, former Director of State Policy and Agency Management for the State of New York
Ari Peskoe - Director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program
Beth Garza - senior fellow with R Street's Energy & Environmental Policy Team
Public interest
Environmental - Brent Eubanks, founder of Eubanks Engineering Research
Domestic political - Meiyi Li, Ph.D. candidate at The University of Texas at Austin
Media - Jen Sidorova, policy analyst at Reason Foundation
AI and other Digital Industries
AI - Valerie Taylor, division director of Mathematics and Computer Science at Argonne National Laboratory
Blockchain - Erica Schoder, Executive Director and co-founder of the R Street Institute; Elliot David, Head of Climate Strategy at Sustainable Bitcoin Protocol
Other digital systems (chips, data center operations, online gaming, streaming, etc.) - Ken Briggs, Faculty Assistant at Harvard University
This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Speaking Out of Place, investigative journalist Karen Hao explains that OpenAI is anything but “open”—very early on, it left behind that marketing tag to become increasingly closed and elitist. Her massive study, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, had a rather different subtitle in its UK edition: “Inside the reckless race for total domination.” In our conversation we flesh out the overlap between these two points of emphasis. Hao argues that in general the AI mission “centralizes talent around a grand ambition” and “centralizes capital and other resources while eliminating roadblocks, regulation, and dissent.” All the while, “the mission remains so vague that it can be interpreted and reinterpreted to direct the centralization of talent, capital, resources, however the centralizer wants.” Karen explains that she chose the word “empire” precisely to indicate the colonial nature of AI's domination: the tremendous damage this enterprise does to the poor, to racial and ethnic minorities, and to the Global South in general in terms of minds, bodies, the environment, natural resources, and any notion of democracy. This is a discussion everyone should be part of. Karen Hao is a bestselling author and award-winning reporter covering the impacts of artificial intelligence on society. She was the first journalist to profile OpenAI and wrote a book, Empire of AI, about the company and its global implications, which became an instant New York Times bestseller. She writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series, a program training thousands of journalists around the world on how to cover AI. She was formerly a reporter for the Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments. She has received numerous accolades for her coverage, including an American Humanist Media Award, an American National Magazine Award for Journalists Under 30, and the TIME100 AI. She received her Bachelor of Science in mechanical engineering from MIT.
This week on The Bulletin, Russell, Mike, and Clarissa discuss the Trump administration's attack on a Venezuelan boat allegedly carrying drugs that killed 11 people, and the implications for foreign policy. Then, CT's Emily Belz joins us to discuss payday loans and predatory lending, and what Christians are doing in response. Finally, Hana Kiros from The Atlantic stops in to chat about what's happening with the items from cancelled USAID projects, including a huge fire sale held by the federal government.
REFERENCED IN THE SHOW:
Payday Lender Restrictions Weaken, as Christian Orgs Step In - Sebastian Rodriguez
Inside the USAID Fire Sale - Hana Kiros
GO DEEPER WITH THE BULLETIN:
Join the conversation at our Substack.
Find us on YouTube.
Rate and review the show in your podcast app of choice.
ABOUT THE GUESTS:
Emily Belz is a staff writer with Christianity Today. She is a former senior reporter for World magazine. She is a World Journalism Institute graduate and also previously reported for the New York Daily News, The Indianapolis Star, and Philanthropy magazine.
Hana Kiros is an assistant editor at The Atlantic. She covers human rights and technology, and her writing also appears in The New York Times, The Washington Post, and the MIT Technology Review.
ABOUT THE BULLETIN:
The Bulletin is a twice-weekly politics and current events show from Christianity Today moderated by Clarissa Moll, with senior commentary from Russell Moore (Christianity Today's editor in chief) and Mike Cosper (director, CT Media). Each week, the show explores current events and breaking news and shares a Christian perspective on issues that are shaping our world. We also offer special one-on-one conversations with writers, artists, and thought leaders whose impact on the world brings important significance to a Christian worldview, like Bono, Sharon McMahon, Harrison Scott Key, Frank Bruni, and more. The Bulletin listeners get 25% off CT. Go to https://orderct.com/THEBULLETIN to learn more.
“The Bulletin” is a production of Christianity Today
Producer: Clarissa Moll
Associate Producer: Alexa Burke
Editing and Mix: TJ Hester
Graphic Design: Rick Szuecs
Music: Dan Phelps
Executive Producers: Erik Petrik and Mike Cosper
Senior Producer: Matt Stevens
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Combining several AI models into a single "thinking machine": that is what Christian Gilcher says he has achieved with his company Embraceable AI. In conversation with Wolfgang Stieler, editor at MIT Technology Review, and podcast host Stella-Sophie Wojtczak, he explains how the technology works and how it is also meant to reduce the problem of AI hallucinations. _Note: This podcast is supported by a sponsor. You can find all information about our advertising partners [here](https://linktr.ee/t3npodcast)_.
Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation.
Karen Hao reveals OpenAI's secretive culture and early ambitions
OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power
Defining AGI: moving goalposts, internal rifts, and philosophy debates
OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives
Critiquing the AI industry's resource grabs and "AI imperialism"
How commercialization narrowed AI research and the dominance of transformers
China's AI threat as Silicon Valley's favorite justification, debunked
Karen Hao details reporting process and boardroom chaos at OpenAI
GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue
Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed
Harper Reed shares vibe coding workflows using Claude Code
AI commoditization—why all major models start to feel the same
Western vs. Chinese open-source models and global AI power shifts
Google antitrust ruling: AI's rise dissolves traditional search monopoly
"Algorithm movies" spark debate over art, entertainment, and AI's creative impact
Meta's AI talent grab backfires amid exits and cash-fueled drama
Anthropic's "historic" author settlement likely cements fair use for AI training
DIY facial recognition: Citizen activists unmask ICE using AI tools
Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days"
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Co-Host: Harper Reed
Guest: Karen Hao
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: threatlocker.com/twit monarchmoney.com with code IM helixsleep.com/twit pantheon.io
Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation. Karen Hao reveals OpenAI's secretive culture and early ambitions OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power Defining AGI: moving goalposts, internal rifts, and philosophy debates OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives Critiquing the AI industry's resource grabs and "AI imperialism" How commercialization narrowed AI research and the dominance of transformers China's AI threat as Silicon Valley's favorite justification, debunked Karen Hao details reporting process and boardroom chaos at OpenAI GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed Harper Reed shares vibe coding workflows using Claude Code AI commoditization—why all major models start to feel the same Western vs. Chinese open-source models and global AI power shifts Google antitrust ruling: AI's rise dissolves traditional search monopoly "Algorithm movies" spark debate over art, entertainment, and AI's creative impact Meta's AI talent grab backfires amid exits and cash-fueled drama Anthropic's "historic" author settlement likely cements fair use for AI training DIY facial recognition: Citizen activists unmask ICE using AI tools Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days" Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Co-Host: Harper Reed Guest: Karen Hao Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit monarchmoney.com with code IM helixsleep.com/twit pantheon.io
Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation. Karen Hao reveals OpenAI's secretive culture and early ambitions OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power Defining AGI: moving goalposts, internal rifts, and philosophy debates OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives Critiquing the AI industry's resource grabs and "AI imperialism" How commercialization narrowed AI research and the dominance of transformers China's AI threat as Silicon Valley's favorite justification, debunked Karen Hao details reporting process and boardroom chaos at OpenAI GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed Harper Reed shares vibe coding workflows using Claude Code AI commoditization—why all major models start to feel the same Western vs. Chinese open-source models and global AI power shifts Google antitrust ruling: AI's rise dissolves traditional search monopoly "Algorithm movies" spark debate over art, entertainment, and AI's creative impact Meta's AI talent grab backfires amid exits and cash-fueled drama Anthropic's "historic" author settlement likely cements fair use for AI training DIY facial recognition: Citizen activists unmask ICE using AI tools Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days" Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Co-Host: Harper Reed Guest: Karen Hao Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit monarchmoney.com with code IM helixsleep.com/twit pantheon.io
Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation. Karen Hao reveals OpenAI's secretive culture and early ambitions OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power Defining AGI: moving goalposts, internal rifts, and philosophy debates OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives Critiquing the AI industry's resource grabs and "AI imperialism" How commercialization narrowed AI research and the dominance of transformers China's AI threat as Silicon Valley's favorite justification, debunked Karen Hao details reporting process and boardroom chaos at OpenAI GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed Harper Reed shares vibe coding workflows using Claude Code AI commoditization—why all major models start to feel the same Western vs. Chinese open-source models and global AI power shifts Google antitrust ruling: AI's rise dissolves traditional search monopoly "Algorithm movies" spark debate over art, entertainment, and AI's creative impact Meta's AI talent grab backfires amid exits and cash-fueled drama Anthropic's "historic" author settlement likely cements fair use for AI training DIY facial recognition: Citizen activists unmask ICE using AI tools Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days" Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Co-Host: Harper Reed Guest: Karen Hao Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit monarchmoney.com with code IM helixsleep.com/twit pantheon.io
Go behind the curtain at OpenAI as bestselling author Karen Hao shares stories of infighting, ego, and shifting agendas. Find out why even OpenAI's security had her face on alert during her investigation. Karen Hao reveals OpenAI's secretive culture and early ambitions OpenAI's shifting leadership and transparency: from nonprofit roots to Big Tech power Defining AGI: moving goalposts, internal rifts, and philosophy debates OpenAI's founders dissected: Altman, Brockman, and Sutskever's styles and motives Critiquing the AI industry's resource grabs and "AI imperialism" How commercialization narrowed AI research and the dominance of transformers China's AI threat as Silicon Valley's favorite justification, debunked Karen Hao details reporting process and boardroom chaos at OpenAI GPT-5 skepticism: raised expectations, lackluster reality, and demo fatigue Karen Hao's bottom line: AI's current trajectory isn't inevitable — pushback is needed Harper Reed shares vibe coding workflows using Claude Code AI commoditization—why all major models start to feel the same Western vs. Chinese open-source models and global AI power shifts Google antitrust ruling: AI's rise dissolves traditional search monopoly "Algorithm movies" spark debate over art, entertainment, and AI's creative impact Meta's AI talent grab backfires amid exits and cash-fueled drama Anthropic's "historic" author settlement likely cements fair use for AI training DIY facial recognition: Citizen activists unmask ICE using AI tools Picks: Byte Magazine's 50th, AI werewolf games, Berghain bouncer AI test, and arthouse film "Perfect Days" Get "Empire of AI" (Amazon Affiliate): https://amzn.to/4lRra9h Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Co-Host: Harper Reed Guest: Karen Hao Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: threatlocker.com/twit monarchmoney.com with code IM helixsleep.com/twit pantheon.io
What happens when we outsource aspects of our most personal moments to machines? In the second installment of our two-part series on AI and relationships, we hear from Rhiannon Williams, a reporter for MIT Technology Review who spoke to people all over the world about how they're using AI to relate to their loved ones, including a man who turns to it during marital disputes, a French mother who uses it to craft nightly tales for her son, and a nursing student who calls her AI companion her "boyfriend." Credits: This episode was produced by Grace Tatter and edited by Meg Cramer. It was co-hosted by Amory Sivertson and Ben Brock Johnson. Mix and sound design by Emily Jankowski.
On August 5, Health Secretary Robert F. Kennedy Jr. announced that the Department of Health and Human Services would terminate almost $500 million in mRNA vaccine development grants and contracts, affecting 22 projects. Biologist and mRNA researcher Jeff Coller joins Host Ira Flatow to talk about what this move means for future mRNA research in the US beyond these immediate projects. Plus, reporter Casey Crownhart joins Ira to discuss the latest in climate news, including flooding in Juneau, Alaska; how Ford is pursuing further electric vehicle manufacturing despite federal roadblocks; and a startup using Earth itself as a giant battery. Guests: Dr. Jeff Coller is the Bloomberg Distinguished Professor of RNA Biology and Therapeutics at Johns Hopkins University. Casey Crownhart is a senior climate reporter for MIT Technology Review in New York, New York. Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
Share your Field Stories! Welcome back to Environmental Professionals Radio, Connecting the Environmental Professionals Community Through Conversation, with your hosts Laura Thorne and Nic Frederick! On today's episode, we talk with Kendra Pierre-Louis, climate journalist, about Climate Science, Journalism, and Working Backwards to Get to Your Career. Read her full bio below. Help us continue to create great content! If you'd like to sponsor a future episode, hit the support podcast button or visit www.environmentalprofessionalsradio.com/sponsor-form Showtimes: 1:40 - ChatGPT Weighing In 8:01 - Interview with Kendra Starts 19:44 - What does it mean being a Journalist in this moment 33:19 - Accepting Supremacy of Natural Systems 35:30 - #Fieldnotes with Kendra Please be sure to ✔️subscribe, ⭐rate and ✍review. This podcast is produced by the National Association of Environmental Professionals (NAEP). Check out all the NAEP has to offer at NAEP.org. Connect with Kendra Pierre-Louis at https://www.kendrawrites.com/ Guest Bio: Kendra Pierre-Louis is an award-winning climate reporter. She has worked as a climate reporter with Bloomberg, a senior climate reporter with the Gimlet/Spotify podcast How to Save a Planet, and as a staff writer for Popular Science. She is also the author of the book, "Green Washed: Why We Can't Buy Our Way to a Green Planet." Kendra is a recipient of the 2023 New York Press Club Award for the audio story, “Sandy Was Just the Start. Is New York City Building Resiliently Enough for What's Coming Next?” and a 2022 AAAS Kavli Science Journalism Award. She received the gold award in the magazine category for her story "How rising groundwater caused by climate change could devastate coastal communities" in MIT Technology Review. Kendra has an MS in Science Writing from the Massachusetts Institute of Technology, an MA in Sustainable Development from the SIT Graduate Institute, and a B.A. in Economics from Cornell University. Music Credits Intro: Givin Me Eyes by Grace Mesa Outro: Never Ending Soul Groove by Mattijs Muller Support the show Thanks for listening! A new episode drops every Friday. Like, share, subscribe, and/or sponsor to help support the continuation of the show. You can find us on Twitter, Facebook, YouTube, and all your favorite podcast players.
Dr. Rebekka Reinhard and Thomas Vasek -- the team behind human magazine -- join CognitivePath founders Greg Verdino and Geoff Livingston for a provocative conversation about why smart resilience, ethics, regulation and responsibility are essential for creating a human-forward future in the age of AI. Tune in for a deep dive into the philosophical and practical implications of AI on society, democracy, and our collective future. Chapters 00:00 Introduction 03:34 Smart Resilience in the Age of AI 07:09 Navigating Crises in a Complex World 11:03 Cultural Perspectives on Resilience 12:06 Global Perspectives on AI Development 16:12 Ethics and Morality in AI Regulation 21:32 The EU AI Act and Its Implications 26:09 Power Dynamics and Global Perception 28:28 AI's Role in Democracy 32:14 AI's Impact on Human Resilience 34:38 The Dangers of AI in the Workplace 38:19 Repression and Job Replacement through AI 41:09 A Hopeful Vision for the Future About Rebekka Dr. Rebekka Reinhard is a philosopher and SPIEGEL bestselling author. It's her mission to take philosophy out of the ivory tower and put it back where it belongs: real life. She is the founder of human, the first German magazine about life and work in the AI age. Connect with her at https://linkedin.com/in/rebekkareinhard About Thomas Thomas Vasek is editor-in-chief and head of content at human. He began his journalism career as an investigative reporter at the Austrian news magazine Profil. As founding editor-in-chief, he launched the German edition of the renowned MIT Technology Review in 2003 and the philosophy magazine HOHE LUFT in 2010. From 2006 to 2010, he served as editor-in-chief of P.M. Magazin. Connect with him at https://www.linkedin.com/in/thomas-va%C5%A1ek-637b6b233/ About human Magazine human is the first magazine to take a holistic look at the impact of AI on business, politics, society, and culture – always with a focus on the human being. Issues are published in German (print/digital) and English (digital only). Learn more and subscribe: https://human-magazin.de/ Download the free “Smart Resilience” white paper: https://human-magazin.de/#consulting Learn more about your ad choices. Visit megaphone.fm/adchoices
Burhan Azeem — the youngest Cambridge City Councilor ever elected — is in good traffic this week for a dive into effective local politics, building bike infrastructure quickly in the States, and being elected to office at 24. The city — home of Harvard and MIT — is getting a ton done, and fast. Burhan's work focuses on housing, transit, and actually completing ambitious infrastructure projects for his Cambridge constituents (outcomes > process). We also underscore the powerful influence of voter participation and young leadership in driving meaningful neighborhood action. Timeline: 00:00 Burhan Azeem is in good traffic. 00:22 Youngest council member in Cambridge history. 01:02 The first day in elected office. 02:23 Housing in Cambridge. 05:08 Cambridge vs. Boston: policy divergence. 10:02 Bike infrastructure successes. 10:45 Cambridge and Paris success stories and commonalities. 20:32 Collaboration with universities like Harvard, MIT. 27:18 Addressing climate change hyper-locally. 30:54 Behavioral change and systemic solutions. 31:23 The impact of urbanization on emissions. 33:12 Policy entrepreneurship. 34:18 Communicating complex topics effectively. 47:45 The importance of local elections. 49:48 Wrapping up. Further context: Burhan's reelection campaign. On Burhan, via MIT Technology Review. Follow Burhan on X.
Adrian Ferrero is the CEO and Co-founder of Biome Makers, a leading agtech firm that merges soil microbiome science with AI to redefine soil health management in agriculture worldwide. Under Adrian's leadership, Biome Makers grew from a bold vision into a global powerhouse, building the world's largest soil microbiome database. His work has forged partnerships with Fortune 500 companies and earned recognition from the Spanish Government, Illumina Accelerator, and MIT Technology Review. A chance tweet led him from economics in Spain to Silicon Valley, where he set out to transform soil and crop science. Today, his pioneering efforts connect soil health to better food, higher yields, and a healthier planet — putting him at the forefront of biotechnology, AI, and agriculture. In this episode… In a world where the health of our food, planet, and bodies matters more than ever, few pause to consider the hidden life beneath our feet. What if the key to better farming, sustainability, and nutrition lies in understanding the tiny microbes in the soil? Adrian Ferrero started with vineyards and built the world's largest soil microbiome database. His team's technology goes beyond identifying microbes — it reveals how they influence plant health and crop resilience. By partnering with farmers, manufacturers, and brands, Biome Makers improves yields, quality, and sustainability worldwide. Their evolution from basic reports to detailed, map-based recommendations shows how innovation and collaboration are transforming agriculture. In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz sits down with Adrian Ferrero, Co-founder and CEO of Biome Makers, a leading agtech firm, to discuss the company's journey, breakthrough soil technology, and its impact on crops from vineyards to potatoes. They explore how soil biology affects carbon emissions and food quality, and how microbiome data empowers the entire agricultural supply chain.
They say love is blind, but not when algorithms are watching. Dating apps are quietly ranking your "attractiveness" using AI and facial recognition tech, deciding who gets seen and who gets ghosted! In this episode, we uncover the shocking truth about beauty scores, data surveillance, and the invisible code shaping your romantic fate. What if your face is being used against you... without your knowledge? Are. You. Ready?****************Further Reading & References Williams, Apryl. “Not My Type: Automating Sexual Racism in Online Dating.” MIT Press, 2024. Korducki, Kelli María. “Tinder's Algorithm Is Changing the Way We Date.” Hao, Karen. “Tinder's Algorithm Doesn't Know What You Want.” MIT Technology Review. BBC. “Tinder Developing AI Matchmaker Based on Facial Recognition.” Parker, Christine. “Qoves Studio and the Ethics of AI Beauty Scoring.” Noble, Safiya Umoja. “Algorithms of Oppression: How Search Engines Reinforce Racism.” NYU Press, 2018. Faception. "Our Technology." ****************Leave Us a 5* Rating, it helps the show! Apple Podcast: https://podcasts.apple.com/us/podcast/beauty-unlocked-the-podcast/id1522636282 Spotify Podcast: https://open.spotify.com/show/37MLxC8eRob1D0ZcgcCorA ****************Follow Us on Social Media & Subscribe to our YouTube Channel! TikTok: tiktok.com/@beautyunlockedthepod YouTube: @beautyunlockedspodcasthour ****************Intro/Outro Music: Music by Savvier from Fugue FAME INC
Karen Hao has been warning us about Sam Altman's OpenAI for a while now. In her bestselling Empire of AI, she argues that the Silicon Valley startup is a classic colonial power, akin to Britain's East India Company. Like those colonial merchants and policy makers who wrapped profit-seeking in civilizing missions, OpenAI cloaks its relentless scaling ambitions behind the noble goal of "ensuring artificial general intelligence benefits all of humanity." But as Hao reveals, this pursuit comes at enormous cost—environmental devastation, exploited labor, and the extraction of data from communities worldwide. The parallels are striking: a private corporation accumulating unprecedented resources and power, operating with minimal oversight while externalizing the harms of its empire-building to those least able to resist. Five Key Takeaways: 1. OpenAI is a Modern Corporate Empire: Hao argues OpenAI operates like the British East India Company—a private corporation wrapped in a "civilizing mission" that extracts resources globally while externalizing costs to vulnerable communities. The company's stated goal of "benefiting all humanity" serves as ideological cover for profit-driven expansion. 2. AI Development Didn't Have to Be This Destructive: Before OpenAI's "scaling at all costs" approach, researchers were developing smaller, more efficient AI models using curated datasets. OpenAI deliberately chose quantity over quality, leading to massive computational requirements and environmental damage that could have been avoided. 3. The Climate and Social Costs Are Staggering: McKinsey estimates global energy grids need to add 2-6 times California's annual consumption to support AI infrastructure expansion. This means retired coal plants staying online, new methane turbines in working-class communities, and data centers consuming public drinking water in drought-prone areas. 4. The Business Model May Be Unsustainable: Despite raising $40 billion (Silicon Valley's largest private investment), OpenAI hasn't demonstrated how to monetize at that scale. Subscriptions don't cover operational costs, leading to considerations of thousand-dollar monthly fees or surveillance-based advertising models. 5. Resistance is Possible and Already Happening: Communities worldwide are successfully pushing back—from Chilean residents stalling Google data centers for five years to artists suing over intellectual property theft. Hao argues collective action across AI's supply chain can force a shift toward more democratic, community-centered development. Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series, a program training thousands of journalists around the world on how to cover AI. She was formerly a reporter for the Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments. She has received numerous accolades for her coverage, including an American Humanist Media Award and American National Magazine Award for Journalists Under 30. She received her Bachelor of Science in mechanical engineering from MIT. Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. 
In addition to presenting the daily KEEN ON show, he is the host of the long-running How To Fix Democracy interview series. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children.Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
BEST OF: A recent article from The New York Times compared AI to the shoggoths of H.P. Lovecraft - monstrous beings of black protoplasm, bred as slaves, that eventually develop brains of their own. An article from 2017 in MIT Technology Review compared AI to the black cube of Saturn. AI is also really good at doing one specific thing, and that is creating Lovecraftian monsters. In the Watchmen comic book series, Adrian Veidt, also known as Ozymandias, attempts to unite the US and USSR against a common enemy to avoid nuclear war. As opposed to the movie version, wherein Doctor Manhattan is scapegoated, in the comic Adrian Veidt uses advanced genetic engineering technology to create a giant monster from outer space. The monster is a squid, and as the plan proceeds he teleports the monster through a gateway into New York City. A 2022 promotion for the show Stranger Things lit up the Empire State Building, along with others around the world, with a portal to the Upside Down. The recent wildfires from Canada that dumped smoke and particulate on New York City created a backdrop similar to the TV show promotion. In fact, the ad this time was for the game Diablo IV. It featured the Queen of Succubi, Lilith, with a caption and date that read “Welcome to Hell, New York” - 6/6/23. New York is also home to a Ruth Bader Ginsburg statue featuring Lilith's horns and tentacle arms. Lilith is the mother of all demons, the tempter of men, and aborter of children. It is therefore appropriate that NYC's One World Trade building was lit up pink to celebrate abortion rights in 2019. Lilith also wears a rainbow necklace, an outward projection of her disdain for God's promise to never again flood the earth and kill the innocent. In reliefs, Lilith is shown with the legs of a serpent, with two guardian owls that watch over her dominion. She haunts in dreams and from the Upside Down. It is from this realm that Geordie Rose, founder of D-Wave, said that quantum computing will summon what he compared to the visions of H.P. Lovecraft: “And these things we're summoning into the world now, are not demons, they're not evil, they're more like the Lovecraftian great ‘old ones'. These entities are not necessarily going to be aligned with what we want.” The 1920 movie ALGOL, about an alien giving advanced technology to humans, essentially became the basis for modern algorithms, starting with ALGOL 60 and 58. Technology that led to the atomic bomb also acts as a sort of trigger to open the gateway and summon the Old Ones. Algol is known as the blinking demon star, and AI is essentially this - A Eye. *This is the FREE archive, which includes advertisements. If you want an ad-free experience, you can subscribe below underneath the show description. FREE ARCHIVE (w. ads) - SUBSCRIPTION ARCHIVE - X / TWITTER - FACEBOOK - MAIN WEBSITE - CashApp: $rdgable - EMAIL: rdgable@yahoo.com / TSTRadio@protonmail.com Become a supporter of this podcast: https://www.spreaker.com/podcast/the-secret-teachings--5328407/support.
Josh Bongard is Professor of Computer Science at the University of Vermont and director of the Morphology, Evolution & Cognition Laboratory. His work involves automated design and manufacture of soft-, evolved-, and crowdsourced robots, as well as computer-designed organisms. In 2007, he was awarded a prestigious Microsoft Research New Faculty Fellowship and was named one of MIT Technology Review's top 35 young innovators under 35. In 2010 he was awarded a Presidential Early Career Award for Scientists and Engineers (PECASE) by Barack Obama at a White House ceremony. He has received funding from NSF, NASA, DARPA, ARO and the Sloan Foundation. He is the co-author of the book How The Body Shapes the Way we Think, the co-author of "Designing Intelligence: Why Brains Aren't Enough", the instructor of a reddit-based evolutionary robotics MOOC, and director of the robotics outreach program Twitch Plays Robotics. TIMESTAMPS:(0:00) - Introduction (1:22) - Life, Consciousness & Intelligence(5:14) - How The Body Shapes The Way We Think(9:18) - Evolutionary Robotics & Consciousness(17:00) - Biological Robots ("Xenobots")(24:00) - Implications of Self-Replicating Living Machines(32:00) - The Role of AI in Shaping Biology(39:00) - What is Conscious, Really?(42:00) - AI Robotics(46:00) - The Advantage of Interdisciplinary Collaborating(49:00) - Escaping Cartesian Dualism(53:00) - Meta-Materials (Groundbreaking Work!)(56:00) - Cause & Effect(1:04:48) - Expanding Morphospace in its Entirety(1:12:00) - Blurring the Lines Between Living & Non-Living (Meta-Materials Are The Future!)(1:17:14) - Non-Embodiment vs Embodiment AI(1:20:00) - Conclusion EPISODE LINKS:- Josh's Website: https://jbongard.github.io/- Josh's Lab: https://www.meclab.org/- Josh's Channel: https://youtube.com/@joshbongard3314- Josh's X: https://x.com/DoctorJosh- Josh's Publications: https://tinyurl.com/3pd4t8ff- Josh's Book: https://tinyurl.com/4wd7hw3s- Michael Levin 1: https://www.youtube.com/watch?v=v6gp-ORTBlU- Michael Levin 2: https://www.youtube.com/watch?v=kMxTS7eKkNM- Michael Levin 3: https://www.youtube.com/watch?v=1R-tdscgxu4- Michael Levin Lecture: https://www.youtube.com/watch?v=aQEX-twenkA- Michael Levin & Terrence Deacon: https://www.youtube.com/watch?v=HuWbHwPZd60- Keith Frankish: https://www.youtube.com/watch?v=QxDYG0K360E- Keith Frankish 2: https://www.youtube.com/watch?v=jTO-A1lw4JM- Keith Frankish Lecture: https://www.youtube.com/watch?v=IbjGRcqD96Q- Nicholas Humphrey: https://www.youtube.com/watch?v=SCTJb-uiQww- Nicholas Humphrey Lecture: https://www.youtube.com/watch?v=Z3cWQLUbnKs- Mark Solms: https://www.youtube.com/watch?v=qqM76ZHIR-o- Mark Solms 2: https://www.youtube.com/watch?v=rkbeaxjAZm4CONNECT:- Website: https://tevinnaidu.com - Podcast: https://creators.spotify.com/pod/show/mindbodysolution- YouTube: https://youtube.com/mindbodysolution- Twitter: https://twitter.com/drtevinnaidu- Facebook: https://facebook.com/drtevinnaidu - Instagram: https://instagram.com/drtevinnaidu- LinkedIn: https://linkedin.com/in/drtevinnaidu=============================Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from you acting or not acting as a result of listening/watching any of our contents. You acknowledge that you use the information provided at your own risk. 
Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.
Trailblazing AI journalist Karen Hao comes all the way from Hong Kong to San Francisco to discuss the issues raised in her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Hao tackles the hard-hitting questions many people are afraid to face about the disruptive power of artificial intelligence and the forces driving its rapid ascent. A longtime AI insider with unparalleled access to OpenAI and its key players, Hao has spent years investigating the industry's hidden costs—human, environmental, and geopolitical. As a journalist whose award-winning work has been cited by Congress and featured in leading publications such as The Atlantic, The Wall Street Journal and MIT Technology Review, she has become one of the most authoritative voices on AI today. Now Hao pulls back the curtain on the tech arms race reshaping our world in real time. From the breakneck rise of OpenAI and its Faustian bargain with Microsoft to the global supply chains powering AI's insatiable appetite for data, energy and human labor, she reveals the staggering scale of the industry's ambitions—and its consequences. With exclusive behind-the-scenes insights, including the dramatic firing and reinstatement of Sam Altman (a highly public corporate drama that began just days after Altman talked AI ethics on the Commonwealth Club stage), Hao will discuss not just a corporate saga but a crucial examination of the future of power, technology and society itself. Join us for this urgent and thought-provoking conversation about the past, present, and future of AI—what's at stake, who stands to gain, and who is being left behind. Commonwealth Club World Affairs of California is a nonprofit public forum; we welcome donations made during registration to support the production of our programming. Commonwealth Club World Affairs is a public forum. Any views expressed in our programs are those of the speakers and not of Commonwealth Club World Affairs. Learn more about your ad choices. Visit megaphone.fm/adchoices
As AI models grow larger and more powerful, they promise incredible capabilities — but at what cost? Karen Hao is an AI journalist and her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is a New York Times bestseller. We discuss whether the largest AI models are worth their hefty footprint: They consume massive amounts of electricity and water and Karen argues that smaller models better balance cost vs. benefit. Karen, who has reported for The Atlantic, MIT Technology Review, and the Wall Street Journal, will also provide a view of AI from outside — far outside — Silicon Valley. She's reported on AI from across the Global South and says many there feel that AI is a new form of colonialism. We'll hear about the fight over data centers in Chile, how New Zealand's Maori people are using AI to preserve their indigenous language, and why it's a problem that AI can speak any language, but can only really be policed in a few. (Our interview was first broadcast in October, while Karen was still writing the book, so we do not discuss her deeply sourced reporting from inside OpenAI.) — CRAFTED. is produced by Modern Product Minds, where CRAFTED. host Dan Blumberg and team can help you take a new product from zero to one... and beyond. We specialize in early stage product discovery, growth, and experimentation. Learn more at modernproductminds.com Subscribe to CRAFTED., follow the show, and sign up for the newsletter.
Facial recognition systems use artificial intelligence to analyze patterns in faces, and they've come under increasing scrutiny, particularly in policing. There have been multiple instances of false positives leading to the arrest and detainment of innocent people. There's no federal regulation of this technology, but at least a dozen states have laws that limit its use. So, some law enforcement authorities have turned to a new system called Track, made by a company called Veritone. It doesn't analyze faces, but looks to the rest of the body for clues — things like clothing, body type or hair — according to recent reporting by James O'Donnell for MIT Technology Review.
How will market uncertainty and a lack of federal support for climate efforts affect the future of clean energy in the United States? Plus, many wetlands are disappearing, but Louisiana's “accidental” Wax Lake Delta is growing—and informing coastal restoration techniques.
$8 Billion Of Climate Tech Projects Were Canceled In 3 Months
In the first three months of the Trump administration, officials have been aggressive in canceling climate change-related efforts, from enacting layoffs at large agencies to withdrawing from the Paris Climate Agreement and rescinding federal funding for green research and infrastructure.
Joining Host Flora Lichtman to break down the changes we're starting to see in climate policy and clean tech on the ground is Casey Crownhart, senior climate reporter at MIT Technology Review. They also talk about other science news of the week, including a Florida-based startup that's recycling solar panels, an update on the growing measles outbreak in the Southwest, signs of a US science brain drain, humanoid robot participants in the Beijing half marathon, and how bats manage to drink on the fly.
In Louisiana, A Chance To Study A Successful, Growing Wetland
Amid the rapid erosion of Louisiana's coast, something hopeful is happening where the Atchafalaya River meets the Gulf. A flow of sediment from a decades-old river diversion has accidentally given birth to new wetlands.
While that small delta is dwarfed by what's washing away all around it, researchers have gained knowledge from Wax Lake Delta that could help save the rest of Louisiana's coast and contribute to a better understanding of wetland science across the globe.
Read more at sciencefriday.com. Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
Send us a text. In this episode, Joe sits down with Dr. Lisa Feldman Barrett, a leading neuroscientist and author of How Emotions Are Made, to explore groundbreaking insights on how emotions are formed and how we can better understand and regulate them. Together, they dive deep into the science behind emotions, how the brain predicts our feelings, and how we can gain more emotional agency in our personal and professional lives. Joe and Dr. Barrett cover:• The theory of constructed emotions and how emotions are predictions, not reactions• The role of the brain in regulating the body and how it shapes emotional experiences• Why uncertainty feels like anxiety—and how to reframe it• How emotional granularity can help us build better emotional vocabulary and control• Practical steps to improve emotional regulation and decision-making• The impact of body budgeting on mood, stress, and overall health Whether you're leading a team, navigating personal relationships, or simply trying to better understand your emotional life, this conversation will give you powerful tools to enhance your emotional intelligence and well-being. Lisa Feldman Barrett, PhD, is among the top 0.1% most cited scientists in the world for her revolutionary research in psychology and neuroscience. She is a University Distinguished Professor of Psychology at Northeastern University. She also holds appointments at Harvard Medical School and Massachusetts General Hospital, where she is Chief Science Officer for the Center for Law, Brain & Behavior. In addition to the books Seven and a Half Lessons About the Brain and How Emotions Are Made, Dr. Barrett has published over 275 peer-reviewed, scientific papers appearing in Science, Nature Neuroscience, and other top journals in psychology and cognitive neuroscience, as well as six academic volumes published by Guilford Press. She writes regularly about science in the popular press, including The New York Times, The Guardian, Scientific American, BBC Science Focus, Popular Science, Nautilus, BigThink, Cosmopolitan, Time magazine, MIT Technology Review, and more (see full list). Her popular TED talk has been viewed over 7 million times. A Special Thanks to Our Sponsors! Veteran-founded Adyton. Step into the next generation of equipment management with Log-E by Adyton. Whether you are doing monthly inventories or preparing for deployment, Log-E is your pocket property book, giving real-time visibility into equipment status and mission readiness. Learn more about how Log-E can revolutionize your property tracking process here! Meet ROGER Bank—a modern, digital bank built for military members, by military members. With early payday, no fees, high-yield accounts, and real support, it's banking that gets you. Funds are FDIC insured through Citizens Bank of Edmond, so you can bank with confidence and peace of mind. Visit
What happens when AI moves beyond convincing chatbots and custom image generators to something that matches—or outperforms—humans? Each week, tech companies trumpet yet another advance in artificial intelligence, from better chat services to image and video generators that spend less time in the uncanny valley. But the holy grail for AI companies is known as AGI, or artificial general intelligence—a technology that can meet or outperform human capabilities on any number of tasks, not just chat or images. The roadmap and schedule for getting to AGI depend on who you talk to and their precise definition of AGI. Some say it's just around the corner, while other experts point a few years down the road. In fact, it's not entirely clear whether current approaches to AI tech will be the ones that yield a true artificial general intelligence. Hosts Ira Flatow and Flora Lichtman talk with Will Douglas Heaven, who reports on AI for MIT Technology Review; and Dr. Rumman Chowdhury, who specializes in ethical, explainable and transparent AI, about the path to AGI and its potential impacts on society. Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
Welcome to part 2 of our special event, "The AI Competition: Public Policy Strategies". The event, co-hosted by MIT Technology Review, brings together some of the leading voices in AI policy from the public and private sectors to role-play these complex issues. These AI leaders play roles in the US, China, and the EU, and enact policies that best align with their roles' interests in the AI space. This episode contains the second and final phase of the game. We hope you enjoy this insightful episode. Our Players: US Government Players White House (NSA, AI & Crypto Czar, Assistant to Pres. For S&T) - Doug Calidas, Senior Vice President of Government Affairs for Americans for Responsible Innovation (ARI) Government research institutions (funding) - Stephen Ezell Standards and governance (NIST, DOS, etc.) - Vivek Wadhwa, Adjunct Professor at Carnegie Mellon's School of Engineering at Silicon Valley Regulatory and trade (DOS, Treasury, etc.) - Susan Ariel Aaronson, American author, public speaker, and GWU professor Department of Defense - Daniel Castro, vice president at the Information Technology and Innovation Foundation (ITIF) Commerce Department - Anupam Chander, Scott K Ginsburg Professor of Law at Georgetown University Law Center Intel Community and Cyber Defense - David Mussington, professor of the practice at the University of Maryland School of Public Policy, and currently serves as the CISA Executive Assistant Director Congress/State Department - Cameron Kelly, Distinguished Visiting Fellow, Brookings Institution China players Central Military Committee representatives - Rohit Talwar, founder of FastFuture Intelligence and cyber - Daniel Richardson, President of Indepth Global AI Public/Private Industry - Sarah Myers West, co-director at AI Now Ministry of Science and Technology (MOST)/Ministry of Industry and Information Technology (MIIT) - David Lin, Senior Director for Future Technology Platforms at the Special Competitive Studies Project (SCSP) European Union Governance - Courtney Radsch, Director, Center for Journalism and Liberty at Open Markets Institute Military/Security - Gordon LaForge, senior policy analyst at New America Regulatory - Michelle Nie, EU Tech Policy Fellow at the Open Markets Institute Industrial and research policy - David Goldston, director of government affairs at the Natural Resources Defense Council Intelligence Agencies - Rumman Chowdhury, scientist, entrepreneur, and former responsible artificial intelligence lead at Accenture Civil Society Large players (ChatGPT, META, Amazon, Microsoft) - Cody Buntain, Assistant Professor; Affiliate Fellow, UMD Honors College – Artificial Intelligence Cluster Medium players - Ramayya Krishnan, Dean, Heinz College of Information Systems and Public Policy at Carnegie Mellon University Open-source communities - Jay Lee, Clark Distinguished Chair Professor and Director of Industrial AI Center in the Mechanical Engineering Dept. of the Univ. of Maryland College Park Advocacy Organizations - David Goldston, director of government affairs at the Natural Resources Defense Council Legal Community - Kahaan Mehta, Research Fellow at the Vidhi Centre for Legal Policy Universities and academia Large universities - Nita Farahany, Robinson O. Everett Distinguished Professor of Law at Duke Law Smaller schools - Anand Patwardhan, professor in the School of Public Policy at the University of Maryland Medium Universities - Elizabeth Bramson-Boudreau, CEO and Publisher at MIT Technology Review Government laboratories (Defense, DOE, etc.) 
- Emily M. Bender, University of Washington Professor. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to a very different episode of Siliconsciousness. Today, we are taking a creative new approach to discussing the future of AI. This episode comprises the first part of our special event, "The AI Competition: Public Policy Strategies". The event, co-hosted by MIT Technology Review, brings together some of the leading voices in AI policy from the public and private sectors to role-play these complex issues. These AI leaders play roles in the US, China, and the EU, and enact policies that best align with their roles' interests in the AI space. This first episode contains the first phase of the game, as well as introductions from the editor in chief of MIT Technology Review, Mat Honan, and game controller Ed McGrady. We hope you enjoy. Our Players: US Government Players White House (NSA, AI & Crypto Czar, Assistant to Pres. For S&T) - Doug Calidas, Senior Vice President of Government Affairs for Americans for Responsible Innovation (ARI) Government research institutions (funding) - Stephen Ezell Standards and governance (NIST, DOS, etc.) - Vivek Wadhwa, Adjunct Professor at Carnegie Mellon's School of Engineering at Silicon Valley Regulatory and trade (DOS, Treasury, etc.) - Susan Ariel Aaronson, American author, public speaker, and GWU professor Department of Defense - Daniel Castro, vice president at the Information Technology and Innovation Foundation (ITIF) Commerce Department - Anupam Chander, Scott K Ginsburg Professor of Law at Georgetown University Law Center Intel Community and Cyber Defense - David Mussington, professor of the practice at the University of Maryland School of Public Policy, and currently serves as the CISA Executive Assistant Director Congress/State Department - Cameron Kelly, Distinguished Visiting Fellow, Brookings Institution China players Central Military Committee representatives - Rohit Talwar, founder of FastFuture Intelligence and cyber - Daniel Richardson, President of Indepth Global AI Public/Private Industry - Sarah Myers West, co-director at AI Now Ministry of Science and Technology (MOST)/Ministry of Industry and Information Technology (MIIT) - David Lin, Senior Director for Future Technology Platforms at the Special Competitive Studies Project (SCSP) European Union Governance - Courtney Radsch, Director, Center for Journalism and Liberty at Open Markets Institute Military/Security - Gordon LaForge, senior policy analyst at New America Regulatory - Michelle Nie, EU Tech Policy Fellow at the Open Markets Institute Industrial and research policy - David Goldston, director of government affairs at the Natural Resources Defense Council Intelligence Agencies - Rumman Chowdhury, scientist, entrepreneur, and former responsible artificial intelligence lead at Accenture Civil Society Large players (ChatGPT, META, Amazon, Microsoft) - Cody Buntain, Assistant Professor; Affiliate Fellow, UMD Honors College – Artificial Intelligence Cluster Medium players - Ramayya Krishnan, Dean, Heinz College of Information Systems and Public Policy at Carnegie Mellon University Open-source communities - Jay Lee, Clark Distinguished Chair Professor and Director of Industrial AI Center in the Mechanical Engineering Dept. of the Univ. of Maryland College Park Advocacy Organizations - David Goldston, director of government affairs at the Natural Resources Defense Council Legal Community - Kahaan Mehta, Research Fellow at the Vidhi Centre for Legal Policy Universities and academia Large universities - Nita Farahany, Robinson O. 
Everett Distinguished Professor of Law at Duke Law Smaller schools - Anand Patwardhan, professor in the School of Public Policy at the University of Maryland Medium Universities - Elizabeth Bramson-Boudreau, CEO and Publisher at MIT Technology Review Government laboratories (Defense, DOE, etc.) - Emily M. Bender, University of Washington Professor Learn more about your ad choices. Visit megaphone.fm/adchoices
After nine months aboard the International Space Station, astronauts Butch Wilmore and Suni Williams have finally landed back on Earth. Also, a German physicist and homebrewer discovered brief, intense physical reactions that happen when you uncork a bubbly swing-top bottle.
NASA Astronauts Return To Earth After Extended Stay On The ISS
After 286 days aboard the International Space Station—278 days longer than their initial planned mission—NASA astronauts Butch Wilmore and Suni Williams landed safely back on Earth on Tuesday, off the coast of Florida. Their mission turned into a nine-month stay on the station after their Boeing Starliner capsule had issues with its thrusters and NASA deemed it unsafe to carry the astronauts back. SpaceX's Dragon capsule instead ferried them to Earth earlier this week. And they had a splashdown surprise: A pod of curious dolphins greeted them after they landed.
Host Flora Lichtman is joined by Casey Crownhart, senior climate reporter at MIT Technology Review, to catch us up on that and other big science stories from this week, including why a company is trying to “refreeze” a massive glacier, why the 10 hottest years on record are the last 10, and how the London Zoo is trying to conserve a unique frog that incubates its young in its vocal sacs.
The Physics That Makes Swing-Top Bottles ‘Pop'
For a lot of us, uncorking a bottle is the sound of celebration. It's also a sound that we may not think too much about—until we open our next bottle. But Dr. Max Koch, a physicist at the University of Göttingen who does home brewing on the side, got to wondering what actually makes that popping sound. What's happening inside that bottle, physics-wise? To find out more, he recorded the uncorking of an unsuccessful batch of his ginger beer using microphones and a high-speed camera, and analyzed the bubbly results with a physicist's rigor. His team's findings were published in the journal Physics of Fluids.
Dr. Koch sits down with Host Flora Lichtman to talk about the brief but intense changes to temperature, sound, and speed that happen when you uncork a swing-top glass bottle.
Transcripts for each segment will be available after the show airs on sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.