Technically Human is a podcast about ethics and technology where I ask what it means to be human in the age of tech. Each week, I interview industry leaders, thinkers, writers, and technologists and I ask them about how they understand the relationship between humans and the technologies we create. We discuss how we can build a better vision for technology, one that represents the best of our human values.
In this episode of the show, I speak with Tom Coughlin, the President and CEO of IEEE, the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. We discuss the IEEE's vision of technological innovation, what it really means to "benefit humanity" through tech, and how the tech sector can, and should, move toward a values-driven approach to innovation. Tom Coughlin is an IEEE Life Fellow, past president of IEEE-USA, past director of IEEE Region 6, past chair of the Santa Clara Valley IEEE Section, past chair of the Consultants Network of Silicon Valley, and is also active with the Storage Networking Industry Association and the Society of Motion Picture and Television Engineers. Coughlin is also president of Coughlin Associates, a digital storage analyst and business and technology consultant. He has over 40 years of experience in the data storage industry, with engineering and senior management positions at several companies. Coughlin Associates consults, publishes books and market and technology reports (including The Media and Entertainment Storage Report and an Emerging Memory Report), and puts on digital storage-oriented events. He is a regular storage and memory contributor for Forbes.com and media and entertainment organization websites.
Today we are bringing you a conversation featuring one technologist who is rethinking and reshaping social media—to build platforms that spark empathy and joy, not division and hate. Vardon Hamdiu is the co-founder and head of Sparkable, a young nonprofit organization that builds a social media platform aimed at bridging divides. Growing up immersed in diverse cultures, Vardon has always been a bridge-builder who navigates between worlds. His family history has exposed him to the devastating consequences of communication breakdowns between ethnic communities and the outbreak of war. These experiences have profoundly shaped his understanding of the importance of empathy and social cohesion. Over the past decade, Vardon has worked on the communications team of a Swiss President, studied to become a teacher, spent an exchange semester in South Africa, and engaged with refugees facing often traumatic circumstances. These experiences made him acutely aware of the enormous disconnect between the information we consume online and the lived realities of many people around the globe. He became deeply passionate about exploring why today's social media platforms are often dysfunctional and how these powerful systems, which govern our collective attention, could be constructed differently. Driven by this vision, he made the pivotal decision to quit his job, drop out of his studies, and launch Sparkable, aiming to foster a healthier online environment.
In this episode of "Technically Human," I bring you a conversation with one of the great thinkers working at the intersection of ethics and technology, Professor Todd Presner, for an episode about his new book, Ethics of the Algorithm: Digital Humanities and Holocaust Memory. In the conversation, we talk about new directions in Holocaust memory and scholarship, how technologies are enabling new approaches, questions, and interpretations of major historical events, and how digital technologies might help us imagine a new ethics of interpretation of history and memory. Dr. Todd Presner is Chair of UCLA's Department of European Languages and Transcultural Studies. Previously, he was the chair of UCLA's Digital Humanities Program (2011-21), and from 2011-2018, he served as the Sady and Ludwig Kahn Director of the Alan D. Leve Center for Jewish Studies. He holds the Michael and Irene Ross Chair in the UCLA Division of the Humanities. His research focuses on European intellectual and cultural history, Holocaust studies, visual culture, and digital humanities. Dr. Presner's newest book was published with Princeton University Press: Ethics of the Algorithm: Digital Humanities and Holocaust Memory (Fall 2024).
In this week's episode of the show, I speak with Daniel Kelley about the culture of online gaming, and the unique set of challenges in the gaming space related to hate, harassment, and extremism. We talk about the possibilities, and limitations, of regulating that space, and what the landscape of gaming might foretell about the increasingly online lives we live, as more and more of our social interactions take place virtually. We talk about how to make those spaces safer and more inclusive, and whether moderation is the right tack to take in developing that more inclusive future, as well as what other strategies for cultivating such spaces might be possible. Daniel Kelley is the Director of Strategy and Operations of the Anti-Defamation League (ADL) Center for Technology and Society (CTS). CTS works through research and advocacy to fight for justice and fair treatment for all in digital social spaces, from social media to online games and beyond. For the last five years, Daniel has been the lead author of the first nationally representative survey of hate, harassment, and positive social experiences in online games. He is also the co-author of the Disruption and Harms in Online Games Framework, a resource to define harms in online multiplayer games together with members of the game industry coalition the Fair Play Alliance. He also leads CTS' tech accountability research efforts, such as its Antisemitism and Holocaust Denial Report Card, which looks at ways to create research-grounded advocacy products to inform the public about the nature of hate and harassment online and to hold tech companies accountable.
Hi Technically Human listeners! Welcome back to another episode of the show. Today I'm sitting down with Alva Noë. We talk about his new book, The Entanglement, and the relationship between technology, philosophy, and art. In The Entanglement, Professor Noë explores the inseparability of life, art, and philosophy, arguing that we have greatly underestimated what this entangled reality means for understanding human nature. Neither biology, cognitive science, nor AI can tell a complete story of us, and we can no more pin ourselves down than we can fix or settle on the meaning of an artwork. Even more, art and philosophy are the means to set ourselves free, at least to some degree, from convention, habit, technology, culture, and even biology. Dr. Alva Noë is a philosopher of mind whose research and teaching focus is perception and consciousness, and the philosophy of art. He is the author of Action in Perception (MIT, 2004); Out of Our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness (Farrar, Straus and Giroux, 2009); Varieties of Presence (Harvard, 2012); Strange Tools: Art and Human Nature (Farrar, Straus and Giroux, 2015); Infinite Baseball: Notes from a Philosopher at the Ballpark (Oxford, 2019); and, most recently, Learning to Look: Dispatches from the Art World (Oxford, 2021). He holds a Bachelor of Arts from Columbia University, a Bachelor of Philosophy from the University of Oxford, and a Ph.D. from Harvard University. He teaches in the philosophy department of UC Berkeley.
In this episode of the show, I speak with Dr. Thomas Mullaney about his new book, The Chinese Computer. In the book, Dr. Mullaney outlines the history and evolution of Chinese language computing technology, and explores how the technology of the QWERTY keyboard changed the history of computing. We talk about how the structure of language has shaped the history of digital technologies, and Dr. Mullaney explains how China and the non-Western world—because of the "hypographic" technologies they had to invent in order to join the personal computing revolution—help us understand the relationship between the human mind and the technologies it creates. Thomas S. Mullaney is Professor of History and Professor of East Asian Languages and Cultures, by courtesy, at Stanford University. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. He is the author or lead editor of 7 books, including The Chinese Typewriter (winner of the Fairbank prize), Your Computer is on Fire, Coming to Terms with the Nation: Ethnic Classification in Modern China, and The Chinese Computer—the first comprehensive history of Chinese-language computing. His writings have appeared in the Journal of Asian Studies, Technology & Culture, Aeon, Foreign Affairs, and Foreign Policy, and his work has been featured in the LA Times, The Atlantic, the BBC, and in invited lectures at Google, Microsoft, Adobe, and more. He holds a PhD from Columbia University.
Hi Technically Human Listeners! After a long summer break we are back with a brand new season and brand new episodes of the show! To kick off the season, we are bringing you an episode that I'm calling "agree to disagree," with two guests, Robert D. Atkinson and David Moschella, who join me to argue that the critiques of tech circulating in our environment are full of "myths and scapegoats." That's the title of their new book, "Technology Fears and Scapegoats: 40 Myths About Privacy, Jobs, AI, and Today's Innovation Economy," published this year by Palgrave Macmillan. The book argues that our era of tech critique, and the impetus for regulation that many critics advocate for and recommend, is misguided, and that our era is one of general pessimism toward AI, in which our society largely overlooks the benefits of this technology. In their words, quote, "These attitudes both reduce the enthusiasm for innovation and the efforts by government needed to spur it." Well, as the title of the episode suggests, agree to disagree, both on the facts and the merits of the argument! A key component of this show is my commitment to talking to people with whom I disagree, and foregrounding civil discourse with people whose ideas differ from my own. My hope is that you, the listeners, can weigh their arguments against my own and see where you land. As always, if you have thoughts about the show, please get in touch! Robert D. Atkinson is the founder and president of the Information Technology and Innovation Foundation (ITIF). 
He is an internationally recognized scholar and a widely published author whom The New Republic has named one of the "three most important thinkers about innovation," Washingtonian Magazine has called a "tech titan," Government Technology Magazine has judged to be one of the 25 top "doers, dreamers and drivers of information technology," and to whom the Wharton Business School has given the "Wharton Infosys Business Transformation Award." A sought-after speaker and valued adviser to policymakers around the world, Atkinson's books include Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today's Innovation Economy (Palgrave Macmillan, 2024); Big is Beautiful: Debunking the Mythology of Small Business (MIT Press, 2018); Innovation Economics: The Race for Global Advantage (Yale, 2012); Supply-Side Follies: Why Conservative Economics Fails, Liberal Economics Falters, and Innovation Economics is the Answer (Rowman & Littlefield, 2006); and The Past And Future Of America's Economy: Long Waves Of Innovation That Power Cycles Of Growth (Edward Elgar, 2005). President Clinton appointed Atkinson to the Commission on Workers, Communities, and Economic Change in the New Economy; the Bush administration appointed him chair of the congressionally created National Surface Transportation Infrastructure Financing Commission; the Obama administration appointed him to the National Innovation and Competitiveness Strategy Advisory Board, as co-chair of the White House Office of Science and Technology Policy's China-U.S. Innovation Policy Experts Group, and to the U.S. Department of Commerce's National Advisory Council on Innovation and Entrepreneurship; and the Trump administration appointed him to the G7 Global Partnership on Artificial Intelligence. The Biden administration appointed him as a member of the U.S. 
State Department's Advisory Committee on International Communications and Information, and a member of the Export-Import Bank of the United States' Council on China Competition. Atkinson holds a Ph.D. in city and regional planning from the University of North Carolina, Chapel Hill. David Moschella is a nonresident senior fellow at ITIF. Previously, he was a research fellow at Leading Edge Forum (LEF), where he explored the global business impact of digital technologies, with a particular focus on disruptive business models, industry restructuring, and machine intelligence. For more than a decade before LEF, David was in charge of worldwide research for IDC, the largest market analysis firm in the information technology industry, responsible for the company's global technology industry forecasts and insights. A well-known international speaker, writer, and thought leader, David's books include Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today's Innovation Economy (Palgrave Macmillan, 2024), Seeing Digital—A Visual Guide to the Industries, Organizations, and Careers of the 2020s (DXC Technology, 2018), Customer-Driven IT (Harvard Business School Press, 2003), and Waves of Power (Amacom, 1997). He has lectured and consulted on digital trends and strategies in more than 30 countries, working with leading customers and suppliers alike.
Today I'm speaking with Projjal Ghatak, CEO and co-founder of OnLoop, about the ethics of teamwork, collaboration, and providing constructive feedback. Projjal founded OnLoop in 2020 to create a category called Collaborative Team Development (CTD) and fundamentally reinvent how hybrid teams are assessed and developed, after more than a decade of frustration with clunky, traditional enterprise performance management and learning processes and tools that were either hated or ignored by his teams at companies like Uber and Accenture. Prior to founding OnLoop, Projjal spent three and a half years at Uber in a variety of roles, including leading Strategy & Operations for Business Development globally, leading Strategy & Planning for the APAC rides business, and serving as GM of the Philippines rides business. Besides Uber, he also spent time raising debt and equity from New York hedge funds for an industrial conglomerate (Essar), in strategy consulting in South East Asia (Accenture), and in early-stage companies in Latin America (BlueKite, El Market). He holds an MBA from Stanford University.
In this episode of the show, I speak with Sarah Fairweather about what it is like to be an ethics worker. We talk about how ethical work can sync up with business practices, how to develop a culture of ethics in industry, and Sarah talks me through what it is like to practice ethics as a day job. Sarah Fairweather is the Senior Program Manager of Ethics at WellSaid Labs, shaping Responsible AI for synthetic voice technology and designing policies for WellSaid Labs' ethical AI deployment. She leads the effort at WellSaid to ensure that every team in the organization is equipped with the tools and skills they need to make ethics-informed designs and decisions in support of responsible innovation. Before WellSaid Labs, she was the Director of Professional Learning at Code.org where she designed equity-focused K-12 professional development experiences and co-led the company's first Equity Working Group.
In this episode of the show, I sit down with author Mike Trigg about his new novel, Burner. Mike Trigg is a novelist, tech executive, founder, and investor who has worked with dozens of technology start-up companies over the past twenty-five years. His first novel, Bit Flip, was released in August 2022 to critical acclaim, lauded by the San Francisco Chronicle as a "twisty, acerbic corporate thriller." His work has been featured in Publishers Weekly, Kirkus, and Literary Hub. He has been a contributor to TechCrunch, Entrepreneur, and Fast Company, and frequently posts on his author site, www.miketrigg.com. Burner is a mind-bending thriller that dives headfirst into our modern online zeitgeist of social media disinformation, toxic internet subcultures, and the human need for belonging, purpose, and love in an age of distorted electronic personas. The story confronts the loss of the American dream and the societal factors behind it, including wealth inequality, lack of opportunity, and cultural prejudices. At the same time, it is a tragic love story, asking whether real human connection is inherently incompatible with our addiction to online esteem.
In this episode of the show, I sit down with Dr. Robert Pearl to talk about his new book, ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine, a book he co-authored with...ChatGPT! We talk about the deep fractures and problems in American health care that generative AI may be positioned to solve, the changing landscape of health care, and the possibility that Amazon, Google, or OpenAI may become the nation's latest healthcare providers. For 18 years, Robert Pearl, MD, served as CEO of The Permanente Medical Group (Kaiser Permanente). He is also former president of The Mid-Atlantic Permanente Medical Group. In these roles he led 10,000 physicians and 38,000 staff, and was responsible for the nationally recognized medical care of 5 million Kaiser Permanente members on the west and east coasts. He is a clinical professor of plastic surgery at Stanford University School of Medicine and on the faculty at the Stanford Graduate School of Business, where he teaches courses on healthcare strategy, technology, and leadership. Pearl is board-certified in plastic and reconstructive surgery, receiving his medical degree from Yale, followed by a residency in plastic and reconstructive surgery at Stanford University. He's the author of three books: Mistreated: Why We Think We're Getting Good Healthcare—And Why We're Usually Wrong, a Washington Post bestseller (2017); Uncaring: How the Culture of Medicine Kills Doctors & Patients, a Kirkus star recipient (2021); and his newest book ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine (April 2024). All profits from sales of his books go to Doctors Without Borders. Dr. Pearl is a LinkedIn "Top Voice" in healthcare and host of the popular podcasts Fixing Healthcare and Medicine: The Truth. He publishes two monthly healthcare newsletters reaching 50,000+ combined subscribers. 
A frequent keynote speaker, Pearl has presented at The World Healthcare Congress, the Commonwealth Club, TEDx, HLTH, NCQA Quality Talks, the National Primary Care Transformation Summit, American Society of Plastic Surgeons, and international conferences in Brazil, Australia, India, and beyond. Pearl's insights on generative AI in healthcare have been featured in Associated Press, USA Today, MSN, FOX Business, Forbes, Fast Company, WIRED, Global News, Modern Healthcare, Medscape, Medpage Today, AI in Healthcare, Doximity, Becker's Hospital Review, the Advisory Board, the Journal of AHIMA, and more.
In this episode of the show, I talk to Dr. Tamara Kneese about Data and Society's initiative to develop standards and ways to measure the environmental impact of AI. I talk to Dr. Kneese about her work at the Algorithmic Impact Methods Lab (AIMLab), about the links and frictions between tech and climate change, and about how AI may be changing not only our experience of life, but also our experience of death. Dr. Tamara Kneese is Project Director of Data & Society's Algorithmic Impact Methods Lab, where she is also a Senior Researcher. For the 2023-2024 academic year, she's a Visiting Scholar at UC Berkeley's Center for Science, Technology, Medicine & Society. Before joining D&S, she was Lead Researcher at Green Software Foundation, Director of Developer Engagement on the Green Software team at Intel, and Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco. Dr. Kneese holds a PhD in Media, Culture and Communication from NYU and is the author of Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond. In her spare time, she is a volunteer with Tech Workers Coalition.
In today's episode, I sit down with Dr. Peter Bonutti to talk about the ways in which technologies are revolutionizing our understanding of the brain, and how they may be used to treat crippling brain disorders such as stroke and seizures. Dr. Peter Bonutti is a surgeon, inventor, author, professor, consultant, and entrepreneur. He is the founder of Bonutti Research, a medical device incubator that has developed products and technology used around the world. He maintains his clinical and surgical practice, focusing on the integration of robotics into surgical procedures. He is the founder and president of Realeve, a company whose technology has already been clinically proven in more than 700 patients for the treatment of a brain-related disorder. Realeve's ultimate goal is to solve one of the critical remaining barriers in brain health: the ability to bypass the brain's natural barrier preventing the delivery of effective drugs for stroke, cancer treatment, and other degenerative disorders. Dr. Bonutti is a pioneer in minimally invasive surgery, with over 500 patents and applications, more than 700 licenses, and multiple FDA-approved products to date. Major corporations leveraging his technology include Hitachi, Kyphon, Covidien, US Surgical, Biomet, Arthrocare, Synthes, Zimmer/Biomet, and Stryker. He is a prolific speaker, lecturing internationally, and has trained over 100 surgeons on his surgical techniques. In his career, Dr. Bonutti has received more than a dozen industry honors and awards for his achievements. Dr. Bonutti earned his medical degree at the University of Cincinnati College of Medicine and completed his Orthopaedic Surgery Residency at the Cleveland Clinic Foundation, with international fellowships in Canada, Australia, and Austria.
Today I am interviewing Dr. Sam Sammane about his forthcoming book, "The Singularity of Hope," which aims to guide readers through the challenges and opportunities of the AI era, advocating for a harmonious fusion of human intelligence and machine capabilities. Dr. Sammane envisions a world where the rapid advancements in AI and technology are harnessed for the greater good, leading to a new age of global prosperity. He is a seasoned entrepreneur with multiple successful exits, and an academic with a rich blend of expertise in applied physics, digital circuit design, nanotechnology, formal methods, life science, and business. Holding a Bachelor's degree and Master's degree in Applied Physics, a Master's degree in Digital Circuit Design, and a Ph.D. in Nanotechnology, Dr. Sammane has authored several articles on high-order logic, symbolic simulation, and automatic theorem proving. Beyond the academic realm, Dr. Sammane has co-founded and led multiple successful companies in the life sciences, IT, and real estate industries. He resides in southern California with his wife and three daughters.
Welcome back to a brand-new season of Technically Human! We're thrilled to be back with new episodes of the show. We are kicking off the new season, and the new year, with an episode featuring one of my favorite thinkers, Dr. Deborah Stone, to talk about what it means to count—that is to say, what it means to measure, and what it means to matter. Dr. Deborah Stone is currently a Lecturer in Public Policy in the Department of Urban Studies and Planning at MIT. She is also an Honorary Professor of Political Science at Aarhus University in Denmark, where she occasionally teaches as a visiting professor. She has taught at Duke University in the Institute of Policy Sciences (1974-77); the MIT Department of Political Science (1977-86); the Brandeis University Heller School, where she held the David R. Pokross Chair of Law and Social Policy (1986-99); and the Dartmouth College Government Department, where she was Research Professor of Government (1999-2014). She has taught as a visitor at Yale, Tulane, the University of Bremen in Germany, and National Chung Cheng University in Taiwan. She is a graduate of the University of Michigan and holds a Ph.D. in Political Science from MIT. Stone is the author of Policy Paradox: The Art of Political Decision-Making, which has been published in multiple editions (W.W. Norton), translated into five languages, and won the Aaron Wildavsky Award from the American Political Science Association for its enduring contribution to policy studies. She has also authored three other books: The Samaritan's Dilemma (Nation Books, 2008), The Disabled State (Temple University Press, 1984), and The Limits of Professional Power (University of Chicago Press, 1980). She serves on the editorial boards of the Journal of Health Politics, Policy and Law (of which she was a founder); Women, Politics and Public Policy; and Critical Policy Studies. In addition to numerous articles in academic journals and book chapters, she writes for general audiences. 
She was the founding senior editor of The American Prospect, and her articles have appeared there as well as in Nation, New Republic, Boston Review, Civilization, Natural History, and Natural New England. Stone has held fellowships from the Guggenheim Foundation, Harvard Law School, the German Marshall Fund, the Open Society Institute, and the Robert Wood Johnson Foundation. She was a Phi Beta Kappa Society Visiting Scholar in 2005-2006, and a Senior Fellow at Demos from 2008-2012. She has served as a consultant to the Social Security Administration, the Institute of Medicine, the Office of Technology Assessment, and the Human Genome Project. Stone is also the recipient of numerous professional awards, including the 2013 Charles M. McCoy Career Achievement Award for a progressive political scientist who has had a long successful career as a writer, teacher, and activist (American Political Science Association).
In this episode of the show, I talk with Jared Maslin about what it means to have privacy on the internet. We talk about the difference between privacy and secrecy, the benefits and limitations of GDPR and the possibility of privacy regulation coming to the US, and we explore the biggest challenges facing data privacy today. His most recent work, including his most recent publication, "Learning From the Past: Applying Concepts of the Sarbanes-Oxley Act to Restore Consumer Trust in Global Data Privacy," involves the design and testing of a more holistic data privacy risk model, using some of the key tenets of the independent auditing structures and oversight functions that emerged after the investor crises of Enron, Tyco, and other financial reporting frauds. The goal is to leverage the same concepts that were once applied to restore investor trust in businesses, and to extend those concepts to data privacy in order to restore consumer trust in the businesses processing their personal data.
In this week's episode of the show I sit down with Dr. Tonya Evans to talk about the state of crypto in the wake of last week's landmark criminal fraud conviction of the former CEO of FTX and the former prophet of crypto, Sam Bankman-Fried. Dr. Evans and I discuss what kind of crypto economy might emerge in the wake of his conviction. We discuss the principles and the possibilities of new digital assets, and we talk about the challenges of regulating new financial technologies. Dr. Tonya M. Evans is a distinguished professor at Penn State Dickinson Law and a leading expert in intellectual property and new technologies. With a prestigious 2023 EDGE in Tech Athena Award, she is highly sought-after as a keynote speaker and consultant. Her expertise spans blockchain, entrepreneurship, entertainment law, and more. As a member of international boards and committees, including the World Economic Forum/Wharton DAO Project Series, Dr. Evans remains at the forefront of cutting-edge research. She recently testified before the House Financial Services Committee and the Copyright Office and USPTO to advise on the intellectual property law issues related to NFTs and blockchain technology.
In today's episode, I talk about how to create new legal rules to guide tech toward reflecting human values with Brian Beckcom, one of the leading lawyers of his generation. Brian Beckcom is a Texas Super Lawyer, a designation that recognizes him as one of the top legal experts and practitioners in his arena. In addition to his work as a lawyer, he is also a computer scientist and a philosopher. He created and hosts the popular podcast "Lessons from Leaders with Brian Beckcom." Brian is an honors graduate of the University of Texas School of Law. He is the author of 6 books and hundreds of articles on a wide variety of topics. He has successfully prosecuted many high-profile cases, including the case that emerged in the aftermath of the Somali pirate attack on the Maersk Alabama, which made headlines around the world; the event was made into a Hollywood blockbuster starring Tom Hanks as Captain Phillips. Representing many members of the crew, Brian and his firm took on one of the largest shipping companies in the world, while his investigative efforts ensured that the true story was told. He also represented Captain Wren Thomas, who was kidnapped by Nigerian mercenaries while operating off the coast of West Africa. Captain Thomas' story has been featured in national and international media. The case received international attention from the media and maritime shipping companies because of the heroic acts of Captain Thomas during the attack and hostage situation, and also because of connections to Boko Haram and corruption in West Africa. 
In the conversation we talk about the case law that formed to address piracy in the traditional sense, that is to say, the practice of attacking and robbing ships at sea, and piracy in our digital age, that is to say, the unauthorized duplication of copyrighted content that is then sold at substantially lower prices in the 'grey' market. We talk about the possibilities for, and the obstructions to, creating legislation that would stop some of the worst consequences and tendencies of big tech. And Brian makes the case for what law, at its most ethical and generative potential, might do to guide tech toward protecting and elevating human values.
In this episode of the show, I sit down with Dr. Mark Sagar to talk about his vision of an embodied form of AI. Dr. Sagar is the co-founder and Chief Science Officer at Soul Machines, a company investigating how to use natural language processing with hyper-realistic visuals to create autonomously animated, emotionally dynamic Digital People. In addition to developing new technologies, the research seeks answers to big questions: should we be humanizing AI? How does feeding AI socio-emotional context help create rich, multimodal humanlike experiences, and at what point are we teetering on sentience? And what is really at stake at the intersection of human cooperation with intelligent machines? Dr. Mark Sagar is currently Director of the Auckland Bioengineering Institute's Laboratory for Animate Technologies. He is a two-time Oscar winner, in the category of scientific and engineering awards, for his work creating realistic digital characters for the screen. The technology has been used in Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, and Avatar. The technology he created emerged out of his research, completed in the late 1990s, in a landmark study that explored how to develop an anatomically correct virtual eye and realistic models of biomechanically-simulated anatomy. It was one of the first examples of how believable human features could be created on a screen by combining computer graphics with mathematics and human physiology. He is also the founder of BabyX, a pioneering research initiative that seeks to combine models of physiology, cognition, and emotion with advanced lifelike CGI, in an attempt to create a new form of biologically inspired AI. Dr. Sagar received his Ph.D. in Engineering from the University of Auckland, and was a post-doctoral fellow at M.I.T. In addition to his recognition by the Academy Awards, Dr. Sagar was elected as a fellow of the Royal Society of New Zealand in 2019.
Hi Technically Human listeners. This is a show about ethics and tech, but it's also a show about what it means to be human. There is no area of being human in this moment that technology does not touch. I know that many members of this listening community have been deeply affected by the loss of life and the brutality that began with the Hamas attack on Israel and is ongoing in Israel and in Gaza. This is not a show about my politics. But it is a show that strives toward the ideals of diverse representation, and bipartisan collaboration toward ethical and humanistic ends. In this dark moment, I wanted to elevate one of our previous episodes featuring United Hatzalah. United Hatzalah is a volunteer-based organization that provides free emergency medical response within minutes of any medical emergency. They are committed to saving human lives independent of race, religion, ethnicity, or national identity. They are nonpolitical and nonreligious. United Hatzalah volunteers respond to more than 675,000 calls per year throughout Israel and beyond its borders, saving lives every day. There are a lot of people in the region in need right now, and United Hatzalah is on the front lines. If you have the means, and if you want to support an organization that is working to save civilian lives, no matter what their religion, race, ethnic identity, or national identity might be, please consider supporting United Hatzalah. Insight Partners, a global software investor partnering with high-growth technology, software and internet startups, is currently matching donations to United Hatzalah, up to $1,000,000. Please consider supporting this effort, if you have the means. The ideal of universal human rights is central to this show, and when I see the tech community driving toward that effort, I think it's worth highlighting.
That ideal is, and always has been, that human lives are human lives anywhere and everywhere, no matter which tribe they belong to, and that the global community has an obligation to protect those lives. Link to the donation site here: https://hedado.com/c/SoftwareinService And now, here is my episode featuring United Hatzalah, whose volunteers have been on the ground saving lives, as they have been doing since the organization was founded.
In today's conversation, I sit down with Amy Kurzweil, the author of the new graphic memoir, Artificial: A Love Story. Artificial: A Love Story tells the story of three generations of artists whose search for meaning and connection transcends the limits of life. The story begins with the LLM-generated chatbot that Amy's father, the futurist Ray Kurzweil, created out of his father's archive, but the story doesn't start and end there. Instead, the story takes us on a journey through the new questions that technologies are asking about what it means to be human. How do we relate to—and hold—our family's past? How is technology changing what it means to remember the past? And what does it mean to know—and to love—in the age of AI? Amy Kurzweil is a New Yorker cartoonist and the author of two graphic memoirs: Flying Couch, a New York Times Editors' Choice and Kirkus "Best Memoir" of 2016, and Artificial: A Love Story, forthcoming October 2023. She was a 2021 Berlin Prize Fellow with the American Academy in Berlin, a 2019 Shearing Fellow with the Black Mountain Institute, and she's received fellowships from MacDowell, Djerassi, and elsewhere. Her work has been nominated for a Reuben Award and an Ignatz Award for "Technofeelia," a four-part series with The Believer Magazine. Her writing, comics, and cartoons have also been published in The Verge, The New York Times Book Review, Longreads, Literary Hub, WIRED, and many other places. She's taught writing and comics at Parsons The New School for Design, The Fashion Institute of Technology, Center for Talented Youth, Interlochen Center for the Arts, in New York City Public Schools, and in many other venues, and she currently teaches a monthly cartooning class to a growing community of virtual students all over the world.
Hi Technically Human listeners! I'm on vacation this week, and our team has pulled one of our favorite interviews, and definitely hands down our funniest, from our archives to share with you—an episode with Dan Lyons on what makes Silicon Valley funny, and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. If you haven't had a chance to listen to the interview yet, I think you'll enjoy it! I'll be back next week with a brand-new episode of the show! In this episode, I sit down with a personal hero, the iconic literary giant Dan Lyons. We discuss Dan's experience writing about tech culture for the hit HBO show "Silicon Valley," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. Dan Lyons is one of the best-known science and technology journalists in the United States. He was the technology editor at Newsweek, a staff writer at Forbes, and a columnist for Fortune magazine, while also contributing op-ed columns to the New York Times about the economics and culture of Silicon Valley. Dan is the author of two of the most important recent books about Silicon Valley: Disrupted: My Misadventure in the Startup Bubble, an international best-seller, and Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us, which was chosen by The Guardian as one of the best business books of 2018. He is also the mastermind of the epic parody blog "The Fake Steve Jobs Blog." Dan has been a consistent and vocal critic of racial, gender, and age bias in the technology industry, penning articles about "bro culture," worker exploitation, and the "hustle" mentality that leads to employee burnout. He has become a leading advocate for greater diversity in the technology industry and an early critic of the gig economy for its abuse of workers.
His work helped draw attention to the brutal working conditions in Amazon warehouses. He has earned a reputation as a fearless critic of powerful interests in Silicon Valley, with a voice that sets him apart from the often fawning journalism that comes out of the technology space.
In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships and form relationships, and how romance has been gamified in our world of online dating. Dr. Julie Albright is a sociologist specializing in digital culture and communications. She has a Master's Degree in Social and Systemic Studies and a dual doctorate in Sociology and Marriage and Family Therapy from the University of Southern California. Dr. Albright is currently a Lecturer in the departments of Applied Psychology and Engineering at USC, where she teaches master's level courses on the Psychology of Interactive Technologies and Sustainable Infrastructure. Dr. Albright's research has focused on the growing intersection of technology and social/behavioral systems. She is also a sought-after keynote speaker, and has given talks for major data center and energy conferences including SAP for Utilities, IBM Global, Data Center Dynamics and the Dept. of Defense. She has appeared as an expert on national media including the Today Show, CNN, NBC Nightly News, CBS, the Wall Street Journal, New York Times, NPR Radio and many others. Her new book, Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream (Random House/Prometheus Press), investigates the impacts of mobile, social, and digital technologies on society.
Earlier this year, Consumer Reports, in collaboration with the Kapor Center, debuted "Bad Input," three short films that set out to explore and to create public awareness about how biases in algorithms and data sets result in unfair practices for communities of color, often without their knowledge. In this episode of the show, I talk to Lily Gangas, Chief Technology Community Officer at the Kapor Center, and Amira Dhalla, Director of Impact Partnerships and Programs at Consumer Reports, about the films, about the state of AI at the intersection of race and equity, and about the importance of educating the public if we want to see change in the future of AI and human values. Consumer Reports is an independent, nonprofit organization that works side by side with consumers to create a fairer, safer, and healthier world. They do it by fighting to put consumers' needs first in the marketplace and by empowering them with the trusted knowledge they depend on to make better, more informed choices. The Kapor Center's work focuses at the intersection of racial justice and technology to create a more inclusive technology sector for all. Founded by Freada Kapor Klein and Mitch Kapor, the center seeks to develop a vision and practice to make the tech industry more diverse and inclusive. The Kapor Foundation, alongside Kapor Capital and the STEM education initiative SMASH, takes a comprehensive approach to expand access to computer science education, conduct research on disparities in the technology pipeline, support nonprofit organizations and initiatives, and invest in startups and entrepreneurs that close gaps of access for all. The Kapor Center seeks to intentionally dismantle barriers to tech and the deployment of technologies across the Leaky Tech Pipeline through research-driven practices, gap-closing investments, increased access to computer science education, supporting and partnering with mission-aligned organizations, advocating for needed policy change, and more.
In today's episode, I sit down with Vinhcent Le, Senior Legal Counsel of Tech Equity at the Greenlining Institute, an organization that works towards a future where communities of color can build wealth, live in healthy places filled with economic opportunity, and are ready to meet the challenges posed by climate change. We talk about the possibilities and limitations of regulation to address inequities in tech, the challenges of negotiating race in tech production, and how greenlining seeks to address a history of redlining. Vinhcent Le (he/him/his) leads Greenlining's work to close the digital divide, to protect consumer privacy, and to ensure algorithms are fair and that technology builds economic opportunity for communities of color. In this role, Vinhcent helps develop and implement policies to increase broadband affordability and digital inclusion as well as bring transparency and accountability to automated decision systems. Vinhcent also serves on several regulatory boards including the California Privacy Protection Agency. Vinhcent received his J.D. from the University of California, Irvine School of Law and a B.A. in Political Science from the University of California, San Diego. Prior to Greenlining, Vinhcent advocated for clients as a law clerk at the Public Defender's Office, the Office of Medicare Hearing and Appeals, and the Small Business Administration.
In this episode of the show, I continue my deep dive into data, human values, and governance with an interview featuring Lauren Maffeo. We talk about the future of data governance, its possibilities, and the catastrophe that Lauren thinks our society may need to experience in order to turn the corner on data governance and ethics. Lauren Maffeo is an award-winning designer and analyst who currently works as a service designer at Steampunk, a human-centered design firm serving the federal government. She is also a founding editor of Springer's AI and Ethics journal and an adjunct lecturer in Interaction Design at The George Washington University. Her first book, Designing Data Governance from the Ground Up, is available from The Pragmatic Programmers. Lauren has written for Harvard Data Science Review, Financial Times, and The Guardian, among other publications. She is a fellow of the Royal Society of Arts, a former member of the Association for Computing Machinery's Distinguished Speakers Program, and a member of the International Academy of Digital Arts and Sciences, where she helps judge the Webby Awards.
We're back, after a long and restful break, with a brand new season of Technically Human! In our first episode of the season, I am joined by a guest cohost, Dr. Morgan Ames, for a conversation with Janet Haven, Executive Director of Data & Society. We talk about the movement to root data and AI practices in human values, the future of automation, and the pressing needs—and challenges—of data governance. Janet Haven is the executive director of Data & Society. She has worked at the intersection of technology policy, governance, and accountability for more than twenty years, both domestically and internationally. Janet is a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden and the National AI Initiative Office on a range of issues related to artificial intelligence. She also acts as an advisor to the Trust and Safety Foundation, and has brought her expertise in non-profit governance to bear through varied board memberships. She writes and speaks regularly on matters related to technology and society, federal AI research and development, and AI governance and policy. Before joining D&S, Janet spent more than a decade at the Open Society Foundations. There, she oversaw funding strategies and worldwide grant-making related to technology, human rights, and governance, and played a substantial role in shaping the emerging international field focused on technology and accountability. Data & Society is an independent nonprofit research organization rooted in the belief that empirical evidence should directly inform the development and governance of new technologies — and that these technologies can and must be grounded in equity and human dignity. Recognizing that the concentrated, profit-driven power of corporations and tech platforms will not steer us toward a just future, their work foregrounds the power of the people and communities most impacted by technological change.
Their work studies the social implications of data, automation, and AI, producing original research to ground informed public debate about emerging technology. Dr. Morgan Ames is an adjunct professor in the School of Information and interim associate director of research for the Center for Science, Technology, Medicine and Society (CSTMS) at the University of California, Berkeley, where she teaches in Data Science and administers the Designated Emphasis in Science and Technology Studies. She is also affiliated with the Algorithmic Fairness and Opacity Working Group (AFOG), the Center for Science, Technology, Society and Policy (CTSP), and the Berkeley Institute of Data Science (BIDS).
Welcome to the final episode of the "Technically Human" season! We're ending the season with an episode of the 22 lessons on ethics and technology series, with a conversation featuring Dr. John Williams about the global imagination of tech. Dr. John Williams is a professor of English Literature at Yale University. His work is focused on international histories of technological/media innovation and the perceived difference of racial and cultural otherness. His book, The Buddha in the Machine: Art, Technology, and The Meeting of East and West (Yale University Press, 2014), examines the role of technological discourse in representations of Asian/American aesthetics in late-nineteenth and twentieth-century film and literature. The book won the 2015 Harry Levin Prize from the American Comparative Literature Association. In the conversation, we explore the diverse international histories of technological innovation and how otherness and differences have been constructed across contexts and time. The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman.
These past few weeks, as violence and instability have escalated in Sudan, I've had one particular conversation on my mind, an episode of the show that I recorded a few years back with Mohamed Abubakr. In April of this year, clashes broke out in cities across the country, with the fighting concentrated around the capital city of Khartoum and the Darfur region. As of 27 May, at least 1,800 people had been killed and more than 5,100 others had been injured. The conflict began with attacks by the paramilitary group Rapid Support Forces (RSF) on government sites across Sudan, and it has triggered a humanitarian catastrophe, with international sanctions and a global response emerging from governments, including the United States, and from international groups. In light of the conflict, I wanted to revisit the conversation I had with Mohamed, where we talked about the role that tech plays in democracy and revolution in the Middle East, to call attention to Sudan and those who are working passionately to help protect and restore democracy there, to recall the possibilities and optimism for a better Sudanese future, and to help remind us of our interconnectedness to others around the world. Mohamed Abubakr is a Sudanese human rights activist and peacemaker with a decade and a half of civil society experience. Since high school, he has founded and led organizations and initiatives focused on humanitarian, human rights, youth empowerment and peace programs across the Middle East and Africa (MEA), including in Darfur, South Sudan, Sudan, Egypt, Israel, the Palestinian Territories and beyond. Mohamed has also documented, reported and mobilized against human rights abuses across MEA, and since arriving in the United States has become a sought-after voice at the State Department and in Congress concerning policy and human rights issues in the region. Mohamed Abubakr is the president of the African and Middle Eastern Leadership Project (AMEL).
AMEL empowers young activists from the Middle Eastern and African region, and connects them with one another and with peers, leaders and audiences in the global north, in order to advance human rights for all human beings. Using online platforms, social media networks, and technological innovation, AMEL provides training, mentoring, and advocacy to African and Middle Eastern activists, empowering them to step up their civil society activism, while at the same time building their skills and experience to ascend to top leadership positions.
Today's episode focuses on the growing field of compliance and regulation. Compliance is a field of growing importance at both the national and international levels. In the EU, where emerging ethical principles governing tech have led governments to pass new laws, and where harms caused by the tech industry have provoked increasingly sharp public reactions, companies have realized that they must now abide by new reporting obligations that seek to monitor and prevent environmental mismanagement, sexual harassment, questionable lobbying, and tax offenses. Companies are increasingly seeking to protect themselves by introducing effective compliance systems so as to meet these new requirements. In the episode, I speak with Ofir Shabtai, the Co-Founder & CTO at Shield, a company building compliance systems that can serve as internal watchdogs to monitor and ensure compliance. We talk about the emerging models of governance and the compliance movements mobilizing around the world, what compliance work looks like, and how technological systems intersect with compliance and governance.
Any long-time listeners of the show know that I'm passionate about accessibility and disability technology. The idea that technology can support an equitable world, and that creating a more accessible world makes things better not just for the group a given technology is designed for but for all of us, is key to me. That's why I wanted to sit down with Suman Kanuganti, the former co-founder and CEO of Aira Tech, a high-tech startup whose work helped pioneer a way to bridge the information gap for those who are blind or low vision. At Aira, Kanuganti transformed cities, airports, and universities across the country by helping to make those spaces accessible for people who are blind or low vision. After founding Aira, Suman went on to start another company, PersonalAI, which extends the principles of accessibility and mobility to the context of memory. In founding PersonalAI, Suman sought to create an AI to support memory, and to return data ownership to the individual at this critical moment, when the assumptions that used to rule the web, under which our personal data was the property of the companies whose products we use to move through digital space in our daily lives (Facebook, Google, WhatsApp), are in flux. In this conversation, we talk about the concept of memory and its transformation in the context of digital technologies; we talk about the challenges of, and possibilities for, creating accessibility technologies; and Suman shares his vision of returning data ownership to the people. Suman Kanuganti is the CEO of PersonalAI. He holds an MBA in Entrepreneurship / Entrepreneurial Studies from the UC San Diego Rady School of Management, a Master's in Computer Engineering from the University of Missouri-Columbia, and a Bachelor's in Electrical and Electronics Engineering from Kakatiya University, India.
Welcome to another episode of the "22 Lessons on Ethics and Technology" series! In this episode, I sit down with Jason Edward Lewis to talk about how Indigenous peoples are imagining their futures while drawing upon their heritage. How can we broaden the discussions regarding technology and society to include Indigenous perspectives? How can we design and create AI that centers Indigenous concerns and accommodates a multiplicity of thought? And how can art-led technology research and computational art help us imagine the future? Jason Edward Lewis is a digital media theorist, poet, and software designer. He founded Obx Laboratory for Experimental Media, where he conducts research/creation projects exploring computation as a creative and cultural material. Lewis is deeply committed to developing intriguing new forms of expression by working on conceptual, critical, creative and technical levels simultaneously. He is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as Professor of Computation Arts at Concordia University. Lewis was born and raised in northern California, and currently lives in Montreal. Lewis directs the Initiative for Indigenous Futures, and co-directs the Indigenous Futures Research Centre, the Indigenous Protocol and AI Workshops, the Aboriginal Territories in Cyberspace research network, and the Skins Workshops on Aboriginal Storytelling and Video Game Design. Lewis' creative and production work has been featured at Ars Electronica, Mobilefest, Elektra, Urban Screens, ISEA, SIGGRAPH, FILE and the Hawaiian International Film Festival, among other venues, and has been recognized with the inaugural Robert Coover Award for Best Work of Electronic Literature, two Prix Ars Electronica Honorable Mentions, several imagineNATIVE Best New Media awards and multiple solo exhibitions.
His research interests include emergent media theory and history, and methodologies for conducting art-led technology research. In addition to being lead author on the award-winning “Making Kin with the Machines” essay and editor of the groundbreaking Indigenous Protocol and Artificial Intelligence Position Paper, he has contributed to chapters in collected editions covering Indigenous futures, mobile media, video game design, machinima and experimental pedagogy with Indigenous communities. Lewis has worked in a range of industrial research settings, including Interval Research, US West's Advanced Technology Group, and the Institute for Research on Learning, and, at the turn of the century, he founded and ran a research studio for the venture capital firm Arts Alliance. Lewis is a Fellow of the Royal Society of Canada as well as a former Trudeau, Carnegie, and ISO-MIT Co-Creation Lab Fellow. He received a B.S. in Symbolic Systems (Cognitive Science) and B.A. in German Studies (Philosophy) from Stanford University, and an M.Phil. in Design from the Royal College of Art.
Welcome back for another episode in the "22 Lessons on Ethics and Technology" series! In this episode of the series, I speak to Dr. Eric Katz, and we take on the common utopian mythology of technology as inherently progressive, focusing specifically on the frequent slide from utopianism into terror. We talk about the uses of technology during the Holocaust and the specific ways in which scientists, architects, medical professionals, businessmen, and engineers participated in the planning and operation of the concentration and extermination camps that were the foundation of the 'final solution'. How can we think about the claims of technological progress in light of the Nazis' use of science and technology in their killing operations? And what can we learn from the Nazi past about how our commitment to a vision of technological progress can go horrifically wrong? Dr. Eric Katz is Professor Emeritus of Philosophy in the Department of Humanities at the New Jersey Institute of Technology. He received a B.A. in Philosophy from Yale in 1974 and a Ph.D. in Philosophy from Boston University in 1983. His research focuses on environmental ethics, philosophy of technology, engineering ethics, Holocaust studies, and the synergistic connections among these fields. He is especially known for his criticism of the policy of ecological restoration. Dr. Katz has published over 80 articles and essays in these fields, as well as two books: Anne Frank's Tree: Nature's Confrontation with Technology, Domination, and the Holocaust (White Horse Press, 2015) and Nature as Subject: Human Obligation and Natural Community (Rowman and Littlefield, 1997), winner of the CHOICE book award for "Outstanding Academic Books for 1997." He is the editor of Death by Design: Science, Technology, and Engineering in Nazi Germany (Pearson/Longman, 2006).
He has co-edited (with Andrew Light) the collection Environmental Pragmatism (London: Routledge, 1996) and (with Andrew Light and David Rothenberg) the collection Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology (Cambridge: MIT Press, 2000). He was the Book Review Editor of the journal Environmental Ethics from 1996-2014, and he was the founding Vice-President of the International Society for Environmental Ethics in 1990. From 1991-2007 he was the Director of the Science, Technology, and Society (STS) program at NJIT. His current research projects involve science, technology, and environmental policy in Nazi Germany.
Today I'm sitting down with Talha Baig to talk about an organization that was new to me: the Integrity Institute. On the show, I've spent a lot of time talking about what I see as a new workforce emerging in the tech sector, of people working in jobs in the tech sector to try and understand, assess, and mitigate some of the harms caused by technologies. That's why I was excited to learn about the Integrity Institute, a cohort of engineers, product managers, researchers, analysts, data scientists, operations specialists, policy experts and more, who are coming together to leverage their combined experience and their understanding of the systemic causes of problems on the social internet to help mitigate these problems. They want to bring this experience and expertise directly to the people theorizing, building, and governing the social internet. So I wanted to talk to Talha, who hosts the Trust in Tech podcast out of the institute, about the concept, the function, and the future of integrity work. Talha Baig is an expert on using machine learning to address platform integrity issues. He spent three years as a machine learning engineer reducing human, drug, and weapons trafficking on Facebook Marketplace. He has insider knowledge of how platforms use AI for both good and bad, and shares his thoughts on his new podcast Trust in Tech, where he has in-depth conversations about the social internet with other platform integrity workers. They discuss the intersections between internet, society, culture, and philosophy with the goal of helping individuals, societies, and democracies to thrive.
Between 2020 and 2022, I spent a lot of time reading about ventilators. So did a lot of the country. News coverage of the pandemic talked about everything from the serious shortage of ventilators around the country to new technologies that might help save lives by helping victims of the virus breathe. From the pandemic that started in March of 2020, to the wildfires in California in August of that same year that made it difficult to breathe the outside air, I have spent a lot of time over the last few years thinking about breathing, that simple and essential activity that we'll do, mostly unconsciously, throughout our lives. And how that activity of breathing is, at this moment in history, connected to technology. That's why I wanted to talk to Aurika Savickaite, an Acute Care Nurse Practitioner and medical professional at the University of Chicago who has spent her entire career providing top-quality patient care and advocating for the use of helmet-based ventilation to improve healthcare outcomes. Aurika is a recognized expert in noninvasive ventilation via the helmet interface and has garnered widespread respect within the medical community for her passionate work in this area. In 2014, she was involved in a successful three-year trial study at the University of Chicago Medical Center that tested the effectiveness of helmet-based ventilation in the ICU. Drawing on this experience, she authored a capstone paper on Noninvasive Positive Pressure Ventilation for the Treatment of Acute Respiratory Failure in Immunocompromised Patients, which has been instrumental in raising awareness about the benefits of this technology. In March 2020, Aurika founded HelmetBasedVentilation.com, a website that has become a valuable resource for medical professionals seeking to learn more about the benefits of helmets and their use in treating patients with respiratory distress.
Aurika continues to actively manage the website and update it with the latest research and information about helmet-based ventilation. Today, Aurika is dedicated to educating clinicians about the use of helmet-based ventilation and she believes that the evidence-based information she provides can help save lives, shorten ICU stays, lower the workload for medical staff, and improve overall healthcare outcomes. Her goal is to promote the use of this technology in both ICU and non-ICU settings and help to make it more widely available to those who need it.
Welcome back to another episode in the "22 Lessons on Ethics and Technology for the 21st Century" series. In this episode of the series, we take a deep dive into the history of how technology intersects with human rights. My thinking on ethics and technology has human rights at its foundations, so I was particularly excited to sit down with Dr. Jay Aronson, one of the leading thinkers on science, technology, and human rights. We explore how technologies have coincided with the development of human rights in ethical and political terms, and we look at the role that technologies play in our contemporary moment in enforcing human rights--and violating them. Dr. Jay Aronson is the founder and director of the Center for Human Rights Science at Carnegie Mellon University. He is also Professor of Science, Technology, and Society in the History Department. Aronson's research and teaching examine the interactions of science, technology, law, media, and human rights in a variety of contexts. His current project focuses on the documentation and analysis of police-involved fatalities and deaths in custody in the United States. This work is being done through collaborations with the Pennsylvania Prison Society and Dr. Roger A. Mitchell, the Chief Medical Examiner of Washington, DC. In addition, he maintains an active interest in the use of digital evidence (especially video) in human rights investigations. In this context, he primarily facilitates partnerships between computer scientists and human rights practitioners to develop better tools and methods for acquiring, authenticating, analyzing, and archiving human rights media. Previously, Aronson spent nearly a decade examining the ethical, political, and social dimensions of post-conflict and post-disaster identification of the missing and disappeared in collaboration with a team of anthropologists, bioethicists, and forensic scientists he assembled.
This work built on his doctoral dissertation, a study of the development of forensic DNA profiling within the American criminal justice system. His recent book, Who Owns the Dead? The Science and Politics of Death at Ground Zero (Harvard University Press, 2016), which analyzes the recovery, identification, and memorialization of the victims of the 9/11 World Trade Center attacks, is a culmination of this effort. Aronson has also been involved in a variety of projects with colleagues from statistics, political science, and conflict monitoring to improve the quality of civilian casualty recording and estimation in times of conflict. Aronson received his Ph.D. in the History of Science and Technology from the University of Minnesota and was both a pre- and postdoctoral fellow at Harvard University's John F. Kennedy School of Government. His work is funded by generous grants from the MacArthur Foundation, the Oak Foundation, and the Open Society Foundations.
Welcome back to a brand new season of “Technically Human!” Today's episode features another conversation in the "22 Lessons on Ethics and Technology" series. I teach science fiction as a way of thinking about ethics and technology, because I fundamentally believe that before we can build anything, we first have to imagine it. Science fiction is at the core of so many of our technological innovations, offering us utopian visions of how the world could be, or how our values might be captured and catapulted by new technologies—or dystopias about how technology's promise can go terribly, horribly wrong. So I was thrilled to talk with Professor Lisa Yaszek, one of the world's leading experts on science fiction, for this episode, about the role of science fiction in creating a global imaginary about technology that crosses centuries, continents, and cultures. Dr. Lisa Yaszek is Regents Professor of Science Fiction Studies in the School of Literature, Media, and Communication at Georgia Tech. She is particularly interested in issues of gender, race, and science and technology in science fiction across media as well as the recovery of lost voices in science fiction history and the discovery of new voices from around the globe. Dr. Yaszek's books include The Self-Wired: Technology and Subjectivity in Contemporary American Narrative (Routledge 2002/2014); Galactic Suburbia: Recovering Women's Science Fiction (Ohio State, 2008); Sisters of Tomorrow: The First Women of Science Fiction (Wesleyan 2016); and Literary Afrofuturism in the Twenty-First Century (OSUP Fall 2020). Her ideas about science fiction as the premier story form of modernity have been featured in The Washington Post, Food and Wine Magazine, and USA Today and on the AMC miniseries James Cameron's Story of Science Fiction. A past president of the Science Fiction Research Association, Yaszek currently serves as an editor for the Library of America and as a juror for the John W.
Campbell and Eugie Foster Science Fiction Awards.
Welcome back to another episode of the 22 lessons on ethics and technology series, in a conversation with Dr. Judith Kalb about the growth of online education and technologies of virtual meeting. How have our human interactions changed with the introduction, and normalization, of online meetings? How have virtual technologies transformed our relationships to one another, and to the information we exchange when we meet? What are the ethics of learning and the transformation of what it means to learn, to teach, and to interact with our colleagues, students, and bosses online? Dr. Judith E. Kalb is a professor in the Department of Languages, Literatures, and Cultures at the University of South Carolina. She earned a BA in Slavic Languages and Literatures at Princeton University and a joint PhD in Slavic Languages and Literatures and Humanities at Stanford University. Dr. Kalb's research focuses on the interactions between Russian culture and the Greco-Roman classical tradition. Her book Russia's Rome: Imperial Visions, Messianic Dreams, 1890-1930, examines the image of ancient Rome in the writings of Russian modernists. Her new project focuses on Russia's reception of Homer. An award-winning teacher and a pioneer in online teaching and pedagogy, Dr. Kalb enjoys introducing students to the incredible world of Russian culture and the larger European literary tradition of which it forms a part. That's all for this season of “Technically Human.” We will return with new episodes in April. In the meantime, check out our archive of over 100 episodes of the show, featuring conversations with thinkers, critics, and leaders across fields and industries, and from around the world, about how we navigate our humanity in the age of technology. We'll see you in April!
Welcome back, for another episode of the “22 Lessons on Ethics and Technology” series. In this episode, I speak with Dr. Lauren Klein about the complicated relationship between data, race, and gender, and what she calls “data feminism.” What is the relationship between data visualizations, representation, and construction of categories—and difference? How have visualizations constructed race and gender? And how can a feminist data science approach help in constructing a more just and equal world? Dr. Lauren Klein is an associate professor in the Departments of English and Quantitative Theory & Methods at Emory University. She received her A.B. from Harvard University and her Ph.D. from the Graduate Center of the City University of New York (CUNY). Her research interests include digital humanities, data science, data studies, and early American literature. Before arriving at Emory, Klein taught in the School of Literature, Media, and Communication at Georgia Tech, where she directed the Digital Humanities Lab. She is currently at work on two major projects: the first, Data by Design, is an interactive book on the history of data visualization. Awarded an NEH-Mellon Fellowship for Digital Publication, Data by Design emphasizes how the modern visualizing impulse emerged from a set of complex, intellectually and politically charged contexts in the United States and across the Atlantic. Her second project, tentatively titled Vectors of Freedom, employs a range of quantitative methods in order to surface the otherwise invisible forms of labor, agency, and action involved in the abolitionist movement of the nineteenth-century United States. Dr. Klein is the author of An Archive of Taste: Race and Eating in the Early United States (University of Minnesota Press, 2020).
This book shows how thinking about eating can help to tell new stories about the range of people, from the nation's first presidents to their enslaved chefs, who worked to establish a cultural foundation for the United States. Klein is also the co-author (with Catherine D'Ignazio) of Data Feminism (MIT Press, 2020), a trade book that explores the intersection of feminist thinking and data science. With Matthew K. Gold, she edits Debates in the Digital Humanities (University of Minnesota Press), a hybrid print/digital publication stream that explores debates in the field as they emerge. The most recent book in this series is Debates in the Digital Humanities 2019.
In this episode, I speak with Dr. Nick Chatrath about the crucial role that leadership plays in the future of AI development. We talk about organizational culture, the very human leaders driving technological production, and why independent human thinking matters more than ever in the age of artificial intelligence. Dr. Nick Chatrath is an expert in leadership and organizational transformation with the aim of helping humans flourish. He holds a doctorate from Oxford University and serves as managing director for a global leadership training firm. His book, The Threshold: Leading in the Age of AI, which comes out this week and is published by Diversion Books, offers a revolutionary framework for how leaders in all kinds of organizations can adapt to the new age of technology by leaning into the qualities and skills that make us uniquely human.
Today's episode features a conversation with Medha Parlikar, about the ethics of the blockchain and cryptocurrency. We talk about the vision of what cryptocurrency could be, what dangers it might pose to our values, and what the future of cryptocurrency might look like in a web-3 world. Medha Parlikar is co-founder and chief technology officer of CasperLabs. She has more than 30 years of tech experience and is one of the top women leaders in blockchain. She is a prolific speaker, having spoken at several global conferences including Davos, LA Blockchain Summit, and NFT.LA, among others. Medha is a mentor and has worked with organizations including Strongurl to elevate and encourage women in blockchain/tech. A quick note: Medha and I recorded this episode right before some really big things happened in the crypto world. Like the crypto crash in late 2022, when the notorious crypto entrepreneur, investor, and billionaire “SBF,” or Sam Bankman-Fried, founder and CEO of the cryptocurrency exchange FTX and its associated trading firm Alameda Research, was discovered to have likely committed massive fraud. The discovery led to a high-profile collapse resulting in Chapter 11 bankruptcy in late 2022, and a massive shakeup in the crypto industry. Things might have changed in the crypto world a bit since then, but even so, neither blockchain technology nor the place of cryptocurrencies in the financial industry seems to be going anywhere, and I think the conversation stands the test of time. You be the judge!
In this week's edition of the “22 lessons on ethics and technology series,” I speak with Dr. Nassim Parvin. We talk about the ethical and political dimensions of design and technology, especially as related to values of democratic participation and social justice. How have digital technologies impacted, and how do they continue to impact, the future of social and collective interactions, particularly in the arenas of political participation and social justice? How do the designs of technologies create platforms for participation--or inhibit it? And how have the values of democracy, equity, and justice influenced the way we imagine and design the technologies that we claim will serve these values? Dr. Nassim Parvin is an Associate Professor in the Digital Media program at Georgia Tech, where she also directs the Design and Social Justice Studio. Her research explores the ethical and political dimensions of design and technology, especially as related to questions of democracy and justice. Rooted in pragmatist ethics and feminist theory, she critically engages emerging digital technologies—such as smart cities or artificial intelligence—in their wide-ranging and transformative effect on the future of collective and social interactions. Her interdisciplinary research integrates theoretically-driven humanistic scholarship and design-based inquiry, including publishing both traditional scholarly papers and creating digital artifacts that illustrate how humanistic values may be cultivated to produce radically different artifacts and infrastructures. Her scholarship appears across disciplinary venues in design (such as Design Issues), Human-Computer Interaction (such as ACM CSCW), Science and Technology Studies (such as Science, Technology, and Human Values), as well as philosophy (such as Hypatia: Journal of Feminist Philosophy).
Her designs have been deployed at non-profit organizations such as the Mayo Clinic and exhibited in venues such as the Smithsonian Museum, receiving multiple awards and recognitions. She is an editor of Catalyst: Feminism, Theory, Technoscience, an award-winning journal in the expanding interdisciplinary field of STS, and serves on the editorial board of Design Issues. Her teaching has also received multiple recognitions, including the campus-wide 2017 Georgia Tech CETL/BP Junior Faculty Teaching Excellence Award. Dr. Parvin received her PhD in Design from Carnegie Mellon University. She holds an MS in Information Design and Technology from Georgia Tech and a BS in Electrical Engineering from the University of Tehran, Iran.
We're back for another installment of the “22 Lessons on Ethics and Technology” special series. In this week's episode of the series, I am joined by Dr. Mar Hicks. This episode tells the story of labor and gender discrimination in the tech industry. Dr. Hicks explains the historical background of gendered technological production that has influenced the development of computing. In her historical outline, she explains that while women were a hidden engine of growth in high technology from World War II to the 1960s, American and British computing experienced a gender flip in the 1960s and 1970s, becoming male-identified. What can this history teach us about the need for gender equity in technological production now? And what are the consequences of continued gender inequity for our future? Professor Mar Hicks is a historian of technology, gender, and labor, specializing in the history of computing. Dr. Hicks's book, Programmed Inequality (MIT Press, 2017) investigates how Britain lost its early lead in computing by discarding the majority of its computer workers and experts--simply because they were women. Dr. Hicks's current project looks at transgender citizens' interactions with the computerized systems of the British welfare state in the 20th century, and how these computerized systems determined whose bodies and identities were allowed to exist. Hicks's work studies how collective understandings of progress are defined by competing discourses of social value and economic productivity, and how technologies often hide regressive ideals while espousing "revolutionary" or "disruptive" goals. Dr. Hicks is also co-editing a volume on computing history called Your Computer Is On Fire (MIT Press, 2021). Dr. Hicks runs the Digital History Lab at Illinois Tech.
In this week's episode, I am joined by Dr. Christopher Nguyen. We talk about the emerging concept of "human first AI," and the changing terrain of both AI ethics, and AI development. We imagine what a human-first approach to AI might look like, and what gets in the way of developing an ethical approach to AI in the tech industry. Christopher Nguyen's career spans four decades, and he has become an industry leader in the field of engineering broadly, and AI specifically. Since fleeing Vietnam in 1978, he has founded multiple tech companies and has played key roles in everything from building the first flash memory transistors at Intel to spearheading the development of Google Apps as its first Engineering Director. As a professor, Christopher co-founded the Computer Engineering program at the Hong Kong University of Science and Technology, or HKUST. He earned his Bachelor of Science degree from the University of California, Berkeley, summa cum laude, and a PhD from Stanford University. Today, he's become an outspoken proponent of the emerging field of “AI Engineering” and a thought leader in the space of ethical, human-centric AI. With his latest company, Aitomatic, he's hoping to redefine how companies approach AI in the context of life-critical, industrial applications.
This week, I turn my mic over to a guest host, for an interview with Dr. Jared Roach about the growing field of systems biology, an interdisciplinary approach transforming the biological sciences through its focus on complex interactions within biological systems. How can we update the study of biology for the 21st century? How can computational and mathematical analysis help us understand biological systems? And what can we newly see or understand about ourselves if we examine the way that complex networks interact within our bodies? Today's host, Zoë Gray, is an honors student majoring in math at Cal Poly. She has a background in electrical engineering, and she is particularly interested in considering the pace of technological development, and the ethics of a system of technological production that moves so quickly. Dr. Jared Roach, MD, PhD is a Senior Research Scientist at The Institute for Systems Biology. Starting as a graduate student in the 1990s, Roach worked on the Human Genome Project from its early days through the end of the project. Dr. Roach contributed strategic and algorithmic designs to the Human Genome Project, including the pairwise end-sequencing strategy. He was a Senior Fellow at the Department of Molecular Biotechnology at the University of Washington from 1999-2000. In 2001, he became a Research Scientist at the Institute for Systems Biology. His group currently applies systems biology and genomics to complex diseases, focusing on the systems biology architecture of Alzheimer's disease.
In this week's “22 Lessons on Ethics and Technology” special series, I sit down with Dr. Evelynn Hammonds to talk about how race and gender have shaped the histories of science, medicine, and technological development. We explore the divisions between investigations of gender within scientific and technological inquiry, and race within these same fields. How can an intersectional approach challenge our science and technologies to better serve, and include, a broader diversity of people? How have our concepts of science and technology, and our assumptions about what they can and should do, been shaped by exclusions? How can those trained and working in the Humanities learn from those trained in and working in the Sciences and Technology fields, and vice-versa? How does an understanding of the history of ideas, and the people and forces that have shaped them, inform our ability to build, innovate, and create work cultures that are more ethical and equitable? Professor Hammonds is the Barbara Gutmann Rosenkrantz Professor of the History of Science and Professor of African and African American Studies in the Faculty of Arts and Sciences, and Professor of Social and Behavioral Sciences at the Harvard T.H. Chan School of Public Health at Harvard University. She was the first Senior Vice Provost for Faculty Development and Diversity at Harvard University (2005-2008). From 2008 to 2013 she served as Dean of Harvard College, and from 2017 to 2022 as Chair of the Department of History of Science. Professor Hammonds' areas of research include the histories of science, medicine and public health in the United States; race, gender and sexuality in science studies; feminist theory and African American history. She has published articles on the history of disease, race and science, African American feminism, African-American women and the epidemic of HIV/AIDS; analyses of gender and race in science, medicine and public health; and the history of health disparities in the U.S.
Professor Hammonds' current work focuses on the history of the intersection of scientific, medical and socio-political concepts of race in the United States. She is currently director of the Project on Race & Gender in Science & Medicine at the Hutchins Center for African and African American Research at Harvard. Prof. Hammonds holds a B.S. in physics from Spelman College, a B.E.E. in electrical engineering from Georgia Tech, and an SM in physics from MIT. She earned her PhD in the history of science from Harvard University. She served as a Sigma Xi Distinguished Lecturer (2003-2005), a visiting scholar at the Max Planck Institute for the History of Science in Berlin, a Post-doctoral Fellow in the School of Social Science at the Institute for Advanced Study in Princeton, and a Visiting Professor at UCLA and at Hampshire College. Professor Hammonds was named a Fellow of the Association of Women in Science (AWIS) in 2008. She has served on the Boards of Trustees of Spelman and Bennett Colleges, and currently serves on the Board of the Arcus Foundation and the Board of Trustees of Bates College. In 2010, she was appointed to President Barack Obama's Board of Advisers on Historically Black Colleges and Universities and in 2014 to the President's Advisory Committee on Excellence in Higher Education for African Americans. She served two terms as a member of the Committee on Equal Opportunity in Science and Engineering (CEOSE), the congressionally mandated oversight committee of the National Science Foundation (NSF), the Advisory Committee of the EHR directorate of the NSF, and the Advisory Committee on the Merit Review Process of the NSF. Professor Hammonds is the current vice president/president-elect of the History of Science Society. At Harvard, she served on the President's Initiative on Harvard and the Legacy of Slavery and the Faculty Executive Committee of the Peabody Museum, and she chaired the University-wide Steering Committee on Human Remains in the Harvard Museum Collections.
She also works on projects to increase the participation of men and women of color in STEM fields. Prof. Hammonds is the co-author of the National Academy of Sciences (NAS) report Transforming Technologies: Women of Color in Tech, released December 9, 2021. She is a member of the Committee on Women in Science, Engineering, and Medicine (CWSEM) of the NAS and the NAS Roundtable on Black Men and Black Women in Science, Engineering and Medicine. She is an elected member of the National Academy of Medicine (NAM) and the American Academy of Arts and Sciences. She holds honorary degrees from Spelman College and Bates College. For the academic year 2022-2023, Prof. Hammonds is the inaugural Audre Lorde Visiting Professor of Queer Studies at Spelman College.
This week, we continue our “22 Lessons on Ethics and Technology” series with a conversation with Dr. John Basl about how our relationship with tech is changing what he calls an “ethic of life,” an ethical perspective on which all living things deserve some level of moral concern. Professor Basl is an associate professor of philosophy in the Department of Philosophy and Religion at Northeastern University and a faculty associate at the Edmond J. Safra Center for Ethics and the Berkman Klein Center for Internet and Society at Harvard University. He works primarily in moral philosophy and applied ethics, especially on issues related to emerging technologies. He is an editorial board member for the new journal AI and Ethics. His most recent book, The Death of the Ethic of Life, is available from Oxford University Press. And that's all for this season! We are staying off our technologies for the winter break—we'll be back with more episodes of the Technically Human podcast in 2023. The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman. Don't forget to subscribe to the show to make sure you don't miss an episode! You can find us on your favorite podcast app--Apple Podcasts, Google Play, Spotify—or wherever you get your podcasts. Enjoy the break, and we'll see you in January.
In this episode of "Technically Human," I host Chris Leong and Maria Santacaterina for a conversation about the growing pervasiveness of sociotechnical systems. You may not know the term "sociotechnical system," but if you've booked a flight online, tried to reach an agent on the DMV's hotline, or tried to contact your congressperson, you almost certainly have interacted with one of them. How have sociotechnical systems changed the way we access services, the way we spend our time, and the way we interact with one another? What are the benefits--and the consequences--of living in a world increasingly organized and processed through these systems? Maria Santacaterina is a Global Strategic Leader & Board Executive Advisor, who has worked in 100+ markets and has over 30 years international experience. She focuses on leading growth, strategic change and digital business transformation, particularly on the level of corporate culture and strategy. She advocates for a new approach to futurist imagining, which she calls “adaptive resilience,” in order to build enduring value and values; while responding to an accelerating rate of change, complexity and exponential technological disruption. Chris Leong is a Transformation and Change Leader with a career spanning over 30 years in financial services, enterprise software and consulting industries globally. He thinks about, writes, and advises on the impacts of automated decision-making and profiling outcomes from all digital services on customers and consumers, the trustworthiness of Socio-Technical Systems and the organisations that deploy them. Together, Maria and Chris have co-authored several landmark articles on STSs, including their piece "Responsible Innovation: Living with socio-technical systems" and "Have you outsourced to a sociotechnical system." Enjoy the episode, and thanks for tuning in! 
We're off next week for the Thanksgiving break—join us the first week of December for a new episode of the “22 Lessons in Ethics and Technology” series. To learn more about the 22 Lessons on Ethical Technology series, visit www.etcalpoly.org. And don't forget to subscribe to the show so that you don't miss an episode. You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts! We'll see you in December.
Welcome to another interview in the "22 Lessons in Ethics and Technology" series! In this episode, I speak with Dr. Pavel Cenkl, about the need for intellectual diversity and multidimensional approaches to technological solutions to the major problems of our time. Professor Cenkl discusses how the major problems we face require that we bring together people trained in a wide variety of approaches. Focusing on environmental issues--climate change, ecological destruction, and the possible proliferation of future pandemics--we consider how ethical approaches to technology depend on thinking across boundaries of ideas and including voices across a variety of institutions, cultures, and experiences. Dr. Pavel Cenkl is the Head of Schumacher College and Director of Learning and Land at Dartington Trust. He has worked for more than two decades in higher education in America and has always been drawn to colleges and universities whose curriculum fully integrates learning with practice and thinking with embodiment. Having taught and served as Dean for nearly 15 years at Vermont's Sterling College, Pavel brings a depth of experience to Schumacher College's unique approach to experiential learning. While pursuing research in ecologically-minded curriculum design and teaching courses in environmental philosophy, Dr. Cenkl is also a passionate endurance and adventure runner. Over the past five years through a project called Climate Run, he has covered hundreds of miles in the Arctic and subarctic on foot in order to bring attention to the connections between our bodies and the more-than-human world in the face of a rapidly changing climate. Dr. Cenkl holds a Ph.D. 
in English and is the author of many articles, chapters, and two books: Nature and Culture in the Northern Forest: Region, Heritage, and Environment in the Rural Northeast (Iowa City: University of Iowa Press, 2010); and This Vast Book of Nature: Writing the Landscape of New Hampshire's White Mountains, 1784-1911 (Iowa City: University of Iowa Press, 2006). He is currently working on a book titled Resilience in the North: Adventure, Endurance, and the Limits of the Human, which threads together personal narrative and observation with environmental philosophy and reflections on what it means to be human.
Welcome to our 3rd episode of the "22 Lessons on Ethical Technology" series! We will be releasing new episodes in the series every first and second Friday of the month through the duration of the series. In this episode, I sit down with Dr. N. Katherine Hayles, one of the founding theorists of posthumanism, a key term for understanding the changing and dynamic relationship between humans and machines in the digital age. What is the role of the Humanities in understanding our relationship to technology? How have our technological innovations changed the nature of “the human?" And what is the future of the human relationship to our machines--and to our understanding of ourselves? Dr. N. Katherine Hayles is a Distinguished Research Professor of English at the University of California, Los Angeles and the James B. Duke Professor of Literature Emerita at Duke University. She teaches and writes on the relations of literature, science and technology in the 20th and 21st centuries. Her most recent book, Postprint: Books and Becoming Computational, was published by Columbia University Press (Spring 2021). Among her many books is her landmark work How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, which won the Rene Wellek Prize for the Best Book in Literary Theory for 1998-99, and Writing Machines, which won the Suzanne Langer Award for Outstanding Scholarship. She has been recognized by many fellowships and awards, including two NEH Fellowships, a Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships. Dr. Hayles is a member of the American Academy of Arts and Sciences. She holds a B.S. from the Rochester Institute of Technology, an M.S. from the California Institute of Technology, an M.A. from Michigan State University, and a Ph.D. from the University of Rochester. Within the field of Posthuman Studies, Dr.
Hayles' book How We Became Posthuman is considered "the key text which brought posthumanism to broad international attention." Her work has laid the foundations for multiple areas of thinking across a wide variety of urgent issues surrounding technology, including cybernetic history, feminism, postmodernism, and cultural and literary criticism, and is vital to our ongoing conversations about the changing relationship between humans and the technologies we create.
In this episode of "Technically Human," I give my mic over to two guest hosts, David Geitner and Roman Rosser, to interview Dr. Robert Pearl about the intersection between tech, medicine, and our health. Dr. Pearl answers questions about the way that technologies are radically reshaping health care; the hosts ask questions about bias in medicine; and the group discusses the ways in which our current system fails to treat us, well, well. Dr. Robert Pearl is the former CEO of The Permanente Medical Group (1999-2017), the nation's largest medical group, and former president of The Mid-Atlantic Permanente Medical Group (2009-2017). He serves as a clinical professor of plastic surgery at Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business, where he teaches courses on strategy and leadership, and lectures on information technology and health care policy. He is the author of Mistreated: Why We think We're Getting Good Healthcare—And Why We're Usually Wrong, a Washington Post bestseller that offers a roadmap for transforming American healthcare. His new book, Uncaring: How the Culture of Medicine Kills Doctors & Patients is available now. All proceeds from these books go to Doctors Without Borders. Dr. Pearl hosts the popular podcasts "Fixing Healthcare" and Coronavirus: The Truth. He publishes a newsletter with over 12,000 subscribers called HYPERLINK "https://robertpearlmd.com/newsletter/" Monthly Musings on American Healthcare and is a regular contributor to Forbes. He has been featured on CBS This Morning, CNBC, NPR, and in TIME, USA Today and Bloomberg News. David Geitner is a third-year Biological Sciences major and Frost Scholar at California Polytechnic State University. He grew up in Yuba City California where he learned to love science, sports, community service, and the outdoors. He works in an on-campus research lab working with protein phosphomimetics for protein-to-protein interactions. 
David aspires to become a dentist, as he believes quality dental care is a necessity for society, and he hopes to serve his country as a military dentist. Roman Rosser is a student studying Aerospace Engineering at Cal Poly, San Luis Obispo. He recently joined the PROVE team, which is building a long-distance electric car. Roman hopes to work on designing or building new vehicles and has a particular passion for orbital rockets. His hobbies include lifting, backpacking, surfing, and reading. A special thank you to David Geitner and Roman Rosser for hosting this week's episode, and to Dr. Pearl for joining us for the show. We'll be back next week with another episode of the “22 Lessons in Ethical Technology” special series, so stay tuned! You can find more information about the 22 Lessons series and the Technically Human Podcast on our website, www.etcalpoly.org. And don't forget to subscribe to the show! You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts.