Natural, artificial, or technological alteration of the human body
Welcome back to the second half of this eye-opening discussion with Ben Lamm, CEO of Colossal Biosciences. In Part 2, Tom and Ben dig even deeper, tackling the massive ethical questions and transformative possibilities that arise when humans hold the keys to edit and design life itself. Whether it's confronting the future of embryo screening, germline editing, the potential for designer babies, or the international arms race in biotechnology—no topic is off limits. Ben shares insider stories on Colossal's dire wolf project, explains misconceptions about cloning, and reveals the unexpected hurdles and breakthroughs in the world of synthetic biology. They discuss how these advances can directly impact human longevity, environmental crises like plastic pollution, and even set the stage for building the living cities and ocean habitats of tomorrow. This is a no-holds-barred, jam-packed episode for anyone intrigued by the future of engineering life—and the urgent questions we all must face as the bio-revolution unfolds.

SHOWNOTES
33:30 – The Moral and Ethical Responsibility of ‘Playing God’
34:58 – Human Genome Editing, Embryo Selection, and the Coming Revolution in IVF
39:14 – The Personal Side: Ben's Own IVF Journey and Making Hard Choices
44:41 – The Slippery Slope: Intelligence, Disease, and Future Human Potential
51:36 – International Competition: US vs. China in Biotech and Human Enhancement
58:46 – Accelerating Gene Editing: Multiplexing, Cloning, and Animal Selection
1:09:12 – Rewilding and Ecosystem Impact: Fact vs. Jurassic Park Fiction
1:22:42 – The Longevity Escape Velocity and Radical Life Extension
1:44:08 – Innovations for the Planet: Enzymes That Break Down Plastic & Ocean Engineering
1:49:40 – The Future of Synthetic Biology: Designed Environments, Health, and Next-Gen Conservation

FOLLOW BEN LAMM
Twitter/X: @federallamm
LinkedIn: Ben Lamm

CHECK OUT OUR SPONSORS
ButcherBox: Ready to level up your meals? Go to https://ButcherBox.com/impact to get $20 off your first box and FREE bacon for life with the Bilyeu Box!
Vital Proteins: Get 20% off by going to https://www.vitalproteins.com and entering promo code IMPACT at checkout
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
NetSuite: Download the CFO's Guide to AI and Machine Learning at https://NetSuite.com/THEORY
iTrust Capital: Use code IMPACTGO when you sign up and fund your account to get a $100 bonus at https://www.itrustcapital.com/tombilyeu
Mint Mobile: If you like your money, Mint Mobile is for you. Shop plans at https://mintmobile.com/impact. DISCLAIMER: Upfront payment of $45 for 3-month 5 gigabyte plan required (equivalent to $15/mo.). New customer offer for first 3 months only, then full-price plan options available. Taxes & fees extra. See MINT MOBILE for details.

What's up, everybody? It's Tom Bilyeu here: If you want my help...
STARTING a business: join me here at ZERO TO FOUNDER
SCALING a business: see if you qualify here.
Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here.

If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook—a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you.

LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory

FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
TikTok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

Learn more about your ad choices. Visit megaphone.fm/adchoices
What happens when millions of people take health into their own hands? In this episode of The Augmented Life, Michael Tiffany sits down with Shelby Newsad, biohacker, investor, and principal author of A Biohacker Future, to explore the explosive intersection of AI, gene therapy, neuromodulation, and self-experimentation.
Shelby shares what inspired her to write one of the most ambitious pieces of biotech futurism to date—and why she believes biohacking is not just a fringe movement, but a full-blown social shift toward radical individual empowerment.
Topics include:
• The rise of the biohacker of necessity
• The future of follistatin gene therapy and muscle enhancement
• Ultrasonic neuromodulation and the fast-tracking of meditation mastery
• How ChatGPT is changing the doctor-patient power dynamic
• The role of AI therapy, context-aware LLMs, and digital self-understanding
• Why optimism may actually be a sign of higher intelligence
This episode is a deep, exploratory ride through what's possible when curiosity meets capability, and why the next wave of biotech innovation might start with you.
Learn more about Michael Wenderoth, Executive Coach: www.changwenderoth.com
In this episode of 97% Effective, host Michael Wenderoth continues his conversation with Alvaro Fernandez, CEO of SharpBrains. They dive deeper into the specific actions that transformed SharpBrains from a blog into the “go to” platform on brain fitness – creating a profitable business that has enabled Alvaro to stay at the forefront of a field he is deeply passionate about. They discuss the power of physical proximity in building relationships, harnessing industry controversy to gain media attention, and becoming a hub to detect signals and latent needs. You'll gain insights into sharp ways to amplify your influence – and smile listening to Alvaro's personal encounters with the late greats Larry King, the CNN legend, and Dr. Marian Diamond, the brain science pioneer.
SHOW NOTES:
Getting started: How Alvaro first connected, as an outsider, with leaders in neuroscience
“At the very beginning, nothing really replaces physical proximity”
The power of visibility: How to be featured in the popular press and get invited to the World Economic Forum
The importance of having a good online presence so people can find you
Navigating hiccups: What SharpBrains stopped doing when people started trusting them less
“Controversies are not bad”: How 3 media uproars brought SharpBrains massive media attention
The power of “connecting the dots” for journalists
“You are Dr. Fernandez for the day!”: When Alvaro was interviewed by Larry King on CNN
How being the hub of the network brings people and opportunities to you
When Dr. Marian Diamond reached out: joy, a keynote, and lifelong respect
How conferences can be powerful mechanisms that provide interesting signals
Why you need to quiet your mind and listen
Get bored easily? The importance of creating a business that is consistent with your values and priorities - but demands that you grow and adapt
Navigating tradeoffs
Big Idea #1: The top way to start developing your neuroplasticity (from a lunch chat with the late Dr. Marian Diamond)
Big Idea #2: What companies most want to address employee mental health – but what SharpBrains believes they most need (feel free to copy and spread their idea!)
Got an idea about how to harness the human brain? How to reach Alvaro.
BIO AND LINKS:
Álvaro Fernández Ibáñez is CEO of SharpBrains, the leading brain fitness and neuroscience think tank and advisory. Named a Young Global Leader by the World Economic Forum, Alvaro has been quoted by The New York Times, The Wall Street Journal, BBC, AP, CNN, and more. He co-authored the book The SharpBrains Guide to Brain Fitness: How to Optimize Brain Health and Performance at Any Age, and is the Editor-In-Chief for seminal market reports on neurotechnology and digital brain health. Alvaro enjoys serving in the World Economic Forum's Council on the Future of Human Enhancement, and in the Global Teacher Prize Academy run by the Varkey Foundation. He holds an MBA and MA in Education from Stanford University and a BA in Economics from Universidad de Deusto, in his native Spain.
The previous episode on 97% Effective:
Linkedin: https://www.linkedin.com/in/alvarofernandez/
SharpBrains Advisors: https://sharpbrains.com
His book, SharpBrains Guide to Brain Fitness: https://a.co/d/gRrA0qC
Public speaking: Alvaro featured on CNN's Larry King Live and other recent talks: https://sharpbrains.com/about-us/speaking/
Becoming the hub of the network: HBR article “Find networking stressful? Try becoming a connector instead” https://changwenderoth.com/articles/
Understanding the brain, with the #2 most popular college professor in the world: Dr. Marian Diamond, “My Love Affair with the Brain”: https://www.youtube.com/watch?v=fIB1v0pLhNM
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Learn more about Michael Wenderoth, Executive Coach: www.changwenderoth.com
How do you get out of the rat race and live your ideal life? How can you set up a business that aligns with your passions, and provides financial independence? Alvaro Fernandez did just that – turning a blog into SharpBrains, the leading think tank and advisory in brain fitness. In this episode of 97% Effective, host Michael Wenderoth picks Alvaro's “sharp brain” to understand the mindset and strategies that transformed SharpBrains into the “go to” – and amplified Alvaro's influence and impact. What resources are waiting for you to create? What powerful community can you activate to propel you and your ideas?
SHOW NOTES:
Economics and seeing the brain as an asset
A brief history of SharpBrains: from blog in 2005 to leading advisory in 2025
“Becoming a platform to add value and fill a needed gap”: The secret sauce that underpins SharpBrains and has powered their success
What brings credibility and trust?
SharpBrains is NOT a non-profit – and did not raise outside capital! How Alvaro not only created value, but captured value – and made money.
Find latent needs + prototype and don't overthink + quickly test: How SharpBrains assessed new business ideas
Alvaro's hard truth: the ability to say no to things you want to do, but may ultimately destroy your core
The power of creating content: How Alvaro would cut through the noise today
Two key conditions that cause a community to appear – and power you
How to prevent people from “stealing” your idea: Ideas vs. Execution and knowing your business model
BIO AND LINKS:
Álvaro Fernández Ibáñez is CEO of SharpBrains, the leading brain fitness and neuroscience think tank and advisory. Named a Young Global Leader by the World Economic Forum, Alvaro has been quoted by The New York Times, The Wall Street Journal, BBC, AP, CNN, and more. He co-authored the book The SharpBrains Guide to Brain Fitness: How to Optimize Brain Health and Performance at Any Age, and is the Editor-In-Chief for seminal market reports on neurotechnology and digital brain health. Alvaro enjoys serving in the World Economic Forum's Council on the Future of Human Enhancement, and in the Global Teacher Prize Academy run by the Varkey Foundation. He holds an MBA and MA in Education from Stanford University and a BA in Economics from Universidad de Deusto, in his native Spain.
Linkedin: https://www.linkedin.com/in/alvarofernandez/
SharpBrains Advisors: https://sharpbrains.com
His book, SharpBrains Guide to Brain Fitness: https://a.co/d/gRrA0qC
Public speaking: Alvaro featured on CNN's Larry King Live and other recent talks: https://sharpbrains.com/about-us/speaking/
Pervasive Neurotechnology market report: https://sharpbrains.com/pervasive-neurotechnology/
Michael's book, Get Promoted: https://tinyurl.com/453txk74
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Imagine a future in which Band-Aids talk to your cells, pacemakers are powered by light and your gut microbiome gets a tune-up—all thanks to tiny bioelectric devices. Sounds like sci-fi, right? Think again. Prof. Bozhi Tian of the University of Chicago is on the frontier of bioelectronics, building living machines that can heal, enhance and maybe even transform what it means to be human. In this episode, he explains his research lab's work and explores the thrilling, strange and sometimes unsettling world in which biology meets technology.
Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/385-ai-utopia Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don’t perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes’s predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics. Nick Bostrom is a professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has framed much of the current thinking around humanity’s future (such as the concept of existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, and the unilateralist’s curse). He has been on Foreign Policy’s Top 100 Global Thinkers list twice, and was the youngest person to rank among the top 15 in Prospect’s World Thinkers list. He has an academic background in theoretical physics, AI, computational neuroscience, and philosophy. His most recent book is Deep Utopia: Life and Meaning in a Solved World. Website: https://nickbostrom.com/ Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
What if everything you know is just a simulation? In 2022, I was joined by the one and only Nick Bostrom to discuss the simulation hypothesis and the prospects of superintelligence. Nick is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, there is no one better to answer this question than him! Tune in.
Key Takeaways:
00:00:00 Intro
00:00:44 Judging a book by its cover
00:05:22 How could an AI have emotions and be creative?
00:08:22 How could a computing device / AI feel pain?
00:13:09 The Turing test
00:20:02 The simulation hypothesis
00:22:27 Is there a "Drake Equation" for the simulation hypothesis?
00:27:16 Penrose's orchestrated objective reduction
00:34:11 SETI and the prospect of extraterrestrial life
00:49:20 Are computers really getting "smarter"?
00:53:59 Audience questions
01:01:09 Outro
Additional resources:
FM-0095: BioHacking, Starting Your Human Enhancement. Dr. Mark & Dr. Michele lead the way in today's technology of physiology, and the trend getting all the attention is making the most of your body through BioHacking. Plus, taking control of gluten and other allergens.
Get a FREE chapter of Fork Your Diet: http://forkyourdiet.com
For Functional Medical Institute supplements: https://shop.fmidr.com/
Financial consulting for your future: https://kirkelliottphd.com/sherwood/
To find out more information about the plan Kevin Sorbo uses with the Functional Medical Institute: https://sherwood.tv/affiliate/?id=152...
To watch “Fork Your Diet” on Amazon Prime: https://www.amazon.com/gp/video/detail/B07RQW5S94/ref=atv_dp_share_cu_r.
To watch the hit comedy "W.W.J.R. - When Will Jesus Return" starring Zoltan Kaszas & Dr. Mark Sherwood: https://www.vudu.com/content/movies/d...
Our privacy policy & disclaimer apply to this video. You can view the details here: https://fmidr.com/privacy-polcy
Why do there seem to be more dystopias than utopias in our collective imagination? Why is it easier to find agreement on what we don't want than on what we do want? Do we simply not know what we want? What are "solved worlds", "plastic worlds", and "vulnerable worlds"? Given today's technologies, why aren't we working less than we potentially could? Can humanity reach a utopia without superintelligent AI? What will humans do with their time, and/or how will they find purpose in life, if AIs take over all labor? What are "quiet" values? With respect to AI, how important is it to us that our conversation partners be conscious? Which factors will likely make the biggest differences in terms of moving the world towards utopia or dystopia? What are some of the most promising strategies for improving global coordination? How likely are we to end life on earth? How likely is it that we're living in a simulation?
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He's been a Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute from 2005 until its closure in April 2024. He is currently the founder and Director of Research of the Macrostrategy Research Initiative. Bostrom is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014). His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His most recent book, Deep Utopia: Life and Meaning in a Solved World, was published in March of 2024. Learn more about him at his website, nickbostrom.com.
Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com
Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
On today's Equipping You in Grace show, Dave considers what human enhancement is and why it matters, human enhancement from a secular worldview and a biblical worldview, a biblical framework for thinking about enhancement technologies, and much more.
What you'll hear in this episode
What human enhancement is and why it matters.
Human Enhancement and a Secular Worldview.
Human Enhancement and a Biblical Worldview.
Three Distinctions for Thinking About Enhancements.
Basic Ethics Questions to Consider.
A Biblical Framework for Thinking About Enhancement Technologies.
Subscribing, sharing, and your feedback
You can subscribe to Equipping You in Grace via iTunes, Google Play, or your favorite podcast catcher. If you like what you've heard, please consider leaving a rating and sharing it with your friends (it only takes a second and will go a long way to helping other people find the show). You can also connect with me on Twitter at @davejjenkins, on Facebook, or via email to share your feedback.
Thanks for listening to this episode of Equipping You in Grace!
At their February 2024 meeting, Allied Defence Ministers formally adopted NATO's Biotechnology and Human Enhancement Technologies Strategy. Current NATO staff driving the development and delivery of this Strategy outline one of its main features: the first-ever set of Principles of Responsible Use for Biotechnology and Human Enhancement technologies in defence and security.
Join us as Tom Paladino shares his groundbreaking journey into the realm of scalar energy and its astounding potential to heal and transform lives. Listen in as he recounts his inspirations drawn from the legendary Nikola Tesla and discusses his innovative approach to remote healing through photographs, tapping into the informational essence of our very being. Our conversation sheds light on the seamless connection between all life through the universal energy fabric and teases the possibilities for future healing modalities.
Connect with Tom. Website: https://www.scalarlight.com/
#soulawakening #consiousness #innerwisdom #quantumfield #higherdimensions #lightbody #raiseyourfrequency #conciousness #thirdeyeawakening #metaphysics #quantumhealing #ascendedmasters #consciousawakening #awakenyoursoul #thirdeyethirst #manifestingdreams #powerofpositivtiy #spiritualawakenings #higherconscious #spiritualthoughts #lightworkersunited #highestself #positiveaffirmation #loaquotes #spiritualinspiration #highvibrations #spiritualhealers #intuitivehealer #powerofthought #spiritualityreignssupreme
--- Support this podcast: https://podcasters.spotify.com/pod/show/thehiddengateway/support
For decades, philosopher Nick Bostrom (director of the Future of Humanity Institute at Oxford) has led the conversation around technology and human experience (and grabbed the attention of the tech titans who are developing AI - Bill Gates, Elon Musk, and Sam Altman). Now, a decade after his NY Times bestseller Superintelligence warned us of what could go wrong with AI development, he flips the script in his new book Deep Utopia: Life and Meaning in a Solved World (March 27), asking us to instead consider “What could go well?” Ronan recently spoke to Professor Nick Bostrom. Professor Bostrom talks about his background, his new book Deep Utopia: Life and Meaning in a Solved World, why he thinks advanced AI systems could automate most human jobs and more. More about Nick Bostrom: Swedish-born philosopher Nick Bostrom was founder and director of the Future of Humanity Institute at Oxford University. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, his work has pioneered some of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit.
Join my mailing list https://briankeating.com/list to win a real 4-billion-year-old meteorite! All .edu emails in the USA
Nick Bostrom is a Professor at Oxford University and the founding director of the Future of Humanity Institute. Nick is also the world's most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked a global conversation about the future of AI. His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list. He has just published a new book called “Deep Utopia: Life and Meaning in a Solved World.”
What you will learn
Find out why Nick is spending time in seclusion in Portugal
Nick shares the big ideas from his new book “Deep Utopia”, which dreams up a world perfectly fixed by AI
Discover why Nick got hooked on AI way before the internet was a big deal and how those big future questions sparked his path
What would happen to our jobs and hobbies if AI races ahead in the creative industries? Nick shares his thoughts
Gain insights into whether AI is going to make our conversations better or just make it easier for people to push ads and political agendas
Plus loads more!
We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains. Nick Bostrom, a professor at the University of Oxford and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that, in this century, we will create the first general intelligence that will be smarter than humans. He sees this as the most important thing humanity will ever do, but it also comes with an enormous responsibility. Bostrom notes that there are existential risks associated with the transition to the machine intelligence era, such as the possibility of an underlying superintelligence that overrides human civilization with its own value structures. In addition, there is the question of how to ensure that conscious digital minds are treated well. However, if we succeed in ensuring the well-being of artificial intelligence, we could have vastly better tools for dealing with everything from diseases to poverty. Ultimately, Bostrom believes that the development of machine superintelligence is crucial for a truly great future. ---------------------------------------------------------------------------------------------------------------------------------------- chapters: 0:00 Smarter than humans 0:57 Brains: From organic to artificial 1:39 The birth of superintelligence 2:58 Existential risks 4:22 The future of humanity -------------------------------------------------------------------------------------------------------------------------------------------------------------------- Go Deeper with Big Think: ►Become a Big Think Member Get exclusive access to full interviews, early access to new releases, Big Think merch and more ►Get Big Think+ for Business Guide, inspire and accelerate leaders at all levels of your company with the biggest minds in business -------------------------------------------------------------------------------------------------------------------------------------------------------------------- About Nick Bostrom: Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002). Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots. 
---------------------------------------------------------------- About Big Think | Smarter Faster™ ► Big Think The leading source of expert-driven, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, Big Think helps you get smarter, faster by exploring the big ideas and core skills that define knowledge in the 21st century. Get Smarter, Faster. With Episodes From The World's Biggest Thinkers. Follow The Podcast And Turn On The Notifications!! Share This Episode If You Found It Valuable Leave A 5-Star Review... Learn more about your ad choices. Visit megaphone.fm/adchoices
Join the conversation with Lieutenant Colonel Dr. Jeanne Krick, as she brings the weighty world of medical ethics into focus, sharing her journey from Neonatology to being the Army Surgeon General's consultant for Medical Ethics. Our discussion orbits the moral quandaries that surface in military medicine, dissecting the intricate balance between patient autonomy and the rigors of military policy and regulations. Dr. Krick provides a riveting narrative on the daily impact of ethical decision-making and the burgeoning field of bioethics. Join us for a candid discussion about what it takes to make life-and-death decisions when duty, honor, and humanity intersect. As we unpack the layers of Dr. Krick's expertise, the fabric of military medical ethics is revealed in its full complexity. The establishment and significance of ethics committees take center stage, as we navigate through complex scenarios where commanders and medical professionals must align on treatment decisions for service members. Dr. Krick's role in shaping policies at a non-deployed level juxtaposes the high-stakes ethical calls required in active war zones, sparking a conversation on the critical need for robust ethical guidelines and training. The forecast for military medical ethics is a combination of change, challenges, and innovation as we look to the horizon where artificial intelligence and human enhancement technologies promise to redefine the boundaries of healthcare. Dr. Krick's insights on the ethical dimensions of AI in medicine, the military's stance on pandemic responses, and the intricacies of cultural sensitivity within patient confidentiality offer a guide for navigating these uncharted waters. Her perspective underscores the importance of early ethicist involvement in policy-making and the role of shared decision-making in aligning medical actions with patients' values. For medical professionals, ethicists, or anyone intrigued by the moral challenges of healthcare, this episode is an indispensable look into the courageous work of those who serve in medicine's toughest arenas. Chapters: (00:00) Exploring Medical Ethics and Consultations (10:21) Military Medical Ethics and Committees (18:23) Ethics in Healthcare and Deployed Settings (30:28) Cultural Differences and Patient Confidentiality (36:19) AI Impact on Medical Ethics (44:54) Medical Ethics and Decision-Making Challenges (50:03) Future of Military Medical Ethics Chapter Summaries: (00:00) Exploring Medical Ethics and Consultations Dr. Jeanne Krick discusses the impact of her bioethics training and education on her problem-solving approach in military medicine and the evolving horizon of medical ethics. (10:21) Military Medical Ethics and Committees Military medical ethics, diverse committees, and educational opportunities for ethical training within the military healthcare system. (18:23) Ethics in Healthcare and Deployed Settings Patient-centered care, organizational ethics, resource allocation, and treatment of enemy combatants in deployed environments. (30:28) Cultural Differences and Patient Confidentiality Cultural differences in medical ethics, patient autonomy, confidentiality, and military readiness are discussed with real-life scenarios. (36:19) AI Impact on Medical Ethics Ethical considerations in AI healthcare, human enhancement in the military, and balancing autonomy and mission readiness during pandemics. 
(44:54) Medical Ethics and Decision-Making Challenges Equipping medical students with ethical tools, understanding principles and care, reconciling legal constraints, and debating neonatology. (50:03) Future of Military Medical Ethics Future of medical ethics in military medicine, involving ethicists in policy-making, rapid decision-making in emergencies, and balancing guidance with patient wishes. Take Home Messages: Medical ethics in the military setting require balancing individual autonomy with military protocol, highlighting the unique ethical challenges faced by military medical professionals. The journey from neonatology to a consultant for the Army Surgeon General underscores the importance of interdisciplinary backgrounds and analytical thinking in navigating complex ethical decisions in military medicine. The role of ethics committees in military medical treatment facilities is critical, offering diverse perspectives and aiding in difficult decision-making processes when commanders and medics must align on service member treatment. Ethical training and guidelines are essential for military healthcare providers, particularly in deployed settings where high-pressure situations demand rapid and morally sound decision-making. Cultural sensitivity and confidentiality issues present unique ethical dilemmas in military medicine, necessitating careful consideration of cultural relativism and the intent behind sharing medical information within the command structure. The advent of artificial intelligence and human enhancement technologies in healthcare brings forth new ethical dimensions that require transparency and the involvement of ethicists to ensure moral foundations are integrated. The COVID-19 pandemic has highlighted the need for robust ethical frameworks in military medicine, particularly regarding vaccinations and individual autonomy versus mission readiness. Early ethicist involvement in policy-making and shared decision-making processes is key to aligning medical actions with patients' values, ensuring that care remains patient-centered even amidst rapid changes in the medical landscape. Medical students, especially those in military programs, must be equipped with a strong ethical toolkit to face the challenges of contemporary and future medical practice, including varying treatment approaches and legal constraints. The future of military medical ethics points towards an increase in formal ethics training and the early incorporation of ethical considerations in policy-making to better prepare for complex situations such as pandemics and large-scale combat operations. 
Episode Keywords: Medical Ethics, Military Medicine, Bioethics, Ethical Decision-Making, Patient Autonomy, Military Protocol, Ethics Committees, Artificial Intelligence, Cultural Sensitivity, Patient Confidentiality, Healthcare, Ethics Consultations, Military Healthcare System, Ethical Training, Organizational Ethics, Resource Allocation, Combat Operations, Cultural Relativism, AI Algorithms, Human Enhancement, Informed Consent, Pandemic Response, Vaccinations, Harm Principle, Ethical Toolkit, Ethics of Care, Legal Constraints, Neonatology, Formal Ethics Training, Shared Decision-Making, Emergency Situations, Guidance Hashtags: #wardocs #military #medicine #podcast #MilMed #MedEd #MilitaryMedicalEthics #DrJeanneKrick #BioethicsInUniform #HealthcareOnTheFrontlines #EthicalDecisionMaking #ArtificialIntelligenceEthics #PatientAutonomy #MedicalEthicsTraining #NeonatologyEthics #CulturalSensitivityInMedicine Other Medical Ethics Resources: -DoD Medical Ethics Center- https://www.usuhs.edu/research/centers/dmec The DMEC is situated out of USUHS and has several resources for those in uniform on medical ethics (I am a little embarrassed that I forgot to mention them in the actual interview last night...). Their website has a link to their internal training course, which is really a series of YouTube videos that cover some basic bioethics topics. They also have an app (I believe it's available through all the usual sources and on all devices) that is free to download and has plenty of resources. The app could be a great resource for folks looking for more material, especially in austere environments. -American Society for Bioethics and Humanities- https://asbh.org/ This is the main organization for medical ethics within the US. There are links to many helpful resources on their site, including professional development, endorsed meetings, and guidelines/standards for clinical ethics consultation. Honoring the Legacy and Preserving the History of Military Medicine The WarDocs Mission is to honor the legacy, preserve the oral history, and showcase career opportunities, unique expeditionary experiences, and achievements of Military Medicine. We foster patriotism and pride in Who we are, What we do, and, most importantly, How we serve Our Patients, the DoD, and Our Nation. Find out more and join Team WarDocs at https://www.wardocspodcast.com/ Check our list of previous guest episodes at https://www.wardocspodcast.com/episodes Listen to the “What We Are For” Episode 47. https://bit.ly/3r87Afm WarDocs- The Military Medicine Podcast is a Non-Profit, Tax-exempt-501(c)(3) Veteran Run Organization run by volunteers. All donations are tax-deductible and go to honoring and preserving the history, experiences, successes, and lessons learned in Military Medicine. A tax receipt will be sent to you. WARDOCS documents the experiences, contributions, and innovations of all military medicine Services, ranks, and Corps who are affectionately called "Docs" as a sign of respect, trust, and confidence on and off the battlefield, demonstrating dedication to the medical care of fellow comrades in arms. Follow Us on Social Media Twitter: @wardocspodcast Facebook: WarDocs Podcast Instagram: @wardocspodcast LinkedIn: WarDocs-The Military Medicine Podcast
Dr. Joseph Vukov is an Associate Professor in the Philosophy Department at Loyola University Chicago. He is also Associate Director of the Hank Center for the Catholic Intellectual Heritage at Loyola, and an Affiliate Faculty Member in Catholic Studies and Psychology. Nationally, Vukov also serves as the Vice President of Philosophers in Jesuit Education. His research explores questions at the intersection of ethics, neuroscience, and philosophy of mind, and at the intersection of science and religion. He is a prolific author of articles and monographs, including Navigating Faith and Science and the topic of today's conversation, The Perils of Perfection: On the Limits and Possibilities of Human Enhancement, part of the Magenta Series at New City Press.
Benevolent cyborgs. Not a phrase you hear often these days. With all the hand-wringing and media fear-mongering about AI and new technologies, we seem to have lost the bigger vision of how technology can improve our lives. That's why today, I'm speaking with Dr. Cori Lathan, a techno-optimist who believes technology can be used to build empathy and connection. Today we discuss how Star Wars and a very creative 2nd grade teacher sparked her journey into innovation and invention, how technology is being used to build empathy and connection, why empathy makes a better design team, and the future of human-machine interaction. To access the episode transcript, please click on the episode title at www.TheEmpathyEdge.com
Key Takeaways:
Technology can be a tool to help children achieve developmental milestones and build empathy.
The media will give the negative side of AI and technology because it gets better views and clicks. But great things are happening with technology that is helping to create a beautiful future.
Designing tech is about more than what happens behind the computer screen. It is about understanding the user experience and what it means for your end user.
"We are creating the future, someone isn't doing it for us. We can create the future we want to see. We can choose the direction it goes." — Dr. Cori Lathan
Episode References:
Dr. Cori Lathan's Book: Inventing the Future, Stories from a Techno-Optimist: https://inventthefuture.tech/
Dr. Cori Lathan's TEDx Talk: Innovation, Empathy, and the Future of Human-Machine Interaction: https://www.youtube.com/watch?v=FnV6QDhwvhk
The Empathy Edge Podcast: Ron Gura: How Technology Helps People Navigate Grief
Brand Story Breakthrough course to help you craft a clear, compelling brand story - includes weekly office hours with Maria!
About Corinna Lathan, Founder and Former Board Chair and CEO
Dr. Corinna Lathan is a technology entrepreneur who has developed robots for kids with disabilities, virtual reality technology for the space station, and wearable sensors for training surgeons and soldiers. She is a global thought leader in the relationship between technology and human performance and believes in a future of benevolent cyborgs! Dr. Lathan is Co-Founder of AnthroTronix, Inc., a biomedical engineering company focused on brain health, which she led for 23 years as Board Chair and CEO. She developed one of the first FDA-cleared digital health platforms, winning a prestigious Gold Edison Award. She was named a Woman to Watch by Disruptive Women in Health Care, a Technology Pioneer, and a Young Global Leader by the World Economic Forum. She also chaired the Forum's Councils on Artificial Intelligence and Robotics, and Human Enhancement and Longevity. Dr. Lathan has been featured in Forbes, Time, and the New Yorker magazines and her work has led to such distinctions as MIT Technology Review Magazine's “Top 100 World Innovators,” and one of Fast Company Magazine's “Most Creative People in Business.” Dr. Lathan received her B.A. in Biopsychology and Mathematics from Swarthmore College, an M.S. in Aeronautics and Astronautics, and a Ph.D. in Neuroscience from M.I.T.
Connect with Dr. Cori Lathan:
AnthroTronix: www.atinc.com
Twitter: twitter.com/clathan
LinkedIn: linkedin.com/in/clathan
Instagram: instagram.com/drcoril
Join the tribe, download your free guide!
Discover what empathy can do for you: http://red-slice.com/business-benefits-empathy
Connect with Maria:
Get the podcast and book: TheEmpathyEdge.com
Learn more about Maria and her work: Red-Slice.com
Hire Maria to speak at your next event: Red-Slice.com/Speaker-Maria-Ross
Take my LinkedIn Learning Course! Leading with Empathy
LinkedIn: Maria Ross
Instagram: @redslice
X: @redslice
Facebook: Red Slice
Threads: @redslicemaria
Sam Harris speaks with Nick Bostrom about the problem of existential risk. They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we're living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.
"Humans have a design flaw, but they can be optimized and made immortal": that is how transhumanists think, and what they are working toward. But in this fight against every kind of weakness, where does the human being actually remain? And is living forever even desirable? A conversation. Pacemakers, 3D printing of organs and limbs, implants that let the blind see: much of this is already possible, or at least exists in early form. So-called "human enhancement" aims to artificially extend human capabilities through technical or chemical means. Technological and medical progress brings far-reaching possibilities for augmenting the human being. Yet visions of cyborgs and of an "upgrade" of the supposedly flawed human provoke not only euphoria but also concern. What technical, and above all what ethical, challenges do these technologies bring with them? Is the natural thereby fundamentally devalued in favor of the artificially enhanced? Does the human being become God? And what about the fear that humanity could first be surpassed and then wiped out by something of its own creation? Moderated by Olivia Röllin, Janina Loh, philosopher of technology and ethicist, and Johannes Hoff, theologian and philosopher, discuss these questions. This broadcast is a rerun of the episode from 8 January 2023.
Healthusiasm with host Christophe Jauquet. For this episode, the Healthusiasm Podcast invited László Puczkó, an experience engineer and well-being intelligence expert who has been active in every aspect and domain of the health, well-being and travel spectrum. Here's what the panel discussed:
The popularity of Wellness Travel & Medical Tourism
The looming desire for Human Enhancement
The need for Health Equity
On top of that, the Healthusiasm Panel also talked about:
the vaginal microbiome-host interactions that can be modeled in a human vagina-on-a-chip
the sensorial quality of Digital Lavender
the body scanner business Neko Health by Spotify founder Daniel Ek
Telemedicine as the preferred channel for prescription refills and minor illnesses
the world's largest four-day workweek trial
HIV at-home tests in the UK
shopping loyalty cards used for diagnosing women with ovarian cancer
and a debit card provided by healthcare teams to offer people in underprivileged communities access to better and healthier food
Find all of our network podcasts on your favorite podcast platforms and be sure to subscribe and like us. Learn more at www.healthcarenowradio.com/listen/
Join us for a thrilling episode as we sit down with Dave Asprey, the founder of Upgrade Labs. Asprey has dedicated his life to exploring and experimenting with the latest technologies and techniques to enhance and transform human performance, and he's here to share his insights with us. In this episode, we delve into the fascinating world of biohacking and learn about the cutting-edge technologies and methods that Upgrade Labs uses to help people achieve their goals, whether it's improving their brain function, gaining strength and muscle, or becoming more resilient to stress. With over eight years of experience and a wealth of knowledge, Asprey is the perfect guide to take us on a journey to the forefront of human enhancement and wellness. Get ready to be inspired, informed, and entertained! --- Support this podcast: https://podcasters.spotify.com/pod/show/calexwomack/support
"Humans have a design flaw, but they can be optimized and made immortal": that is how transhumanists think, and what they are working toward. But in this fight against every kind of weakness, where does the human being actually remain? And is living forever even desirable? A conversation. Pacemakers, 3D printing of organs and limbs, implants that let the blind see: much of this is already possible, or at least exists in early form. So-called "human enhancement" aims to artificially extend human capabilities through technical or chemical means. Technological and medical progress brings far-reaching possibilities for augmenting the human being. Yet visions of cyborgs and of an "upgrade" of the supposedly flawed human provoke not only euphoria but also concern. What technical, and above all what ethical, challenges do these technologies bring with them? Is the natural thereby fundamentally devalued in favor of the artificially enhanced? Does the human being become God? And what about the fear that humanity could first be surpassed and then wiped out by something of its own creation? Moderated by Olivia Röllin, Janina Loh, philosopher of technology and ethicist, and Johannes Hoff, theologian and philosopher, discuss these questions.
Is a cyborg a human being? How might technological enhancement or therapy help us more fully participate in being human? In this episode we talk with theologian Victoria Lorrimar and neuroengineer Chris Rozell about artificial intelligence, human enhancement, and the questions this raises for us as human beings. Victoria Lorrimar is a lecturer of Systematic Theology at Trinity College Queensland. Her research focuses on theological anthropology and how a theological understanding of what it means to be human can engage the prospect of technologies that promise to enhance human characteristics and abilities. Chris Rozell is currently the Julian T. Hightower Chair and Professor of Electrical & Computer Engineering at the Georgia Institute of Technology. Dr. Rozell is an educator and researcher developing technology to enable interactions between biological and artificial intelligence systems.
Nick Bostrom https://nickbostrom.com/ Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002).
"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk. Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.
"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."
https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk. Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.
"On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have, for example, in developing more efficient clean energy technologies. There are efforts on the way now to try to get fusion reactors to work using AI tools, to sort of guide the containment of the plasma. Recent work with AlphaFold by DeepMind, which is a subsidiary of Alphabet, they're working on developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential from AI to the environment are manyfold and will increase over time."
https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org
"On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have, for example, in developing more efficient clean energy technologies. There are efforts on the way now to try to get fusion reactors to work using AI tools, to sort of guide the containment of the plasma. Recent work with AlphaFold by DeepMind, which is a subsidiary of Alphabet, they're working on developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential from AI to the environment are manyfold and will increase over time."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots."I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows. We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty. All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org
"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I think maybe the critical issue here is the governance aspect which I think is one of the core sources of many of the greatest threats to human civilization on the planet. The difficulties we have in effectively tackling these global governance challenges. So global warming, I think, at its core is really a problem of the global commons. So we all share the same atmosphere and the same global climate, ultimately. And we have a certain reservoir, the environment can absorb a certain amount of carbon dioxide without damage, but if we put out too much, then we together face a negative consequence."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org
"I think maybe the critical issue here is the governance aspect which I think is one of the core sources of many of the greatest threats to human civilization on the planet. The difficulties we have in effectively tackling these global governance challenges. So global warming, I think, at its core is really a problem of the global commons. So we all share the same atmosphere and the same global climate, ultimately. And we have a certain reservoir, the environment can absorb a certain amount of carbon dioxide without damage, but if we put out too much, then we together face a negative consequence."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."

https://nickbostrom.com
https://www.fhi.ox.ac.uk
www.creativeprocess.info
www.oneplanetpodcast.org
"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
The Creative Process in 10 minutes or less · Arts, Culture & Society
"I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are, is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform among those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots."I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"I think what we really face is an even more profound change into this condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up - What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints and limitations and flaws would you want to retain because it's part of what makes you, you. And what aspects would you want to improve? If you have like a bad knee, you probably would want to fix the knee. If you're nearsighted, and you could just snap your fingers and have perfect eyesight, that seems pretty attractive, but then if you keep going in that direction, eventually, it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a kind of slower path to get to that destination."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
"If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots."If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially that sounds kind of frightening. How would we earn an income? What would we do all day long? I think it's also a big opportunity to rethink what it means to be human and what gives meaning in our lives. I think because we have been forced to work since the rise of our species, we had to earn our bread by the sweat of our brows.We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting an effort and achieving some useful aims, but in this hypothetical future where that's not needed anymore. We would have to find some other basis for our human worth. Not what we can do to produce instrumental, useful outcomes, but maybe rather what we can be and experience to add value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things, happy conversation, appreciation for art, for natural beauty.All of these things that are now seen as kind of gratuitous extras, little frills around the existence of the universe, maybe we would have to build those into the center. That would have profound consequences for how we educate people, the kinds of culture that we encourage, the habits and characters that we celebrate. That will require a big transition. But I think ultimately that is also an enormous opportunity to make the human experience much better than it currently is."https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org
Welcome to the Alfalfa podcast
In this episode we pick up our conversation from last week about transhumanism and how technology might redefine what it means to be human. We consider what place technology has in today's social narrative and whether it makes sense as Christians to automatically resist efforts to use cutting-edge science to reshape ourselves. Is the human body to be regarded as a Lego kit or a flawed masterpiece of art? How do we discern the Creator's original intention for our bodies in a world where they, like everything else, have been broken by the Fall? And how might it change our ethics in this area if we focused our attention on the resurrected Jesus as the firstfruits of a new kind of humanity? If you want to go deeper into some of the topics we discuss, find more resources to read, listen to and watch at John's website: http://www.johnwyatt.com
Billions of dollars are currently being spent by a suite of private firms, mostly in Silicon Valley, pursuing radical research to enhance human capacities. These companies want to put off or even defeat aging, upload our minds to computers, and give humans new abilities. Is this simply the next frontier for science and something to be welcomed, or should Christians hesitate to endorse research which appears to target our very created selves? What is the difference between using technology to tackle cancer versus tackling the aging process itself? And what is driving tech billionaires to spend their fortunes in this way? If you want to go deeper into some of the topics we discuss, find more resources to read, listen to and watch at John's website: http://www.johnwyatt.com