Thrivve Podcast


Thrivve sits at the intersection of people, technology and innovation. Our podcast raises awareness about emerging technologies and their impact on our lives. We invite leaders from different industries to debate the big questions, from the future of work to the future of our humanity. Stay tuned to…

Kelly Forbes


    • Latest episode: Oct 24, 2024
    • New episodes: monthly
    • Average duration: 43m
    • Episodes: 57



    Latest episodes from Thrivve Podcast

    #51: Harnessing AI: Opportunities and Challenges

    Oct 24, 2024 · 40:44


    In this episode, we sit down with Zulfiya Forsythe to discuss the potential of artificial intelligence (AI) and its impact on various industries and societies. Zulfiya shares her insights on the benefits and challenges of AI, emphasizing the importance of responsible implementation and ethical considerations. We delve into specific applications of AI, such as its potential to improve healthcare, education, and economic development in remote regions. Additionally, the conversation explores the ethical implications of AI, including bias, privacy concerns, and the potential for job displacement.

    About Zulfiya
    Zulfiya, the visionary founder of Omadli LLC, is a driving force in the world of data analytics. With a deep passion for harnessing the power of data and AI, she leads her company in providing comprehensive solutions tailored to various industries. Her journey began with a fascination for data's transformative potential, inspiring her transition from accounting to data science.

    About The AI Asia Pacific Institute (AIAPI)
    The AI Asia Pacific Institute (AIAPI) is a global nonprofit organization committed to strengthening the Asia-Pacific economies by facilitating the responsible development and adoption of artificial intelligence. The AIAPI serves as an independent catalyst, uniting stakeholders to guide AI's responsible development through interdisciplinary research, awareness raising, international collaboration, and policy advisory activities.

    #50: Navigating the Ethical Frontier: AI, Ethics, and the Pacific with Henry Dobson

    Sep 17, 2024 · 55:11


    In our 50th episode, we sit down with Henry Dobson. This conversation delves into the world of AI ethics. We discuss the challenges of applying numerous ethical principles to AI, the gap between theory and practice in AI ethics, and the unique considerations for AI in the Pacific Islands. We also explore the controversial topic of AGI, discussing its feasibility, potential consequences, and the implications of creating machine consciousness.

    About Henry Dobson
    Henry Dobson is a Research Fellow at the Centre for Biomedical Ethics at the National University of Singapore. Henry holds a PhD in AI ethics from the University of Melbourne and a Master's degree in philosophy of mind from Monash University. He also spent several years in London working with entrepreneurs and early-stage startups, focusing on product design and business development.

    About The AI Asia Pacific Institute (AIAPI)
    The AI Asia Pacific Institute (AIAPI) is a global nonprofit organization committed to strengthening the Asia-Pacific economies by facilitating the responsible development and adoption of artificial intelligence. The AIAPI serves as an independent catalyst, uniting stakeholders to guide AI's responsible development through interdisciplinary research, awareness raising, international collaboration, and policy advisory activities.

    #49: Navigating the Future: Roger Spitz on Techistentialism and Climate Resilience in the Pacific Islands

    Sep 3, 2024 · 40:21


    In the second episode of Season 6, we sit down with Mr. Roger Spitz, a thought leader in Techistentialism, to explore its application in the Pacific Islands. Roger shares insights on how technology, inseparable from human life, can influence the development of these regions, particularly as they face the existential threat of climate change. We discuss the role of digital technology and AI in both mitigating and exacerbating these challenges. Roger also highlights the importance of foresight over prediction in navigating future uncertainties and sheds light on the AAA Approach (Anticipatory, Antifragile, and Agile) as a strategy to build resilience and sustainable growth in the Pacific Islands.

    About Roger Spitz
    Roger Spitz is the bestselling author of the four-volume collection The Definitive Guide to Thriving on Disruption and of Disrupt With Impact: Achieve Business Success in an Unpredictable World. President of Techistential (Strategic Foresight) and founder of the Disruptive Futures Institute (Think Tank) in San Francisco, Spitz is an expert advisor to the World Economic Forum's Global Foresight Network. Spitz writes extensively on the future of AI and strategic decision-making, and serves on the AI Council of the Indian Society for Artificial Intelligence. He founded the Techistential Center for Human & Artificial Intelligence and is known for coining the term "Techistentialism." Spitz is also a partner at Vektor Partners, a venture capital fund (Palo Alto, London), and former Global Head of Technology M&A at BNP Paribas, where he advised on over 50 transactions with a deal value of $25bn.

    About AI Asia Pacific Institute
    The AI Asia Pacific Institute (AIAPI) is a global not-for-profit committed to strengthening the Asia-Pacific economies by facilitating the responsible development and adoption of artificial intelligence. The AIAPI serves as an independent catalyst, uniting stakeholders to guide AI's responsible development through interdisciplinary research, awareness raising, international collaboration, and policy advisory activities. Read our latest report, 'The State of AI in the Pacific Islands', here.

    #48: Bridging Tradition and Innovation: Dr. Karaitiana Taiuru on AI's Role in the Pacific Islands

    Aug 27, 2024 · 32:23


    Our first guest of this season, Dr. Karaitiana Taiuru, joins us for an insightful discussion on the intersection of AI and the Pacific Islands. Dr. Taiuru discusses the intricate balance between preserving cultural heritage and embracing innovation, highlighting the role of AI in creating sustainable solutions tailored to the unique needs of the Pacific Islands. This episode is a must-listen for anyone interested in the convergence of tradition and technology.

    About AI Asia Pacific Institute
    The AI Asia Pacific Institute (AIAPI) is a global not-for-profit committed to strengthening the Asia-Pacific economies by facilitating the responsible development and adoption of artificial intelligence. The AIAPI serves as an independent catalyst, uniting stakeholders to guide AI's responsible development through interdisciplinary research, awareness raising, international collaboration, and policy advisory activities.

    About Dr Karaitiana Taiuru
    Dr Taiuru is a leading authority and a highly accomplished visionary Māori technology ethicist, specialising in Māori rights with AI, Māori Data Sovereignty and Governance with emerging digital technologies and biological sciences. He brings extensive expertise in mātauranga, tikanga Māori, te Tiriti and advocacy for digital Māori rights, and a profound understanding of the intersection between Māori knowledge and emerging technologies. A professional director and member of the Institute of Directors, his roles include member and Kahui Māori advisor of the New Zealand AI Forum, member and tangata whenua governor of the AI Researchers Association, invited member of the Expert panel on AI and healthcare for the Office of the Prime Minister's Chief Scientific Advisor, legislated expert member of the Intellectual Property Office (IPONZ) Trade Marks Advisory Committee and Ministry of Health Tikanga Expert on Assisted Reproduction, as well as many other governance appointments.

    Season 6: The State of Artificial Intelligence in the Pacific Islands

    Aug 16, 2024 · 0:54


    In our latest season, we dive into the transformative potential of AI in the Pacific Islands. Join us as we discuss insights from our annual report, "The State of Artificial Intelligence in the Pacific Islands," and hear from leading experts in the region. We explore how AI, including Generative AI, can address critical challenges like geographic isolation, climate change, and economic fragmentation in the Pacific Islands.

    #47: Examining Regulation for ChatGPT: Dr. Luciano Floridi

    May 31, 2023 · 54:22


    The AI Asia Pacific Institute (AIAPI) is hosting a series of conversations with leading artificial intelligence (AI) experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi, as well as our internal advisory board and research affiliates. We have published a briefing note outlining some of the critical risks of generative AI and highlighting potential concerns. The following is a conversation with Dr. Luciano Floridi.

    Dr. Luciano Floridi holds a double appointment as Professor of Philosophy and Ethics of Information at the University of Oxford's Oxford Internet Institute, where he is also a Governing Body Fellow of Exeter College, Oxford, and as Professor of Sociology of Culture and Communication at the University of Bologna's Department of Legal Studies, where he is the director of the Centre for Digital Ethics. He is an adjunct professor ("distinguished scholar in residence") in the Department of Economics at American University, Washington D.C. Dr. Floridi is best known for his work on two areas of philosophical research: the philosophy of information and information ethics (also known as digital ethics or computer ethics), for which he has received many awards, including the Knight of the Grand Cross of the Order of Merit, Italy's most prestigious honour. According to Scopus, Floridi was the most cited living philosopher in the world in 2020. Between 2008 and 2013, he held the research chair in philosophy of information and the UNESCO Chair in Information and Computer Ethics at the University of Hertfordshire. He was the founder and director of the IEG, an interdepartmental research group on the philosophy of information at the University of Oxford, and of the GPI, the research group in Philosophy of Information at the University of Hertfordshire. He was the founder and director of the SWIF, the Italian e-journal of philosophy (1995–2008). He is a former Governing Body Fellow of St Cross College, Oxford.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

    #46: Examining Regulation for ChatGPT: Dr. Pedro Domingos

    May 23, 2023 · 72:41


    The AI Asia Pacific Institute (AIAPI) is hosting a series of conversations with leading artificial intelligence (AI) experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi, as well as our internal advisory board and research affiliates. We have published a briefing note outlining some of the critical risks of generative AI and highlighting potential concerns. The following is a conversation with Dr. Pedro Domingos.

    Dr. Pedro Domingos is a professor emeritus of computer science and engineering at the University of Washington and the author of The Master Algorithm. He is a winner of the SIGKDD Innovation Award and the IJCAI John McCarthy Award, two of the highest honors in data science and AI. He is a Fellow of the AAAS and AAAI, and has received an NSF CAREER Award, a Sloan Fellowship, a Fulbright Scholarship, an IBM Faculty Award, several best paper awards, and other distinctions. Dr. Domingos received an undergraduate degree (1988) and M.S. in Electrical Engineering and Computer Science (1992) from IST in Lisbon, and an M.S. (1994) and Ph.D. (1997) in Information and Computer Science from the University of California at Irvine. He is the author or co-author of over 200 technical publications in machine learning, data mining, and other areas. He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. Dr. Domingos was program co-chair of KDD-2003 and SRL-2009, and served on the program committees of AAAI, ICML, IJCAI, KDD, NIPS, SIGMOD, UAI, WWW, and others. He has written for the Wall Street Journal, Spectator, Scientific American, Wired, and others. He helped start the fields of statistical relational AI, data stream mining, adversarial learning, machine learning for information integration, and influence maximization in social networks.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

    #45: Examining Regulation for ChatGPT: Dr. Toby Walsh & Dr. Stuart Russell

    May 16, 2023 · 75:14


    The AI Asia Pacific Institute (AIAPI) has hosted a series of conversations with leading artificial intelligence (AI) experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi, as well as our internal advisory board and research affiliates. The following is a conversation with Dr. Toby Walsh and Dr. Stuart Russell.

    Dr. Toby Walsh is Chief Scientist at UNSW.ai, UNSW's new AI Institute. He is a Laureate Fellow and Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at UNSW Sydney, and he is also an adjunct fellow at CSIRO Data61. He was named by The Australian newspaper as a "rock star" of Australia's digital revolution. He has been elected a fellow of the Australian Academy of Science, a fellow of the ACM, the Association for the Advancement of Artificial Intelligence (AAAI) and of the European Association for Artificial Intelligence. He has won the prestigious Humboldt Prize as well as the NSW Premier's Prize for Excellence in Engineering and ICT, and the ACP Research Excellence Award. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland and Sweden. He has played a leading role at the UN and elsewhere in the campaign to ban lethal autonomous weapons (aka "killer robots"). His advocacy in this area has led to him being "banned indefinitely" from Russia.

    Dr. Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI and the Kavli Center for Ethics, Science, and the Public. He is a recipient of the IJCAI Computers and Thought Award and Research Excellence Award and held the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth and gave the Reith Lectures. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

    Season 5: Examining Regulation for ChatGPT

    Apr 30, 2023 · 0:58


    Generative Artificial Intelligence systems have significantly advanced in recent years, enabling machines to generate highly realistic content such as text, images, and audio. While these advancements offer numerous benefits, it is critical that we are aware of the associated risks. The AI Asia Pacific Institute has hosted a series of conversations with leading AI experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi. Join us for Season 5 of this podcast. Subscribe now, wherever you are listening, to join these conversations.

    #44: Professor Seongwook Heo on the AI Landscape & Governance in South Korea

    Oct 17, 2022 · 54:33


    This podcast series details our most recent publication, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India. The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying mutual ground as an impetus for regional and international collaboration.

    In this podcast, Dr. Heo shares the recent developments in South Korea to advance trustworthy AI. Dr. Heo is an associate professor at Seoul National University Law School in Korea. He holds a Ph.D. in law from Seoul National University. Before joining the faculty of SNU, he served as a judge of the Seoul Central District Court in Korea. This conversation covers the recent report published by the AI Asia Pacific Institute, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

    #43: Wan Sie LEE on the AI Landscape & Governance in Singapore

    Sep 27, 2022 · 30:17


    This podcast series details our most recent publication, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India. The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying mutual ground as an impetus for regional and international collaboration.

    In this podcast, Wan Sie LEE shares the recent developments in Singapore to advance trustworthy AI. This conversation covers the recent report published by the AI Asia Pacific Institute, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch. Links to some of the initiatives that have been covered in this conversation: Veritas; AI Singapore; NovA!

    #42: Arunima Sarkar on the AI Landscape & Governance in India

    Aug 17, 2022 · 35:42


    This podcast series details our most recent publication, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India. The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying mutual ground as an impetus for regional and international collaboration.

    In this podcast, Arunima Sarkar shares the recent developments in India to advance trustworthy AI. This conversation covers the recent report published by the AI Asia Pacific Institute, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

    #41: The AI Landscape & Governance in Australia and India

    Aug 1, 2022 · 55:32


    This podcast series details our most recent publication, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India. The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying mutual ground as an impetus for regional and international collaboration.

    In this podcast, we will dissect the salient points and key findings from our study, looking closely at Australia and India to locate convergence for greater collaboration and coordination amid increasing pressure for regulation and international collaboration to advance trustworthy AI in the Asia-Pacific region. This conversation covers the recent report published by the AI Asia Pacific Institute, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

    #40: The AI Landscape & Governance in Singapore, South Korea and Japan

    Jul 19, 2022 · 59:32


    This podcast series details our most recent publication, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India. The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying mutual ground as an impetus for regional and international collaboration.

    In this podcast, we will dissect the salient points and key findings from our study, looking closely at Singapore, Japan and South Korea to locate convergence for greater collaboration and coordination amid increasing pressure for regulation and international collaboration to advance trustworthy AI in the Asia-Pacific region. This conversation covers the recent report published by the AI Asia Pacific Institute, '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

    Season 4: Asia-Pacific Collaboration & AI Governance

    Jun 29, 2022 · 1:15


    AI is transnational and borderless. As AI developments unfold at such unprecedented scale and pace, the call for building trustworthy AI has been resounding. To achieve it, international cooperation is crucial. Join us for a whole new season of this podcast, where we deepen our research aiming to promote and amplify collaboration in the Asia-Pacific region. We will examine Singapore, Japan, South Korea, Australia and India to find convergence for greater collaboration and coordination amid increasing pressure for regulation and international collaboration to advance trustworthy AI in the Asia-Pacific. Subscribe now, wherever you are listening, to join these conversations.

    #39: Algorithmic Decisions for Security, Standardisation and Trustworthy AI

    Dec 8, 2021 · 45:29


    "Regulation is coming" — David Berend David Berend is leading the standardisation of AI Security in Singapore where he and his team are about to publish the world first version of the standard in the next 2 months. Furthermore, he developed the research tools for AI quality assurance as part of his Ph.D., which he is now commercialising as a spin off from Nanyang Technological University, Singapore. Finally, he is member of the German Standards Commission and ISO, to integrate the Singapore Standard Achievements into global context. *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #38: Algorithmic Decisions & Power and Sustainability

    Nov 1, 2021 · 42:55


    "What always needs to be at the forefront: what physical and regulatory constraints is your system contending with at any given time and how do you design a suite of methods that actually satisfy those constraints" — Priya L. Donti Priya L. Donti is a Ph.D. student in the Computer Science Department and the Department of Engineering & Public Policy at Carnegie Mellon University, co-advised by Zico Kolter and Inês Azevedo. She is also co-founder and chair of Climate Change AI, an initiative to catalyze impactful work at the intersection of climate change and machine learning. Her work focuses on machine learning for forecasting, optimization, and control in high-renewables power grids. Specifically, Priya's research explores methods to incorporate the physics and hard constraints associated with electric power systems into deep learning models. Please see here for a list of her recent publications. Priya is a member of the MIT Technology Review 2021 list of 35 Innovators Under 35, and a 2022 Siebel Scholar. She was previously a U.S. Department of Energy Computational Science Graduate Fellow, an NSF Graduate Research Fellow, and a Thomas J. Watson Fellow. Priya received her undergraduate degree at Harvey Mudd College in computer science and math with an emphasis in environmental analysis. *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #37: Algorithmic Decisions & the Financial Industry

    Oct 1, 2021 · 61:43


    In this conversation, we covered many practical recommendations on operationalising the ethics of AI, discussed FEAT, Singapore's framework for the financial industry, and arrived at some predictions about what's next for the industry.

    David Hardoon is the Senior Advisor for Data and Artificial Intelligence at UnionBank Philippines, Chair of the Data Committee at Aboitiz Group, and acting Managing Director for Aboitiz Data Innovation. He is also an external advisor to Singapore's Corrupt Practices Investigation Bureau (CPIB) and to Singapore's Central Provident Fund Board (CPF). Prior to his current roles, David was the Monetary Authority of Singapore's (MAS) first appointed Chief Data Officer and Head of the Data Analytics Group. In these roles he led the development of the AI strategy for both MAS and Singapore's financial sector, as well as driving efforts to promote open cross-border data flows. David pioneered regulator and central bank adoption of data science, as well as the establishment of the Fairness, Ethics, Accountability and Transparency (FEAT) principles, first-of-a-kind guidelines for adopting artificial intelligence in the financial industry, and the MAS-backed Veritas consortium.

    Hardeep Arora has around 21 years of experience in analytics and data science technology in financial services. He is based in Singapore and heads AI Research and Engineering at a social media startup called Aaqua, where he is building their AI infrastructure and engineering team. Hardeep was the head of AI in Financial Services at Element AI, where he worked on numerous client engagements, including MAS Veritas Phase 1. He also did a brief stint at Accenture Singapore, where he set up an AI Lab focusing on financial services use cases. He also spent more than a decade working in AI and analytics teams in banks including Standard Chartered, JP Morgan, and Barclays. Hardeep has a graduate degree in Computer Science and an MBA in Finance; he is an AI advisor for a few startups in the region and a regular speaker at events in Singapore.

    *** For show notes and past guests, please visit https://aiasiapacific.org/podcast/ If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    Season 3: Algorithmic Decisions & Impact on Humans

    Sep 7, 2021 · 0:58


    How are algorithmic decisions impacting our humanity? It's the question of our times. Join us for a whole new season of this podcast, where we deepen our research into the impact of algorithmic decisions on humans. To do this, we will go on a journey and look at the application of algorithmic decisions and their implications in different industries. We will explore the Financial Industry, Power & Sustainability, and Social Media. Alongside fascinating guests, we are also joined by different guest interviewers as part of this series; they will guide us as we explore this world of algorithmic decisions. Starting on 1 October. Subscribe now, wherever you are listening, to join these conversations.

    #36: Irakli Beridze on a Global Governance of Artificial Intelligence

    May 7, 2021 · 49:38


    "I would argue that the growing digital divide could be as dangerous as the climate crises" — Irakli Beridze Irakli Beridze is the Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations. More than 20 years of experience in leading multilateral negotiations, developing stakeholder engagement programmes with governments, UN agencies, international organisations, private industry and corporations, think tanks, civil society, foundations, academia, and other partners on an international level. Mr Beridze is advising governments and international organizations on numerous issues related to international security, scientific and technological developments, emerging technologies, innovation and disruptive potential of new technologies, particularly on the issue on crime prevention, criminal justice and security. Mr Beridze is supporting governments worldwide on the strategies, action plans, roadmaps and policy papers on AI. Since 2014, Initiated and managed one of the first United Nations Programmes on AI. Initiating and organizing number of high-level events at the United Nations General Assembly, and other international organizations. Finding synergies with traditional threats and risks as well as identifying solutions that AI can contribute to the achievement of the United Nations Sustainable Development Goals. He is a member of various international task forces, including the World Economic​ Forum’s Global Artificial Intelligence Council, the UN High-level panel for digital cooperation, the High-Level Expert Group on Artificial Intelligence of the European Commission. He is frequently lecturing and speaking on the subjects related to technological development, exponential technologies, artificial intelligence and robotics and international security. He has numerous publications in international journals and magazines and frequently quoted in media on the issues related to AI. Irakli Beridze is an International Gender Champion supporting the IGC Panel Parity Pledge. He is also recipient of recognition on the awarding of the Nobel Peace Prize to the OPCW in 2013.​ *** For show notes and past guests, please visit https://aiasiapacific.org/podcasts/ If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #35: Jake Taylor and Alan Ho on Social Media and AI

    Apr 27, 2021 · 64:51


    "We believe that working from the perspective of harms, rather than risks, and developing pathways where humans grapple with the challenges of technology as they deploy have been and will be a path for enabling good from these new technologies" — Jake Taylor and Alan Ho Jake Taylor has been doing research in quantum information science and quantum computing for the past two decades, most recently at the National Institute of Standards and Technology. In addition to his research, he spent the last three years as the first Assistant Director for Quantum Information Science at the White House Office of Science and Technology Policy, where he led the creation and implementation of the National Quantum Initiative (quantum.gov) and the COVID-19 High Performance Computing Consortium (covid19-hpc-consortium.org). Now taking a year as a TAPP Fellow at Harvard's Belfer Center for Science and International Affairs, Jake is looking at how lessons learned in implementing science and tech policy for an emerging field can enable public purpose in other areas. He is the author of more than 150 peer reviewed scientific articles, a Fellow of the American Physical Society and the Optical Society of America, and recipient of the Silver and Gold medals from the Department of Commerce. He can be found on twitter @quantum_jake and at https://www.quantumjake.org. Alan Ho is a life-long engineer and entrepreneur. He has worked at a number of large and small technology companies that deployed artificial intelligence in their products. He is currently the product management lead at Google’s Quantum AI team. His responsibilities include the identification of applications of quantum computing that can benefit society. You can find the article mentioned in the conversation 'Identifying and Reducing Harms: a Look at Artificial Intelligence' here. *** For show notes and past guests, please visit https://aiasiapacific.org/podcasts/ If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #34: Vincent Vuillard on the Future of Work

    Apr 13, 2021 · 45:59


    "People need to embrace technology rather than fear it" — Vincent Vuillard Vincent is the Co-Founder of FutureWork Studio, a tech and consulting company helping organisations navigate the rapidly changing work-scape by equipping them with the tools and thinking they need to thrive now and in the future. Vincent has deep expertise in large scale transformations across multiple industries and sectors globally, having been with McKinsey & Company for a number of years before moving into the Corporate sector, where he led the Strategic Capabilities and Future of Work function for Fonterra, a $20bn organisation with more than 22,000 employees globally. As an officer in the French Armed Forces, Vincent also spent more than a decade leading teams in challenging situations across many parts of the globe. Vincent has a Masters Degree in Statistics and Data Analytics, an MBA from the University of Melbourne, and speaks fluent English and Portuguese in addition to his native French. Vincent was one of two inaugural TEDx Auckland Salon ‘in the Dark’ speakers in 2019, a world first event for TEDx, and is also a member of the SingularityU global expert faculty, speaking regularly on the future of work topic. *** For show notes and past guests, please visit https://aiasiapacific.org/podcasts/ If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #33: Hilary Sutcliffe on Trust and Tech Governance

    Mar 31, 2021 · 46:17


    "If we don't see soft law actually working, then societal trust is not going to follow" — Hilary Sutcliffe Hilary runs London-based not-for-profit SocietyInside. The name is a riff on the famous brand ‘IntelInside’ and its focus is the desire that innovation should have the needs and values of people and planet at its heart - not simply the making of money. She explores the issues of trust, ethics, values and governance of technology (AI, nanotech, biotech and gene editing in particular) through collaborative research, exploring trustworthy process design, public speaking, coaching, mentoring and acting as a critical friend to organisations of all types. She is director of the TIGTech initiative which explores trustworthiness and trust in the governance of technology, was previously co-chair of the World Economic Forum Global Future Council on Values, Ethics & Innovation and member of its Agile Governance Council. She was recently named one of the 100 Brilliant Women in AI Ethics for 2021. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #32: Amba Kak and Shazeda Ahmed on the Implications of Biometrics & Emotion Recognition

    Mar 17, 2021 · 57:40


    Today, we are welcoming Amba Kak and Shazeda Ahmed from the AI Now Institute, a research institute examining the social implications of artificial intelligence.

    Amba is currently Director of Global Policy & Programs at the AI Now Institute at New York University, where she develops and leads the Institute's global policy engagement and partnerships, and is also a fellow at the NYU School of Law. Amba has over a decade of experience in the field of technology-related policy across multiple jurisdictions and has provided her expertise to government regulators, civil society organizations, and philanthropies. She is currently part of the Strategy Advisory Board of the Mozilla Foundation.

    Shazeda is a doctoral candidate at the University of California at Berkeley's School of Information. She is a 2020-21 fellow in the Transatlantic Digital Debates at the Global Public Policy Institute. From 2019-20 she was a pre-doctoral fellow at two Stanford University research centers, the Institute for Human-Centered Artificial Intelligence (HAI) and the Center for International Security and Cooperation (CISAC). Shazeda has worked as a researcher for Upturn, the Mercator Institute for China Studies, Ranking Digital Rights, and the Citizen Lab. From 2018–19, she was a Fulbright fellow at Peking University's Law School in Beijing, where she conducted field research on how tech firms and the Chinese government are collaborating on the country's social credit system. Shazeda's work on the social inequalities that arise from state-firm tech partnerships in China has been featured in outlets including the Financial Times, WIRED, the South China Morning Post, Logic magazine, TechNode, The Verge, CNBC, Voice of America, and Tech in Asia.

    This conversation covers the recent reports published by the AI Now Institute, 'Regulating Biometrics: Global Approaches and Urgent Questions', and by Article 19, 'Emotional Entanglement: China's emotion recognition market and its implications for human rights'.

    *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #31: Cathy O’Neil on Weapons of Math Destruction

    Mar 3, 2021 · 51:49


    "We have to acknowledge that it doesn't benefit everyone and understand the extend to which it harms people" — Cathy O’Neil Cathy O’Neil earned a Ph.D. in math from Harvard, was a postdoc at the MIT math department, and a professor at Barnard College where she published a number of research papers in arithmetic algebraic geometry. She then switched over to the private sector, working as a quant for the hedge fund D.E. Shaw in the middle of the credit crisis, and then for RiskMetrics, a risk software company that assesses risk for the holdings of hedge funds and banks. She left finance in 2011 and started working as a data scientist in the New York start-up scene, building models that predicted people’s purchases and clicks. She wrote Doing Data Science in 2013 and launched the Lede Program in Data Journalism at Columbia in 2014. She is a regular contributor to Bloomberg View and wrote the book Weapons of Math Destruction: how big data increases inequality and threatens democracy. She recently founded ORCAA, an algorithmic auditing company. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #30: John C. Havens on Prioritising Ethics

    Feb 16, 2021 · 37:12


    "I sync, therefore I am" — John C. Havens John C. Havens is Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems that has two primary outputs – the creation and iteration of a body of work known as Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems and the recommendation of ideas for Standards Projects focused on prioritizing ethical considerations in A/IS. Currently there are fifteen approved Standards Working Groups in the IEEE P7000™ series. He is also Executive Director for The Council on Extended Intelligence (CXI) that was created to proliferate the ideals of responsible participant design, data agency and metrics of economic prosperity prioritizing people and the planet over profit and productivity. CXI is a program founded by The IEEE Standards Association and MIT whose members include representatives from the EU Parliament, the UK House of Lords, and dozens of global policy, academic, and business leaders. Previously, John was an EVP of Social Media at PR Firm, Porter Novelli and a professional actor for over 15 years. John has written for Mashable and The Guardian and is author of the books, Heartificial Intelligence: Embracing Our Humanity To Maximize Machines and Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Change the World. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #29: Renée Cummings on Diversity, Equity and Inclusion in AI

    Feb 3, 2021 · 35:36


    "For AI to be successful there must be trust. We must have AI systems we can trust." — Renée Cummings Renée Cummings is a criminologist, criminal psychologist, therapeutic jurisprudence specialist, AI ethicist and the historic first Data Activist in Residence, at The School of Data Science, University of Virginia. Renée is also a community scholar at Columbia University. Advocating for AI we can trust, more diverse, equitable, and inclusive AI, she is on the frontline of ethical AI, generating real time responses to many of the consequences of AI. Renée also specializes in AI risk management, justice-oriented AI, social justice AI, AI policy and governance, and using AI to save lives. She is committed to using AI to empower and transform communities by helping governments and organizations navigate the AI landscape and develop future AI leaders. Renée works at the intersection of AI, criminal justice, racial justice, social justice, design justice, epidemiological and urban criminology, and public health. She has extensive experience in trauma-informed justice interventions, homicide reduction, gun and gang violence prevention, juvenile justice, evidence based policing and law enforcement leadership. Her work extends to rehabilitation, reentry and reducing recidivism. Renée is committed to fusing AI with criminal justice for ethical real time solutions to improve law enforcement accountability and transparency, reduce violence, enhance public safety, public health, and quality of life. A thought-leader, motivational speaker, and mentor, Renée is an articulate, dynamic, and passionate speaker who has mastered the art of creative storytelling and deconstructing complex topics into critical everyday conversations that inform and inspire. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    Season 2 Trailer: AI Asia Pacific Institute

    Jan 29, 2021 · 0:49


    Join Kelly Forbes for Season 2 of the AI Asia Pacific Institute Podcast. With fascinating guests, we will explore the legal, ethical and social implications of artificial intelligence. What's its greatest potential? And what could possibly go wrong? Subscribe now, wherever you listen, and join the AI Asia Pacific Institute newsletter at https://aiasiapacific.org.

    #28: Beyond Ethical Principles in AI with Matthew Newman

    Dec 8, 2020 · 44:13


    "Even if we don't get to that stage of AGI, it is absolutely probable that we will get to the point where there is enough complexity in some of these AI systems which can have an ongoing and profound effect on people's lives" — Matthew Newman Matthew is a global leader in the operationalisation of AI Ethics and responsible use of frontier technology. He is founder and CEO of TechInnocens, a consultancy that provides practical advisory to C-suite, board and AI-leadership on embracing trusted-use; as well as a member of The Cantellus Group, an innovative boutique consulting group helping business and policy leaders harness the opportunities, new risks and trade-offs with AI and other frontier technologies. Matthew has over 20 years experience of advising leadership teams on technology-driven transformation at some of the world's most respected enterprises, as well as start-ups, SMEs and government organisations. He engages with the World Economic Forum's Artificial Intelligence & Machine Learning Platform, co-develops standards for Ethical-use of AI at the IEEE and is a member of the Global Governance of AI Roundtable. Matthew also provides expertise to the Australian Human Rights Commission, the Australian Federal Government and the European AI Alliance on issues of AI policy and the intersection with social license to operate. This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It’s that easy. Looking for book recommendations? We highly recommend Weapons of Math Destruction by Cathy O'Neil. *** Season 2 starts in January 2021, with all-new episodes. Subscribe now, wherever you are listening. For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #27: A Practical Guide to Building Ethical AI with Oliver Smith

    Nov 24, 2020 · 59:13


    "Trust is the canvas that we operate on without really recognising it" — Oliver Smith Ollie is responsible for overall strategy and is Head of Ethics at Koa Health, alongside establishing and maintaining strong partnerships, and business model development. He has extensive experience in strategy and innovation across a range of sectors. Before joining Koa he was Director of Strategy and Innovation at Guy’s and St Thomas’ Charity, responsible for investing £100m over five years in innovations across acute, primary, and integrated care, and biomedical research and digital health. He was a Senior Civil Servant in the UK Department of Health; responsible for UK Tobacco Control Policy, and wrote the government’s first comprehensive childhood obesity strategy. Oliver was also a Policy Adviser in the Prime Minister’s Strategy Unit under Tony Blair. He has an MA in Politics, Philosophy, and Economics from Oxford University. This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It’s that easy. Looking for book recommendations? We highly recommend Weapons of Math Destruction by Cathy O'Neil. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #26: A Centre of Excellence to Champion the Ethical Use of AI with Kate MacDonald

    Nov 17, 2020 · 36:37


    "We need to start thinking about how we can use AI to break down some of these borders, to break down some of these jurisdictions and to make sure that we work together as humans, not just as different countries" — Kate MacDonald Kate MacDonald is the New Zealand Government Fellow to the World Economic Forum and the Government representative to the OECD Network of AI Experts, working in the areas of artificial intelligence and regulation. Kate is an experienced civil servant, with a background in policy, international relations and futures thinking, and has spent the last ten years working in the areas of cybersecurity and digital policy. She has held various roles for the New Zealand government, including setting up the new Digital Minister’s office and working closely with New Zealand’s international partners in the Digital Nations group. While home in New Zealand, she is currently based at the Ministry of Business Innovation and Employment, where she is working on digital and AI policy. This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It’s that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order is a 2018 non-fiction book by Kai-Fu Lee. *** You can access the World Economic Forum article "AI is here. This is how it can benefit everyone" mentioned in the conversation here. For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #25: Implementing Ethics in AI with Merve Hickok

    Nov 10, 2020 · 49:38


    "The field of AI ethics is calling for implementation" — Merve Hickok Merve Hickok is the founder of www.aiEthicist.org platform and Lighthouse Career Consulting. She is an independent AI ethics consultant, lecturer and speaker, focusing on capacity building, awareness raising on ethical and responsible development and use of AI and its governance. She has over 15 years of senior level experience in Fortune 100 companies. Merve is part of IEEE workgroups 7008 and P2863 that work to set global standards and frameworks on ethics of autonomous and intelligent systems; is an instructor at RMDS Lab providing training on AI & Ethics; a founding editorial board member of Springer Nature AI & Ethics journal. She is a ForHumanity Fellow working to draft rules for independent audit of AI systems; a technical/policy expert for AI Policy Exchange; and a member of the leadership team at Women in AI Ethics™ Collective working to empower women in the field. This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It’s that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order is a 2018 non-fiction book by Kai-Fu Lee. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #24: Building a digital consciousness with Dr. Mark Sagar

    Nov 2, 2020 · 61:05


    "If human cooperation is the most powerful force in history, then human cooperation with intelligent machines will actually define the next era of history" — Dr. Mark Sagar Double Academy Award winner Dr. Mark Sagar is the CEO and co-founder of Soul Machines and Director of the Laboratory for Animate Technologies at the Auckland Bioengineering Institute. Mark has a Ph.D. in Engineering from the University of Auckland, and was a post-doctoral fellow at M.I.T. He has previously worked as the Special Projects Supervisor at Weta Digital and Sony Pictures Imageworks and developed technology for the digital characters in blockbusters such as Avatar, King Kong, and Spiderman 2. His pioneering work in computer-generated faces was recognised with two consecutive Scientific and Engineering Oscars in 2010 and 2011, and Mark was elected as a Fellow of the Royal Society in 2019 in recognition of his world-leading research. Mark is responsible for driving the technology vision of Soul Machines and sits on the Board of Directors. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #23: Revolutionising the Regulatory Landscape with Mona Zoet

    Oct 27, 2020 · 40:09


    "It does not really mean that you should be afraid of using the technology, but you need to understand what you are doing" — Mona Zoet Mona Zoet, founder and CEO of RegPac Revolution, has over 18 years of experience in the Financial Services Industry within the Legal, Risk and Compliance areas, previously specializing in AML and KYC within some of the world’s biggest banks. During her time in the banking industry, she became acutely aware of the Regulatory, Operational and Risk Management pain points faced by banks and other Financial Institutions alike. She is an Executive Board Member, Southeast Asia Lead and Singapore Chapter President of the International RegTech Association (IRTA) which exists to ease and accelerate the evolution of the RegTech industry, by facilitating integration, collaboration and innovation of all stakeholders, within the Financial Services sector. Mona recently contributed to “The Legal Aspects of Blockchain”, a book published by the UNOPS, focusing on the legal implications that blockchain has, not only in humanitarian and development work, but also on existing regulatory frameworks, data and identity. Mona also shares co-authorship for the book #RegTech Blackbook, which highlights all the latest development of RegTech, FinTech, WealthTech from different angles. To end, Mona has also been mentioned as one of the RegTech top 100 influencers, a report created by Analytica One. This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. Then, download your free title and start listening! It’s that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order is a 2018 non-fiction book by Kai-Fu Lee, an Artificial Intelligence pioneer, China expert and venture capitalist. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.fsa If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #22: The Role of AI in Climate Change with Sherif Elsayed-Ali

    Play Episode Listen Later Oct 19, 2020 41:10


    "There are no human rights without a liveable planet." — Sherif Elsayed-Ali Sherif Elsayed-Ali is a leading expert in the tech-for-good space and has unique experience at the intersection of technology and social issues. He co-founded Amnesty Tech, which leads Amnesty International's work on the impact of technology on human rights and the potential uses of new technologies to advance human rights protection. Sherif also previously co-chaired the World Economic Forum's Global Future Council on human rights and technology. He is the former Director, AI for Climate, at Element AI. Sherif has been at the forefront of technology and human rights, instigating, among other initiatives, the development of the Toronto Declaration on equality and non-discrimination in machine learning and Amnesty International's groundbreaking research on surveillance and online abuse. Over the past few years, he has co-authored various reports on the theme of technology and human rights, including the World Economic Forum's report on preventing discriminatory outcomes in machine learning. Sherif's previous speaking engagements include the World Economic Forum at Davos, Chatham House, Web Summit, CogX and RightsCon. His opinion pieces have been published by The Guardian, Reuters, Aljazeera and Open Democracy, among others. Sherif studied engineering and international law at the American University in Cairo and holds a master's in public administration from Harvard Kennedy School. He is now setting up a new climate tech venture, a deep tech company focused on developing and deploying new solutions to enable a net zero future. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #21: Investors’ Expectations on Responsible Artificial Intelligence and Data Governance with Janet Wong

    Play Episode Listen Later Oct 13, 2020 52:45


    "If you don't gain trust from customers and regulators, things will just backlash." — Janet Wong Janet Wong is part of the Asia and global emerging markets stewardship team at EOS at Federated Hermes, engaging with listed companies on material ESG topics. The team was shortlisted for the Principles for Responsible Investment's Stewardship Project of the Year 2020. Janet is also the global lead on responsible artificial intelligence and governance in financial services. She joined the team following the completion of her two-year Master of Public Administration degree in Social Impact from the London School of Economics and Political Science. Previously, she worked for HSBC Global Banking in Hong Kong, overseeing global banking relationships with Hong Kong blue-chip firms. Janet holds a Bachelor of Business Administration degree in Global Business and Management from the Hong Kong University of Science and Technology. She is fluent in Cantonese, Mandarin and English. She is also a CFA charter holder. This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. Then, download your free title and start listening! It's that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order, a 2018 non-fiction book by Kai-Fu Lee, an artificial intelligence pioneer, China expert and venture capitalist. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #20: UNESCO's developments on Artificial Intelligence and Gender Equality with Saniye Gülser Corat

    Play Episode Listen Later Oct 8, 2020 71:12


    "We need to make gender equality more explicit and we need to position gender equality principles in a way that provides for greater accountability." — Saniye Gülser Corat Saniye Gülser Corat served as Director for Gender Equality at UNESCO from September 2004 to August 2020. She is the lead author of the landmark 2019 study "I'd Blush if I Could: Closing Gender Divides in Digital Skills in Education", which found widespread inadvertent gender bias in the most popular artificial intelligence tools for consumers and business. This report sparked a global conversation with the technology sector, culminating in a keynote address at the 2019 Web Summit in Lisbon, the largest annual global technology conference. As a result of her report and address, Gülser was interviewed by more than 600 media outlets around the world, including the BBC, CNN, CBS, ABC, The New York Times, The Guardian, Forbes and Time. She published UNESCO's follow-up research in August 2020. This report, "Artificial Intelligence and Gender Equality", is based on a dialogue with experts from the private sector and civil society and sets forth proposed elements for a framework on gender equality and AI for further consideration, discussion and elaboration among various stakeholders. The Digital Future Society named Gülser one of the top ten women leaders in technology for 2020. During her tenure at UNESCO, Gülser launched special campaigns and programs for girls' education in STEM and digital skills, the safety of women journalists, and the advancement of women in science. She successfully led change at UNESCO, convincing the 195 member states to recognize gender equality as a global priority for the organization and achieving gender parity in senior leadership, which stood at a mere 9% when she joined UNESCO. Gülser has deep and broad cultural fluency with OECD and emerging markets, having run projects in her native Turkey, Europe, Canada, Southeast Asia, and sub-Saharan Africa. She holds graduate degrees from Carleton University, Canada, and the College of Europe, Belgium, and Executive Education certificates from Harvard Business School and Harvard Kennedy School. Gülser is a TED and international keynote speaker. She serves on the boards of the Women's Leadership Academy (China), the International Advisory Committee for Diversity Promotion at Kobe University (Japan), the UPenn Law School Global Women's Leadership Project (USA), and Exponent, a global gender equality incubator. Based in Paris, she is a strategic advisor to Coopersmith Law + Strategy on technology, education, multilaterals, and gender equality. She speaks Turkish, English and French. This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. Then, download your free title and start listening! It's that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order, a 2018 non-fiction book by Kai-Fu Lee, an artificial intelligence pioneer, China expert and venture capitalist. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #19: Ethical by Design: Principles for Good Technology with Dr Matthew Beard

    Play Episode Listen Later Oct 1, 2020 53:37


    "If ethics frames and guides our collective decision-making, we can ensure we reap the benefits of technology without falling foul of avoidable, manageable shortcomings." — Ethical by Design: Principles for Good Technology Dr Matt Beard is a moral philosopher with an academic background in applied and military ethics. He has taught philosophy and ethics at university for several years, during which time he has published widely in academic journals and book chapters and has spoken at national and international conferences. Matt has advised the Australian Army on military ethics, including technology design. In 2016, Matt won the Australasian Association of Philosophy prize for media engagement, recognising his "prolific contribution to public philosophy". He regularly appears on television, radio, online and in print. How do we ensure that the technology we create is a force for good? How do we protect the most vulnerable? How do we avoid the risks inherent in a belief in unlimited progress for progress' own sake? What are the moral costs of restraint - and who will bear the costs of slower development? This conversation covers the recent paper published by The Ethics Centre which addresses the above questions by proposing a universal ethical framework for technology: Ethical by Design: Principles for Good Technology *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #18: Neuralink: Potential Legal and Ethical Implications with Dr Allan McCay

    Play Episode Listen Later Sep 23, 2020 52:45


    "If a person were to commit a crime by way of brain-computer interface, what would the 'criminal act' be?" — Dr Allan McCay Dr Allan McCay teaches criminal law at the University of Sydney. He is a member of the Management Committee of the Julius Stone Institute of Jurisprudence, also at the University of Sydney Law School, and is an Affiliate Member of the Centre for Agency, Values, and Ethics at Macquarie University. He has previously taught at the Law School at the University of New South Wales and the Business School at the University of Sydney. Allan trained as a solicitor in Scotland and has also practised in Hong Kong with the global law firm Baker McKenzie. His first book, Free Will and the Law: New Perspectives, is published by Routledge. His second book (with Nicole Vincent and Thomas Nadelhoffer), entitled Neurointerventions and the Law: Regulating Human Mental Capacity, is published by Oxford University Press. He holds a PhD from the University of Sydney Law School and is interested in behavioural genetics, neuroscience, neurotechnology, and the criminal law. His philosophical interests relate to free will and punishment, and ethical issues emerging from artificial intelligence. In relation to legal practice, he is interested in behavioural legal ethics and the future of legal work. His work has appeared in The Sydney Morning Herald, The Age, The Australian and on Radio National, as well as in international media including The Independent (UK), The Statesman (India), The Huffington Post and The Conversation. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #17: AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies

    Play Episode Listen Later Sep 14, 2020 43:14


    "What most matters today is the question about individuals and their own data: Is gathered personal information processed to invigorate self-determination and expand opportunities, or does it narrow possible human experiences?" — James Brusseau, AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies Does AI conform to humans, or will we conform to AI? In this conversation, James proposes an ethical evaluation of AI-intensive companies that will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology's human-centering. When summed, the scores convert into objective investment guidance. The larger goal is a model for humanitarian investing in AI-intensive companies that is intellectually robust, manageable for analysts, useful for portfolio managers, and credible for investors. For the full paper: AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #16: The State of AI Ethics

    Play Episode Listen Later Sep 8, 2020 29:46


    "It has never been more important that we keep a sharp eye out on the development of this field and how it is shaping our society and interactions with each other." — Abhishek Gupta, Montreal AI Ethics Institute The Montreal AI Ethics Institute is an international, non-profit research institute dedicated to defining humanity’s place in a world increasingly characterized and driven by algorithms. They do this by creating tangible and applied technical and policy research in the ethical, safe, and inclusive development of AI. The Institute's goal is to build public competence and understanding of the societal impacts of AI and to equip and empower diverse stakeholders to actively engage in the shaping of technical and policy measures in the development and deployment of AI systems. For the full report: The State of AI Ethics *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #15: We Need to Talk about A.I.

    Play Episode Listen Later Sep 3, 2020 34:14


    "My key takeaway would be that there is an urgency around the conversation. It doesn't matter how far away AGI is. AI is having an impact now. It's going to continue to have an impact. It's going to affect our lives. It's going to affect the lives of our children. We need to have the conversation now because we don't actually know how much time we have before it's too late to have the conversation." — Leanne Pooley Leanne Pooley has been a documentary filmmaker for over 25 years and has directed films all over the world. In 2011 Leanne’s work was recognised by the New Zealand Arts Foundation and she was made a New Zealand Arts Laureate. Leanne was named an “Officer of the New Zealand Order of Merit” for Services to Documentary Filmmaking in the 2017 New Year’s Honours List and she is a member of The Academy of Motion Picture Arts and Sciences (The Oscars). Leanne is the director of the recently released documentary WE NEED TO TALK ABOUT A.I. for Universal Pictures and GFC Films. The documentary explores the existential risk and exponential benefits of Artificial General Intelligence. Leanne has served as a judge for the International Emmy Awards, is a voting member of the Documentary Branch of the Academy of Motion Picture Arts and Sciences (The Oscars), has extensive teaching experience and has published several articles on documentary filmmaking. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #14: Emotion AI: A Scientist’s Quest to Reclaim our Humanity

    Play Episode Listen Later Aug 31, 2020 39:48


    "Perhaps if ethics had been a mandatory part of the core curriculum of computer scientists, these companies wouldn't have lost the public trust in the way they have today." — Rana el Kaliouby A pioneer in Emotion AI, Rana el Kaliouby, Ph.D. (@Kaliouby), is Co-Founder and CEO of Affectiva, and author of the newly released book Girl Decoded: A Scientist's Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology. A passionate advocate for humanizing technology, ethics in AI and diversity, Rana has been recognized on Fortune's 40 Under 40 list and as one of Forbes' Top 50 Women in Tech. Rana is a World Economic Forum Young Global Leader and a newly minted Young Presidents' Organization member, and co-hosted a PBS NOVA series on AI. Rana holds a Ph.D. from the University of Cambridge and a post-doctorate from MIT. In this podcast, Rana shares her journey as she follows her calling – to humanize our technology and how we connect with one another. According to Rana, if the point of AI was to design smarter computers that could emulate human thought and decision making, our machines would need more than pure logic. Like human beings, they would need a way to interpret and process emotion. *** For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #13: Humans and AI: Why Upskilling is Key

    Play Episode Listen Later Aug 24, 2020 20:29


    "My humble recommendation is to empower ourselves through upskilling and having an open mindset to become a life-long learner" — Carolyn Chin-Parry Carolyn Chin-Parry is the current Woman of the Year from the Women in IT Asia Awards. She is a Managing Director and Digital Innovation Leader at PwC Singapore and also leads PwC's Asia Pacific Digital Upskilling Initiative for 84,000 employees in the region. Carolyn is an active contributor to PwC Singapore's Diversity & Inclusion Committee and provides pro bono digital upskilling for charities, NGOs and social enterprises. She is a Board Director for a charity, the Digital Industry Vice Chair for the Australian Chamber of Commerce in Singapore, and sits on the Advisory Boards for the Australian Institute of Company Directors, She Loves Data (non-profit) and EGN. Carolyn is a former Chief Digital Officer and has led some of the largest transformation projects in Asia Pacific across multiple industries. She has previously been featured by The Economist, CIO Magazine, Standard Chartered Bank, Microsoft, IBM, Nomura, GovTech, SGInnovate and many more. In her free time, Carolyn enjoys time with her young family and actively researches technology to help under-represented communities. In this podcast, we discussed pressing topics relating to how the current challenging times have impacted: the Future of Work; the Future of the Workplace; the casual workforce, which is disproportionately female (and what we can all do to help under-represented communities); and how humans and AI have their own roles to play in the future and why upskilling is key. You can get in touch with Carolyn on LinkedIn. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #12: How to build trust in AI

    Play Episode Listen Later Aug 4, 2020 48:05


    "Trust needs to be built" — Dr Antonio Feraco In this conversation, we focused on what processes are available to encourage trust in AI systems. Antonio has a lot to share in this area, having been active in the space as TÜV SÜD explores the development of a quality framework. This is a significant contribution to the development of a future certification or accreditation process for AI systems. Dr Antonio Feraco is Managing Consultant for Industry 4.0 at TÜV SÜD. He is responsible for supporting the process manufacturing sector in adopting technologies within the Industrial Internet of Things and Industry 4.0 space to enable end-to-end integration, improve HSE and optimize efficiency. With a PhD in Artificial Intelligence, an MSc in Industrial Engineering and a PMP®, Antonio has run successful projects in IoT and Industry 4.0 for oil and gas, mining and pharma, and has joined several large international initiatives in the EU and ASEAN as a digitisation and Industry 4.0 expert. He previously worked in the R&D, consultancy and technology advisory sectors and has comprehensive experience in AR and VR, AI, business process optimisation, project management, energy efficiency and robotics. Antonio has also been an Adjunct Professor of Innovation Process Management at the University of Vitez since 2013. He also delivers talks for industry and academia on both technical topics, such as predictive maintenance, and non-technical ones, like management of innovation processes and technology transfer strategies. You can get in touch with Antonio on LinkedIn or by email: Antonio.FERACO@tuv-sud.sg. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #11: The Future of the Financial Industry in Light of AI

    Play Episode Listen Later Jun 15, 2020 51:26


    "We are going to see an accelerated shift from the west to the east as we come out of this pandemic" — Scott Bales In this conversation, we covered a wide range of topics: from how technology startups need to manage their initial purpose to the future of the financial industry in a post-pandemic world. Scott discussed the latest developments in Singapore in working towards becoming an AI hub and encouraging innovation. We discussed the potential for AI to bring a positive impact in the world. We also covered the challenges that arise from its development and how the collaboration between human and machine might unfold in the next few years, specifically the importance of education in shaping these changes. Scott is a technology enthusiast, senior executive and global leader in the cutting-edge arena known as 'The Digital Shift', encompassing innovation, culture, design, and technology in a digital world. As a trusted strategic advisor, Scott thrives on the intersection between cultural and behavioural changes in the face of technology innovations and how those reshape industries. Scott enjoys helping leaders navigate accelerated change and complexity in the digital economy, and as a 'Digital Warrior', Scott has found a way to mesh a fascination with people and what motivates them together with a raw enthusiasm for technology. In a world where technology reigns, one must practice what one preaches, and Scott does exactly that. He's a founding member of Next Money, a mentor to entrepreneurs across the world, sits on multiple boards and holds advisory positions at several startups. Scott previously worked as Chief Mobile Officer for Moven, the world's first-ever digital everyday bank, led Amazon Web Services' growth in enterprise, built MetLife's innovation lab LumenLab, and built multiple fintech ventures that have surpassed US$100 million valuations. As a multiple-time best-selling author, Scott has appeared at TEDx, Social Media Week, Google Think, Fund Forum, Asian Banker, Next Money and a long list of private events. His thought leadership has appeared in WIRED, Australian Financial Review and E27. You can get in touch with Scott here on Twitter, LinkedIn or find his latest book here. If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

    #10: Webinar Series: The Next Wave of Innovation — Led by COVID-19

    Play Episode Listen Later Jun 1, 2020 83:03


    In this episode, we are sharing our latest webinar, the second in a series of dialogues on COVID-19. This was an enlightening discussion as our panellists, Vincent Vuillard from FutureWork Studio and Michelle Hancic from Pymetrics, shared their views on how COVID-19 might fuel the next wave of innovation and its impact on the future of work. Here are some important topics that were highlighted during the webinar: 1. Working from Home vs Working Anywhere - the changing landscape of where and when work is done, and what it means for the future of the office. 2. Who owns the talent - COVID-19 has seen unprecedented collaboration between organisations, which is challenging the traditional mindset that employees belong to the organisation. 3. Diversity & Inclusion - we are seeing women and minority groups losing their jobs at a faster rate than men. How can we address these challenges? 4. Mindset - linear vs exponential shift, challenging the mindset around how work is done. Will COVID-19 result in a long-lasting change or will we fall back into old patterns? For the full recording of the panel, head over here: https://lnkd.in/gTASBrA Stay connected by signing up for our mailing list, following us on Twitter or sending us an email at contact@aiasiapacific.org.

    #9: Ethics, Privacy and Trust by Design

    Play Episode Listen Later May 26, 2020 49:51


    "Do what is preferable, not acceptable. Preference is greater than acceptance" — Nathan Kinch Nathan is the CEO of Greater Than X. He's the creator of Data Trust by Design and dabbles in startup investments when he can. Nathan has spent the bulk of his career grappling with the complexity and nuance of the rapidly evolving personal information economy. He's led work for governments, big tech, banks, telcos and startups, as well as research and policy institutes. He writes often and speaks at events all around the world. In this conversation, we discussed the challenges of navigating ethical principles and how organisations can increase their trustworthiness by designing appropriate frameworks and making social preferability the goal. Along the way, we covered definitions of trust and ethics and some of the current challenges arising at the intersection of COVID-19 and technology, specifically the challenge of using AI to fight and manage the virus while respecting privacy and other digital rights. Nathan proposed suggestions on how we can encourage trust within organisations, discussed the possibility of regulation and whether ethical frameworks can effectively have an impact.

    #8: Determining our Digital Future in the Age of AI and in the Midst of COVID-19

    Play Episode Listen Later May 11, 2020 41:06


    "We have a real opportunity at this turning point of the digital revolution to make sure that we remember alternatives are possible" — Lizzie O'Shea In today's episode, we discussed how different technologies are impacting us and how we can navigate these challenges in the age of AI. Lizzie discussed some of the technologies deployed to fight and manage COVID-19 and how we can use this challenging time to determine our digital future. Lizzie is a lawyer, writer, and broadcaster. Her commentary is featured regularly on national television programs and radio, where she talks about law, digital technology, corporate responsibility, and human rights. In print, her writing has appeared in the New York Times, Guardian, and Sydney Morning Herald, among others. Lizzie is a founder and board member of Digital Rights Watch, which advocates for human rights online. She also sits on the boards of the National Justice Project, Blueprint for Free Speech and the Alliance for Gambling Reform. At the National Justice Project, Lizzie worked with lawyers, journalists and activists to establish a Copwatch program, for which she was a recipient of the Davis Projects for Peace Prize. In June 2019, she was named a Human Rights Hero by Access Now. As a lawyer, Lizzie has spent many years working in public interest litigation, on cases brought on behalf of refugees and activists, among others. She represented the Fertility Control Clinic in its battle to stop the harassment of its staff and patients, as well as the Traditional Owners of Muckaty Station in their successful attempt to stop a nuclear waste dump being built on their land. Lizzie's book, Future Histories, looks at radical social movements and theories from history and applies them to debates we have about digital technology today. It has been shortlisted for the Premier's Literary Award. When we talk about technology we always talk about the future—which makes it hard to figure out how to get there. In Future Histories, Lizzie O'Shea argues that we need to stop looking forward and start looking backwards. Weaving together histories of computing and social movements with modern theories of the mind, society, and self, O'Shea constructs a "usable past" that helps us determine our digital future.

    #7: Ethicability: How to Decide What's Right and Find the Courage to Do it

    Play Episode Listen Later Mar 25, 2020 43:56


    "I don't believe that AI is ever going to be capable of resolving ethical dilemmas in a way that we can all agree about" — Roger Steare Professor Roger Steare is internationally recognized as one of the leading experts advising boards and executive teams on building high-performing, ethical organizations. His work with BP after the Gulf of Mexico disaster has been crucial to the company's recovery plan, with Roger's decision-making framework and leadership training endorsed within the US Department of Justice Consent Agreement of 2016. He has advised Barclays, HSBC, Lloyds Bank and RBS after the credit crisis, PPI mis-selling and Libor manipulation scandals, with his work publicly endorsed by the Financial Conduct Authority. He is the author of "ethicability" and "Thinking outside the inbox", and co-designer of MoralDNA, a psychometric profile that measures moral values and decision-making preferences and has a database of over 70,000 people from more than 200 countries. He has been described as a "disruptive", "provocative" and "world-class" keynote speaker on leadership, culture and ethics. His work has also been profiled in The Times, the Financial Times, The Wall Street Journal, Les Echos and The Guardian. This conversation covered a wide range of topics, from the ethical challenges of AI and other technologies to superintelligence. Got questions or interested in sponsoring the podcast? Please email us at contact@aiasiapacific.org

    #6: AI to fight hiring bias

    Play Episode Listen Later Mar 1, 2020 34:29


    "In the day and age of Netflix, Spotify and Amazon – platforms that take in information about you and give you personalized recommendations that seem to know you better than you know yourself – where was the equivalent for jobs? Netflix's movie recommendations are not based on their 'back of the movie' blurbs. Instead, they analyze movies based on deep analysis of traits and then match you based on the traits you like in movies. So why are we still evaluating people based on their 'blurbs,' i.e. their resumes? Why was no one applying this powerful technology to help us make one of our most important decisions – what we do with our careers?" — Frida Polli, Pymetrics CEO We know that algorithms have a tendency to mirror the biases of society. In this episode, we talked with Michelle Hancic, Head of Industrial and Organisational Psychology for APAC at Pymetrics, about what the company is doing to bring solutions in this area and what the future holds for AI in HR and other industries. Pymetrics, a New York start-up, is working to eliminate the "interview bias" and "educational pedigree bias" inherent in the current recruitment process. It makes software to help companies evaluate job applicants, replacing flawed methods like campus recruiting and résumé screens with a series of neuroscience-based games that are intended to be nondiscriminatory.
