In this episode of No Brainer, hosts Geoff Livingston and Greg Verdino chat with Kate O'Neill about leadership, human-friendly AI decision making, and her new book What Matters Next. Kate is the founder and CEO of strategic advisory firm KO Insights, host of The Tech Humanist Show, a highly acclaimed global keynote speaker, and a four-time business author. Her work earned her a coveted spot on the Thinkers50 Radar in 2020, and she was shortlisted for the Thinkers50 Digital Thinking Award in 2023. Drawing on the key themes from her latest book and her wider body of work, Kate emphasizes the need for leaders to balance action and inaction, approach the future as an extension of the decisions we make today, align technology with human experience, and maintain a clear purpose in their organizations. The discussion also touches on the role of strategic optimism in shaping a better future through informed decision-making.

Links:
· About Kate: https://www.koinsights.com/about-kate/
· Connect with Kate: https://linkedin.com/in/kateoneill
· What Matters Next: https://www.koinsights.com/books/what-matters-next-book/
· About KO Insights: https://koinsights.com
· The Tech Humanist Show: https://thetechhumanistshow.com

Chapters
00:00 Intro
02:41 What's a Tech Humanist?
05:30 Navigating Human-Friendly Technology Decisions
08:08 The Impact of Deregulation on Technology Decisions
10:42 Balancing Action and Inaction
13:26 The Role of Purpose in Technology Decisions
25:21 Aligning Technology with Purpose
27:34 Articulating Purpose for Digital Transformation
29:37 Integrating Technology into Business Strategy
32:22 The Role of Human Workers in an AI-Driven World
33:59 Balancing Profit and Human Value
36:43 Strategic Optimism and Decision Making
Technology is changing at a rapid rate. AI is changing at a rapid rate. How do you keep up with not only the technology, but use it to thrive? How can all of this change feel optimistic? Well, today the team is joined by the perfect person to help us see this vision. Known globally as the "Tech Humanist," Kate O'Neill is a leading voice on AI ethics, responsible tech, and human-centric digital transformation. As a sought-after expert, she advises Fortune 500 leaders and speaks at high-profile events worldwide on navigating our tech-driven future while prioritizing human experiences.

Some other things Kate is known for:
- Author of six books, including the forthcoming "What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast"
- Shortlisted for the prestigious Thinkers50 Digital Thinking Award
- Featured expert on BBC, WIRED, NPR, and other major media outlets
- Pioneering roles: first content management position at Netflix, built Toshiba's first intranet

In this episode of the Magical Learning Podcast, the team and Kate discuss the evolving landscape of leadership in the context of rapid technological change. Kate shares insights on building adaptable and resilient leadership, the challenges leaders face in a fast-paced environment, and the importance of clarity in decision-making. The conversation also touches on the significance of slowing down to gain insights, navigating the complexities of technology and human needs, and overcoming imposter syndrome in leadership roles.
Kate emphasizes the need for leaders to develop effective learning processes and to invest in human-centric technology solutions.

To grab a copy of Kate's book "What Matters Next": https://www.koinsights.com/books/what-matters-next-book/
To find Kate or reach out:
https://www.koinsights.com/
https://www.linkedin.com/in/kateoneill/

Chapters
00:00 Welcome to the Magical Learning Podcast
03:07 Introduction of Guests and Their Backgrounds
05:04 Building Adaptable and Resilient Leadership in a Rapidly Changing Tech Landscape
10:47 Challenges of Rapid Change in Technology
14:51 The Intersection of Technology and Human Needs
18:21 Navigating a Complex World of Change
19:01 Navigating Rapid Change in Tech and Pop Culture
21:38 The Power of a Learner's Mindset
25:19 Building Effective Learning Frameworks
29:44 Leveraging Relationships for Learning
30:29 Imposter Syndrome in Leadership
35:25 Visionary Leadership and Future Planning

All Magical Learning podcasts are recorded on the beautiful lands of the Kulin, Ngunnawal and Wiradjuri nations, and we pay our respect to their elders past and present. As always, if you are having trouble, you can always send us a message.

Listen to/watch this podcast here: https://open.spotify.com/show/128QgGO....

To find out more about our free content, sign up for future webinars as well as our other services, go to https://magicallearning.com/ and sign up!

You can also find us on our socials:
Instagram: / magical_learning
Facebook: / magicallearningteam
Linkedin: / magicallearning
Youtube: / @magicallearning

Have a Magical week!
Kate O'Neill is a strategic advisor for Google, Yale, the UN, and more. One of the first 100 employees at Netflix, she teaches leaders how to leverage tech with empathy to deliver maximum value to customers and shareholders. Her new book is titled "What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast". Learn more about Kate at https://www.koinsights.com.
After winning the prestigious New York Digital Award in 2024, Redefining AI returns with an electrifying Season Four! Join your host Lauren Hawker Zafer, on behalf of Squirro, the Enterprise Gen AI Platform, as we embark on another season of groundbreaking conversations. Episode two guides us into a conversation with Kate O'Neill on What Matters Next: Navigating Leadership and Human-Centered Tech.

Who is Kate O'Neill? Kate is founder and CEO of KO Insights, a strategic advisory firm that improves human experience at scale — especially in data-driven, algorithmically optimized, AI-led interactions. Her clients and audiences include Adobe, the city of Amsterdam, the city of Austin, Cambridge, Coca-Cola, Etsy, Getty Images, Google, Harvard, IBM, McDonald's, Microsoft, the United Nations, Yale, and Zoom. Before starting KO Insights, Kate was one of the first 100 employees at Netflix, a technologist at Toshiba, and founder of the groundbreaking analytics firm [meta]marketer.

Kate has received awards and wide recognition. She was named “Technology Entrepreneur of the Year,” a “Power Leader in Technology,” and a “Woman of Influence,” and was featured by Google in the launch of their global campaign for women in entrepreneurship. Her insights have been featured in the New York Times, The Wall Street Journal, and WIRED, and she has appeared as an expert tech commentator on the BBC and NPR. Known for her ability to make complex topics relatable, Kate is a sought-after keynote speaker, appearing at conferences and corporate events, and has spoken to hundreds of thousands of audience members worldwide. She's written six books, including four on business strategy and technology: Tech Humanist, Pixels and Place, A Future So Bright, and What Matters Next.

Why listen? This episode of Redefining AI is a must-listen for leaders, innovators, and tech enthusiasts navigating the fast-paced world of artificial intelligence and digital transformation.
Featuring renowned tech strategist and author Kate O'Neill, we dive into essential topics like ethical AI, human-centered leadership, informed decision-making, and aligning business goals with meaningful human experiences. Kate shares actionable strategies from her new book, What Matters Next, offering frameworks like the "Now/Next Continuum" and generative thinking to help leaders balance innovation with responsibility. Packed with real-world examples and practical insights, this episode is perfect for anyone looking to future-proof their organization, embrace ethical technology, and lead with impact in 2025. Stream it now on Spotify to stay ahead in the AI-driven business landscape!
After winning the prestigious New York Digital Award in 2024, Redefining AI returns with an electrifying Season Four! Join your host Lauren Hawker Zafer, on behalf of Squirro, the Enterprise Gen AI Platform, as we embark on another season of groundbreaking conversations. Spotlight Two teases us into a conversation with Kate O'Neill on What Matters Next: Navigating Leadership and Human-Centered Tech.

Who is Kate O'Neill? Kate is founder and CEO of KO Insights, a strategic advisory firm that improves human experience at scale — especially in data-driven, algorithmically optimized, AI-led interactions. Her clients and audiences include Adobe, the city of Amsterdam, the city of Austin, Cambridge, Coca-Cola, Etsy, Getty Images, Google, Harvard, IBM, McDonald's, Microsoft, the United Nations, Yale, and Zoom. Before starting KO Insights, Kate was one of the first 100 employees at Netflix, a technologist at Toshiba, and founder of the groundbreaking analytics firm [meta]marketer.

Kate has received awards and wide recognition. She was named “Technology Entrepreneur of the Year,” a “Power Leader in Technology,” and a “Woman of Influence,” and was featured by Google in the launch of their global campaign for women in entrepreneurship. Her insights have been featured in the New York Times, The Wall Street Journal, and WIRED, and she has appeared as an expert tech commentator on the BBC and NPR. Known for her ability to make complex topics relatable, Kate is a sought-after keynote speaker, appearing at conferences and corporate events, and has spoken to hundreds of thousands of audience members worldwide. She's written six books, including four on business strategy and technology: Tech Humanist, Pixels and Place, A Future So Bright, and What Matters Next.

Why listen? This episode of Redefining AI is a must-listen for leaders, innovators, and tech enthusiasts navigating the fast-paced world of artificial intelligence and digital transformation.
Featuring renowned tech strategist and author Kate O'Neill, we dive into essential topics like ethical AI, human-centered leadership, informed decision-making, and aligning business goals with meaningful human experiences. Kate shares actionable strategies from her new book, What Matters Next, offering frameworks like the "Now/Next Continuum" and generative thinking to help leaders balance innovation with responsibility. Packed with real-world examples and practical insights, this episode is perfect for anyone looking to future-proof their organization, embrace ethical technology, and lead with impact in 2025. Stream it now on Spotify to stay ahead in the AI-driven business landscape!
The pioneering force in tech-humanism, Kate O'Neill comes back on Thrive LouD. As a global keynote speaker, author, and podcaster, Kate brings her deep expertise and optimistic approach to shaping humanity's future in our increasingly tech-driven world. Join host Lou Diamond and Kate O'Neill as they dive into her latest book, "What Matters Next." Discover how Kate combines strategic optimism and innovative frameworks to help leaders make human-friendly tech decisions confidently in an era defined by rapid technological change. This episode is an essential listen for anyone seeking to navigate the complex landscape of modern technology and its implications on business and leadership. Kate shares insights about the speed of tech adaptation, the common fears leaders face, and introduces the critical concept of balancing the 'harms of action' versus the 'harms of inaction.' She offers valuable tools and strategies for leaders who might feel daunted by these rapid advancements, providing a visionary approach to future-ready decision-making. Tune in as Kate O'Neill illuminates the path to confidently steering through tech-driven changes and ensuring our decisions today lead to a better, more human-centric tomorrow. Don't miss this thought-provoking and uplifting discussion that promises to inspire and equip you to thrive in the ever-evolving tech landscape!
Kate is a digital innovator, chief executive, business writer, and a globally recognized speaker known as the "Tech Humanist." She is the founder and CEO of KO Insights, a strategic advisory firm that enhances human experiences at scale through data-driven and AI-led interactions. Kate has worked with prestigious clients like Google, IBM, Microsoft, and the United Nations, and she was one of the first 100 employees at Netflix. The Tech Humanist returns to the podcast to talk about building relationships, decision making, AI, and much more!
The future of technology is rapidly changing the world of work, which has made many people fear that technology will replace them. But you don't have to be afraid, because it's not about the robots and remote work; it's about YOU. In today's episode, Kate O'Neill, the author of Tech Humanist and the founder of KO Insights, delves into how human-centric digital transformation shapes the workplace and upskills humans. Explore the ethical implications of AI automation and how to ensure technology serves humanity. Also, hear Kate's insights on using tech to solve the world's biggest challenges – with a human touch. Today's conversation is something you don't want to miss! Join Kate O'Neill and be inspired to shape your future in a technology-driven world.

Check out the full series of "Career Sessions, Career Lessons" podcasts here or visit pathwise.io/podcast/. A full written transcript of this episode is also available at https://pathwise.io/podcasts/the-interconnected-future-of-technology-and-humanity-with-kate-oneill.

Become a PathWise member today! Join at https://pathwise.io/join-now/
" the best way to solving human problems at scale is to focus on what we CAN do, and make sure we are intentionally working to get there"Kate and I delve into the future world of tech, exploring trends and different technology and human enabled ways of meeting business objectives in today's world. When it comes to alignment, it is difficult to bring business, human and digital strands together, in particular in terms of big data and AI, and many organisations do not understand the strands well enough yet. We touch on responsible tech, bigger societal issues and the need to be clear and intentional about purpose and ethics in a world that is becoming more complex by the minute as technology connects us to everything in every way ! We must invest in building trust and repairing division, interacting with people in person, hearing and listening to others. Emerging tech brings with it enormous capacity and scale, but what do we want to scale? How do leaders and organisations answer this question with purpose and optimism, to bridge the digital/human gap intelligently ? Kate shares her research, insights and experience from her books and from working with leaders all over the globe. The main insights you'll get from this episode are : - We have an ancient fear of tech taking over our lives/humanity, but it is really a means to meet business objectives; business leaders must align their objectives with human objectives and outcomes and use the alignment to build tech around them.- When it comes to alignment, it is difficult to bring business, human and digital strands together, in particular in terms of big data and AI, and many organisations do not understand the strands well enough, e.g. 
C-suite human dynamics.- Many leaders do not know how to act appropriately in the face of AI – when any deployment could be out of date within months – but it is far less about tech and far more about aligning the organisation, which will outlast any tech deployment.- Transformation is not led by tech but by strategy based around alignment; it is about serving people well during transformation by having a strategy that begins with organisational purpose – this is a useful north star for organisations and ultimately a very human concept.- What we do in business is driven by what we want to accomplish and what matters; innovation is what is going to matter and shows us what we need to do to get to a future we want – experimenting with new tech is good, but it should not lead anything.- Tech for good and responsible tech are on the rise and have seen many different efforts, e.g. hackathons to create tools and systems to serve people, civic tech to help people; tech ethics looks at how businesses deploy tech in support of their products/services in a responsible way to avoid unintended consequences and harm to downstream communities.- It is vital not to abandon ethical concerns as AI is on the rise and to align business objectives with responsible action. The UN's sustainable development goals (SDG) can be used as a roadmap for a better, brighter future and to improve life for everyone on the planet.- Responsible tech needs to become as important as DE&I but it is currently often just a talking point rather than an action plan, but it is at least the start of discourse. It is a challenging time for making big decisions in a changing technology landscape and we must consider the future for bankable foresights.- Within organisations, there must be individual personal agency, speaking truth to
Known as the “Tech Humanist”, Kate O'Neill is the founder and CEO of KO Insights, a strategic advisory firm committed to improving the human experience at scale through more meaningful and aligned strategy. Among her prior roles, she created the first content management role at Netflix, developed Toshiba America's first intranet, and founded [meta]marketer, one of the first digital strategy and analytics agencies. Kate has appeared as an expert tech commentator on BBC, NPR, and a wide variety of international media, and her written insights have been featured in WIRED, CMO.com, and many other outlets.

Through KO Insights, Kate speaks, writes, advises, and advocates on a range of strategic challenges and ethical issues: big data, privacy, emerging tech trends in retail and other industries, intelligent automation and the future of work, digital transformation due to COVID-19, the role of technology in dealing with climate change, managing change at exponential scale, and more. Kate's research, writing, speaking, and advocacy all concentrate on the impact of data and emerging technologies on current and future human experiences — from both a business perspective, in terms of innovation and digital transformation strategy, and a general perspective, in terms of humanity overall. Her approach is consistently “both/and”: business-savvy and human-centric. She advises business and civic leaders on building data-led and technology-driven human experiences that are respectful as well as successful, and helps people overall understand the impact of the data and emerging technologies affecting their lives more and more each day. Her books have included “Tech Humanist” and “Pixels and Place,” as well as her latest, “A Future So Bright,” which launched in September 2021.
https://www.linkedin.com/in/kateoneill
https://www.koinsights.com
https://twitter.com/kateo

Connect & Follow us at:
https://in.linkedin.com/in/eddieavil
https://in.linkedin.com/company/change-transform-india
https://www.facebook.com/changetransformindia/
https://twitter.com/intothechange
https://www.instagram.com/changetransformindia/

Listen to the Audio Podcast at:
https://anchor.fm/transform-impossible
https://podcasts.apple.com/us/podcast/change-i-m-possibleid1497201007?uo=4
https://open.spotify.com/show/56IZXdzH7M0OZUIZDb5mUZ
https://www.breaker.audio/change-i-m-possible
https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xMjg4YzRmMC9wb2RjYXN0L3Jzcw
Widely known as the “Tech Humanist”, Kate O'Neill is founder and CEO of KO Insights, a strategic advisory firm committed to improving human experience at scale through more meaningful and aligned strategy. Kate sits down with Dan Pontefract on this episode to discuss the link between technology and our humanity. It's must-see viewing. Kate is helping humanity prepare for an increasingly tech-driven future and is doing so through her signature strategic optimism. Among her prior roles, she created the first content management role at Netflix, developed Toshiba America's first intranet, and founded [meta]marketer, one of the first digital strategy and analytics agencies. Kate has appeared as an expert tech commentator on BBC, NPR, and a wide variety of international media, and her written insights have been featured in WIRED, CMO.com, and many other outlets. She has worked with global companies such as Google, Etsy and Cisco to optimise the role technology plays in the modern world. Her books have included “Tech Humanist” and “Pixels and Place,” as well as her latest, “A Future So Bright,” which launched in September 2021.
“What does it mean for people if their jobs are threatened by a sense of automation sort of encroaching on the tasks that make up their job?” Debbie talks with Kate O'Neill about the automation of work and how to adapt. This episode shares insightful ways to think about where automation is useful and where it isn't.

Follow Kate on LinkedIn
Today we talk about Optimism. No, I am not referring to how to be all positive all the time and all that stuff. I am referring to how you, as a tech leader, expert, or consultant, can approach the future with a different lens. A lens of possibilities. My guest is Tech Humanist Kate O'Neill (https://www.koinsights.com/). If you are asking what a Tech Humanist is, then listen to this episode, as she will explain. She is also the author of the books Tech Humanist and A Future So Bright. Her goal is to help humanity prepare for an increasingly tech-driven future. Today we talk about how to prepare to become more optimistic about this future and, if you already are, how to instill this optimism in others.

For more resources, subscribe to the newsletter via https://www.hardcoresoftskillspodcast.com/ If you are ready to enhance the skills you need to advance your career or the performance of your team, visit https://www.hardcoresoftskillspodcast.com/support/ Connect with me via LinkedIn at https://www.linkedin.com/in/yadiraycaro/ .
Hello and welcome to The Tech Humanist Show! In this introductory episode, host Kate O'Neill explains what a tech humanist is and what you can expect from future episodes. Guests on this episode include Emma Bedor Hiland, Oluwakemi Olurinola, Dorothea Baur, Rumman Chowdhury, Chris Gilliard, and Rahaf Harfoush. The Tech Humanist Show is a multi-media format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. To watch full interviews with past and future guests, or for updates on what Kate O'Neill is doing next, subscribe to The Tech Humanist Show channel on YouTube.

Transcript

Hello humans, and welcome to The Tech Humanist Show! In this introductory episode, I'll explain what a tech humanist is, and what you can expect from future episodes. A Tech Humanist, as I've coined it, is a person who sees the exciting opportunities technology offers humanity, while remaining cautious and conscious of the potential risks and harms those technologies bring. It isn't the same thing as a techno-utopian, who believes technology will inevitably bring about a utopia in the future, or a techno-solutionist, who believes technology is the solution to all our problems. Instead, a tech humanist believes that when we design technology, we have to think of humanity first and foremost, and remain active and diligent in making technology work better for all people. Here are a few clips from some of the experts I've spoken with for The Tech Humanist Show who sum it up well.

Emma Bedor Hiland: “I do actually identify as a tech humanist, because I am optimistic about what technologies can do, and offer, and provide, and the ways they might be utilized to enhance human flourishing, especially in health spaces and including the mental healthcare space, too.
I just think we also need to be realistic about what technology can do, and the ways that technologies are deployed which might cause us harm.”

Oluwakemi Olurinola: “I actually like the humanist put beside the tech. Since I advocate for empathy and social and emotional learning while we also train on the digital skills, I am a tech humanist.”

Dorothea Baur: “I'm proud to be a humanist! I believe that there is something distinctive about humans that we need to keep alive. One of the biggest achievements is that, like, 200 and, y'know, 40 years ago, when the Enlightenment set in, where we said, ‘hey, people, dare to use your own minds!' it was like a wake-up call, because we didn't really make an effort to explore the world because we thought everything was determined by God. By stepping out of this dependency and using our own brains, we liberated ourselves. And so now, are we taking it too far? Have we used our brains so far that we're eventually training machines that are smarter than us and they're kind of imposing their decisions again upon us, and not just imposing their decisions on us, but also imposing decisions that are equally as intransparent as God's decisions, if you look at certain algorithms. We cannot delegate our responsibility to machines! We can use machines to improve our health, and our well-being, etc., to improve the world, but we cannot entirely delegate responsibility to machines.”

Right now, we're seeing massive shifts in the way humans live and interact with technology, which makes tech humanism more important than ever. To maintain our agency, we need to work together to fight bias in our algorithms, make sure we think of the user experience and how technology affects us, and consider the role humans play in a world that is becoming increasingly automated.

Dr. Rumman Chowdhury: “I recognize and want a world in which people make decisions that I disagree with, but they are making those decisions fully informed, fully capable.
Whether it's being able to derive meaning from the systems we've created, or understanding what our meaning is, or what our purpose is as a human being, and not having that be shaped or guided by other forces.”

Dr. Chris Gilliard: “We really need to think about the effects of these things. Like, what are the potential harms of this thing? Before you put it out, right? When [REDACTED] came out and said, ‘we had no idea that people would use it to spread racism and misogyny!' …they could have done that work, right? One of the things I've seen that does give me a little bit of hope is that there are more and more people not only saying that we have to do that work, but being inside these companies and actually holding them accountable for doing it.”

Rahaf Harfoush: “For me I think the reality is that everything that has the capacity to help us can also simultaneously hurt us in some new and different ways. I don't necessarily think about what's gonna help humanity, I think about, ‘what challenges are gonna emerge from this technology, and how can we navigate that?'.”

The first season of this podcast featured a number of interviews with some of today's top thinkers, experts, and educators in the field of technology, with one guest interview per episode. From season 2 onward, every episode will instead focus on a key area of the intersection of technology and humanity, and the ways technology is changing and shaping the human experience. Each episode will feature multiple guests, with clips pulled from season one as well as brand-new interviews that haven't been and won't be released on the podcast. Together, we'll be tackling big ideas about how to make the future a brighter place for everyone. The guests you heard in this episode were, respectively, Emma Bedor Hiland, Oluwakemi Olurinola, Dorothea Baur, Rumman Chowdhury, Chris Gilliard, and Rahaf Harfoush.
You can hear more from them and all my guests in past and future episodes of the podcast, or find full interviews at TheTechHumanist.com.
In this episode, Kathleen Cohen, an XR Immersive Tech & Experience Strategist and Tech Humanist at The Collaboratorium, dives into her work in XR storytelling. Having just completed an art residency as part of the University of Idaho's Virtual Technology & Design Research Lab focused on tech humanism, Kathleen asked questions inside virtual worlds focusing on indigenous peoples in Northern Idaho and neurodivergent individuals in Boise, exploring questions like, “What makes up you?”, “What is real?”, and “Can creating identity in a virtual space preserve and better clarify identity in that space and/or for us as a physical community?” Learn more about her work at https://www.kathleencohen.com/
Kate O'Neill is the author of “A Future So Bright,” a book that argues that the best way to confront challenges and build a better tomorrow is to allow ourselves to envision the brightest future possible, while at the same time acknowledging the ways the future could go dark and working to prevent them from happening. Widely known as the “Tech Humanist,” Kate is helping humanity prepare for an increasingly tech-driven future with her signature strategic optimism. Kate is also the founder and CEO of KO Insights, a strategic advisory firm committed to improving human experience at scale. As a professional global keynote speaker, Kate regularly speaks with leadership audiences around the world, exploring how data and emerging technologies like AI are shaping the future of human experiences, and advocating with her signature strategic optimism for humanity's future in an increasingly tech-driven and exponentially-changing world. Her clients and audiences include many Fortune 500 and World's Most Admired companies and brands, including tech giants like Google and IBM, household-name brands like Coca Cola and Colgate, future-forward cities like Amsterdam and Austin, top universities like Cambridge and Yale, and even the United Nations. Read the show notes here: https://bwmissions.com/one-away-podcast/
Kate O'Neill is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale. In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist'. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.

Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-side-ism'. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change, and actively architect our best human experience – now and in the future.

A transcript of this episode can be found here.

Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don't miss it.
Kate O'Neill is founder and CEO of KO Insights, a strategic advisory firm committed to improving human experience at scale. Kate is widely known as the “Tech Humanist.” She is helping humanity prepare for an increasingly tech-driven future with her signature strategic optimism. Her latest book, A Future So Bright: How Strategic Optimism Can Restore Our Humanity and Save the World, is EXACTLY what we all need right now. Hit the play button to hear this amazing conversation as Kate catches up with Lou Diamond. Click here to watch the video of their LIVE recording of this interview (from 9/9/21). Click the image below to grab yourself a copy of "A Future So Bright". CONNECT TO LOU DIAMOND & THRIVE LOUD
Have you ever wondered what it means to be a humanist in the age of technology? How can we put human values into a machine? How can we even know what those human values are? We asked Kate O'Neill, founder of KO Insights and author of Tech Humanist, this question and found that there's a lot to work with when it comes to understanding humans and what they might want their machines to do. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live in a live-streamed video program before it's made available in audio format. Hosted by Kate O'Neill. About this week's guest: Art Chang is a mayoral candidate for the city of New York. The son of Korean immigrants and father of two boys, Art Chang has spent the last 35 years working as a professional problem solver in NYC. He's built a dozen startups in the city, all focused on using technology as a force for good. He built Casebook, the first web-based software platform for child welfare, which is now the system of record in the State of Indiana. He put Queens West (the LIC waterfront) in the ground with climate change in mind, making it one of only two developments in the city not to lose power during Hurricane Sandy. He also co-created NYC Votes with the Campaign Finance Board to improve participation in our local democracy. He has had the privilege to work at some of NYC's most important institutions, such as CUNY, the Brooklyn Public Library, the City Law Department, and Brooklyn Tech, giving him the tools and knowledge to make real solutions for the challenges faced by the people of New York City. Visit www.Chang.nyc to learn more and join #TeamChang. He tweets as @Art4MayorNYC. This episode streamed live on Thursday, June 17, 2021.
Manuel Alzuru is a humanist, solarpunk, and engineer deeply embedded in the blockchain world. He's also a DAO + governance enthusiast, data-driven tech optimist, and on a mission to create tools for the creator and giving economy through new social platform Doin Gud. We traverse the topics of:
- Manu's journey into Web3
- Optimizing human and social capital
- NFTs as the new internet 'cookies' + attached to identities
- Manu's learnings after catching COVID in early 2020 (FightPandemics)
- Using NFTs and crypto to enact social change in the world
- Leveraging DAOs as a tool for human coordination
- Where the 'crypto renaissance' is headed
- Transparency and cancel culture
- Doin Gud and smiling more
Tune in, share with your amigos, and #tiltwithus. By the way, the SUPER rad new sonic
Kate O'Neill is known as the Tech Humanist. She is the founder of KO Insights, a strategic advisory firm committed to improving the human experience at scale, even and especially in data-driven, algorithmically optimized, and AI-led interactions. Kate regularly keynotes industry events, advocating for humanity's role in an increasingly tech-driven future. Her world-leading clients have included Google, Adobe, IBM, Yale University, the city of Amsterdam, and the United Nations. In This Episode:
- The intersection between technology, business, and humanity
- Always being open to change
- The both/and mindset
- Finding gratitude after major loss
Kate founded {meta} Marketer in 1999, developed Toshiba America’s first intranet, is the author of 4 books including her latest, “Tech Humanist” (which was featured at CES 2019), and was recently named to the 2020 Thinkers50 Radar, a global ranking of management thinkers. Episode Timeline
6:00 And so it begins...
8:45 I got indoctrinated into MUDs
11:00 I remember getting tingles on the back of my neck!
14:05 What made you want to be a part of the Internet?
16:00 Toshiba recruited me to come to San Francisco
19:15 The next interesting pivot was Netflix... this is '98
22:15 I was still driven by curiosity about everything!
25:00 "Relevance" was what we were going for
27:15 Relevance is a form of respect
29:00 Discretion is too... Balance how much we know about you
33:00 What got your juices going in those early days?
36:00 You could be a generalist
40:30 Impact of technology on people... The Future of Work
41:45 What have you retained from those early days?
45:25 I just became allergic to mangos!
46:30 You're going to love this...
49:30 I was at Magazines.com leading up customer experience...
52:20 So where are we now?
Today’s episode comes from Thom Singer’s wildly successful podcast Making Waves at C-Level, in which he interviews technologist, keynote speaker, and author Kate O’Neill about her outlook on tech, humanity, and careers. Kate is known as both the “Tech Humanist” and “Optimistic Futurist,” and is helping humanity prepare for an increasingly tech-driven future. Listen in as they discuss where to start your conversations around digital transformation and the steps you can take to always be advocating for both the user and the customer. On today’s podcast, you will learn about:
The connection to humanity in technology
- It is essential to remember the humans who will be using the products that technology creates.
- Company jargon should not be used when describing products to customers.
- Consider what customers are going to do with the product and align company strategies with that use.
- Kate shares her experience with merging technology and humanity at early Netflix.
Conversations leaders have about digital transformation
- Successful companies are consistently looking ahead to the next technological advancement.
- Identify what your company is trying to do as well as what customers want to do with your products.
Growing your ideal career
- Identify companies that interest you and take the initiative to reach out to them.
- Highlight the skills and expertise that you can bring to a company.
- If a position doesn’t exist yet, ask the company if they can create one for your expertise.
The powerful role of strategic optimism
- Strategic optimism will help identify the direction your company should be moving.
- Consider not only what could go wrong but what to do when things go right.
- Set yourself and your team up for success by creating contingencies for every situation, both bad and good.
“Tech Humanist: How You Can Make Technology Better for Business and Better for Humans” by Kate O’Neill Making Waves at C-Level Podcast with Thom Singer Do you have an example of extraordinary efforts or innovation during these unprecedented times? We would love to hear your story and possibly interview you for an upcoming episode. Please reach out to us at www.DigitalEnterpriseSociety.org.
Kate O’Neill, known as both the “Tech Humanist” and “Optimistic Futurist,” is helping humanity prepare for an increasingly tech-driven future. Her research, writing, speaking, and advocacy all concentrate on the impact of data and emerging technologies on current and future human experiences — from both a business perspective, in terms of innovation and digital transformation strategy, and a general perspective, in terms of humanity overall. The Pioneering Path to the Present Kate’s expertise in data-based business models, integrated experience strategy, and human-centric digital transformation comes from more than 25 years of experience and entrepreneurship leading innovations across technology, marketing, and operations in category-defining companies. She was one of the first 100 employees at Netflix, where she created the first content management role and helped implement innovative dynamic e-commerce practices that became industry standard; was founder & CEO of [meta]marketer, a first-of-its-kind analytics and digital strategy agency; developed Toshiba America’s first intranet; led cutting-edge experience optimization work at Magazines.com; and has held leadership and advisory positions in a variety of digital content and technology start-ups, consultancies, and agencies. Kate is now founder and CEO of KO Insights, a strategic advisory and consultancy firm committed to improving human experience at scale. Through KO Insights, Kate speaks, writes, advises, and advocates on a range of strategic challenges and ethical issues: big data, privacy, emerging tech trends in retail and other industries, intelligent automation and the future of work, digital transformation due to COVID-19, the role of technology in dealing with climate change, managing change at exponential scale, and more. Her approach is consistently “both/and”: business-savvy and human-centric. 
She advises business and civic leaders on building data-led and technology-driven human experiences that are respectful as well as successful, and helps people overall understand the impact of the data and emerging technologies affecting their lives more and more each day. https://thomsinger.com/podcast/kate-oneill
“Purpose is the shape meaning takes in business.” This should be the motto for 2021 for ALL companies. From Facebook to Google to The Wing, the entrepreneurial ecosystem has scaled bad practices and behaviors that have hurt many people. I don't think these founders had bad intentions. Quite the contrary. I truly believe that each of these founders had a vision for helping people that turned sour while scaling. How? We perpetuate growth for growth's sake to saturate markets quickly. While “intentions” may be to serve more people while yielding BIG returns, many of these companies forget to put the very people they claim to serve at the center and harm them in the process. Remember when Google used the motto “Don't be evil”? They don't use it anymore. As we forge into 2021, I hope all companies can get clear on who they serve and the purpose of serving them. Otherwise, why should what you do matter to us? Kate O'Neill, the Tech Humanist, will help you answer that very question. Here's what you'll learn:
- How companies scale unintended consequences.
- How to be a tech humanist.
- The 3 things we need to understand to scale technology that creates meaning and meaningful experiences.
- What to do when your company f*cks up.
- Best practices for centering humanity during scale through purpose.
Subscribe & Rate Now. #getsshitdonepodcast Learn more about Get Sh!t Done: shegetsshitdone.com Have feedback, a show topic you want us to cover, or just want to say hi: tribe@shegetsshitdone.com
About this week's guest: Futurist, speaker, and author Cathy Hackl is a globally recognized augmented reality, virtual reality, and spatial computing thought leader. She’s been named one of the top 10 Tech Voices on LinkedIn for two years in a row, the highest honor on the platform. She currently works as part of the Enterprise team at one of the industry's top OEMs. Prior to that, Cathy was the lead futurist at You Are Here Labs, where she led agencies, brands, and companies in applying augmented reality and virtual reality for marketing and training, working with brands like AT&T and Porsche. Hackl worked as a VR Evangelist for HTC VIVE during the launch of their enterprise VR headset and during the company’s partnership with Warner Brothers’ blockbuster, Ready Player One. She's the co-author of Marketing New Realities, the first VR/AR marketing book ever written. She also worked as Chief Communications Officer for cinematic VR studio Future Lighthouse, where she collaborated on projects with Sony Pictures Entertainment, Oculus, Beefeater, and William Morris Endeavor. Hackl has been featured in media outlets like Forbes, Barron’s, Salon, VentureBeat, Digiday, Tech Target, CMO.com, and Mashable. She is a global advisor for the VR AR Association and was recognized in 2016 by NBC News as one of the top Latina women working in VR. Before working in spatial computing and technology, she worked as a communicator at media companies such as CNN, Discovery, and ABC News, and was nominated in 2007 for an EMMY Award for her storytelling work. She's also the creator of the world’s first holographic press release and loves all things spatial computing, artificial intelligence, and futurism. 
Cathy is currently working on her second book The Augmented Workforce: How AI, AR, and 5G Will Impact Every Dollar You Make. She’s co-authoring the book with John Buzzell. She tweets as @CathyHackl. This episode streamed live on Thursday, November 12, 2020.
About this episode's guest: Caleb Gardner, in his more than a decade of experience in digital leadership, entrepreneurship, and social impact, has worked for a variety of organizations in the public and private sectors, including at prestigious professional service firms like Bain & Company and Edelman. During the second Obama Administration, he was the lead digital strategist for President Obama’s political advocacy group, OFA. He brought his unique insights to growing one of the largest digital programs in existence, with a millions-strong email list and massive social media following—including the largest Twitter account in the world. Now as a founding partner of 18 Coffees, a strategy firm working at the intersection of digital innovation, social change, and the future of work, he’s helping forward-thinking companies and nonprofits adapt and evolve to meet the challenges of today’s economy. He speaks, trains, and leads workshops around the world on topics related to change, including strategy in a mission economy, technology and innovation for a better world, and change management at the speed of digital. He tweets as @CalebGardner. This episode streamed live on Thursday, November 5, 2020.
About this episode's guest: Yaël is a thought leader, democracy activist and strategist working with governments, tech companies, and investors focused on the intersection of ethics, tech, democracy, and policy. She has spent 20 years working around the globe as a CIA officer, a White House advisor, the Global Head of Elections Integrity Operations for political advertising at Facebook, a diplomat, a corporate social responsibility strategist at ExxonMobil, and the head of a global risk firm. Currently, she is a Visiting Fellow at Cornell Tech's Digital Life Initiative, where she explores technology's effects on civil discourse and democracy and teaches a multi-university course on Tech, Media and Democracy. Yaël has become a key voice and public advocate for transparency and accountability in tech, particularly where real-world-consequences affect democracy and societies around the world. Her recent TED talk addresses these issues and proposes ideas for how government and society should hold the companies accountable. Yaël travels internationally as a keynote speaker at any number of venues seeking informed, inspirational women to help make sense of our world's most difficult challenges. She can be booked through the Lavin Agency. Yaël was named to Forbes' 2017 list of “40 Women to Watch Over 40”. She is also an Adjunct Professor at NYU's Center for Global Affairs, a member of the Council on Foreign Relations, and she provides context and analysis on national security, elections integrity, political and foreign affairs in the media. 
She has been published in the New York Times, the Washington Post, Brookings Techstream, TIME, WIRED, Quartz and The Huffington Post, has appeared on CNN, BBC World News, Bloomberg News, CBS News, PBS and C-SPAN, in policy forums, and on a number of podcasts. She earned an M.A. in International Affairs from the Johns Hopkins School of Advanced International Studies (SAIS). More than anything, she is passionate about using her background and skills to help foster reasoned, civil discourse. She tweets as @YaelEisenstat. This episode streamed live on Thursday, October 29, 2020.
About this episode's guest: Abhishek Gupta is the founder of Montreal AI Ethics Institute (https://montrealethics.ai) and a Machine Learning Engineer at Microsoft, where he serves on the CSE Responsible AI Board. He represents Canada for the International Visitor Leaders Program (IVLP) administered by the US State Department as an expert on the future of work. He additionally serves on the AI Advisory Board for Dawson College and is an Associate Member of the LF AI Foundation at the Linux Foundation. Abhishek is also a Global Shaper with the World Economic Forum and a member of the Banff Forum. He is a Faculty Associate at the Frankfurt Big Data Lab at the Goethe University, an AI Ethics Mentor for Acorn Aspirations and an AI Ethics Expert at Ethical Intelligence Co. He is the Responsible AI Lead for the Data Advisory Council at the Northwest Commission on Colleges and Universities. He is a guest lecturer at the McGill University School of Continuing Studies for the Data Science in Business Decisions course on the special topic of AI Ethics. He is a Subject Matter Expert in AI Ethics for the Certified Ethical Emerging Technologies group at CertNexus. He is also a course creator and instructor for the Coursera Certified Ethical Emerging Technologist courses. His research focuses on applied technical and policy methods to address ethical, safety and inclusivity concerns in using AI in different domains. 
He has built the largest community-driven public consultation group on AI Ethics in the world, which has made significant contributions to the Montreal Declaration for Responsible AI, the G7 AI Summit, the AHRC and WEF Responsible Innovation framework, PIPEDA amendments for AI impacts, Scotland’s national AI strategy, and the European Commission Trustworthy AI Guidelines. His work on public competence building in AI Ethics has been recognized by governments from North America, Europe, Asia, and Oceania. More information on his work can be found at https://atg-abhishek.github.io He tweets as @atg_abhishek. This episode streamed live on Thursday, October 22, 2020.
About this episode's guest: Neil Redding is Founder and CEO of Redding Futures, a boutique consultancy that enables brands and businesses to engage powerfully with the Near Future. His rare multidisciplinary perspective draws on the craft of software engineering, the art of brand narrative and expression, and the practice of digital-physical experience strategy. Prior to founding Redding Futures, Neil held leadership roles at Mediacom, Proximity/BBDO, Gensler, ThoughtWorks and Lab49. He tweets as @neilredding. This episode streamed live on Thursday, October 15, 2020.
About this episode's guest: Ana Milicevic is an entrepreneur, media executive, and digital technology innovator. She is the co-founder and principal of Sparrow Advisers, a strategic consultancy helping marketers and C-suite executives navigate the data-driven adtech and martech waters. A pioneer of digital data management in advertising, Ana was responsible for the development of the Demdex platform (now Adobe Audience Manager) from its early days through its successful acquisition and integration into the Adobe Digital Marketing suite. Prior to starting Sparrow she established Signal's Global Strategic Consulting group and helped Fortune 500 customers adopt advanced and predictive analytics across their marketing, ad ops, and digital content business units at SAS. Her consulting portfolio includes working for the United Nations, executing initiatives in 50+ countries, and advising companies on go-to-market strategies all around the globe. Ana is frequently quoted by media powerhouses like The Wall Street Journal and Business Insider (who in 2018 named her as one of 23 industry leaders working on fixing advertising) as well as industry trades like AdWeek, AdAge, Digiday, Marketing Magazine, AdExchanger, and Exchangewire. She is a sought-after speaker on topics of adtech, martech, innovation, customer experience, data management and new frontiers of technology. She tweets as @aexm. This episode streamed live on Thursday, October 8, 2020.
Julie and Casey chat with Tech Humanist, Optimistic Futurist, and in-demand keynote speaker Kate O’Neill about the future of technology. As the lines between the digital and physical worlds blur, and as AI, automation, algorithms, and data mining increasingly affect our lives, how can we prepare for the future? Along the way, Kate gives us insight into virtual presenting, what it’s like to go viral, why you should think pretty hard before posting those “viral challenges”, and revealing hidden truths without feeding conspiracy theories. TOP TAKEAWAYS:
- Optimism is not about ignoring the bad stuff, it’s about envisioning and working towards the good.
- Virtual keynotes are a great leveler and an opportunity for more immediate and purposeful audience connection.
- Every opportunity to be authentic is a strategic choice.
- How can we expect to build products for all of humanity if we don’t have a wide representation of human experience in the room?
- Data is important because humans create data, and because it represents what we find meaningful.
- We need to get savvier about how companies are using our data, and about what we give them to use.
- Sometimes, you gotta reject “A to Z” in favor of “A to . . . Kumquat”
Kate O’Neill is known as “the Tech Humanist.” She is helping humanity prepare for an increasingly tech-driven future by teaching business how to make technology that’s better for humans. Kate has led innovations across technology, marketing, and operations for more than 20 years in companies from startups to Fortune 500s. Among her prior achievements, she created the first content management role at Netflix; developed Toshiba America’s first intranet; led cutting-edge online optimization work at Magazines.com; was founder & CEO of [meta]marketer, a first-of-its-kind analytics and digital strategy agency; and held leadership and advisory positions in a variety of digital content and technology startups. 
Kate is a favorite keynote speaker for audiences of leaders from companies such as Google, Etsy, Coca Cola, McDonald’s, Cisco, Adobe, Kelly Services, and Charles Schwab, as well as the city of Amsterdam, the University of Cambridge, and the United Nations. Her insights and expertise have been featured in outlets like WIRED, and she has appeared as an expert commentator on the likes of BBC, NPR, Marketplace, and NBC News. Kate now lives in New York City, where she writes prolifically and contributes to numerous outlets on an eclectic array of topics, but her primary focus as both a writer and speaker is on the future of meaningful human experiences. Kate’s most recent book is Tech Humanist: How You Can Make Technology Better for Business and Better for Humans, and she now hosts a weekly live program and podcast called The Tech Humanist Show. More about Kate: www.koinsights.com Follow Kate on Twitter: @kateo Watch/listen to The Tech Humanist: https://www.thetechhumanist.com
About this episode's guest: Sarah T. Roberts is an Assistant Professor in the Department of Information Studies, Graduate School of Education & Information Studies, at UCLA. She holds a Ph.D. from the iSchool at the University of Illinois at Urbana-Champaign. Prior to joining UCLA in 2016, she was an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, Ontario for three years. On the internet since 1993, she was previously an information technology professional for 15 years, and, as such, her research interests focus on information work and workers and on the social, economic and political impact of the widespread adoption of the internet in everyday life. Since 2010, the main focus of her research has been to uncover the ecosystem – made up of people, practices and politics – of content moderation of major social media platforms, news media companies, and corporate brands. She served as consultant to and is featured in the award-winning documentary The Cleaners, which debuted at Sundance 2018 and aired on PBS in the United States in November 2018. Roberts is frequently consulted by the press and others on issues related to commercial content moderation and to social media, society and culture, in general. She has been interviewed on these topics in print, on radio and on television worldwide including: The New York Times, Associated Press, NPR, Le Monde, The Atlantic, The Economist, BBC Nightly News, the CBC, The Los Angeles Times, Rolling Stone, Wired, The Washington Post, Australian Broadcasting Corporation, SPIEGEL Online, and CNN, among many others. She is a 2018 Carnegie Fellow and a 2018 recipient of the EFF Barlow Pioneer Award for her groundbreaking research on content moderation of social media. She tweets as @ubiquity75. This episode streamed live on Thursday, October 1, 2020. 
Here's an archive of the show on YouTube. About the show: The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. Subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube for updates.

Transcript

Kate O'Neill: All right, hey humans, how we doing out there? Come on in, start gathering around the, uh, the old digital campfire. Let me hear from those of you who are in line, uh, right now. Tell me, tell me who's out there, and tell me where you're tuning in from. I hope you're starting to get your questions and thoughts ready for our guest. I'm sure many of you have already seen who our guest is, and I'll be reading her bio here in just a moment, so start thinking of your questions about commercial content moderation and what you want to know about that, and, you know, all that kind of stuff. Uh, I hear Sarah laughing in the background. It's not to laugh, really good, valid questions.

Sarah Roberts: I think I was just snorting, honestly, through my, uh, through my sinus trouble.

Kate O'Neill: So, uh, welcome to those of you who are all tuned in. Welcome to The Tech Humanist Show. This is a multimedia-format program exploring how data and technology shape the human experience, and I am your host, Kate O'Neill. So I hope you'll subscribe and follow wherever you're catching this, so that you won't miss any new episodes. I am going to introduce our guest here in just a moment. Uh, one, one last shout out, if anybody's out there wanting to say hi, feel free. You are welcome to comment, and I see a bunch of you online, so feel free to tune, uh, comment in and tell me who you are and where you're tuning in from. But just get those, you know, typing fingers warmed up, because we're gonna want you to, to weigh in with some questions and comments as the show goes on. But now I'll go ahead and introduce our esteemed guest.

So today we have the very great privilege of talking with Sarah T. Roberts, who is an Assistant Professor in the Department of Information Studies, Graduate School of Education and Information Studies, at UCLA. She holds a PhD from the iSchool at the University of Illinois Urbana-Champaign (my sister's school; I went to the University of Illinois Chicago). Prior to joining UCLA in 2016, she was an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, Ontario for three years. On the internet since 1993, she was previously an information technology professional for 15 years, and as such, her research interests focus on information work and workers and on the social, economic, and political impact of the widespread adoption of the internet in everyday life.

Sarah Roberts: Right, totally.

Kate O'Neill: So since 2010, the main focus of her research has been to uncover the ecosystem, made up of people, practices, and politics, of content moderation of major social media platforms, news media companies, and corporate brands. She served as consultant to, and is featured in, the award-winning documentary The Cleaners, which debuted at Sundance 2018 and aired on PBS in the United States in November 2018. So Roberts is frequently consulted by the press and others on issues related to commercial content moderation and to social media, society, and culture in general. She's been interviewed on these topics in print, on radio, on television worldwide, and now on The Tech Humanist Show, uh, including The New York Times, Associated Press, NPR, Le Monde, The Atlantic, I mean, this list is going to go on and on, so buckle in, folks, The Economist, BBC, Rolling Stone, Wired, and I'm picking and choosing now. It's a really, really impressive list of media. She's a 2018 Carnegie Fellow and a 2018 recipient of the EFF Barlow Pioneer Award for her groundbreaking research on content moderation of social media. So audience, again, please start getting your questions ready for our outstanding guest. Please do note, as a live show, I, well, I'll do my best to vet comments and questions in real time. We may not get to all of them, but very much appreciate you being here, tuned in, and participating in the show. So with that, please welcome, uh, our dear guest, Sarah T. Roberts, and you are live on the show. Sarah, thank you so much for being here.

Sarah Roberts: Thank you, uh, thanks for the invitation, and thanks to your audience and, uh, all those interested folks who are spending time with us today. I'm really grateful for the opportunity.

Kate O'Neill: We've already got, uh, David Polgar saying "excited for today's talk." Hey, our buddy Dave, DRP. [Laughter] All right, so I wanna talk right away about your, um, your book, Behind the Screen. I, I hadn't had a chance to read it until I was preparing for the show, and it was, it was wonderful to get a chance to dig into your research. So tell us a little bit about that. Came out last year, is that right?

Sarah Roberts: Um, yeah, it just, just a little over a year ago, uh, came out on, on Yale University Press. Um, you know, the academic publishing cycle is its own beast, it's its own world. It, uh, as it relates to, um, kind of like journalism and, and mainstream press timelines, it's much slower. That said, uh, I wrote the book in about a year, which is about a normal, a normal cycle, but it took about eight years to put together the research that went into the book. And this is because when I started my research in 2010, which, you know, we say 2010, it seems like yesterday, that was a decade ago now, you know, if we're in interminable 2020, you know, which is, which is a million years long so far. But back in 2010, when I started looking into this topic as a, as a doctoral researcher at the University of Illinois, uh, you know, there were a lot of things stacked against that endeavor, including the fact that I was a doctoral student at the University of Illinois. I had no cachet, I had very few, like, material resources, um, you know, to finance a study that would require, uh, at the end of the day required going around the world quite literally. But maybe the biggest barrier at the time was the fact that I was still fighting an uphill battle trying to tell people that major mainstream social media platforms were engaged in a practice that is now, weirdly, um, you know, a phrase that you might say around the dinner table and everyone would get, which is content moderation. And that further, when I would, um, raise the issue and, and bring up the fact that firms were engaged in this practice, which, you know, has to do with the adjudication of people's self-expression online, and sits somewhere between users and the platform, and then the platform's recirculation of users' material, uh, you know, people would argue with me at that point about the fact that that practice would even go on. And then when I would, uh, you know, kind of offer incontrovertible proof that in fact it did go on, uh, then we would, uh, find ourselves in a debate about whether or not it was a legion of human beings who was undertaking this work, or, uh, in fact it was computational. Now, in 2010, in 2020 the landscape is complicated, but in 2010, the technology and the sort of widespread adoption of, of computational, uh, automated, let's say algorithmic, kinds of content moderation, or machine learning informed content moderation, was not a thing. It was
humans and so i had to start the09:09conversation09:10so far below baseline09:14that it you know it took uh it took09:17quite a lot of effort just to get09:19everybody on the same page to discuss it09:22and you know when i'm talking about09:24uh engaging in these conversations i09:27mean just like trying to vet this as a09:29as an appropriate research topic at the09:32graduate school you know what i mean09:34like to get faculty members09:36many of whom were world experts in in09:39various aspects of uh of the internet or09:42of09:42media or information systems themselves09:46um it was new to them too that was did09:49you originally frame it was it it's a09:51question of how09:52is this done or what was the original09:54framework of that question yeah09:56so i'll tell you a little bit about the09:57origin of why i got interested10:00and it's something that i write about in10:01the book because i think it's so10:03important to acknowledge kind of those10:06those antecedents i had read i was10:08actually teaching down at the university10:10of illinois in the summer10:12of 2010 and i was on a break from10:15teaching and10:16you know probably drinking a latte which10:18is what i'm doing right now10:19and um and uh uh reading the paper i was10:23reading the new york times and there was10:24a very small10:26uh but compelling article in the new10:28york times about a group of workers10:30who were there there were a couple of10:32sites they mentioned but there was in10:33particular a group of workers in rural10:35iowa well here i was sitting in rural10:38central illinois thinking about this10:40group of workers in rural iowa as10:42profiled in this piece10:44who were in fact engaging in what we now10:46know as commercial content moderation10:48they were working10:49in effectively a call center uh10:53adjudicating content for unnamed kind of10:55you know10:56media sites websites and social media10:59properties11:00and i kind of circulated that article11:03around i shared it 
with friends i shared11:05it with my colleagues and i shared it11:06with professors and11:07the argument that i made was that it was11:10it was multifaceted first of all it11:12sounded like a miserable11:14job and guess what that has been borne11:16out it is a11:17very difficult and largely unpleasant11:20job11:21uh so i was captivated by that fact that11:24there were these you know11:25unnamed people who a generation or two11:28ago would have been on a family farm11:30who were now in the quote unquote11:32information economy but seemed to be11:34doing11:34a drag just awful work11:38uh but also there was this bigger issue11:41of11:42uh you know really having this this big11:44reveal11:45of the of the actual11:48ecosystem an unknown here for unknown11:51portion of the social media ecosystem11:54effectively letting us know how the11:56sausage was being made right11:58and yet if you were to look at any of12:01the12:02the uh the social media platforms12:05themselves or any of the discourse at12:06really high levels in12:08industry or in regulatory bodies this12:11was not12:12this was a non-starter but i was was12:14arguing at the time12:16that how content was being adjudicated12:18on the platforms12:20under what circumstances under what12:23conditions and under what policies was12:25in fact12:27maybe the only thing that mattered at12:29the end of the day12:30right now in 2010 that was a little bit12:32of a harder case to make12:34by 2016 not so much after we saw the uh12:38the ascent of donald trump in the united12:40states we saw brexit12:42we saw uh this the rise of bolsonaro and12:45in brazil largely12:46uh attributed to um12:49social media campaigns there and kind of12:52discontinued sustained12:54support through those channels uh and12:57here we are in 2020 where uh13:00we might argue or we might claim that13:02misinformation and disinformation online13:04is one of the primary13:06concerns of civil society today13:09and i would put front and center13:13in those 
all of those discussions13:16the fact that social media companies13:18have this incredible immense power13:20to decide what stays up and what doesn't13:24and how they do it and who they engage13:27to do it13:28should actually be part of the13:30conversation if not13:31i would argue that it's a very13:33incomplete conversation so when i talk13:35about like the13:36scholarly publishing cycle it took a13:39year to put the book out right but it13:40took eight years to amass the evidence13:44to um to do the to the interviews and13:47media that you mentioned13:48to converse with industry people at the13:51top levels eventually but13:52you know starting at the bottom with the13:54workers themselves to find workers who13:56are willing13:56to talk to me and break those13:58non-disclosure agreements that they were14:00under um and to kind of create also14:04a a locus of activity for other14:07researchers and scholars and activists14:09who are also interested in in uncovering14:12uh this area and really sort of create14:14co-create a field of study so that's14:17what took eight years it took a year to14:18get the book out14:19um but all that legwork of proving in a14:22way14:23that this mattered took a lot longer i14:25don't have to make that same case14:27anymore14:27as i'm sure you you can imagine um14:30people people are interested they're14:33concerned14:34and um they want to know more they're14:36demanding a lot more14:38um from firms as users14:41you know as people who are now engaged14:43in social media in some aspect14:45of their lives every day need i say more14:48about zooming14:49constantly which is now our you know our14:52primary14:53medium of connection for so many of us14:55in our work lives even14:57yeah hey we already have a question from15:00our buddy drp david ryden-polgar let me15:04uh15:04put this against the background we can15:06actually see it here uh15:08he says sarah would love to hear your15:10thoughts on section 2315:12230 and how any potential 
changes would15:15impact content moderation15:16so we're going right in right deep yeah15:19really15:20so um let me try to flush that out a15:22little bit15:24for others who aren't um you know inside15:26quite as as deep15:28um section 230 is15:31a part of the uh communications decency15:34act which goes back to 1996 but15:36effectively what what anyone needs to15:38know about section 230 is that15:40it's the it it's sort of the legal15:42framework15:43that informs social media companies15:48rights and responsibilities around15:51content15:52when we think about legacy media um15:55so-called uh broadcast television for15:58example or other other forms of of media16:01that we consume16:02you know i always bring up the the16:04example of george carlin who16:06famously um uh16:10you know made a career out of the seven16:12dirty words that you couldn't say16:13on radio right so there are all kinds16:16of governing uh16:19legal and other kinds of norms about16:22what is allowed and disallowed in some16:24of these legacy media16:26when it comes to social media however16:30there is a pretty16:35drastically contrasted permissiveness16:38that is in place uh that16:41seeds the power of the decision-making16:44around16:45what is allowable and what is not16:46allowable to the platforms themselves so16:49this is a really different kind of16:50paradigm right16:52and it's section 230 that allows that16:54that's the16:55that's the precedent that's the that's16:57the guidance uh16:58legally that uh that provides that kind17:01of17:02uh both responsibility and discretion17:05and what it does is it allows the17:07companies17:08um to make their own decisions17:12effectively17:13about what policies they will follow17:15internally now this doesn't go for17:17every single piece of content you know17:18one of the the biggest examples that17:21uh that this does not cover is child17:24sexual exploitation material which is17:25just illegal full stop it doesn't matter17:28if platforms wanted 
to traffic in that17:30material or not it's illegal17:32but beyond that just to certain to a17:35certain extent what section 230 allows17:38is for platforms to redistribute17:42effectively material that other people17:44submit17:45uh without being held liable for that17:47material17:48and so if we think about that that's17:50actually the business model of social17:51media17:52the business model of social media is to17:54get other people to create content17:56upload it circulate it and engage with17:59it download it18:00and effectively the platforms have um18:03you know argued and claimed that they18:04are really18:05you know don't kill the messenger right18:07like they're just like the18:08the the apparatus by which this material18:10gets shared18:12i think that um18:15you know at one time that really made18:16sense particularly when the18:18when this uh when the communications18:20decency act was passed and this goes18:22back in18:23into the mid 90s when what was18:26kind of imagined as needing this this18:29uh reprieve from liability was an isp an18:33internet service provider18:35which at that time uh i guess the most18:38imaginative version of that you could18:40think of would be america online for18:41those of you who18:42remember that on the program shout out18:45to the aol days yeah18:47right aol like all the you know the18:49discs and cd-roms you got and used as18:51coasters18:52um but you know back in that time but an18:55internet service provider really was a18:57pass-through in some cases you know i18:58knew a guy who ran an isp locally19:01he really just had a room with a with a19:03huge internet pipe coming in19:06and a wall of modems and you would dial19:08up through your modem and connect19:10through and then be on the internet to19:11some other service19:12so that was the model then but the model19:15now19:15uh is you know multi-billion dollar19:19transnational corporations19:21uh who have immense power in decision19:24making around content19:26and 
yet are are uh19:29in the american context at least largely19:32not liable for those decisions19:34uh legally or or otherwise um19:38making incredibly powerful19:42decisions about what kind of material we19:45all see and engage in19:47and what is permissible and what is not19:49online and they do that at their19:50discretion well if they're doing that at19:52their discretion19:54do you think that they're largely going19:56to um19:58fall into a mode of altruism and like20:01what's best20:01for civil society are they going to look20:03at their bottom line20:05and their shareholder demands and20:07respond to that i mean20:09the audience yeah i mean frankly20:12publicly traded companies20:13have a legal mandate to respond to their20:15shareholders and to generate revenue for20:17them so20:18um when those things are at odds when20:20when those things are aligned with20:22what's good for you know20:23america is good for uh facebook's20:26internal policies around content20:28moderation that works out great20:29but if there's you know if ever those20:32two pathways should diverge20:34we know which one they're going to fall20:35under and there's just there's very20:37little20:37um legal consequence or legal uh20:41expectation for uh reporting out on how20:46uh these decisions get made the way that20:48that20:49we have seen more decisions getting uh20:52publicly20:53unveiled through things like um20:56the publication of of what had been21:00previously kind of closely held secret21:03policies internally is through public21:06pressure21:06through the pressure of civil society21:08groups and advocacy groups through the21:10pressure21:11of the public through the pressure and21:13the constant threat of21:15you know things like reform to section21:17230 or other kinds of21:19regulation so it's a very interesting21:23moment and it's interesting to bring up21:24section 230 because21:26again a couple of years ago i had21:28colleagues um21:30who are in uh legal studies and who 
are21:34you know law professors essentially tell21:36me that 230 would soon be rendered21:38moot anyway because it's just it's it's21:41you know based on um on21:45well it should be solely relevant in the21:47united states right in the jurisdiction21:49of the united states21:50and so because these platforms were21:52going worldwide21:54uh you know there21:57it would be rendered mood well i would21:59say it's actually been the opposite22:00that's right that what is happening is22:02that section 230 is getting bundled up22:04as the norm22:06and is now being promulgated either just22:09through uh through the process of these22:13platforms going global but kind of22:14keeping their americanness and22:16keeping their um their response their22:20you know business practices largely22:22responsible to american laws first and22:24foremost22:25but also even to the point that uh you22:28know it recently22:29has become known i think more and more22:32to people like me who aren't legal22:34scholars but who have a great interest22:36in how this stuff goes down that section22:39230 like language22:41is being bundled up and put into trade22:44agreements22:45uh at the nation state level or22:48you know region level with the united22:50states and trading partners and we know22:52that22:53you know these these trade agreements22:56which have been you know huge hugely22:57politically22:59uh problematic and were a major issue in23:03fact of the 2016 election23:05uh you know they're they're they're23:07anti-democratic i mean how do you even23:09know what's in a trade agreement they're23:10totally secret23:12uh but i i learned while watching a uh23:15uh house uh subcommittee23:19uh convening about section 230 from23:22a highly placed google executive23:26that in fact their their lobbyists are23:28pushing for this kind of language in23:31in these trade agreements so we see that23:33instead of 230 becoming less relevant23:35because of the globalization23:37of american social media platforms 
it's23:39actually becoming a norm that is now23:42being23:43first of all it was sort of like softly23:45reproduced just because of the spread of23:47these american platforms and23:49how they were doing business but now23:50it's actually becoming codified23:52through other means means like like23:55trade agreements that the public has23:57really no23:58mechanism to intervene upon and i think24:00that's really worrisome24:02what about those mechanisms where the24:04sorry what were you gonna say24:06no okay i was just gonna say that's one24:07of my short and concise professorial24:09answers24:11let me drink a coffee well david24:14uh thanks you for that uh great24:17historical overview and i'm sure24:18the rest of our viewers and listeners do24:20too i i wonder about the ones24:22the the examples that don't have that24:25kind of24:26uh consumer involvement so i'm wondering24:28about for example24:29you know youtube and it's kids content24:32and24:33and so there have been a lot of changes24:35it seems like24:36with regard to that that platform and24:38that subject over the24:40over the last few years so can you maybe24:42give us an overview of24:43how that has gone down um24:46well i think that you know youtube is24:49such an interesting example24:51to talk about for for many reasons uh24:53for its reach and pervasiveness you know24:56it's a24:56market leader for sure it's globality i24:59would also say that youtube is25:01particularly interesting because when we25:04think about25:05uh social media content as being25:10monetized there is no greater25:13and more direct example than youtube25:15where it actually pays people who are25:17really highly successful on the platform25:19for content right25:20so like when there's no kind of like a25:23metaphor there about monetization it is25:25literally monetized right25:27um and this you know just to kind of tie25:30this back to the section 23025:31conversation25:32when we imagined isps as just path25:35pass-throughs you know 
that was one25:37thing but here we have25:39these huge companies like youtube and25:40others involved actively25:43in production so that kind of like25:46firewall between just being an25:48intermediary and actually being actively25:50engaged in producing media25:51has gone but the there's like a legacy25:54legal environment that it still25:56informs it so youtube you know they pay25:58producers they have these like26:01uh pretty extraordinary studios in26:05in major uh in major26:08cities around the world including la26:10where i live26:12uh they you know they are kind of the26:15go-to outlet and people26:18want to participate in youtube for all26:20sorts of reasons but there's certainly26:21you know a dollar sign reason that26:24people get involved26:25and you bring up this issue of kids26:27content26:28um again here's where we see sort of26:31like the softening and the eroding of26:33regulation too it26:35started it's it's not just youtube i26:36have to confess it's not just26:38social media companies that have eroded26:40uh you know child protections around26:42um media that that goes back to the you26:45know 40 years ago in the reagan26:47administration when there used to be26:48very stringent rules around26:50uh saturday morning cartoons for example26:52and advertising to children that could26:54go on26:55during that time uh shout out to my26:58colleague molly neeson who has worked27:00extensively on that27:01on that particular topic and that27:02erosion so27:05i see uh on on youtube again27:08a lot of the pressure to kind of reform27:11and27:11i think when you're talking about kids27:13content you're talking about27:15some of like some like really disturbing27:17and weird content that was showing up27:20um you know kind of like cheaply made27:22unknown27:23weird creepy sometimes not really27:25clearly27:27necessarily uh27:30benevolently made like you know27:33sometimes creepy sexual undertones27:36uh other kinds of stuff going on you27:38know really and 
really no way to know27:40that's part of the problem no way to27:42know right um27:43and then uh the massive problem of27:46trying to27:48moderate that material right um you know27:51i think of it27:52as like the the classic story of the the27:55whole27:56springing through the the dyke holding27:58the water back you know27:59you plug one hole another one springs28:02open28:02so it's a little bit falls down so the28:05whole wall28:06and then your inundated that's right28:07that's right and so28:09you know that is a good metaphor to28:10think about the problem of these like28:12kind of isolated28:14uh hot spots that explode on platforms28:17as a new social issue or maybe a new28:21uh a geopolitical conflict erupts28:25somewhere in the world it's you know28:26gets meted out and replicated on social28:28media and attention gets drawn to it28:31and so i think this issue of child28:34content and its kind of exploitive28:35nature and28:36strange nature in some cases was28:38something that advocacy groups and28:40others brought attention to28:41and the platform had to reconfigure and28:44focus on it28:45now i mentioned earlier that you know28:47back in 2010 it really was humans who28:49were doing this work almost exclusively28:50but by 202028:52we are using computational tools28:55to try to deal with content as well28:57although i28:58i'll repeat the quote that i once heard29:00from a reporter29:02who who heard it from a an engineer at a29:05company that shall not be named but it29:06might sound like29:08um you know boo-boob let's say might29:10rhyme with that29:11uh and the quote was uh whatever the29:14algorithm is doing it's29:15not watching the video so you know29:17they're using these computational29:19mechanisms to do all kinds of other29:21stuff but it's not like29:22an algorithm can watch and sense make29:25out of a video it has to look at other29:26stuff29:28so that's an interesting point though29:30too and i want to follow up on that with29:31a question 
about29:32you know do you do you personally29:34advocate for more29:35ai in the mix of con of content29:38moderation such as you know facebook29:39recently made an announcement that they29:40were using29:41ai to simulate bad actors so that they29:44could train their moderation29:45systems automated moderation systems to29:47more effectively recognize it do you29:49think that that ultimately29:50will work and will benefit the humans29:52who are part of this ecosystem or29:54is it likely to produce unintended ill29:56effects so i mean that's a really great29:59question because that's sort of like the30:0164 000 question about my work if30:04you know one would one would think if my30:05concern is the welfare of workers30:08which has always kind of been my cut in30:10on this topic and where i start and30:11where i come back to an end30:13um then hey wouldn't it be great if30:15tomorrow we could just flip that switch30:16and go30:17to those uh purely computational means i30:20think that30:21in theory right in theory but i think30:24there are a lot of red flags there30:26you know one red flag is that if it's30:29been this difficult30:30as and i kind of laid the groundwork for30:32that at the at the front end of the show30:34to unpack and uncover uh30:37the ecosystem involving humans and i30:39have to say30:40the majority of my work has been30:43reliant upon the willingness of human30:46beings involved in the system30:48to leak essentially to break30:51their non-disclosure agreements and to30:54you know essentially snitch on what they30:56felt was30:58problematic also sometimes what they31:00felt was good about the work they did31:02how do you get uh an algorithm or a31:04machine learning based tool31:06to call a journalist or31:09uh you know do an interview with a31:11researcher31:13i don't know how to do that you know the31:14closest thing we could come to is31:16getting access to it and looking31:18at code but that's not easy to do and31:20it's much harder to 
do31:22than finding uh and i cannot stress the31:25difficulty of what it was like31:27in the early days to find people willing31:29to talk to me so31:30you know you can't do that with ai how31:32do we how do we audit those tools how do31:34we31:35how do we you know what's the check on31:37power that the firms have with those31:39tools31:40in terms of how they're set up and what31:42they keep in and what they keep31:43out it also sounds like a potentially31:46even greater violation31:47of uh that non-disclosure if someone31:50leaks a bit of code31:51rather than just tell their own personal31:53story oh for sure i mean and and31:56you know the the other thing too that31:58that comes to mind for me is32:00the nature of how these tools work32:03and you know a great worry and i think a32:05legitimate worry of many people in the32:07space32:07is that uh they32:11the tendency to use those tools would be32:13to32:14uh calibrate them32:17to be even uh less permissive let's say32:21or to you know because of their nature32:23they would have less of an32:24ability to look at a given piece of32:27content32:28and you know see that it violates abc32:31policy but understand it in the context32:34of you know again32:35a cultural expression or um32:38you know an advocacy piece around a32:41conflict zone32:42and then make an exception so what we32:44would see32:45is uh more conservative and greater32:49false positives around material that32:52quote unquote is disallowed right32:55again all of this adjudicating to the32:58logic that the firms themselves create33:00which for um for many years itself was33:03opaque33:05uh so this is you know it's not as easy33:08to say unfortunately if we could just33:10get those darn algorithms right if we33:11could just get33:12you know machine learning to get33:13sophisticated enough we could33:16take out the human element and and33:18basically33:19you know save people from having to do33:21this work33:23unfortunately i think it's 
more33:24complicated than that and i would say33:26that33:26you know bringing up the idea of33:29training machine learning tools as you33:30did33:31one of the gross ironies of this whole33:33thing that i've been33:34monitoring is that uh33:38content moderation commercial content33:40moderation for these major platforms33:42is its own kind of self-fulfilling uh33:46industry that begets uh sub industries33:49in and of itself33:49so that when machine learning tools have33:52come on what needs to happen33:54is that people need to sort data sets to33:56create data sets for the machine33:58learning tools to train on33:59and they need to be themselves trainers34:02and classifiers for the machine learning34:04tools so now we have a whole new stratum34:06of people34:07working to train machine learning34:09algorithms which has them essentially34:11doing a certain kind of content34:12moderation34:13it's a lot easier that cottage industry34:14of evil ai34:16spawn it's like anything like34:19how are we gonna make the ai bad enough34:21to train our ai34:23uh automation systems to recognize that34:25so that we can keep a good environment34:27but then you've got this whole cottage34:29industry around the bad34:30ai seems like a very awkward way of34:32going34:33so you know as someone who monitors like34:36like hiring trends and things like that34:37too34:38i was i was watching companies looking34:41for people to to come be34:42classifiers on data sets which is just34:44moderation before the fact right34:46yeah you know you talked about that in34:48the book too you have34:50you presented a taxonomy of sorts of34:52labor arrangements from34:53in-house moderators to what you call34:56micro labor you know looking at34:58mechanical turk and things like that can34:59you walk us through that a little bit so35:01that we can become familiar with what35:02the35:02the human issues relative to each level35:06yeah one of the one of the early35:07insights i had when i was trying to35:09figure 
out the contours of this industry from the outside, and it reminds me of that parable of people feeling different parts of the elephant without really being able to see it; they don't really get the big picture. What I was considering as being kind of a monolithic practice really wasn't. It was happening in all kinds of different places and in different guises, including under different names; there was no cohesive name for this work practice. So I started out knowing about these workers in Iowa that I reference in the book and referenced today, who were working in a call center, and it turned out that call centers were really a prevalent way this work was being done: kind of at somewhat of a remove, geographically and organizationally, so it'd be a third-party, contracted-out group of workers somewhere in the world. When I started out I knew about the workers in places like Iowa, Florida, etc., but I soon came to know about workers in places like India, or Malaysia, or, of course, key to the book, the Philippines. So that call center environment for content moderation work is really prevalent, and it's global.

But there are also workers who, prior to COVID, were going every day, for example in the Bay Area, down from San Francisco on the company buses, and going on site to companies that I describe in the book, one of which has the pseudonym of MegaTech and is a stand-in for any number of companies. In fact, I'll just tell you a little anecdote: I've met a lot of people from industry who, over cocktails after meetings, will come up to me, all from different companies, and say, "We're MegaTech, aren't we?" At least six different corporations think they're MegaTech.

Yes, yes, that sounds right. That tells you something.

So these people were on-site workers. They were in the belly of the beast, essentially. They were working in places where there was also engineering, product development, marketing, communications, soup to nuts. Although, interestingly enough, they were also contractors in the case of the book, so they still had this differential and lesser status even though they were going on site to the corporate HQ. It still wasn't quite the right badge color, as they described it to me. And they thought about the people who were working as contractors in call centers as another kind of worker, even though they were essentially very, very similar.

Then we had people I encountered who were very entrepreneurial and, especially in the early days, were developing a model that looks almost like an ad agency. They were independent companies that were starting to specialize in providing content moderation services to other companies, and it was a boutique kind of service, a specialty service, and they would often offer social media management across the board. So not only were they offering the removal of content in some cases, but they would even offer, again in that advertising model, the generation of content. Because, believe it or not, sometimes your auto parts company's Facebook page just doesn't generate a lot of organic interest, and so you hire a company to come post about how awesome your auto parts company is. Likewise, as somebody once told me, and it's in the book too: if you open a hole on the internet, it gets filled with [bleep]. If you have a web page or a Facebook page and there's no activity that's organic or really about what it's supposed to be about, I guarantee you that somebody will be posting invective, racist comments, and so on. These boutique firms said, usually to smaller companies, "Hey, we'll manage the whole thing. We'll delete that stuff, we'll generate new stuff for you, it'll look organic, and nobody will really know that's what we're doing." And they were having great success when I talked to them.

Was that generally filed under this sort of banner of user-generated content, or was it called other things?

Generally it was "social media management"; that's how they would couch it and how they would pitch it. It was like, "Hey, Company X, your business has nothing really to do with social media. That's not your primary business. Let us handle it for you." And a lot of companies jumped at the chance to outsource that and not deal with it. An interesting thing in that bucket of the taxonomy you mentioned is that those companies in some cases got bought up by ad firms, or ad firms have started doing this service as well, or they became really, really big and successful, so there are a few that rose to the top and have survived.

And then you already mentioned this really interesting and kind of worrisome arena where this work goes on, which is the micro-labor realm, the Amazon Mechanical Turk model, which is effectively digital piecework. It's people adjudicating a bit of content here and there. They're often paid per view or per decision, and then they try to aggregate enough to make that make sense for them financially. And it turns out, although that's supposed to be an anonymous relationship, savvy Mechanical Turkers can figure out who they're working for, because a lot of times they'd receive a set of images or other content to adjudicate, and the interface was obvious.

[Music]

...before, and you get those guidelines again, then you know.

Yeah, that's right. I came to know some folks who themselves began to specialize within Mechanical Turk and other platforms on this kind of thing, and they would seek out this work because they got good at it, like you said. They got good at knowing the internal policies and juggling them for all these different firms, and they began to specialize in this work on that platform.

I was wondering, thinking about this as you mentioned earlier about the consequences of misinformation, especially as we are deep in the process of the U.S. presidential election cycle. And I say the U.S. because I want to be sensitive to the fact that there are global viewers, but I feel like everyone in the world is kind of hooked into the U.S. presidential election right now.

And we're all like, "Yeah, aren't they?"

Right. And we're all being subject to all of this, well, the dumpster fire of it all, but also the misinformation that accompanies it. And so I wonder: how should people think about and understand the difference between content on social media and content in news media? What are some of the differences in approaches to moderating harmful content? And, kind of just thinking about free access to information, you know, this is kind of a big, muddy question. I'm not sure I'm articulating it very well, but hopefully you see the direction of the question that I'm asking.

Yeah, I'll do my best to respond, and you can offer guidance as I go. I mean, I think your question in essence is, "What the hell?" Right?

Yeah.

Information, misinformation, disinformation, the election: what the hell? And so I think you speak for a global audience when you pose that question. And you're right about the U.S. election. I know friends and colleagues who were up early in Australia watching it and, you know, as mortified as we were by the behavior on display.

And the other night, yes, the debate, and kind of the nadir of American politics in my lifetime is how I described it. You know, I often bring up the rise of social media as a force in American civic life, and that it's important not to think about it as having happened in a vacuum, or having happened without other forces at play. In the other part of my life, I am a professor in a program that trains and prepares people for careers in information professions, primarily in librarianship, and so I know something about the way in which we've seen a gross erosion of the American public sphere and of opportunities for people to become informed in places that traditionally have been more transparent, more committed to the public good, not-for-profit. I'm thinking about institutions like public schools and institutions like public libraries. So if we were to take a funding graph or something like that about expenditures, where money goes in our society, we would see that off-the-cliff kind of defunding of the institutions I just mentioned, while we see a rise in social media. And what I think that suggests, at least to me, is that it's not that the American public doesn't have a desire to be informed or to have information sources. And I would add, by the way, it's not necessarily in the public sphere in the same way, but we have seen total erosion in regional and local journalism too, right, during the same time.

Right, into mega media.

That's right, mega media, which came about by the shuttering of local news. There was a time when cities like mine, I come from Madison, Wisconsin, 250,000 people, might have had a reporter in D.C., you know what I mean, for our local paper, The Capital Times, which went the way of the dodo some years ago; that local paper no longer exists in a print form. So there's a whole, I mean, we could do a whole show on this, and you probably shouldn't have me on for that show, so apologies to the viewers that this isn't my total area of expertise, but I'm just trying to connect some dots here for people to make sense of it, right?

Right.

And when we think about the differences between social media information circulation and something like journalism: agree or disagree with what you read in the newspaper or hear on the news of your choice, but there are things there that are not present in the same way in the social media ecosystem. An author name; a set of principles that the journalists at least pay lip service to, but that most of them live by, principles they have been educated to serve and then do serve in their work. There's editorial control: before stories go to print, they have to go through a number of eyes. There's fact-checking. I've been on the side of having been interviewed for journalistic pieces, and I get phone calls from fact-checkers to make sure that the journalists got right what I think.

Yeah, right. "Did you really say XYZ?"

Yes, I did. That doesn't exist for, you know, your racist uncle recirculating God knows what from whatever outlet. Those things, what we might think of as barriers to entry but also might think of as safeguards, are just gone. And with all of the other institutions I mentioned eroded, public schooling, public libraries, and so on, the mechanisms that people might use to vet material, to understand what it means when they look at a paper of record versus a dubious outlet, let's say a dubious internet-based outlet, and how those sources differ, those mechanisms to learn about those things have been eroded as well. Is there even a civics class anymore in pu
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Sarah T. Roberts, who is an Assistant Professor in the Department of Information Studies, Graduate School of Education & Information Studies, at UCLA. She holds a Ph.D. from the iSchool at the University of Illinois at Urbana-Champaign. Prior to joining UCLA in 2016, she was an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, Ontario for three years. On the internet since 1993, she was previously an information technology professional for 15 years, and, as such, her research interests focus on information work and workers and on the social, economic and political impact of the widespread adoption of the internet in everyday life. Since 2010, the main focus of her research has been to uncover the ecosystem - made up of people, practices and politics - of content moderation of major social media platforms, news media companies, and corporate brands. She served as a consultant to and is featured in the award-winning documentary The Cleaners, which debuted at Sundance 2018 and aired on PBS in the United States in November 2018. Roberts is frequently consulted by the press and others on issues related to commercial content moderation and to social media, society and culture in general. She has been interviewed on these topics in print, on radio and on television worldwide, including: The New York Times, Associated Press, NPR, Le Monde, The Atlantic, The Economist, BBC Nightly News, the CBC, The Los Angeles Times, Rolling Stone, Wired, The Washington Post, Australian Broadcasting Corporation, SPIEGEL Online, and CNN, among many others. 
She is a 2018 Carnegie Fellow and a 2018 recipient of the EFF Barlow Pioneer Award for her groundbreaking research on content moderation of social media. She tweets as @ubiquity75. This episode streamed live on Thursday, October 1, 2020.
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Marcus Whitney is Founding Partner of Jumpstart Health Investors, the most active venture capital firm in America focused on innovative healthcare companies, with a portfolio of over 100 companies. He is also co-founder and minority owner of the Major League Soccer team Nashville Soccer Club. Marcus is the author of the best-selling book Create and Orchestrate, about claiming your Creative Power through entrepreneurship. He is also the producer and host of Marcus Whitney LIVE, an interview show live-streamed M-F 12 Central on Facebook, YouTube, LinkedIn, Twitter and Twitch, and Marcus Whitney’s Audio Universe, a podcast on all major platforms. Marcus is a member of the board of the Country Music Hall of Fame® and Museum, the Nashville Convention and Visitors Corporation, and Instruction Partners, and an Arts Commissioner for the city of Nashville. He has been listed in the Upstart 100 by Upstart Business Journal and the Power 100 by Nashville Business Journal, and has been featured in Inc., TechCrunch, Fast Company, and The Atlantic. He tweets as @MarcusWhitney. This episode streamed live on Thursday, September 24, 2020.
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Renée Cummings is a criminologist and international criminal justice consultant who specializes in Artificial Intelligence (AI); ethical AI, bias in AI, diversity and inclusion in AI, algorithmic authenticity and accountability, data integrity and equity, AI for social good and social justice in AI policy and governance. Foreseeing trends and anticipating disruptions, she’s committed to diverse and inclusive AI strategy development; using AI to empower and transform communities and cultures; securing diverse and inclusive participation in the 4IR, helping companies navigate the AI landscape and developing future AI leaders. A multicultural cross-connector of multiple fields and an innovative collaborator, her passion is forming connections and unifying people and technologies; enhancing quality of life and economic prosperity. She’s also a criminal psychologist, therapeutic jurisprudence and rehabilitation specialist, substance abuse therapist, crisis intelligence, crisis communication and media specialist, creative science communicator and journalist. She has a solid background in government relations, public affairs, reputation management and litigation PR. A sought after thought-leader, inspirational motivational speaker and mentor, Ms. Cummings is also a Columbia University community scholar. She tweets as @CummingsRenee. This episode streamed live on Thursday, September 17, 2020.
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Rahaf Harfoush is a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture. She is the Executive Director of the Red Thread Institute of Digital Culture and teaches “Innovation & Emerging Business Models” at Sciences Politique’s school of Management and Innovation in Paris. She is currently working on her fourth book. Her third book, entitled “Hustle & Float: Reclaim Your Creativity and Thrive in a World Obsessed with Work,” was released in 2019. She has been featured by Bloomberg, The CBC, CTV, and Forbes for her work on workplace culture. Formerly, Rahaf was the Associate Director of the Technology Pioneer Program at the World Economic Forum in Geneva where she helped identify disruptive-startups that were improving the state of the world. Rahaf is the co-author of “The Decoded Company: Know Your Talent Better Than You Know your Customers” Her first book, “Yes We Did: An Insider’s Look at How Social Media Built the Obama Brand,”chronicled her experiences as a member of Barack Obama’s digital media team during the 2008 Presidential elections and explored how social networking revolutionized political campaign strategy. Rahaf has been named "one of the most innovative women in France,” "one of the top future thinkers to shape the world,” "a Young Global Changer,” and a “Canadian Arab to Watch.” Rahaf’s writing has been featured in HBR, Wired, The Globe and Mail, Fast Company, and many more. She is a frequent commentator on France24 and the CBC. In her spare time, Rahaf enjoys instagramming too many pictures of her dog Pixel, learning how to play the ukulele and working on her first novel. She tweets as @RahafHarfoush. 
This episode streamed live on Thursday, September 10, 2020.
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: John C. Havens is Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He is also executive director of the Council on Extended Intelligence (CXI). He previously served as an EVP at a top-ten global PR firm, where he counseled clients like Gillette, HP, and Merck on emerging and social media issues. John has authored the books Heartificial Intelligence and Hacking Happiness and has been a contributing writer for Mashable, The Guardian, and The Huffington Post. He has been quoted on issues relating to technology, business, and well being by USA Today, Fast Company, BBC News, Mashable, The Guardian, The Huffington Post, Forbes, INC, PR Week, and Advertising Age. John was also a professional actor in New York City for over 15 years, appearing in principal roles on Broadway, television, and film. He tweets as @JohnCHavens. This episode streamed live on Thursday, September 3, 2020.
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Dorothea Baur is a leading expert & advisor in Europe on ethics, responsibility, and sustainability across industries such as finance, technology, and beyond. Her PhD is in NGO-business partnerships, and she’s been active in research and projects around sustainable investment, corporate social responsibility, and increasingly, emerging technology such as AI. She’s also been developing an audit system for contact tracing against the background of COVID-19 as a ForHumanity Fellow. She is founder and owner of Baur Consulting AG, and among her many distinctions, has been named as one of the "100 brilliant women in AI ethics." She tweets as @DorotheaBaur. This episode streamed live on Thursday, August 27, 2020.
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Kaitlin Ugolik Phillips is the author of The Future of Feeling: Building Empathy in a Tech-Obsessed World. She is a journalist and editor whose writing on law, finance, health, and technology has appeared in the Establishment, VICE, Quartz, Institutional Investor magazine, Law360, Columbia Journalism Review, Lithub, Scientific American, NY Post, Salon, and Narratively, among others. She writes a blog and newsletter about empathy featuring reportage, essays, and interviews. She tweets as @kaitlinugolik. This episode streamed live on Thursday, August 20, 2020.
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Dr. Safiya Umoja Noble is an Associate Professor at the University of California, Los Angeles (UCLA) in the Department of Information Studies where she serves as the Co-Director of the UCLA Center for Critical Internet Inquiry. She is the author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press). She tweets as @safiyanoble. This episode streamed live on Thursday, August 13, 2020. Highlights: 2:00 What has it been like in your life and work to have authored a category-defining book? 4:16 how the conversation has changed 6:57 career arc 7:06 theater! 09:09 influences 10:55 audience question: when you're teaching on this, what activities resonate with your students 16:36 "what the humanities and social sciences do is they give you a really great vocabulary for talking about the things you care about and for you know looking at them closely" 17:36 algorithms offline? 19:38 what is the Center for Critical Internet Inquiry at UCLA doing? (site: c2i2.ucla.edu) 20:17 big announcement! 29:07 the challenges for companies want to address the oppression in their own tech 47:56 what makes you hopeful? (BEAUTIFUL answer)
The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: David Ryan Polgar is a leading voice in the areas of tech ethics, digital citizenship, and what it means to be human in the digital age. David is a global speaker, a regular media commentator for national & international press, and a frequent advisor & consultant on building a better tech future. He is the co-host/co-creator of Funny as Tech, a NYC-based podcast & occasional live show that deals with our messy relationship with technology, and is the founder of All Tech Is Human, an accelerator for tech consideration & hub for the Responsible Tech movement. David serves as a founding member of TikTok's Content Advisory Council, along with the Technology & Adolescent Mental Wellness (TAM program). He tweets as @techethicist. This episode streamed live on Thursday, August 6, 2020. Episode highlights: 1:20 David Ryan Polgar intro 3:21 weird coincidence?! 4:40 and a tornado?! 6:05 previous podcast discussion — will update here with a link when it goes live! 7:23 attorney and educator?! 10:44 "no application without representation" 11:56 the politics of technology 15:55 impact over intent 16:25 social media and free speech online 21:13 content moderation: humans and AI 24:32 the role of friction in tech 27:32 distinguishing between thought and action in law 28:24 "your unfiltered brain is not what should be out on the internet" 28:50 brain to text 30:59 "are we an algorithm" 37:14 "do we even want these systems" 46:05 "I wanted to put the agency back on us" 46:28 "the future is not written" 53:55 "everybody needs to add their voice" 54:54 How can people find you and follow your work? (alltechishuman.org, hello@alltechishuman.org; funnyastech.com; @techethicist; David Ryan Polgar on LinkedIn; techethicist.com; davidryanpolgar.com)
The Tech Queen joins us to talk about the early days of the world wide web, digital transformation, strategy articulation, and what the future of live events could look like!
Today we are talking with Kate O’Neill. Kate is a highly engaging keynote speaker and strategy consultant, and the author of Tech Humanist and Pixels and Place. She is known for helping clients prepare for uncertainty at scale in an increasingly tech-driven future. I am your host Zachary Alexander, Subscription Service Designer at SubscriptionMaker.net. […] The post An Interview with Kate O’Neill appeared first on SubscriptionMaker.
Summary The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Calli Schroeder is an attorney in privacy/data security, as well as a privacy advocate and self-described "die-hard nerd." Her brief stint as a wedding singer led her to wonder if she had violated copyright law, an interest which transitioned to tech law broadly and privacy law specifically. While in law school, Schroeder interned for FTC Commissioner Julie Brill and published an article on consent issues and IRBs in the Colorado Technology Law Journal, among other accomplishments and distinctions. She developed a focus on consumer protection issues, surveillance, data breaches, and freaking people out at parties. Among the certifications she holds are privacy designations for the U.S., Europe, and Canada. She tweets as @Iwillleavenow. This episode streamed live on Thursday, July 30, 2020. Episode highlights: 3:24 how did Calli's curiosity in copyright law lead to a career in tech and data privacy law? 8:48 philosophy and theology - how privacy affects the nature of society and humanity 19:30 Live Audience Question: "Should social media platforms offer data portability to move valuable data across different platforms?" 33:00 how we can expect to see regulations roll out in the U.S. 34:37 What makes you optimistic? 37:45 Live Audience Question: EU Court of Justice opinion (see: https://www.bbc.com/news/technology-5... for context) & how she's advising clients to go forward 39:49 HARK! A DOG BARKS! 40:07 Pre-Show Audience Question: "I'd love to hear more about what meaningful consent looks like with data privacy. I wonder if there's anything we can borrow from sex ed about consent being affirmative, revocable, or centering care and agency?" 50:11 privacy and advocacy for the best futures possible
Being isolated has been a real challenge for me. I am fortunate to be with my family, but relating to my co-workers and partners only via screens and phones has been taxing. I've always considered myself some kind of introvert, but now I recognize the energy I get from sharing space and experience with people. This may be the most productive I've ever been at work, and typically such sprints have coincided with periods I remember being happy. For a variety of reasons, this isn't working out that way. We're making things together, but something is missing. The energy of others may signal additional meaning to the work. Needless to say, I'm a little turned around. If you're having a similar experience, you'll enjoy this chat with Kate O'Neill, author, strategist, futurist and founder of KO Insights. She has been working in technology for more than 25 years, and thinking about the way people relate through it, alongside it and in spite of it. We discuss the luxury of the tools we have at this moment, and the way we may be bending to the tools versus shaping the tools to us, giving them more power as platforms than we intend. Kate is everywhere. Her brand new show, The Tech Humanist Show, is a multi-media-format program exploring how data and technology shape the human experience. You can find it on Twitter or on the YouTube channel. Her newest book, Tech Humanist, is a classic. I’ll bet you want an AI-generated transcript of this episode: Get it here: http://adampierno.com/moving-forward-with-empathy-kate-oneill/ Get full access to The Strategy Inside Everything at specific.substack.com/subscribe
You'll enjoy this chat with Kate O'Neill, author, strategist, futurist and founder of KO Insights. She has been working in technology for more than 25 years, and thinking about the way people relate through it, alongside it and in spite of it. We discuss the luxury of the tools we have at this moment, and the way we may be bending to the tools versus shaping the tools to us, giving them more power as platforms than the intent. Kate is everywhere. Her brand new show, The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. You can find it on Twitter or on the YouTube channel. Her newest book, Tech Humanist is a classic. I'll bet you want an AI generated transcript of this episode: Get it here: http://adampierno.com/moving-forward-with-empathy-kate-oneill/ Hey, you might be a listener and have no idea, but I've written some books. You can find Under Think It (a marketing strategy handbook) and Specific (a book about trying to build brands in a world that doesn't want any more of them). You can read some fiction I've written here, for free. --- Send in a voice message: https://anchor.fm/adam-pierno/message Support this podcast: https://anchor.fm/adam-pierno/support
Summary The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Rumman Chowdhury’s passion lies at the intersection of artificial intelligence and humanity. She holds degrees in quantitative social science and has been a practicing data scientist and AI developer since 2013. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI. She tweets as @ruchowdh. This episode streamed live on Thursday, July 23, 2020. Episode highlights: (Part 1) 3:17 how Rumman's background in political science shapes her thinking in AI 3:28 "quantitative social science is math with context" 3:58 "often when we talk about technologies like artificial intelligence… we've started to talk about the technology as if it supersedes the human" 4:11 Rumman mentions her article "The pitfalls of a ‘retrofit human’ in AI systems": https://venturebeat.com/2019/11/11/the-pitfalls-of-a-retrofit-human-in-ai-systems/ 4:56 What is the core human concept that shapes your work? 5:25 "I recognize and want a world in which people make decisions that I disagree with, but they are making those decisions fully informed and fully capable." 5:49 A DOG ALMOST APPEARS! 
7:18 transparency and explainability in Responsible AI 8:17 on the cake trend: "reality is already turned upside on its head — I want to be able to trust that the shoe is a shoe and not really a cake" :) 9:04 on the critiques of Responsible AI, "cancel culture," and anthropomorphizing machines 11:11 Responsible AI is not about having politically correct answers; her role leading Responsible AI is part of core business functions 12:00 Responsible AI is about serving the customers, the people; credit lending discrimination example 12:40 need for discussion that's bigger than profitability and efficiency; humanity and human flourishing 13:27 "human flourishing — creating something with positive impact — is not at odds with good business" 15:21 "I think sometimes people can get overly focused on value as revenue generation; value comes from many, many different things" 17:05 a political science view on human agency relative to machine outcomes 19:22 AI governance 20:34 "constructive dissent" 21:13 the "human in the loop" problem 25:14 algorithmic bias 29:20 "building products with the future in mind" 29:44 are there applications of AI that fill you with hope for the good they could potentially do? (Part 2) 0:45 how can we promote humanity and human flourishing with AI and emerging technologies? 1:16 what can businesses do to enable Responsible AI 1:22 "I have a paper out… where we interview people who work in Responsible AI and Ethical AI… on what companies can do" (see: https://arxiv.org/abs/2006.12358) 6:22 what can the average human being do 8:40 where can people find you? on Twitter: https://twitter.com/ruchowdh on the web: http://www.rummanchowdhury.com/
About this episode's guest: Dr. Chris Gilliard is a writer, professor and speaker. His scholarship concentrates on digital privacy, and the intersections of race, class, and technology. He is an advocate for critical and equity-focused approaches to tech in education. His work has been featured in The Chronicle of Higher Ed, EDUCAUSE Review, Fast Company, […]
Companies like Google, Etsy, Cisco and more look to Kate O'Neill, the "Tech Humanist," for optimism about the role of technology in the world along with a firm reality check. Kate is founder and CEO of koinsights.com. Her insights help corporate and cultural leaders rethink how to succeed long term by taking a human-centric approach to digital transformation and readiness for the future. If you are seeing this before April 24, 2020, check out Kate at Speak Aid 2020: https://pheedloop.com/speakaid2020/site/ Kate O'Neill is very active on Twitter. I follow her closely, and I suggest you do the same: https://twitter.com/kateo. I also highly recommend her book Tech Humanist: How You Can Make Technology Better For Business & Better For Humans. With all the webinars and other productions flooding your emails, this may be the BEST time investment you make at this point in time. Let me know what you think by calling or texting me at 570 815 1626. My email is mwolff@businessbuildersmedia.com. Get all our Business Builders Shows at businessbuildersmedia.com. Thanks for listening! See acast.com/privacy for privacy and opt-out information.
UXAUS2019 Day 2 With interactive experiences increasingly becoming automated, algorithmically optimized, and driven by artificial intelligence, how do we ensure that we don't accidentally create absurd, out-of-proportion, or even harmful interactions? How can we ensure, in other words, that the experiences we create for humans are as meaningful as possible? As Kate O'Neill, author of Tech Humanist, points out, it's critically important that we do what we can to make our work matter, because our every design decision has a chance of reaching massive scale. Kate will examine how to bring meaning to even the smallest design decisions, to every interaction and every system, and ultimately, through this focus, how to bring more meaning to our work.
Guest: Kate O'Neill, The Tech Humanist. Hear the full interview on Beetle Moment Marketing Podcast ep. 56: https://beetlemoment.com/podcast Subscribe to this free Flash Briefing on Alexa. See acast.com/privacy for privacy and opt-out information.
How do we design technology that is both smart for business and good for people? This conversation asks questions about our approach to voice and AI, oncoming voice tech issues such as deep fakes, and privacy issues such as data mining by Facebook and other tech companies. Author and keynote speaker Kate O'Neill is known around the world as The Tech Humanist. Hear her great approach to keeping technology human and what it will take for emerging technology to be successful from a business standpoint.
Timestamps:
03:15 How do we approach voice design in a human-centric way that is also good for business?
04:30 Weather skill example - take context about what someone using the skill needs, like an umbrella
05:20 Businesses might build voice tech or other tech in order to check a box, but it's better to build for the person on the other end
06:00 Don't ask "What's our AI strategy?" - step back and say "What are we trying to accomplish as a business?"
06:20 Create alignment and relevance between the business and people outside it
07:00 "Who are we building for and how can we serve their needs?"
07:10 Avoid unintended consequences of technology as it becomes capable of such scale
07:35 Google Translatotron and deep fakes: Translatotron translates spoken words into another language while retaining the VOICE of the original speaker. Read more: https://bigthink.com/surprising-science/translatotron.
08:40 Google would now have your voice - what will they do with it? Voice synthesis and deep fakes - the terrifying possibilities (overall: cool but scary). How should we approach technology such as the Babelfish (Hitchhiker's Guide) - simultaneous translation that does not lose the integrity of the sound of your voice? One step further: a sample of your voice is sufficient for ML (machine learning) and AI to synthesize your voice.
09:30 Companies must govern themselves (e.g. Google)
09:50 Government has a responsibility to regulate privacy and data models
10:40 Kate doesn't have smart speakers in her home because we don't have a precedent for protecting user data, she says
11:20 Facebook Ten Year Challenge (Kate's tweet went viral in January 2019 over the trend of posting ten-year-old photos next to current photos) - she pointed out that this data could be training facial recognition algorithms on predicting aging
13:20 We have seen memes and games that ask you to provide structured information turn out to be data mining (e.g. Cambridge Analytica) - we have good reason to be cautious
14:40 "Everything we do online is a genuine representation of who we are as people, so that data really should be treated with the utmost respect and protection. Unfortunately, it isn't always." - Kate
15:00 Do we need government to regulate tech?
16:10 "Ask forgiveness, not permission" is clearly the case with Facebook, so why are users so forgiving?
20:00 What does a future social network look like with fewer privacy, data mining, and algorithm concerns?
Extra info: Deepfake (a portmanteau of "deep learning" and "fake") is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network. Deep fakes and voice emulation - the idea of voice skins and impersonation for fraud: https://qz.com/1699819/a-new-kind-of-cybercrime-uses-ai-and-your-voice-against-you/ "In March, fraudsters used AI-based software to impersonate a chief executive from the German parent company of an unnamed UK-based energy firm, tricking his underling, the energy CEO, into making an allegedly urgent large monetary transfer by calling him on the phone. The CEO made the requested transfer to a Hungarian supplier and was contacted again with assurances that the transfer was being reimbursed immediately. That too seemed believable." Subscribe to this podcast and listen free anywhere: beetlemoment.com/podcast See acast.com/privacy for privacy and opt-out information.
We know we're good at making technology. But are we good at making technology that pushes humanity in the right direction? That's the question we posed to Kate O'Neill. She's an author and speaker with an audacious goal: helping humanity prepare for an increasingly tech-driven future by guiding business and civic leaders to be both successful and respectful with human-centric data and technology, and by helping people better understand the human impact of emerging technologies. We talk to Kate about innovation - namely, what it is and what it isn't. We also discuss whether all this technology we're building is actually good for us and pushing us in the right direction, and exactly how scared we ought to be about the rise of automation. It's a fascinating conversation, and we think you're in for a treat.
Kate O’Neill is known as “the Tech Humanist.” She is helping humanity prepare for an increasingly tech-driven future by teaching business how to make technology that’s better for humans. Kate has led innovations across technology, marketing, and operations for more than 20 years in companies from startups to Fortune 500s. Among her prior achievements, she created the first content management role at Netflix; developed Toshiba America’s first intranet; led cutting-edge online optimization work at Magazines.com; was founder & CEO of [meta]marketer, a first-of-its-kind analytics and digital strategy agency; and held leadership and advisory positions in a variety of digital content and technology startups. She’s written 4 books - and is now the founder and CEO of KO Insights. Kate connects with Lou Diamond in this educational, focused, fun and insightful conversation on Thrive LOUD that shows her strength as a speaker and 'rap goddess-like' passion for getting the words, technology and humans to all come together. *** Connect to Lou Diamond: www.loudiamond.net Subscribe to Thrive LOUD: www.thriveloud.com/podcast
With a focus on helping humanity prepare for an increasingly data and tech-driven future, Kate O’Neill, AKA the “tech humanist,” helps guide and inspire businesses to create truly meaningful human experiences. As a leading innovator across technology, marketing, and operations, Kate is a global keynote speaker, strategic advisor, and author of the recent, Tech Humanist: … The post A Human-Centered Approach to Technology and Learning with Kate O’Neill appeared first on Leading Learning.
Understanding what makes humans “human” is an essential question for any company today. Especially the ones embarking on a digital transformation -- or any tech initiative. This is the subject of Kate O'Neill’s book, Tech Humanist: How You Can Make Technology Better for Business and Better for Humans. O’Neill, an author, keynote speaker, and strategic advisor, joins the podcast to explain why we need a new type of leader: the Tech Humanist. And why human experience, perspective, and empathy are the best guides for digital innovation. Listen to this episode to learn: • What Best Buy, Starbucks, Apple, and Southwest Airlines get right about customer experience • Why the act of “creating meaning” is so important to designing digital experiences • The trade-offs between convenience and data privacy • Why we need “benevolent business,” not “benevolent robots” • Why measuring a company’s success based on profit alone is inherently one-dimensional • The flaws and risks in taking a tech-led, rather than a research-led, approach to digital transformation More information on Tech Humanist: www.koinsights.com/techhumanistbook/
Denise Howell speaks with author Kate O'Neill, "The Tech Humanist," about her new book, "Tech Humanist: How You Can Make Technology Better for Business and Better for Humans." They discuss her background (she was one of the first 100 employees at Netflix), her viral tweets and article on the Ten Year Challenge, as well as reports that IBM used photos from millions of Flickr users, without their consent or knowledge, to train its facial recognition tech, and more. Host: Denise Howell Guest: Kate O'Neill (Tech Humanist) Download or subscribe to this show at https://twit.tv/shows/triangulation.
Kate O'Neill, the Futurist, Author, Keynote Speaker, Fortune 500 Advisor, and Founder of KO Insights joins the show to share her journey from being one of the first 100 employees of Netflix to writing Tech Humanist and speaking at the UN. Hear what Netflix was like in the early days, how to align your company to make lives better, whether privacy even exists anymore, and why completely unplugging isn’t necessary. Connect with Kate on Twitter at @KateO and at KOinsights.com
Kate O'Neill is the author of Tech Humanist, and in our increasingly tech-driven world, Kate is helping businesses make better human experiences at scale. In this conversation, Kate shares why she believes meaningful questions are far more important than sensible answers, discusses the importance of business purpose (and why Disney's "create magical experiences" is a clear, instructive statement of operational purpose), and reminds us all that data is not objective. We also dig into Kate's professional journey from a BA in German and a master's in linguistics to being one of the first 100 employees at Netflix (she also built the first intranet at Toshiba in San Jose, CA). The theme underlying this thought-provoking conversation around the speed and direction of technological change is entirely human, as Kate notes: "human experiences and human data are really propelling innovation forward." Website: www.koinsights.com Twitter: @kateo Learn more about your show hosts at www.jkellyhoey.co and www.martywolffbusinesssolutions.com. Thanks for listening. Call or text me with your comments or questions at 570 815 1626. Marty Wolff, Founder of the Business Builders Show. See acast.com/privacy for privacy and opt-out information.
If you use social media, you've probably noticed a trend across Facebook, Instagram, and Twitter of people posting their then-and-now profile pictures, mostly from 10 years ago and this year. WIRED OPINION - About the author: Kate O'Neill is the founder of KO Insights and the author of Tech Humanist and Pixels and Place: Connecting Human Experience Across Physical and Digital Spaces. Instead of joining in, I posted the following semi-sarcastic tweet: https://twitter.
Welcome to episode #648 of Six Pixels of Separation. Here it is: Six Pixels of Separation - Episode #648 - Host: Mitch Joel. This is the type of human being that the world (and business) desperately needs today. Kate O’Neill is helping humanity prepare for an increasingly tech-driven future by teaching business how to be successful with human-centric data and technology. Kate’s expertise comes from more than 20 years of experience and entrepreneurship leading innovations across technology, marketing, and operations. She created the first content management role at Netflix, was founder & CEO of [meta]marketer - a first-of-its-kind analytics and digital strategy agency, led the online optimization work at Magazines.com, developed Toshiba America‘s first intranet, and held leadership and advisory positions in a variety of other start-ups. Kate is now founder of KO Insights, a consultancy committed to improving human experience at scale. Kate is the author of four books including her latest, Tech Humanist (and we can't forget Pixels And Place). How do we make technology better for business and humans alike? Let's find out. Enjoy the conversation... Running time: 54:16. Hello from beautiful Montreal. Subscribe over at iTunes. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect to me directly on Facebook here: Mitch Joel on Facebook. or you can connect on LinkedIn. ...or on Twitter. Here is my conversation with Kate O’Neill. Tech Humanist. Pixels And Place. KO Insights. Follow Kate on Twitter. This week's music: David Usher 'St. Lawrence River'.
I'd only briefly met Kate O'Neill in person once before doing this interview. We've "known" each other for a while now on social media channels but had never really spent a lot of time talking in real life prior to sitting down for this interview. My loss… It's easy to see why Kate, the founder of KO Insights, is such a popular speaker, author (https://www.amazon.com/Kate-ONeill/e/B00JRD9ZAC), mentor, and consultant when it comes to human-centric marketing and design. It's because she's so, well, human. She's real. No pretences, no BS, simply real. And in this day and age that is a rare commodity. Please do me one favor before you listen to this interview: take a good look at this pic of Kate (her guest profile pic). Trust me, it will make all the sense in the world after you listen to the entire interview. It just might bring a tear to your eye as it did mine. Special Guest: Kate O'Neill.