The Tech Humanist Show

Follow The Tech Humanist Show
Share on
Copy link to clipboard

The Tech Humanist Show is a multi-media-format show often featuring a special guest and exploring topics like digital culture, data privacy, automation, human interfaces, artificial intelligence, and more. It's "everything about data and technology that shapes the human experience."

Kate O'Neill


    • Nov 19, 2024 LATEST EPISODE
    • every other week NEW EPISODES
    • 46m AVG DURATION
    • 67 EPISODES


    Search for episodes from The Tech Humanist Show with a specific topic:

    Latest episodes from The Tech Humanist Show

    Embracing Paradox: Wendy K. Smith on Navigating the Both/And in Tech Leadership

    Play Episode Listen Later Nov 19, 2024 50:56


    In this insightful episode of The Tech Humanist Show, host Kate O'Neill welcomes Wendy Smith, the Dana J. Johnson Professor of Management at the University of Delaware. Wendy is an expert in strategic paradoxes and brings a wealth of knowledge from her groundbreaking work and acclaimed book, “BOTH/AND Thinking.” Wendy dives deep into the pitfalls […]

    Tech Agnostic: Rethinking Technology’s Role in Society with Greg M. Epstein

    Play Episode Listen Later Nov 12, 2024 39:46


    In this enlightening episode of The Tech Humanist Show, host Kate O'Neill dives deep into a thought-provoking conversation with Greg Epstein, the humanist chaplain at Harvard and MIT, and a New York Times best-selling author. Known for his impactful role as the Twitter chaplain during the pandemic, Greg now channels his passions through LinkedIn and […]

    The Embodied Voice: Embracing the Humanity in Communication with Casey Erin Clark

    Play Episode Listen Later Oct 31, 2024 28:59


    In this engaging episode of The Tech Humanist Show, host Kate O'Neill sits down with the multifaceted Casey Erin Clark, co-founder of Vital Voice Training, to explore the profound connections between voice, human communication, and technology. With AI and voice technologies becoming increasingly prevalent, Casey shares her unique perspective on how these developments impact our […]

    AI Ethics and the Future of Higher Education with Dr. Ravit Dotan

    Play Episode Listen Later Oct 24, 2024 36:47


    Renowned AI ethicist and philosopher, Ravit Dotan joins host Kate O'Neill as they dive deep into the world of artificial intelligence and ethics. The conversation covers the significant changes in public perception of AI, sparked by direct interaction and the influence of sci-fi-like misconceptions. Dotan addresses the ethical pitfalls of large language models, including the […]

    Immersive Worlds – Cortney Harding on VR, AR, and the Metaverse

    Play Episode Listen Later Oct 16, 2024 38:01


    Kate O'Neill is joined by Cortney Harding— a visionary leader in virtual and augmented reality (VR and AR), founder, and CEO of Friends with Holograms. Together, they explore how immersive technologies are transforming human experiences and the innovative projects that Harding's agency has spearheaded. Cortney shares insights into Friends with Holograms' award-winning work, including a […]

    The Math Behind Social Media: How Algorithms Shape Our Digital Lives — featuring Noah Giansiracusa

    Play Episode Listen Later Oct 1, 2024 54:49


    In this insightful episode of The Tech Humanist Show, host Kate O'Neill sits down with Noah Giansiracusa, an associate professor of mathematics at Bentley University and visiting scholar at Harvard, to unravel the fascinating interplay between mathematics, social media algorithms, and their impact on society. Noah dives into his journey, which began during the pandemic, […]

    The Metaverse and Human Experiences with Sara M. Watson

    Play Episode Listen Later Sep 24, 2024 38:15


    When we talk about the metaverse as we imagine it taking shape, we don't always have the vocabulary to describe how we should be able to navigate it safely and enjoyably. Join host Kate O'Neill on The Tech Humanist Show as she engages in an enlightening conversation with technology critic and industry analyst Sara M. […]

    Binging Netflix with Dr. Joel Mier

    Play Episode Listen Later Sep 17, 2024 56:21


    In this compelling episode of The Tech Humanist Show, host Kate O'Neill sits down with Dr. Joel Mier, former head of marketing at Netflix, to explore the complex intersection of data, empathy, and customer-centricity in business. The conversation opens with a deep dive into the challenges of quantifying empathetic understanding and the importance of tying […]

    Navigating Water Challenges in the Tech Age: Insights from Industry Experts

    Play Episode Listen Later Apr 8, 2024 50:51


    In this riveting episode of The Tech Humanist Show, Kate O'Neill sits down with Aimee' Killeen and Chuck Greely, two pivotal figures in the water sector, to explore the confluence of technology, environmental science, and water management. Aimee' Killeen, President of the Water Environment Federation (WEF) and COO of Providence Engineering and Environmental Group, shares […]

    Technofeudalism: What Killed Capitalism with Yanis Varoufakis

    Play Episode Listen Later Feb 6, 2024 47:09


    In this episode of The Tech Humanist Show, we welcome Yanis Varoufakis, an economist, politician, author, and former Minister of Finance in Greece. Known for his insightful critique on the state of capitalism, Yanis opens up important dialogues that challenge our contemporary understanding of the economy. Our discussion centers around his new book, “Technofeudalism: What […]

    Cities, data, and tech – with Nashville mayoral candidate Freddie O’Connell

    Play Episode Listen Later Sep 9, 2023 39:11


    Cities are like test beds for how data and tech can empower citizen experience. So it's interesting to consider leading a growing city as someone with a background in technology. In this episode, Kate talks with her old friend Freddie O'Connell about his run for mayor of Nashville, and about how data and technology figures […]

    Balancing Economic Needs and Environmental Stewardship: Lessons from Nevada’s Mining Industry

    Play Episode Listen Later Sep 5, 2023 20:53


    Dive deep with us into the world of mining, sustainability, and the delicate balance between economic needs, environmental responsibility, and future potential. Our interview with Dana Bennett, president of the Nevada Mining Association, sheds light on the complex landscape of the mining industry, in Nevada but also globally, and its implications for communities and technology. […]

    The Tech Feminist Episode, in honor of International Women's Day – Celebrating Women in Tech

    Play Episode Listen Later Mar 8, 2023 11:29


    Women figure heavily into the tech humanist story. So for International Women's Day — and to kick off our new season, Season 3! — we're celebrating with an episode highlighting just a few of the brilliant women tech and futurist thinkers who've been guests on our show: Vanessa Mason, Dr. Safiya Noble, Giselle Mota, and Dr. […]

    What Does Spotify Unwrapped Have to do With Surveillance?

    Play Episode Listen Later Dec 12, 2022 15:33


    As fun as Spotify Unwrapped is, it speaks to the fact that we're all slowly willing to accept more and more tech companies gathering huge amounts of data on all of us...

    Reducing Algorithmic Bias, Harm, and Oppression

    Play Episode Listen Later Sep 2, 2022 28:16


    Everyone who interacts with technology in any way is either affected by or directly affecting algorithmic bias on a daily basis, and most of it is invisible to us. That's what I'm talking about today: algorithmic bias, where it comes from, how it affects you and your business, and how we can use strategic big-picture thinking to mitigate and erase the harm it causes.

    Humanity in (Supply) Chains

    Play Episode Listen Later Aug 4, 2022 23:20


    'Supply Chain' has become a household term. With its growing popularity, it's time to address something missing from that language: the humans who make it run.

    Why Human Experience? (vs Customer, Consumer, User, etc)

    Play Episode Listen Later Jul 1, 2022 32:25


    This week, we're exploring why it behooves businesses and business leaders to look at their users, consumers, customers, etc., as humans first. Slightly shifting perspective to consider the humanity behind purchasing decisions can lead to greater loyalty, more frequent use, and genuinely happier users, all of which add up to more business success and better outcomes for the world. Together with my guests, we discuss how human-centric decisions apply to various industries and how you can build better relationships that lead to success for all of humanity. Guests this week include Charlie Cole, Neil Redding, Dr. Rumman Chowdhury, Ana Milicevic, Cathy Hackl, Marcus Whitney, and David Ryan Polgar. The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. Produced and edited by Chloe Skye, with research by Ashley Robinson and Erin Daugherty at Interrobang and input from Elizabeth Marshall. To watch full interviews with past and future guests, or for updates on what Kate O'Neill is doing next, subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube, or head to KOInsights.com. Full Transcript Kate O'Neill: When you buy something, you're a customer. But — to paraphrase a line from the movie Notting Hill — you're also just a person, standing in front of a business, asking it to treat you like a human being. Over the last two decades plus working in technology, I've often held job titles that were centered on the experience of the user, the consumer, or the customer. In fact, the term ‘customer experience' has been in use since at least the 1960s, and has become so common that a recent survey of nearly 2,000 business professionals showed that customer experience was the top priority over the next five years. And while generally speaking this emphasis is a good thing, my own focus over the past decade or so has shifted. I've realized that the more macro consideration of human experience was a subtle but vital piece missing from the discussion at large. Because when we talk about experience design and strategy, no matter what word we use to qualify it—customer, user, patient, guest, student, or otherwise—we are always talking about humans, and the roles humans are in relative to that experience. In order to refocus on human experience instead of customer, you have to change the way you think about your buyers. You owe it to yourself to think not just about how people can have a better experience purchasing from your company, but also what it means to be fully human within the journey that brings them to that moment, and the uniquely human factors that drive us to make decisions leading to purchase or loyalty. A recent piece by Deloitte shared in the Wall Street Journal echoes this idea and offers five ways to be more human-centric in business: 1) be obsessed by all things human, 2) proactively identify & understand human needs before they are expressed, 3) execute with humanity, 4) be authentic, and 5) change the world. That's what today's episode is about: using empathy and strategic business-savvy to understand what it means to be human, and how that intersects with the worlds of technology and business. Neil Redding: “When you look at everything that has to do with buying and selling of things, it's so closely tied with what we care about, what we value most, value enough as humans to spend our hard-earned money on. And so, the realm of retail reflects something really deeply human, and profoundly human.” Kate: That was Neil Redding, brand strategist and self-described “Near Futurist” focused on the retail space. He's right—buying and selling things has become deeply entwined with humanity. But when we purchase something, it's not because we think of ourselves as “customers” or “end users.” We buy because we have a need or desire to fulfill, and sometimes that need is purely emotional. A ‘customer' buys your product—a human buys your product for a reason. 84% of consumers say that being treated like a person instead of a number is an important element to winning their business. It does seem like business professionals are catching on, as 79% say it's impossible to provide great service without full context of the client and their needs. But understanding something isn't the same as putting it into practice—only 34% of people say they feel like companies actually treat them as individuals. One major difference is the question of framing. Customer experience frames the motivator as, ‘how effectively the business operates the events related to a purchase decision.' It drives companies to focus on improving their own metrics, like bringing down call center wait times. These may yield worthwhile outcomes, but they're inherently skewed to the business perspective and aligned to the purchase transaction. Focusing instead on human experience shifts the perspective to the person outside the business, and what they want or need. It allows consideration of the emotional state they may be bringing to the interaction, which leaves greater room for empathy and context. A human experience mindset suggests that each individual's unique circumstances are more important than aggregate business metrics, because the reason why that person is interacting with your company probably can't be captured by measuring, say, how long they might have to wait on the phone. You could bring that wait time to zero and it still may not have any impact on whether the person feels heard, respected, or satisfied with the outcome — or whether they want to engage with you again. But as fuzzy as it is to talk about human experience, we know that measurement is fundamental to business success, so we have to find a way to define useful metrics somehow. For each business, that number is likely a bit different. So how do you know whether your customers feel like they're being treated as humans instead of just numbers? Charlie Cole, CEO of the flower delivery website ftd.com, believes one answer is obsessing over customer satisfaction metrics. Charlie Cole: “The best way to win this industry is just kick ass with the customer. We obsess over NPS scores, uh, as kind of leading indicators of LTV scores.” Kate: If you're not familiar with the acronyms, allow me to decipher: NPS stands for Net Promoter Score, which measures how likely the customer is to recommend the business, and LTV in this context means ‘lifetime value,' or the amount a customer may spend at your business over the course of their lifetime. Charlie Cole: “But remember, it's not the receiver's lifetime, it's the sender's lifetime. I mean, think about it. My stepmom is—just had a birthday April 9th, and I sent her a plant. If I went on a website and picked out a Roselia, and she received an Azelia, she's gonna be like, ‘thank you so much, that was so thoughtful of you,' and I'm gonna be pissed, right? And so like, we have to make sure we optimize that sender NPS score. It was shocking to us when we looked into the NPS, when we first got to FTD, our NPS, Kate, was in like the teens! My CTO looked at it and he goes, ‘how is this possible? We send gifts, who doesn't like receiving gifts?' And so we were looking at this stuff and we realized like, this is how you win. And I think when people look at the world of online delivery, there's very few companies that are extremely customer-centric… and in our world it matters. It's births, it's deaths, it's birthdays, it's Mother's Days… it's the most emotional moments of your life that you're relying on us for, so I think that gravitas just goes up to the next level.” Kate: Net Promoter Score offers directional insight about the customer experience, but it still isn't quite measurement of the broader human experience. The typical NPS question is phrased, “How likely is it that you would recommend [company X] to a friend or colleague?”, which forces customers to predict future actions and place themselves into hypothetical or idealistic scenarios. It is also measured on a 1-10 scale, which is pretty arbitrary and subjective — one person's 9 would not be another person's 9. A clearer way to ask this and gain more useful human-centric data would be with simple yes/no questions, asking people about actual past behaviors. For instance, “in the past 6 weeks, have you recommended [company X] to a friend or colleague?” Other alternative measures include PES, or Product Engagement Score, which measures growth, adoption, and stickiness of a given product or service, and doesn't require directly asking customers questions about their past or future habits. Instead, data comes in in real-time and allows for a clear measurement of success relative to a product's usage. While these metrics are useful in various ways, one thing missing from them is emotion. As humans, we are animals deeply driven by our emotions: research from MIT Sloan finds that before humans decide to take an action—any action, including buying something—the decision must first go through a filtering process that incorporates both reason and feelings. Reason leads to conclusions, but emotion leads to action. And if a customer feels frustrated by the customer service they're experiencing—perhaps they feel like they are being treated like a number, and not a person—they'll file a complaint, share on social media, and tell their friends and family to avoid the business. These actions can be quite time-consuming, but people will give up their time to right a wrong they feel they've experienced. All this is to say that if you want to retain human loyalty or attract new people to your business, you have to create a positive emotional response in your customers, which means understanding more about who they are than simply what product they might want. Many businesses have discovered that one of the best ways to create an emotional connection with people is through branding. A great brand image can forge a permanent bond with someone who feels strongly that the company shares their values and practices what they preach. Once someone has connected a brand to their own identity, it becomes much more difficult to convince them to switch to another company—even if that company provides the same product at lower cost—because switching companies feels like losing a part of them. Dr. Rumman Chowdhury, Director of the Machine Learning Ethics, Transparency, and Accountability team at Twitter, explored the concept of branding with me when she came on my show last year. Rumman Chowdhury: “Human flourishing is not at odds with good business. Some of what you build, especially if you're a B2C company, it's about brand. It's about how people feel when they interact with your technology or your product. You are trying to spark an emotion. Why do you buy Coke vs Pepsi? Why do you go to McDonald's vs Burger King? Some of this is an emotional decision. It's also this notion of value. People can get overly narrowly focused on value as revenue generation—value comes from many, many different things. People often choose less ‘efficient' outcomes or less economically sound outcomes because of how it makes them feel. A frivolous example but an extreme example of it would be luxury brands. Apple spends so much money on design. Opening every Apple product is designed to feel like you're opening a present. That was intentional. They fully understand the experience of an individual, in interacting with technology like a phone or a computer, is also an emotional experience.” Kate: If you're able to understand what people connect to about your brand, you can invest into magnifying that image. If your customer loves that you invest into clean energies, it becomes less important how much time they spend on the phone waiting for a service rep. Operational metrics can't show you this emotional resonance, so instead you have to think about what makes you stand out, and why people are attracted to you. Sometimes, however, human emotion has nothing to do with the product or brand in question, and more to do with the circumstances surrounding it. There's perhaps no better example of this than flowers, which can be given for myriad reasons, and usually at the extreme ends of the emotional spectrum. I'll let Charlie Cole explain. Charlie Cole: “For us, it's buyer journey by occasion. So, you are sending flowers for the birth of a newborn. You are sending flowers for the tragic death of a teenager. You are sending flowers for the death of your 96 year old great grandfather. You are sending flowers for your wife's birthday. I would argue that even though the end of all those buyer journeys is ‘flowers,' they are fundamentally different. And you have to understand the idiosyncrasies within those buyer journeys from an emotional component. You have to start with the emotions in mind. You're buying running shoes. The buying journey for like a runner, for like a marathoner, a guy who runs all the time, is emotionally different than someone who just got told they need to lose weight at the doctor. Someone who travels for business all the time versus someone who's taking their first ever international…travel. Like, my wife retold a story the other day to my aunt about how her first European trip was when she won a raffle to go to Austria when she was 17. And her, like, single mom was taking her to Europe, and neither of them had ever been to Europe. That's a different luggage journey than me, who used to fly 300,000 miles a year. And I think that if you take the time to really appreciate the emotional nuance of those journeys, yes there's data challenges, and yes there's customer recognition challenges, so you can personalize it. But I would urge every brand to start with like the emotional amino-acid level of why that journey starts, and then reverse-engineer it from there. Because I think you'll be able to answer the data challenges and the attribution challenges, but I think that's a place where we sometimes get too tech-y and too tactical, as opposed to human.” Kate: Another challenge unique to flowers and other products usually given as gifts is that there are two completely different humans involved in the transaction, each with different expectations and emotions riding on it. Charlie Cole: “There's two people involved in every one of our journeys, or about 92% of them: the buyer, and the receiver. So how do I message to you, I don't want to ruin the surprise! But I need to educate you, and oh yeah, I'm a really really nervous boyfriend, right? I wanna make sure everybody's doing it right, and it's gonna be there on time, and I need to make sure it's going to the right place… So the messaging pathways to the sender and receiver are fundamentally different. If you kind of forget about your buying journey, and imagine everything as a gifting buyer journey, it just changes the messaging component. Not in a nuanced way, but darn near in a reciprocal way.” And while some businesses struggle to connect emotionally with the humans that make up their customer base, the tech industry—and specifically social media companies—seem to fundamentally understand what it is that humans crave, in a way that allows them to use it against us. They thrive because they take something that is quintessentially human—connecting with people and sharing our lives—and turn it into a means for data collection that can then be used to sell us products that feel specifically designed for us. Like most of us, Neil Redding has experienced this phenomenon firsthand. Neil Redding: “We spend more and more of our time in contexts that we are apparently willing to have commercialized, right? Instagram is kind of my go-to example, where almost all of us have experienced this uncanny presentation to us of something that we can buy that's like so closely tied to… I mean, it's like how did you know that this is what I wanted? So myself and people close to me have just said, ‘wow, I just keep buying this stuff that gets presented to me on Instagram that I never heard of before but gets pushed to me as like, yeah it's so easy, and it's so aligned with what I already want. So there's this suffusion of commercial transaction—or at least discovery—of goods that can be bought and sold, y'know, in these moments of our daily lives, y'know, so that increasingly deep integration of commerce and buying and selling of things into our self-expression, into our communication, works because what we care about and what we are willing to buy or what we are interested in buying are so intertwined, right? They're kind of the same thing at some deep level.” Kate: Part of the reason this works is that humans crave convenience. Lack of convenience adds friction to any process, and friction can quickly lead to frustration, which isn't a mind state that leads to more business. The internet and social media has made keeping up with friends and gathering information incredibly convenient, so an advertisement here or there—especially one that looks and feels the same as everything else on our feed—doesn't bother us like it might in other contexts. And when those advertisements have been tailored specifically to our interests, they're even less likely to spark a negative emotion, and may in fact encourage us to buy something that we feel is very “us.” The big question for business leaders and marketers then is how do you digitize your business so that it emphasizes the richness of the human experience? How do you know which technologies to bring into your business, and which to leave aside? There are plenty of established and emerging technologies to choose from: Interactive email helps marketers drive engagement and also provides an avenue for additional data collection. Loyalty marketing strategies help brands identify their best customers and customize experiences for them. Salesforce introduced new features to help humanize the customer service experience with AI-powered conversational chatbots that feel pretty darn close to speaking with an actual human. Virtual and Augmented Reality website options allow customers to interact with products and see them in their hands or living rooms before they buy. With all the choice out there, it can be overwhelming. And t oo often, businesses and governments lean into the “just buy as much tech as possible!” approach without thinking integratively about the applications of said technology. Many companies are using that technology to leverage more data than ever before, hoping to customize and personalize experiences. David Ryan Polgar, a tech ethicist and founder of All Tech Is Human, explains why this method may not yield the results you think—because humans aren't just a collection of data points. David Ryan Polgar: “Are we an algorithm, or are we unique? I always joke, like, my mom always said I'm a, a snowflake! I'm unique! Because, when you think about Amazon and recommendations, it's thinking that your past is predicting your future. And that, with enough data, we can accurately determine where your next step is. Or even with auto-suggestion, and things like that. What's getting tricky is, is that true? Or is it subtly going to be off? With a lot of these auto-suggestions, let's say like text. Well the question I always like to think about is, how often am I influenced by what they said I should say? So if I wanna write, like, ‘have a…' and then it says ‘great day,' well, maybe I was gonna say great day, but maybe I was gonna say good day. And it's subtly different, but it's also influencing kinda, my volition. Now we're being influenced by the very technology that's pushing us is a certain direction. And we like to think of it, ‘well, it's already based on you,' but then that has a sort of cyclical nature to actually extending—” Kate: “Quantum human consciousness or something.” David: “Exactly! Exactly.” Kate: “Like, the moment you observe it, it's changed.” Kate: It's so easy, especially when you work with data, to view humans as output generators. But we're living in an age where people are growing increasingly wary of data collection, which means you may not know as much about the people whose data you've collected as you think you do. Becoming dependent on an entirely data-driven model for customer acquisition may lead to faulty decisions — and may even be seen as a huge mistake five years from now. Instead, I always talk about “human-centric digital transformation,” which means the data and tech-driven changes you make should start from a human frame. Even if you're already adopting intelligent automation to accelerate your operations, in some cases, very simple technologies may belong at the heart of your model. Here's Neil Redding again. Neil Redding: “Using Zoom or FaceTime or Skype is the only technology needed to do what a lot of stores have done during COVID, where their customers expect the store associate interaction when they come to the stores, they just create a one-on-one video call, and the shopper just has this interaction over videochat, or video call, and kind of does that associate-assisted shopping, right? And so you have that human connection, and again, it's nowhere near as great as sitting across a table and having coffee, but it's better than, y'know, a 2-dimensional e-commerce style shopping experience.” Kate: As a parallel to video conferencing, Virtual Reality has opened up avenues for new human experiences of business as well. Cathy Hackl, a metaverse strategist and tech futurist, explained a new human experience she was able to have during COVID that wouldn't have been possible without VR. Cathy Hackl: “I'll give you an example, like with the Wall Street Journal, they had the WSJ Tech Live, which is their big tech conference, and certain parts of it were in VR, and that was a lot of fun! I mean, I was in Spatial, which is one of the platforms, hanging out with Joanna Stern, and with Jason Mims, and like, in this kind of experience, where like I actually got to spend some 1-on-1 time with them, and I don't know if I would have gotten that if I was in a Zoom call, and I don't know if I would have gotten that in person, either.” Kate: Virtual Reality and video technologies have also opened up new avenues for healthcare, allowing patients to conference with doctors from home and only travel to a hospital if absolutely necessary. Marcus Whitney is a healthcare investor and founder of the first venture fund in America to invest exclusively in Black founded and led healthcare innovation companies; he explains that these virtual experiences allow for better happiness, healing, and comfort. Marcus Whitney: “Going forward, telehealth will be a thing. We were already on the path to doing more and more healthcare in the home. It was something that they were trying to stop because, is the home an appropriate place for healthcare to take place? Lo and behold, it's just fine. Patients feel more secure in the home, and it's a better environment for healing, so you're gonna see a lot more of that. I think we're finally gonna start seeing some real breakthroughs and innovation in healthcare. Most of the lack of innovation has not been because we didn't have great thinkers, it has largely been regulatory barriers. Remote patient monitoring was a huge one that came up in the last year, so now we have doctors caring about it. What moves in healthcare is what's reimbursable. They were always trying to regulate to protect people, but then they realized, well, we removed the regulatory barriers and people were fine, so that regulation makes actually no sense, and people should have more choice, and they should be able to do telehealth if they want to.” Kate: And that's just it: humans want choice. We want to feel seen, and heard, and like our opinions are being considered. There's another technology on the horizon that could give people more power over their technology, and therefore freedom and choice, that will likely cause massive change in the marketplace when it is more widely available: Brain-computer interface. Cathy Hackl explains. Cathy Hackl: “So I'm very keen right now on brain-computer interface. The way I'm gonna explain it is, if you've been following Elon Musk, you've probably heard of neuro-link—he's working on BCI that's more internal, the ones I've been trying are all external devices. So I'm able to put a device on that reads my brainwaves, it reads my intent, and it knows that I wanna scroll an iPad, or I've been able to turn on lights using just my thoughts, or play a video game, or input a code… I've been able to do all these things. And I'm very keen on it, very interested to see what's going on… I think the biggest thing that's stuck with me from studying all these technologies and trying them out from an external perspective, is that my brain actually really likes it. Loves the workout. Like, I'm thinking about it, and I'm like, the receptors here, pleasure receptors are like lighting up, I'm like ‘ohmygosh!' So I'm still sitting with that. Is that a good thing? Or a bad thing? I don't know, but I think these technologies can allow us to do a lot of things, especially people with disabilities. If they don't have a hand, being able to use a virtual hand to do things in a virtual space. I think that's powerful.” Kate: That story also illuminates the fact that there are many different types of people, each with different needs. Digital transformation has given people with disabilities a new way to claim more agency over their lives, which creates a brand new potential customer-base, filled with humans who desire freedom and choice as much as the next person. Now, let's talk about some companies who are doing at least a few q things right when it comes to the digital transformation of human experience. Starbucks, for instance. One of the worst parts of shopping in-store was waiting in line, and then the social pressure from the people behind you wishing you would order faster. If you weren't a regular customer, the experience could be overwhelming. When they launched their mobile order app, it tapped into a number of things that made the experience of buying coffee faster and easier, with all sorts of fun customization options that I never knew existed when I only ordered in-store. Now, even brand new customers could order complex coffee drinks — meaning in that one move the company may have brought in new customers and allowed the cost per coffee to increase — all without people feeling pressure from other shoppers, and without the inconvenience of waiting in line. Then there's Wal-Mart, who during the pandemic instituted ‘Wal-Mart pickup,' a service where people can shop online and pick up their goods without ever having to step into the store. The service is technically operating at a financial loss, but Wal-Mart understands that solid branding and convenience are worth more to their company's bottom-line in the long run than the amount of money they're losing by investing into this particular service. Of course, some businesses are better suited for the online-only world than others. As more companies attempt to digitize their businesses, it's incredibly important to tap into the human reasons that people wanted to engage with your business in the first place. In some cases, businesses have failed to make this connection, assuming that “if people liked us as a physical product, then they'll continue using us when we're digital,” or worse, “if we simply make people aware of us, they will become customers!” This assumption ignores human nature, as Ana Milicevic, a longtime digital media executive who is principal and co-founder of Sparrow Digital Holdings, explains. Ana Milicevic: “To be relevant in this direct to consumer world, you also have to approach awareness and customer acquisition differently. And this is the #1 mistake we see a lot of traditional companies make, and not really understand how to pitch to a digital-first, mobile-first consumer or a direct subscriber. They're just not wired to do it that way, and often times the technology stacks that they have in place just aren't the types of tools that can facilitate this type of direct interaction as well. So they're stuck in this very strange limbo where they are committed to continuing to acquire customers in traditional ways, but that's just not how you would go about acquiring a direct customer.” Kate: Acquiring those direct customers requires an understanding of what humans want—a large part of which is meaning. And how people create meaning in their lives is changing as well. Long before the pandemic, trends were already pointing toward a future where we live more of our lives online, but those trends have also been accelerated. So beyond digitizing your business, it may also be useful to invest time, money, and energy into discovering how the humans of the future will create meaning in their lives. Cathy Hackl discussed some of the trends she's seen in her own kids that show how today's children will consume and make purchasing decisions in a very different way than most modern businesses are used to. Cathy Hackl: “Something else that I'm noticing… y'know we're going to brick and mortar, but we're going to brick and mortar less. So you start to see this need for that virtual try-on to buy your makeup, or to buy clothes, and it's also transitioning not only from the virtual try-on into what I'm calling the direct-to-avatar economy. Everything from virtual dresses that you're buying, or custom avatars, y'know you're starting to create this virtualized economy. And this is the reason I always talk about this now, is my son recently did his first communion, and when we said, ‘hey, what do you want as a gift?' he said, ‘I don't want money, I want a Roblox gift card that I can turn into Robucks,'—which is the currency they use inside Roblox—'so that I can buy—whichever gamer's skin.' And, y'know, when I was growing up, my brother was saving up to buy AirJordans. My son doesn't want that, y'know, he wants Robucks, to buy something new for his avatar. This is direct-to-avatar; is direct-to-avatar the next direct-to-consumer?” Kate: Our online avatars represent us. We can customize them to directly express who we feel we are. Part of the reason this idea is so attractive is that many people—increasingly so in the context of online interaction—seek out meaningful experiences as our ‘aspirational' selves. We gravitate to the communities that align with facets of who we wish we were. And perhaps less productively, we may also choose to present the idealized version of ourselves to the world, omitting anything we're embarrassed by or that we feel may paint us in a negative light. But honestly, all of this makes sense in the context of making meaning, because humans are generally the most emotionally fulfilled when we feel empowered to control which ‘self' we present in any given interaction. With this much freedom of choice and expression, and with the complications of the modern supply chain—which I will talk about more in depth in our next episode—it's important to acknowledge that creating convenience and improving human satisfaction aren't going to be easy tasks. Behind the scenes, there is a tremendous amount of work that goes into providing a satisfying customer experience. Let's go back to the example of flowers and see what Charlie Cole has to say. Charlie Cole: “If it's too cold they freeze, if it's too hot they wilt, if UPS is a day late they die. And then, the real interesting aspect—and this isn't unique to flowers—the source is remarkably centralized. So the New York Times estimated that 90-92% of roses that are bought in America for Valentine's Day come from Columbia and Ecuador. And so, if anything goes wrong there, then you really don't have a chance. Imagine the quintessential Valentine's Day order: A dozen long-stem roses, New York City. Easy, right? I used to live on 28th and 6th, so let's say Chelsea. Okay, I've got 7 florists who could do it. Who has delivery capacity? Roses capacity? The freshest roses? The closest to proximity? The closest to the picture in the order? Who has the vase that's in the order? Did they buy roses from us? Because I like to be able to incentivize people based on margins they already have. And so without exaggeration, Kate, we have about 11-12 ranking factors that educate a quality score for a florist, and that's how it starts the process. But then there's all the other things, like how do we know somebody didn't walk into that florist that morning and buy all the roses, right? And so there's this real-time ebb-and-flow of demand because our demand is not ours! They have their own store, they have their own B2B business, they might take orders from some of our competitors. They might have their own website. We have no idea what any given florist happens in real time because they are not captive to us. What we've learned is the place we have to get really really really really good is technology on the forecasting side, on the florist communication side, and the customer communication side. Because I can't control the seeds on the ground in Columbia, but I can really control the communication across the entire network as far as we go, as well as the amounts the we need in various places.” Kate: Creating that small-scale, emotional human moment where someone receives flowers requires immense computing power and collaboration between multiple businesses and workers. Which is part of why Charlie Cole also believes that in some cases, the best way to help your business succeed is to invest in helping other businesses that yours interacts with. Charlie Cole: “Small businesses… I think it's our secret sauce. And I think COVID has shined a light on this: small businesses are the core of our communities. Right? They are the absolute core, and I think it was always nice to say that, but now we know it. And so here's what I think we do better than anybody else: we've invested more in helping our florists run their own small business independently of us than we have about optimizing our marketplace. We launched new POS software. We launched a new local website product where we're like the first person ever to become a reseller for Shopify because we made a custom platform for florists. We're just their website provider. They're actually competing with FTD.com in a lot of ways—but I think that's where we're gonna differentiate ourselves from all the other people that are perceived as, by small businesses, (their words not mine) leeches. Right? I think to actually effectively run a marketplace which is fulfilled by small businesses, you need to invest as much in helping them win their local market independent of you.” Kate: You could make the case that there is no more evolved human experience than choosing to help others. So if your business is engaged in activities that allow other businesses—and therefore humans—to thrive, you may also be building your brand in a direction that creates more customer loyalty than any exit survey or great service interaction ever could. Beyond understanding human emotions and needs, you can help your business by leaning into understanding how we create meaning. At our core, we are compelled to make meaning. Whether we realize it or not, meaningful experiences and interactions are the driving force behind many of our decisions, financial or otherwise. Meaning is different for everyone, but having it is vital to our happiness. If you are able to engage with potential customers in a way that helps them create meaning, or allows them to use your product to make meaning on their own, you are aligning your success with your customers' success, and that bodes well for the long term. At the end of the day, making any of these changes starts at the very top of your business. Leadership needs to set the tone, creating a culture that allows room for workers at every level to engage more meaningfully with customers, and with each other. (By the way, for more discussion on creating or changing work culture, you can check out our last episode, “Does the Future of Work Mean More Agency For Workers?”) Your effort will benefit not only your business, but society as a whole. Remember the Deloitte piece in the Wall Street Journal I mentioned at the start of the episode, with ways to be more human-centric in business? Number 5 on that list was “change the world,” and research from Frontiers suggests that the well-being of any society is directly linked to how the people living within it feel about their lives and purpose. How we do that may be as simple — and as complicated — as helping people to experience meaning at any level. While the technologies around us keep changing, the opportunity becomes increasingly clear for people who work around creating customer experiences and user experiences to open up the aperture to see humanity through a fuller lens. This way, as you set your business up for longterm success, you also advocate for making human experiences as meaningful as possible — and you just might be changing the world for the better. Thanks for joining me as I explored what it means to think of customers as human. Next time, I'll be exploring the supply chain and how, despite the vast technology involved, the closer you look the more you realize: the economy is people.

    Why Human Experience? (vs Customer, Consumer, User, etc)

    Play Episode Listen Later Jul 1, 2022 32:25


    This week, we're exploring why it behooves businesses and business leaders to look at their users, consumers, customers, etc., as humans first. Slightly shifting perspective to consider the humanity behind purchasing decisions can lead to greater loyalty, more frequent use, and genuinely happier users, all of which add up to more business success and better […]

    Does the Future of Work Mean More Agency for Workers?

    Play Episode Listen Later May 20, 2022 33:07


    This week, we look at a few of the macro trends shaping both the labor market today and the future of work — such as the Great Resignation and collective bargaining — and examine how tech-driven business has both brought them about and potentially given workers more freedom and leverage. We also consider what all of that means for you if you're the one tasked with managing workers or leading a workplace forward, as well as what these trends might mean overall for humanity. Guests this week include Giselle Mota, Christopher Mims, Dr. Rumman Chowdhury, Dorothea Baur, John C. Havens, and Vanessa Mason. The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. To watch full interviews with past and future guests, or for updates on what Kate O'Neill is doing next, subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube. Full Transcript: Kate: The global workforce is experiencing an unprecedented level of change. The Great Resignation may look like a direct result of the COVID Pandemic, but the drivers behind this large-scale trend come from deep-rooted and centuries-old issues in employer-employee dynamics that have been amplified by evolving technology. So in this episode, we're exploring the lessons we've learned from the technologization — the impact of technology on work, as well as how the changing work landscape is pushing people to crave and demand more agency over our work and our lives. I recently had the opportunity to speak with Giselle Mota, Principal Consultant on the Future of Work at ADP, who offered some insight into the emotional human factor behind these changes. Giselle: “I think it's more about us realizing that work is not all that we are. Some people have left their very high-paying roles because they had stress about it, or because they need to be at home caregiving, or now they have issues with their own healthcare or mental health that came up, and they're prioritizing self over this idea of ‘I live to work I live to work I live to work,' right? The value system of humanity globally has shifted a lot, and people have been reassessing, ‘how do I want to spend my time?' ‘How do I want to live my life?' Work should not be driving all of that, our lives should be driving work experience. The ability to think about choosing when you're gonna work, ability to work from different places, how long is my work week, can I come in and out of my shifts throughout the day, can I work on projects, can I destructure and break down what work is and work at it my way? I think that's what we've been seeing.” Kate: Before we can fully understand why this is happening, we have to look at where we are and how we got here. Trends like the Great Resignation follow many years of jobs being automated or shipped overseas. Fewer people are needed to fill the remaining roles, so demand for workers in certain markets is disappearing, while in other markets, the supply of workers for a given job is so high that people aren't paid a living wage. With the rise of the ‘gig economy,' it's becoming less clear what level of education is needed to attain a well-paying job that will still be around in 5 years. Not that this is an entirely new phenomenon. Since at least the dawn of the industrial era, automation caused certain jobs to go out of favor while other jobs sprang up to fill the void. In the 21st century, with the advent of the Internet, algorithms, and ‘big data,' this cycle has been significantly accelerated. More jobs have been “optimized” by technology to prioritize maximum efficiency over human well-being, which is part of what's causing—as I talked about in our last episode—a global mental health crisis. And while the overview sounds bad, there is good news. As long as we can stay open-minded to change, we can work together to design solutions that work for everyone. And if we can do that, the future of work has the potential to be much brighter than the realities of today. To get there, we have to ask ourselves, what assumptions were made in the past to create the modern work environment, and which of those no longer serve us? Rahaf: “If we're gonna move to a more humane productivity mindset, we have to have some uncomfortable conversations about the role of work in our lives, the link between our identity and our jobs and our self-worth, our need for validation with social media and professional recognition, our egos…” Kate: That's Rahaf Harfoush, a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture. You may have heard the extended version of this quote in our last episode, but her insight into how questioning our assumptions about work is playing into the changing work landscape felt equally relevant here. Rahaf: “We really have to talk about, ‘growing up, what did your parents teach you about work ethic?' how is that related to how you see yourself? Who are the people that you admire? You can start testing your relationship with work, and you start to see that we have built a relationship with work psychologically where we feel like if we don't work hard enough, we're not deserving. We don't ever stop and say, ‘does this belief actually allow me to produce my best possible work, or is it just pushing me to a point where I'm exhausted and burnt out?” Kate: Outside of our own personal assumptions about our relationship with work, there's also the relationship businesses and technology have with us as consumers, and how their assumptions about what we want are equally problematic. John: “I've read a lot of media, where there's a lot of assumptions that I would call, if not arrogant, certainly dismissive, if not wildly rude… You'll read an article that's like, ‘This machine does X, it shovels! Because no one wants to shovel for a living'!” Kate: That's John C. Havens, Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Here he's talking about the current belief held by a lot of the people creating modern technologies that everything can be automated, no matter the cost. John: “We've all done jobs that, elements of it you really don't like and wish could be automated, but usually that's because you do the job long enough to realize, this part of my job I wish could be automated. I've done a lot of, y'know, camp counseling jobs for the summer where I was outside, y'know I was doing physical labor… it was awesome! That said, you know, I was like, ‘this is great for what it was, I kind of don't want to do this for my whole life.' Yeah, a lot of people would not be like, ‘give me 40 years of shoveling!' But the other thing there that I really get upset about when I read some of those articles is what if, whatever the job is, insert job X, is how someone makes their living? Then it's not just a value judgment of the nature of the labor itself, but is saying, from the economic side of it, it's justified to automate anything that can be automated, because someone can make money from it outside of what that person does to make money for them and their family. We have to have a discussion about, y'know, which jobs might go away. Why is that not brought up? It's because there's the assumption, at all times, that the main indicator of success is exponential growth. And a lot of my work is to say, I don't think that's true.” In many ways, our society has failed to question the assumption ‘if something can be automated, automate it.' But as the great Ian Malcolm said in Jurassic Park, “your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.” While automation of jobs is frequently thought of in a manufacturing context, more and more we're seeing automating creep into other areas as well, like decision-making and workplace management. The same factories where machines are replacing physical human labor have now been optimized to replace human thought labor and managers as well. Christopher Mims, tech columnist at the Wall Street Journal and author of Arriving Today, on how everything gets from the factory to our front door, calls this phenomenon “Bezosism.” Christopher: “Bezosism, it's like the modern-day version of Taylorism or Fordism… the bottom line is, this is how you optimize the repetitive work that people do. This isn't just Amazon, Amazon is just the tip of the sphere. Amazon is the best at doing this, but every other company that can is trying to do the same thing: make workers more productive by managing them with software and algorithms, kind of whatever the consequence is. Emily Gindelsberger talks about how, whether it's an Amazon warehouse, or any fast-food restaurant you can name, or a call center… all of these places are now managed by algorithm, and the workers are monitored by software. Instead of a boss telling them to work faster, it's the software cracking the whip and being like, ‘you're not working fast enough, you need to pick packages faster' in this Amazon warehouse, or ‘you need to flip burgers faster' if you work at a McDonald's. But this is becoming the dominant way that work is organized if you don't have a college degree, if you're an hourly worker. You know, the whole phenomenon of the gig economy, the rise of part-time work, subcontracting, the so-called ‘fissured workplace'—all of that is, as one person put it, do you work above the API, like, are you a knowledge worker who's creating these systems? Or do you work below the API, where, what's organizing your work and your life—it's a piece of software! I mean, it's designed by humans, but your boss is an algorithm. And that is becoming, other than wealth and income inequality, one of the defining characteristics of, almost a neo-feudalism, ‘cause it's like, ‘hey! we've figured out how to organize labor at scale, and extract the most from people and make them work as efficiently as possible… we'll just let the software do it!'” Kate: The acceleration of this style of management is a big part of the driver pushing people to question our assumptions about work and begin to demand more agency. If you've been following my work for a while, you've heard me say, “the economy is people”, and that means we can't talk about the future of work without talking about the future of the worker. The idea that people, especially those doing what is considered ‘unskilled' labor, have little agency over how they work isn't new. AI may have exacerbated the issue, but the problem goes back as far as labor itself. Labor unions arose in the early 19th century in an attempt to level the playing field and allow workers to express their needs and concerns, but as we've seen with the recent Starbucks and Amazon unionization stories, the battle for human rights and agency in the workplace is far from decided. And it isn't just factories and assembly lines—it's happening in every industry. In the tech industry, there's a subset of people known as “Ghost Workers,” a term created by anthropologist Mary L. Gray and computer scientist Siddharth Suri to describe the usually underpaid and unseen workers doing contract work or content moderation. They frequently work alone, don't interact with one another, and often aren't even aware who they're working for, so the idea of collective bargaining feels farther out of reach. Dorothea Baur, a leading expert & advisor in Europe on ethics, responsibility, and sustainability across industries such as finance, technology, and beyond, explains some of the human rights issues at play in this phenomenon. Dorothea: “If you look at heavily industrialized contexts or like, heavy manufacturing, or like, textile industry, the rights we talk about first are like the human rights of labor, health and safety, etc. But I mean, trade unions have come out of fashion awhile ago, a lot of companies don't really like to talk about trade unions anymore. So when we switch to AI you think, ‘oh, we're in the service industry, it's not labor intensive,' but the human factor is still there. Certainly not blue collar employees, at least not within the own operations of tech companies, and also maybe not as many white collar employees, in relation to their turnover as in other contexts, but there's a lot of people linked to tech companies or to AI, often invisible. We have those Ghost Workers, gig economy, or people doing low-payed work of tagging pictures to train algo—uh, data sets, etc., so there is a labor issue, a classical one, that's really a straightforward human rights case there.” Kate: Algorithms have worked their way into the systems that manage most of our industries, from factory workers to police to judges. It's more than just “work faster,” too. These algorithms are making decisions as important as where and how many police should be deployed, as well as whether bail should be set, and at what amount. The logical (but not necessarily inevitable) extreme of this way of thinking is that all decisions will be relegated to algorithms and machines. But if people with the ability to make decisions continue to give these types of decisions to machines, we continue to lose sight of the human in the equation. What little decision making power the workers had before is being taken away and given to AI; little by little, human agency is being stripped away. The question then becomes, what if an algorithm tells a worker to do something they think is wrong? Will they have the freedom to question the algorithm, or is the output absolute? Dr. Rumman Chowdhury, Director of the Machine Learning Ethics, Transparency, and Accountability team at Twitter, elaborates. Rumman: “So if we're talking about, for example, a recommendation system to help judges decide if certain prisoners should get bail or not get bail, what's really interesting is not just how this affects the prisoner, but also the role of the judge in sort of the structure of the judicial system, and whether or not they feel the need to be subject to the output of this model, whether they have the agency to say, ‘I disagree with this.' A judge is a position of high social standing, they're considered to be highly educated… if there's an algorithm and it's telling them something that they think is wrong, they may be in a better position to say, ‘I disagree, I'm not going to do this,' versus somebody who is let's say an employee, like a warehouse employee, at Amazon, or somebody who works in retail at a store where your job is not necessarily considered to be high prestige, and you may feel like your job is replaceable, or worse, you may get in trouble if you're not agreeing with the output of this model. So, thinking about this system that surrounds these models that could actually be a sort of identically structured model, but because of the individual's place in society, they can or cannot take action on it.” Kate: The jury — if you'll pardon the expression — is still out on these questions, but we do know that in the past, worker agency was a key element in the success of our early systems. In fact, in the early days of creating the assembly line, human agency was fundamental to the success of those systems. Christopher Mims again. Christopher: “The Toyota production system was developed in a context of extreme worker agency, of complete loyalty between employer and employee, lifelong employment in Japan, and workers who had the ability to stop the assembly line the instant they noticed that something was not working, and were consulted on all changes to the way that they work. Honestly, most companies in the US cannot imagine functioning in this way, and they find it incredibly threatening to imagine their hourly workers operating this way, and that's why they all—even ‘employee-friendly' Starbucks—uses all these union busting measures, and Amazon loves them… because they just think, ‘oh, god, the worst thing in the world would be if our ‘lazy' employees have some say over how they work. It's nonsense, right? There's an entire continent called Europe where worker counsels dictate how innovations are incorporated. You know, that's how these things work in Germany, but we have just so destroyed the ability of workers to organize, to have any agency… Frankly, it is just disrespectful, it's this idea that all this labor is “unskilled,” that what you learn in this jobs has no real value… I think companies, they're just in this short term quarter-to-quarter mentality, and they're not thinking like, ‘how are we building a legacy? How do we retain employees, and how do we make productivity compatible with their thriving and happiness?' They all give lip service to this, but if you push as hard as Starbucks for instance against a labor union, honestly you're just lying.” Kate: Throughout the 19th and 20th centuries, Unions were an imperfect but necessary solution to ensuring workers had access to rights, freedoms, and safety in certain workplaces. According to a 2020 report from the Economic Policy Institute, Unionized workers earn on average 11.2% more in wages than nonunionized peers, and Black and Hispanic workers get an even larger boost from unionization. However, it looks like the changing nature of work is changing unionization as well. Unlike the Great Depression, which expanded the reach of labor unions, the Great Recession may have ushered in a period of de-unionization in the public sector. From the 1970s to today, the percentage of U.S. workers in a union has fallen from 25 to just 11.7 percent. In a piece of good news for Amazon employees in New York, they successfully voted for a union in their workplace on April 4th of this year, marking the first victory in a years-long battle for Amazon employee rights and agency. Looking forward, it's hard to say whether unions will be the best solution to worker woes. As more jobs become automated and fewer humans are needed in the workplace, there may be a time when there are only a few employees in a given department, which makes it harder to organize and empower collective bargaining. At the same time, being the only person working in your department may in fact give you more power to influence decisions in your workplace, as Christopher Mims explains. Christopher: “If you reduce the number of humans that work in a facility, it's like a tautology—the ones that remain are more important! Because in the old days, you could hire thousands of longshoreman to unload a ship, if one of them didn't show up, like, who cares? But if you're talking about a professional, today, longshoreman who's making in excess of 6 figures, has these incredibly specialized skills, knows how to operate a crane that can lift an 80,000 lb. shipping container off of a building-size ship, and safely put it on the back of a truck without killing anybody—that person doesn't show up to work, you just lost, y'know, a tenth of your productivity for that whole terminal that day. This is also an example of this tension between, like, it's great that these are good-paying blue-collar jobs, there aren't that many left in America, however, their negotiating power is also why the automation of ports has really been slowed. So that is a real genuine tension that has to be resolved.” Kate: So far in this episode, we've talked a lot about factory workers and the types of jobs that frequently unionize, but the future of work encompasses everyone on the work ladder. In the past, all of the problems regarding lack of worker agency has applied to ‘white collar' jobs as well. The modern office workplace evolved in tandem with factories, and the assumptions about how work should be organized are just as present there. Vanessa: “Our work environments, with who was involved with it and how they were constructed, is something that has been done over a long period of time. And the people who have been involved in that who are not White men, who are not sort of property owners, who are not otherwise wealthy, is a really short timeline.” Kate: That's Vanessa Mason, research director for the Institute for the Future's Vantage Partnership. Here she's explaining how workplace culture evolved from a factory mindset—and mostly by the mindset of a particular subgroup of people. Offices may feel like very different places from factories, but when you look at the big picture, the organizational structures are guided by many of the same ideas. Vanessa: “I think that a lot of organizations and offices are fundamentally sort of command and control, kind of top-down hierarchies, unfortunately. You know, the sort of, ‘the manager does this! Accountability only goes one direction! There's a low level of autonomy depending on what level you are in the chart!' All of those treat humans like widgets. I think that we have to keep in mind that history and that experience, like I still bring that experience into the workplace—basically, I'm in a workplace that was not designed for me, it's not meant for me to succeed, it's not meant for me to even feel as socially safe and as comfortable. There's a lot of research about psychological safety in teams. Like, our teams are not meant to be psychologically safe, they're set up to basically be office factories for us to sort of churn out whatever it is that we're doing in an increasingly efficient manner, productivity is off the charts, and then you receive a paycheck for said efforts. And it's only right now (especially in the pandemic) that people are sort of realizing that organizational culture 1) is created, and 2) that there's an organizational that people didn't realize that they were kind of unintentionally creating. And then 3) if you want your organizational culture to be something other than what it is, you need to collectively decide, and then implement that culture. All of those steps require a sort of precondition of vulnerability and curiosity which people are really frightened to do, and they're trying to escape the sort of harder longer work of negotiating for that to occur.” Kate: And that's what's needed from our managers and leaders as we navigate to a brighter future of work: vulnerability and curiosity. The vulnerability to admit that things could be better, and the curiosity to explore new ways of structuring work to allow more room for agency and decision-making to bring out the best in everyone. If the idea of a union sounds scary or expensive, perhaps there are other ways to allow employees the have more agency over how they work. A world in flux means there's still room to test new solutions. Lately, one of the changes business leaders have tried to make to their organizations is to bring in more diversity of workers. Women, people of color, neurodivergent minds, and people with disabilities have all been given more opportunities than they have in the past, but as Giselle Mota explains, just bringing those people into the workplace isn't enough. Giselle: “I read a study recently that was talking about, even though a lot of diverse people have been hired and promoted into leadership roles, they're leaving anyway. They don't stick around an organization. Why is that? Because no matter what the pay was, no matter what the opportunity was, some of them are realizing, this was maybe just an effort to check off a box, but the culture doesn't exist here where I truly belong, where I'm truly heard, where I want to bring something to the forefront and something's really being done about it. And again it has nothing to do with technology or innovation, we have to go back to very human, basic elements. Create that culture first, let people see that they have a voice, that what they say matters, it helps influence the direction of the company, and then from there you can do all these neat things.” Kate: If you're managing a workplace that has functioned one way for a long time, it may not be intuitive to change it to a model that is more worker agency-driven. How can you change something you may not even be aware exists? Vanessa Mason has a few tips for employers on what they can do to help bring about a new workplace culture. Vanessa: “And so what you can do, is really fundamentally listening! So, more spaces at all hands for employees to share what their experience has been, more experience to share what it is like to try to get to know co-workers. You know, anything that really just surfaces people's opinions and experiences and allows themselves to be heard—by everyone, I would say, also, too. Not just have one team do that and then the senior leadership just isn't involved in that at all. The second thing is to have some kind of spaces for shared imagination. Like all the sort of popular team retreats that are out there, but you certainly could do this asynchronously, at an event, as part of a celebration. Celebrating things like, y'know, someone has had a child, someone's gotten married, someone's bought a house—all of those things are sort of core to recognizing the pace and experience of being human in this world that aren't just about work and productivity. And then some way of communicating how you're going to act upon what you're hearing and what people are imagining, too. There's a bias towards inaction in most organizations, so that's something that certainly senior leadership should talk about: ‘How do we think about making changes, knowing that we're going to surface some changes from this process?' Being transparent, being accountable… all of those sort of pieces that go along with good change management.” Kate: A 2021 paper in the Journal of Management echoes these ideas, stating that communication between employers and their workers need to be authentic, ongoing, and two-directional, meaning that on top of listening to employee concerns, managers also needed to effectively communicate their understanding of those concerns as well as what they intended to do about them. A professional services firm analyzing a company's internal messaging metadata was able to predict highly successful managers by finding people who communicated often, responded quickly, and were action-oriented. Of course another thing many workplaces have been trying, especially in the wake of the COVID pandemic, is allowing employees to work remotely. Giselle Mota again. Giselle: “I think all we're seeing is we're just reimagining work, the worker, and workplace. Now that the pandemic happened, we learned from like Zoom, ‘wait a minute, I can actually work remotely, and still learn and produce and be productive, on a video!' But now, we can add layers of experience to it, and if you so choose to, you can now work in a virtual environment… people are flattening out the playing field. Companies that used to be die-hard ‘you have to work here in our office, you have to be here located right next door to our vicinity,' now they've opened it up and they're getting talent from across the pond, across the globe, from wherever! And it's creating new opportunities for people to get into new roles.” Kate: Although COVID and Zoom accelerated a lot of things, the idea of people working from home instead of the office isn't actually a new one. AT&T experimented with employees working from home back in 1994, exploring how far an organization could transform the workplace by moving the work to the worker instead of the other way around. Ultimately, they freed up around $550M in cash flow by eliminating no longer needed office space. AT&T also reported increases in worker productivity, ability to retain talent, and the ability to avoid sanctions like zoning rules while also meeting Clear Air Act requirements. As remote work on a massive scale is a relatively new phenomenon, the research is still ongoing as to how this will affect long-term work processes and human happiness. It is notable that working remotely is far less likely to be an option the farther you drop down the income ladder. According to the U.S. Bureau of Labor Statistics, only 9.2% of workers in the bottom quartile of wage-earners have the ability to work remotely. The availability also varies depending on the job you're doing, with education, healthcare, hospitality, agriculture, retail, and transportation among the least-able to work remotely, and finance and knowledge workers among the most-able. Because we aren't entirely sure whether remote work is the best long-term solution, it's worth looking at other ways to attract high-value workers—and keep them around. One idea? Invest in career planning. Technology is making it easier than ever for employers to work with their employees to plan for a future within the company. AI has made it possible to forecast roles that the company will need in the future, so rather than scramble to fill that role when the time comes, employers can work with a current or prospective employee to help prepare them for the job. In my conversation with Giselle Mota, she explored this idea further. Giselle: “A lot of companies are now able to start applying analytics and forecast and plan, ‘okay, if this is a role for the future, maybe it doesn't exist today, and maybe this person doesn't yet have all the qualifications for this other role. But, they expressed to us an interest in this area, they expressed certain qualifications that they do have today, and now AI can help, and data can help to match and help a human, you know, talent acquisition person, career developer, or manager, to help guide that user to say, ‘this is where you are today, this is where you want to be, so let's map out a career plan to help you get to where you should be'.” Kate: She went on to explain that employers don't need to think about jobs so rigidly, and rather than looking for one perfect person to fill a role, you can spread the tasks around to help prepare for the future. Giselle: “I was talking to someone the other day who was saying, ‘y'know, we have trouble finding diverse leadership within our organization and bringing them up,' and I was talking to them and saying, ‘break down a job! Let people be able to work on projects to be able to build up their skillset. Maybe they don't have what it takes today, fully, on paper to be what you might be looking for, but maybe you can give them exposure to that, and help them from the inside of your organization to take on those roles.” Kate: All of these changes to work and the workplace mean that a lot of office workers can demand more from their jobs. Rather than settle for something nearby with a rigid schedule, people can choose a job that fits their lifestyle. As more of these jobs are automated, we are hopefully heading for an age where people who were relegated to the so-called “unskilled” jobs will be able to find careers that work for them. Because it is more than the workplace that is changing, it's also the work itself. I asked Giselle what types of jobs we might see in the future, and she had this to say. Giselle: “As we continue to explore the workplace, the worker, and the work that's being done, as digital transformation keeps occurring, we keep forming new roles. But we also see a resurgence and reemergence of certain roles taking more importance than even before. For example, leadership development is on the rise more than ever. Why? Because if you look at the last few years and the way that people have been leaving their workplaces, and going to others and jumping ship, there's a need for leaders to lead well. Officers of diversity have been created in organizations that never had it before because the way the world was going, people had to start opening up roles like that when they didn't even have a department before. As we move into more virtual experiences, we need creators. We're seeing organizations, big technology organizations, people who enable virtual and video interactions are creating layers of experience that need those same designers and that same talent—gamers and all types of creators—to now come into their spaces to help them start shaping the future of what their next technology offerings are gonna look like. Before, if you used to be into photography or graphic design or gaming or whatever, now there's space for you in these organizations that probably specialize in human capital management, social management… To give you a quick example, Subway! Subway opened up a virtual space and they allowed an employee to man a virtual store, so you could go virtually, into a Subway, order a subway sandwich down the line like you're there in person, and there's someone that's actually manning that. That's a job. And apart from all of that side of the world, we need people to manage, we need legal counsel, we need people who work on AI and ethics and governance—data scientists on the rise, roles that are about data analytics… When Postmates came out and they were delivering to people's homes or wherever it was, college campuses, etc., with a robot, the person who was making sure that robot didn't get hijacked, vandalized, or whatever the case is—it was a human person, a gamer, it was a young kid working from their apartment somewhere, they could virtually navigate that robot so that if it flipped over on its side or whatever, it would take manual control over it, set it right back up, and find it and do whatever it needed to do. So that's an actual role that was created.” Kate: While many people fear that as jobs disappear, people will have to survive without work — or rather, without the jobs that provide them with a livelihood, an income, a team to work with, and a sense of contribution — the more comforting truth is that we've always found jobs to replace the ones that went out of fashion. When cars were invented, the horse-and-buggy business became far less profitable, but those workers found something else to do. We shouldn't be glib or dismissive about the need individual workers will have for help in making career transitions, but in the big picture, humans are adaptable, and that isn't something that looks like it will be changing any time soon. Giselle: “Where we're seeing the direction of work going right now, people want to have more agency over how they work, where they work, themselves, etc. I think people want to own how they show up in the world, people want to own more of their financial abilities, they want to keep more of their pay… If you just wade through all of the buzzwords that are coming out lately, people want to imagine a different world of work. The future of work should be a place where people are encouraged to bring their true full selves to the table, and that they're heard. I think we've had way too much of a focus on customer experience because we're trying to drive profitability and revenue, but internally, behind the scenes, that's another story that we really need to work on.” Kate: The more aware we are of the way things are changing, the better able we are to prepare for the future we want. Even in the face of automation and constantly-evolving technologies, humans are adaptable. One thing that won't be changing any time soon? Workers aren't going to stop craving agency over their jobs and their lives, and employers aren't going to stop needing to hire talented and high-value employees to help their businesses thrive. Hopefully you've heard a few ideas in this episode of ways to lean into the change and make your business, or your life, a little bit better. Even more hopeful is the possibility that, after so much disruption and uncertainty, we may be entering a moment where more people at every stage of employment feel more empowered about their work: freer to express their whole selves in the workplace, and able to do work that is about more than paying the bills. That's a trend worth working toward. Thank you so much for joining me this week on The Tech Humanist Show. In our next episode, I'm talking about why it behooves businesses to focusing on the human experience of buying their product or service, rather than the customer experience. I'll see you then.

    Does the Future of Work Mean More Agency for Workers?

    Play Episode Listen Later May 20, 2022 33:07


    This week, we look at a few of the macro trends shaping both the labor market today and the future of work — such as the Great Resignation and collective bargaining — and examine how tech-driven business has both brought them about and potentially given workers more freedom and leverage.

    How Tech and Social Media Impact Our Mental Health

    Play Episode Listen Later Apr 28, 2022 29:00


    On this week's episode, we're talking about how technology and social media impact our mental health, and has led to a mental health crisis that some have called “the next global pandemic.” From the algorithms that decide what we see to the marketing tricks designed to keep us constantly engaged, we explore how our assumptions about work have led to a feedback loop that keeps us feeling worse about ourselves for longer. But never fear! At the Tech Humanist Show, we're about finding solutions and staying optimistic, and I spoke with some of the brightest minds who are working on these problems. Guests this week include Kaitlin Ugolik Phillips, John C. Havens, Rahaf Harfoush, Emma Bedor Hiland, and David Ryan Polgar. The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. To watch full interviews with past and future guests, or for updates on what Kate O'Neill is doing next, subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube. Full Transcript: Kate: Hello humans! Today we look at a global crisis that's affecting us all on a near-daily basis… No, not that one. I'm talking about the other crisis—the one getting a lot less media attention: the Global Mental Health Crisis. In December, Gallup published an article with the headline, “The Next Global Pandemic: Mental Health.” A cursory Google search of the words “mental health crisis” pulls up dozens of articles published just within the past few days and weeks. Children and teenagers are being hospitalized for mental health crises at higher rates than ever. And as with most topics, there is a tech angle: we'll explore the role technology is playing in creating this crisis, and what we might be able to do about it. Let's start with social media. For a lot of us, social media is a place where we keep up with our friends and family, get our news, and keep people updated on what we're doing with our lives. Some of us have even curated feeds specifically with positivity and encouragement to help combat what we already know are the negative effects of being on social media too long. There's a downside to this, though, which I spoke about with Kaitlin Ugolik Phillips, the author of The Future of Feeling: Building Empathy in a Tech-Obsessed World. Kaitlin: I wrote about this a little bit in an article about mental health culture on places like Instagram and Pintrest where you have these pretty images that have nice sayings and sort of the commodification of things like anxiety and depression and it's cool to be not okay, but then you're comparing your ‘not-okay'ness to other people's. Kate: We've even managed to turn ‘being not okay' into a competition, which means we're taking our attempts to be healthy and poisoning them with feelings of inferiority and unworthiness, turning our solution back into the problem it was trying to solve. One of the other issues on social media is the tendency for all of us to engage in conversations–or perhaps ‘arguments' is a better word–with strangers that linger with us, sometimes for a full day or days at a time. Kaitlin explains one way she was able to deal with those situations. Kaitlin: Being more in touch with what our boundaries actually are and what we're comfortable and capable of talking about and how… I think that's a good place to start for empathy for others. A lot of times, when I've found myself in these kind of quagmire conversations (which I don't do so much anymore but definitely have in the past), I realized that I was anxious about something, or I was being triggered by what this person is saying. That's about me. I mean, that's a pretty common thing in pscyhology and just in general—when someone is trolling you or being a bully, it's usually about then. If we get better at empathizing with ourselves, or just setting better boundaries, we're going to wade into these situations less. I mean, that's a big ask. For Millennials, and Gen Z, Gen X, and anyone trying to survive right now on the Internet. Kate: But social media doesn't make it easy. And the COVID pandemic only exacerbated the issues already prevalent within the platforms. Part of the problem is that social media wasn't designed to make us happy, it was designed to make money. John C. Havens, the Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, elaborates on this idea. John: Often times, the value is framed in exponential growth, right? Not just profit. Exponential Growth is an ideology that's not just about getting some profit or speed, it's about doing this. But when you maximize any one thing, other things by definition take less of a focus. And especially with humans, that can be things like mental health. This is not bad or evil, but it is a decision. And in this case it's a key performance indicator decision, the priority is to get something to market, versus, how can we get something to market focused on well-being? How can we make innovation about mental health? Kate: The upside is that our time indoors led some people to more quickly realize the issues with technology and its effects on us. Early in the pandemic, I spoke with Rahaf Harfoush — a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture — about what she learned about our relationship to technology during that time. Rahaf: For me I think it just amplified a lot of the issues with the way we were using tech before. I noticed in my social networks and friend groups, people were home more, so what can we do but turn to our online, to this never-ending content and distraction and connections. And in the first couple weeks, everyone was about the Zoom everything, and then there was a Zoom burnout… for me, there's a couple big issues at play. The first is that we have more bandwidth because we're at home, so we're consuming more information. A lot of these platforms leverage this addictive constant-refresh, breaking-news cycle, and with something as complex and nuanced as COVID, a lot of us were glued to our screens refreshing refreshing refreshing… that was not the best thing I could have done for my mental well being or anxiety. At one point I was like, “i need to step away!” because I was just addicted to the news of instead of increasing knowledge. And the other thing is that for many people, the forced pause made us realize that we use productivity as a coping mechanism, and what does it mean that we have more time? A lot of people started trying to make their personal time as productive as their professional time—pushing themselves to pick up 10 new hobbies and learn 10 new languages and take 10 new classes! One or two of those things is great, but I really saw people loading up. That was a good indication to me of our lack of comfort with not doing anything. I noticed I was guilting myself for not writing and not learning and then I was like, you know what? we're undergoing this immensely traumatic, super-stressful thing… it's okay to not do anything, like that's fine. Kate: If you're anything like me, that's a lot easier said than done. Even if you've mostly resumed your life as normal, you're probably still in the habit of working all day, and then filling your free time with more work, hobbies, or time on social media. I asked Rahaf what someone trapped in this cycle could do about it. Rahaf: Your brain needs at least a week to just unwind from the stress of work. If you're just constantly on planes and in deliverables and client stuff… you're never going to take the time to imagine new opportunities for yourself. The trick is we have to balance periods of actually producing the thing with periods of intangible creativity. A lot of the thinking you can't see—in our culture, we don't like things that we can't see. But how many of us have gone for a walk about got that idea, or were daydreaming and got that idea? So creatives, we need that downtime. And by the way, downtime isn't taking a coffee break and being on social media. Downtime is really downtime. Daydreaming, just letting your brain go. Which is why we need a different framework, because for a writer or strategist, like you, you spend so much time thinking about things… but to think about things, you need the time to think about them!” Kate: Most of us don't have the luxury to just shut off our Internet usage entirely. If you're someone, like most of us, who needs technology to get by. , how do we find that balance? And why is it so difficult? Rahaf: I think it's because we've shamed ourselves into thinking if we're not doing stuff, it's a waste. And that's the problem, the problem is intentional recovery, prioritizing and choosing rest, that's really hard for us, because we constantly hear these stories of CEOs and celebrities, and Elon Musk sleeping on the floor of his factory, and Tim Cooke waking up at 4:30 in the morning, and we think, I can't take a nap, I can't watch a movie, I can't go for a walk, because then I'm not really committed to being successful! And that's the most toxic belief system we've incorporated into our society today, especially for creatives. The breakthrough that I had was that it's not actually about systems or organizations, it's about us as people. We are our hardest taskmasters, we will push ourselves to the limit, even when other people tell us to take a break. If we're gonna move to a more humane productivity mindset, we have to have some uncomfortable conversations about the role of work in our lives, the link between our identity and our jobs and our self-worth, our need for validation with social media and professional recognition, our egos… all of these things battle it out, which is why I can't just come on here and be like, “okay guys, take a break here, do this…” we're not going to do it! We really have to talk about, ‘growing up, what did your parents teach you about work ethic?' how is that related to how you see yourself? Who are the people that you admire? And then there are statements you can ask yourself, like “if you work hard, anything is possible!” All these things, you can start testing your relationship with work, and you start to see that we have built a relationship with work psychologically where we feel like if we don't work hard enough, we're not deserving. And not only do we have to work hard, we have to suffer! We have to pull all-nighters! Think of the words we use, ‘hustle' and ‘grind'… these horrible verbs! The reason that's important to dig into is that our views about our work become assumptions that we don't question. We don't ever stop and say, ‘does this belief actually allow me to produce my best possible work, or is it just pushing me to a point where I'm exhausted and burnt out? The second thing is, a lot of the stories we've been told about success aren't true. As a super-quick example, if there's an equation for success, most people think it's “hard work = success.” But in reality, while hard work is important, it's not the only variable. Where you're born, your luck, your gender, your race… all of these things are little variables that add into the equation. So what I don't like about “hard work = success,” it's that the flip side of that tells people, “if you're not successful, it's because you aren't working hard enough.” And part of the awakening is understanding that there are other factors at play here, and we're all working pretty hard! We don't need more things telling us that we're not enough and we're not worthy. Rahaf: When I had my own burnout, I knew better but didn't do better. That was really frustrating to me, it's like, I have the knowledge, why could I not put the knowledge to practice? And then I realized, all these belief systems and stories are embedded in every IG meme and every algorithm that asks you to refresh every 10 seconds, and every notification that interrupts your time, and the design of these tools to socially shame people for not responding fast enough. With Whatsapp for example, the blue checkmark that lets you know if someone has seen your message. What is that if not social pressure to respond? We've also shaped technology to amplify the social norms that if you're ‘left on read,' that's a breach of etiquette. Kate: We, as a culture, believe things about success that aren't true. Then, we program those beliefs into our technology, and that technology ramps up and exacerbates the speed at which we're exposed to those flawed ideas. It creates a downward spiral for the user — or, the person using these platforms — to believe these untrue truths more deeply, broadening the disconnect between our ideal selves and reality. And yet, despite these outside forces at play, there is an urge to place responsibility on the user, to say that each of us is solely responsible for our own mental health. Emma Bedor Hiland — the author of Therapy Tech: The Digital Transformation of Mental Healthcare — calls this “Responsibilization” Emma: I draw from the work of Michelle Foucault who writes about neo-liberalism too. So the way I use it in the book is to say that there is an emphasis when we talk about neo-liberalism upon taking responsibility for yourself, anything that could be presumably in your control. And in this day and age, we're seeing mental health, one's own mental health, being framed as something we can take responsibility for. So in tandem with this rollback of what would ideally be large-scale support mechanisms, local mental health facilities to help people in need, we're seeing an increasing emphasis upon these ideas like ‘use the technology that you can get for free or low cost to help yourselves.' But at the same time, those technologies literally don't speak to or reflect an imagined user who we know in this country need interventions most badly. Kate: Thankfully, we live in a world where once a problem has been identified, some enterprising people set out to design a potential solution. Some of those solutions have been built into our technology, with ‘screen time tracking' designed for us to think twice about whether we should spend more time on our phones, and Netflix's “are you still watching?” feature that adds a little friction into the process of consuming content. When it comes to mental health specifically, there is a growing Telemental Healthcare industry, including online services such as BetterHelp, Cerebral, or Calmerry. These, however, may not be the solutions we want them to be. Emma: “A lot of my research, it's so interesting looking back at it now, my interviews with people who provide tele-mental health were conducted prior to the pandemic. It was really challenging at that time to find people who were advocates and supporters of screen-based mental health services, they told me that their peers sort of derided them for that because of this assumption that when care is screen-based, it is diluted in fundamental ways that impact the therapeutic experience. Which is understandable, because communication is not just about words or tone or what we can see on a screen, there's so much more to it. But when interactions are confined to a screen, you do lose communicative information. One of the things I've grappled with is I don't want it to seem like I don't think telemental health is an important asset. One of my critiques is that a lot of the times in our discussions, we assume people have access to the requisite technologies and access to infrastructure that makes telemental healthcare possible in the first place. Like having smart devices, even just Smartphones, if not a laptop or home computer station, as well as reliable access to an internet connection, in a place where they could interface with a mental healthcare provider. So a lot of the discourse is not about thinking about those people whatsoever, who due to the digital divide or technology gap, even using technology couldn't interface with a healthcare provider. Some of my other concerns are related to the ways our increased emphasis and desire to have people providing screen-based care also are actually transforming people who provide that care, like psychiatrists, psychologists, etc, into members of the digital gig economy, who have to divide up their time in increasingly burdensome ways, and work in ways where their employment tends to be increasingly tenuous. Relatedly, I am also worried about platforms. I know people are becoming more familiar with the idea that these places exist that they can go to on their laptops or wherever, assuming they have that technology, and be connected to service providers, but as we've seen with Crisis Text Line, there are a lot of reasons to be concerned about those platforms which become hubs of collecting and aggregating and potentially sharing user data. So while I think telemental healthcare services are important, I'd like to see dedication of resources not just to technologically facilitated care, but using that care to direct people to in-person care as well. We know due to the COVID Pandemic, we saw so many people offering services that were solely screen-based, and for good reason. A lot of clinics that provided healthcare for people without insurance or who are living, considered in poverty, relied upon in-person clinic services, and haven't been able to get them due to their shuttering due to the pandemic. So I worry about the people who we don't talk about as much as I worry about the negative consequences and affects of mental healthcare's technologization Kate: So while some people's access to mental healthcare has increased with technology, many of the people who need it most have even less access to help. On top of that, the business model of these platforms makes it so that healthcare professionals have to work harder for longer in order to make their living. On top of all this, as a means of sustaining the companies themselves, they sometimes turn to sharing user data, which is a major concern for myriad reasons, one of which is people who use that data to create predictive algorithms for mental health. Next, Emma elaborates on this concept. Emma: People have been trying this for a number of years; aggregating people's public social media posts and trying to make predictive algorithms to diagnose them with things like ADHD, depression, anxiety… I'm still unsure how I feel about trying to make predictive algorithms in any way that try to make predictions in any way about when people are likely to harm themselves or others, simply because of how easy it is to use that type of software for things like predictive policing. I write in the book as well that people want to harness internet data and what people do on social media to try to stop people from violent behavior before it starts, so it's very much a slippery slope, and that's why I find data sharing in the realm of mental health so difficult to critique, because of course I want to help people, but I'm also concerned about privacy. Kate: For those saying, “but what about the free services? Things like Crisis Text Line or Trevor Project?” Emma: Crisis Text Line, when it comes into fruition in 2013 and it says, “we can meet people where they are by allowing them to communicate via text when they're experiencing crises”… I think that's a really laudable thing that was done, and that people thought it was an intervention that could save lives, and based on research from external and internal researchers, we know that is the case. But for people who might not be aware, Crisis Text Line doesn't put people in contact with professional mental healthcare workers, instead it's often people who have no background or training in mental healthcare services, and instead go through training and serve as volunteers to help people in dire moments need and crisis. In Therapy Tech I also describe how I perceive that as a form of exploitative labor, because although in the past there were conversations about whether to provide financial compensation for volunteers, they ultimately decided that by emphasizing the altruistic benefits of volunteering, that sort of payment wasn't necessary. And then I compare that to Facebook's problematic compensation of its content moderators, and the fact that those moderators filed a lawsuit against Facebook—although it hasn't been disclosed what the settlement was, at least there's some acknowledgement that they experienced harm as a result of their work, even if it wasn't volunteering. So I do take some issue with Crisis Text Line and then, in relation to neo-liberalism and responsibilization, again I feel that CTL is not the ultimate solution to the mental healthcare crisis in this country, or internationally, and CTL has created international partners and affiliates. I underwent training for a separate entity called Seven Cups of Tea which is both a smartphone app as well as an internet-accessible platform on a computer. And Seven Cups of Tea's training, compared to what I know CTL volunteers have to go through, is incredibly short and I would characterize as unhelpful and inadequate. For me it took 10 minutes, and I can't imagine it would take anyone more than a half hour. So the types of things I learned were how to reflect user statements back to them, how to listen empathetically but also not provide any advice or tell them what to do, because you never know who's on the other end! At the time I conducted the research, I started to volunteer on the platform. A lot of the messages I got were not from people who were experiencing mental distress necessarily, but from people who just wanted to chat or abuse the platform. But even though I only had a few experiences with people who I felt were genuinely experiencing mental distress, I still found those experiences to be really difficult for me. That could be just because of who I am as a person, but one of the things I've realized or feel and believe, is that my volunteering on the platform was part of a larger-scale initiative of 7CoT to try to differentiate between who would pay for services after I suggested to them because of my perception of them experiencing mental distress, and those whose needs could be fulfilled by just being mean to me, or having their emotions reflected back to them through superficial messaging. I very rarely felt that I was able to help people in need, and therefore I feel worse about myself for not being able to help as though it's somehow my fault, related to this idea of individual responsibilization. Me with my no knowledge, or maybe slightly more than some other volunteers, feeling like I couldn't help them. As though I'm supposed to be able to help them. I worry about the fatalistic determinism types of rhetoric that make it seem like technology is the only way to intervene, because I truly believe that technology has a role to play, but is not the only way. Kate: Technology isn't going anywhere anytime soon. So if the products and services we've built to help us aren't quite as amazing as they purport themselves to be, is there a role for tech interventions in mental health scenarios? Emma explains one possible use-case. Emma: I think technology can help in cases where there are immediate dangers. Like if you see someone upload a status or content which says there is imminent intent to self-harm or harm another person. I think there is a warrant for intervention in that case. But we also know that there are problems associated with the fact that those cries for help (or whatever you want to call them) are technologically mediated and they happen on platforms, because everything that happens via a technology generates information / data, and then we have no control, depending on the platform being used, over what happens with that data. So I'd like to see platforms that are made for mental health purposes or interventions be held accountable in that they need to be closed-circuits. It needs to be that they all pledge not to engage in data sharing, not engage in monetization of user data even if it's not for-profit, and they need to have very clear terms of service that make very evident and easily-comprehendible to the average person who doesn't want to read 50 pages before agreeing, that they won't share data or information. Kate: Now, I do like to close my show with optimism. So first, let's go to Rahaf once again with one potential solution to the current tech issues plaguing our minds. Rahaf: To me one of the most important things that we need to tackle—and I don't know why we can't just do this immediately—we need to have the capacity on any platform that we use to turn off the algorithm. Having an algorithm choose what we see is one of the biggest threats, because think about all the information that you consume in a day, and think about how much of that was selected for you by an algorithm. We need to have an ability to go outside of the power that this little piece of code has to go out and select our own information, or hold companies accountable to produce information that is much more balanced. Kate: And that sounds like a great solution. But how do we do that? We don't control our technology, the parent companies do. It's easy to feel hopeless… unless you're my friend David Ryan Polgar, a tech ethicist and founder of All Tech Is Human, who's here to remind us that we aren't bystanders in this. I asked him what the most important question we should be asking ourselves is at this moment, and he had this to say. David: What do we want from our technology? This is not happening to us, this is us. We are part of the process. We are not just magically watching something take place, and I think we often times forget that. The best and brightest of our generation should not be focused on getting us to click on an ad, it should be focused on improving humanity. We have major societal issues, but we also have the talent and expertise to solve some of these problems. And another area that I think should be focused on a little more, we are really missing out on human touch. Frankly, I feel impacted by it. We need to hug each other. We need to shake hands as Americans. I know some people would disagree with that, but we need warmth. We need presence of somebody. If there was a way that if we ended this conversation and like, we had some type of haptic feedback, where you could like, pat me on the shoulder or something like that… everybody right now is an avatar. So I need to have something to say like, “Kate! You and I are friends, we know each other! So I want a greater connection with you than with any other video that I could watch online. You are more important than that other video.” But right now it's still very two dimensional, and I'm not feeling anything from you. And I think there's going to have to be a lot more focus on, how can I feel this conversation a little more. Because I mean listen, people are sick and tired right now, ‘not another Zoom call!' But if there was some kind of feeling behind it, then you could say, “I feel nourished!” whereas now, you can sometimes feel exhausted. We're not trying to replace humanity, what we're always trying to do is, no matter where you stand on an issue, at the end of the day, we're actually pretty basic. We want more friends, we want more love… there are actual base emotions and I think COVID has really set that in motion, to say, hey, we can disagree on a lot in life, but what we're trying to do is get more value. Be happier as humans, and be more fulfilled. Be more educated and stimulated. And technology has a major role in that, and now, it's about saying how can it be more focused on that, rather than something that is more extractive in nature? Kate: Whether we like it or not, the Internet and digital technology play a major role in our collective mental health, and most of the controls are outside of our hands. That can feel heavy, or make you want to throw in the towel. Those feelings are valid, but they aren't the end of the story. I asked David for something actionable, and this is what he had to say. David: Get more involved in the process. Part of the problem is we don't feel like we can, but we're going to have to demand that we are, and I think frankly some of this is going to come down to political involvement, to say ‘we want these conversations to be happening. We don't want something adopted and deployed before we've had a chance to ask what we actually desire.' So that's the biggest part is that everyone needs to add their voice, because these are political issues, and right now people think, ‘well, I'm not a techie!' Guess what? if you're carrying around a smartphone… Kate: All the more reason we need you! David: Right! We need everybody. Technology is much larger. Technology is society. These are actually social issues, and I think once we start applying that, then we start saying, ‘yeah, I can get involved.' And that's one of the things we need to do as a society is get plugged in and be part of the process. KO: There are a lot of factors that contribute to our overall sense of happiness as humans. And although it may sound like a cliche, some of those factors are the technologies that we use to make our lives easier and the algorithms that govern the apps we thought we were using to stay connected. But that doesn't mean things are hopeless. If we keep talking about what matters to us, and make an effort to bring back meaningful human interaction, we can influence the people building our technology so that it works for our mental health, instead of against it.

    How Tech and Social Media Impact Our Mental Health

    Play Episode Listen Later Apr 28, 2022 29:00


    Hello humans! Today we look at a global crisis that's affecting us all on a near-daily basis—the Global Mental Health Crisis.

    How Tech Harms – and Can Help Heal – the Climate

    Play Episode Listen Later Apr 21, 2022 45:09


    On this week's episode, we're talking about one of the most urgent issues facing humanity today, and how we can reframe our mindset around it to better encourage and allow ourselves to take action. That issue, of course, is climate change. Technology has created a lot of the problems we face, but is also coming up with some of the most innovative and inventive solutions. Solving this is going to take creativity, collaboration, and a willingness to change, but that's what we're all about here at the Tech Humanist Show! What is our individual responsibility to tackling these problems? What are the most exciting solutions on the horizon? Who should we be holding to account, and how? Those answers and more on this week's episode. Guests this week include Sarah T. Roberts, AR Siders, Tan Copsey, Anne Therese Gennari, Christopher Mims, Art Chang, Dorothea Baur, Abhishek Gupta, and Caleb Gardner. The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. To watch full interviews with past and future guests, or for updates on what Kate O'Neill is doing next, subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube. Full Transcript: Hello, humans! Today we're talking about a problem that technology is both a major cause of and perhaps one of our best potential solutions for: climate change. By almost any reckoning, the climate emergency is the most urgent and existential challenge facing humanity for the foreseeable future. All of the other issues we face pale in comparison to the need to arrest and reverse carbon emissions, reduce global average temperatures, and begin the work of rebuilding sustainable models for all of us to be able to live and work on this planet. By late 2020, melting ice in the Arctic began to release previously-trapped methane gas deposits. The warming effects of methane are 80 times stronger than carbon over 20 years, which has climate scientists deeply worried. Meanwhile, the Amazon rainforest has been devastated by burning. The plastic-filled oceans are warming. Coral reefs are dying. Experts are constantly adjusting their predictions on warming trends. And climate issues contribute to other socio-political issues as well, usually causing a big loop: Climate disasters create uninhabitable environments, leading to increased migration and refugee populations, which can overwhelm nearby areas and stoke the conditions for nationalistic and jingoistic political power grabs. This puts authoritarians and fascists into power—who usually aren't too keen on spending money to fix problems like climate change that don't affect them personally—exacerbating all of the previous problems. UK Prime Minister Boris Johnson showcased exactly this type of position before a recent UN climate conference, claiming the fall of the Roman empire was due to uncontrolled immigration as a way of refocusing people's fear and attention away from climate change. Marine Le Pen of France went so far as to say that those without a homeland don't care about the environment. Similarly out-of-touch and out-of-context things have been said recently by right-wing leaders in Spain, Germany, Switzerland… the list goes on and on. Perhaps the most psychologically challenging aspect of all this is that even as we begin to tackle these issues one by one, we will continue to see worsening environmental effects for the next few decades. As David Wallace-Wells writes in The Uninhabitable Earth: “Some amount of further warming is already baked in, thanks to the protracted processes by which the planet adapts to greenhouse gas…But all of those paths projected from the present…to two degrees, to three, to four or even five—will be carved overwhelmingly by what we choose to do now.” The message is: It's up to us. We know what's coming, and are thus empowered to chart the course for the future. What we need are bold visions and determined action, and we need it now. At this point you may be thinking, “I could really use some of that Kate O'Neill optimism right about now…” Not only do I have hope, but many of the climate experts I have read and spoken with are hopeful as well. But the first step in Strategic Optimism is acknowledging the full and unvarnished reality, and the hard truth about the climate crisis is that things do look bleak right now. Which just means our optimistic strategy in response has to be that much more ambitious, collaborative, and comprehensive. As Christiana Figuere and Tom Rivett-Carnac wrote in The Future We Choose: Surviving the Climate Crisis, “[To feel] a lack of agency can easily transform into anger. Anger that sinks into despair is powerless to make change. Anger that evolves into conviction is unstoppable.” One of the things slowing progress on the climate front is the people on the extreme ends of the belief spectrum—especially those in positions of power—who believe it's either too late to do anything, or that climate change isn't happening at all. Technology exacerbates this problem through the spread of false information. Thankfully by this point most people—around 90% of Americans and a higher percentage of scientists—are in agreement that it's happening, although we're still divided on the cause. The same poll conducted in October 2021 by the Associated Press-NORC Center for Public Affairs Research and the Energy Policy Institute at the University of Chicago, found that only 54% of Americans believe humans contribute to climate change. A separate study conducted that same month looked at 88,125 peer-reviewed climate studies published between 2012 and 2020, and determined that 99.9% of those studies found human activity to be directly responsible for our warming planet. It's important, however, not to write off the people who aren't yet fully convinced. Technology, as much as it has given us near-infinite access to information, is also a tremendous propagator of mis- and disinformation, which is fed to people by algorithms as immutable fact, and is often indistinguishable from the truth. Sarah T Roberts, who is Associate Professor at the University of California, Los Angeles (UCLA) where she also serves as the co-founder of the UCLA Center for Critical Internet Inquiry, explains further. Sarah T Roberts: “When I think about people who fall victim to conspiracy theories, what I see is a human impulse to make sense of a world that increasingly doesn't. And they're doing it in the absence of information that is way more complex and hard to parse out and might actually point criticism at places that are very uncomfortable. They sense a wrongness about the world but they don't have the right information, or access to it, or even the ability to parse it, because we've destroyed public schools. And then the auxiliary institutions that help people, such as libraries, and that leaves them chasing their own tail through conspiracy theories instead of unpacking things like the consequences of western imperialism, or understanding human migration as economic and environmental injustice issues. Y'know, you combine all that, and people, what do they do? They reach for the pablum of Social Media, which is instantaneous, always on, easy to digest, and worth about as much as, y'know, those things might be worth. I guess what I'm trying to do is draw some connections around phenomena that seem like they have come from nowhere. It would behoove us to connect those dots both in this moment, but also draw back on history, at least the last 40 years of sort of like neoliberal policies that have eroded the public sphere in favor of private industry. What it didn't do was erode the public's desire to know, but what has popped up in that vacuum are these really questionable information sources that really don't respond to any greater norms, other than partisanship, advertising dollars, etc. And that's on a good day!” The fact is, there are a number of industries and people who have a vested interest in maintaining the status quo. Not all of them engage in disinformation schemes, but some corporations—and people—who are interested in fighting climate change aren't willing to look at solutions that might change their business or way of life. Too much change is scary, so they look for solutions that keep things as they are. AR Siders: “Too much of our climate change adaptation is focused on trying to maintain the status quo. We're trying to say, ‘hey, the climate is changing, what can we do to make sure that everything stays the same in the face of climate change?' And I think that's the wrong way to think about this.” That's AR Siders, assistant professor in the Biden School of Public Policy and Administration and the Department of Geography and a Core Faculty Member of the Disaster Research Center. Siders' research focuses on climate change adaptation governance, decision-making, and evaluation. ARSiders: “I think we need to think about the idea that we're not trying to maintain the status quo, we're trying to choose how we want our societies to change. I often start talks by showing historic photos, and trying to point out, in 1900, those photos don't look like they do today. So, 100 years in the future, things are going to look different. And that's true even if you don't accept climate change. Even if we stop climate change tomorrow, we might have another pandemic. We'll have new technology. And so our goal shouldn't be to try to lock society into the way it works today, it should be to think about, what are the things we really care about preserving, and then what things do we actively want to choose to change? Climate adaptation can be a really exciting field if we think about it that way.” And it is! But as more people have opened their eyes to the real threat looming in the near-horizon, disinformation entities and bad actors have changed their tactics, shifting responsibility to individuals, and away from the corporations causing the majority of the harm. So let's talk about our personal responsibility to healing the climate. Tan Copsey: “We always should be careful of this trap of individual action, because in the past the fossil fuel industry has emphasized individual action.” That's Tan Copsey, who is Senior Director, Projects and Partnerships at Climate Nexus, a strategic communications organization. His work focuses on communicating the impacts of climate change and the benefits of acting to reduce climate risks. You'll be hearing from him a lot this episode. We spoke recently about climate change solutions and responsibilities across countries and industries. He continued: Tan Copsey: “I don't know if it's true but apparently BP invented the carbon footprint as a way of kind of getting people to focus on themselves and feel a sense of guilt, and project out a sense of blame, but that's not really what it's about. Dealing with climate change should ultimately be a story about hope, and that's what I kind of try and tell myself and other people.” Speaking of, Shell had a minor PR awakening in November 2020 when they tweeted a poll asking: “What are you willing to change to help reduce carbon emissions?” The tweet prompted many high-profile figures like climate activist Greta Thunberg and US congresswoman Alexandria Ocasio-Cortez to call out the hypocrisy of a fossil fuel company asking the public for personal change. In truth, research has found that the richest 1% of the world's population were responsible for the emission of more than twice as much carbon dioxide as the poorer half of the world from 1990 to 2015, with people in the US causing the most emissions per capita in the world. Now, this doesn't mean to abandon personal responsibility. We should all make what efforts we can to lower our carbon footprint where feasible—whether by reviewing consumption habits, eating less meat, driving less, or anything from a wide variety of options. There's interesting psychological research around how making sustainable choices keeps us grounded in the mindset of what needs to change. I spoke with Anne Therese Gennari, a speaker, educator, and environmental activist known as The Climate Optimist, about the psychology behind individual action, and how the simple act of being more climate conscious in our daily lives can make the world a better place in ways beyond reducing our carbon footprints. Anne Therese Gennari: “Do our individual actions matter… and I think it matters so much, for 4 reasons. The first one is that it mends anxiety. A lot of people are starting to experience climate anxiety, and the first step out of that is actually to put yourself back in power. Choosing optimism is not enough. Telling ourselves, ‘I want to be optimistic,' is gonna fall short very quickly, but if we keep showing up for that work and that change, we're actually fueling the optimism from within. And that's how we keep going. The second one is that it builds character. So, the things that you do every day start to build up your habits, and that builds your character. Recognizing that the things we do becomes the identity that we hold onto, and that actually plays a huge part on what I'll say next, which is, start shifting the culture. We are social creatures, and we always look to our surroundings to see what's acceptable and okay and not cool and all these things, so the more of us that do something, it starts to shift norms and create a new culture, and we have a lot of power when we start to shift the culture. And then lastly, I'll just say, we always plant seeds. So whatever you do, someone else might see and pick up on, you never know what's gonna ripple effect from your actions.” No one person can make every change needed, but we can all do something. Every small action has the potential to create positive effects you'll never know. One surprising piece of information is that some of the things we're doing that we know are bad for the environment—like online delivery—may have more of a positive environmental impact than we thought. While the sheer amount of product that we order—especially non-essential items—is definitely exacerbating climate change, there are some positive takeaways. Christopher Mims, tech columnist at the Wall Street Journal and author of Arriving Today, on how everything gets from the factory to our front door, explains how, especially once our transportation and delivery vehicles have been electrified, ordering online may be a significantly greener alternative to shopping in stores. Christopher Mims: “The good news—you would think all of this ordering stuff online is terrible for the environment—look, it's bad for the environment in as much as it makes us consume more. We're all over-consuming, on average. But it's good for the environment in that, people forget, hopping into a 2 or 3 thousand pound car and driving to the grocery store—or a store—to get 5 to 15 pounds of goods and driving it home is horribly inefficient compared to putting the same amount of goods onto a giant box truck that can make 150 stops (if you're talking about a UPS or an Amazon delivery van), or a few dozen if you're talking about groceries. The funny thing is that delivery has the potential to be way more sustainable, and involve way less waste than our current system of going to stores. Frankly, physical retail is kind of a nightmare environmentally.” That's only a small piece of the puzzle, and there are still social and economic issues involved in the direct-to-home delivery industry. More important in regards to our personal responsibility is to stay engaged in the conversation. A both/and mindset is best: embrace our own individual responsibilities, one of which is holding companies and entities with more direct impact on the climate accountable for making infrastructural and operational change that can give individuals more freedom to make responsible choices. Tan Copsey again. Tan Copsey: “It is about political action and engagement for me. Not just voting, but it's about everything that happens in between. It's about community engagement, and the tangible things you feel when there are solar panels on a rooftop, or New York begins to move away from gas. I mean, that's a huge thing! In a more existential sense, the news has been bad. The world is warming, and our approach to dealing with it distributes the benefits to too few people. There are definitely things you can do, and so when I talk about political pressure, I'm not just talking about political pressure for ‘climate action,' I'm talking about political pressure for climate action that benefits as many people as possible.” So, if part of our responsibility is to hold our leaders to account… what changes do we need? What should we be encouraging our leaders to do? Since we're talking about political engagement, let's start with government. Tan spoke to me about government response to another global disaster—the COVID-19 Pandemic—and some of the takeaways that might be applied to battling climate change as well. Tan Copsey: “What's really interesting to me about the pandemic is how much money governments made available, particularly the Fed in the US, and how they just pumped that money into the economy as it exists. Now, you can pump that money into the economy and change it, too, and you can change it quite dramatically. And that's what we're beginning to see in Europe as they attempt to get off Russian gas. You're seeing not just the installation of heat pumps at astonishing scale, but you're also seeing real acceleration of a push toward green energy, particularly in Germany. You're also seeing some ideas being revisited. In Germany it's changing people's minds about nuclear power, and they're keeping nukes back on.” Revisiting debates we previously felt decided on is unsettling. Making the future a better place is going to require a great deal of examination and change, which can be scary. It's also something federal governments are designed not to be able to do too quickly. But that change doesn't have to work against the existing economy; it can build with it. It might be notable to people looking at this from a monetary perspective—the world's seven most industrialized countries will lose a combined nearly $5 trillion in GDP over the next several decades if global temperatures rise by 2.6 degrees Celsius. So it behooves everyone to work on these solutions. And what are those solutions? AR Siders spoke to me about the four types of solutions to climate issues. A lot of her work involves coastal cities, so her answer uses “flooding” as an example, but the strategies apply to other problems as well. AR Siders: “So the main categories are, Resistance, so this is things like building a flood wall, putting in dunes, anything that tries to stop the water from reaching your home. Then there's Accommodation, the classic example here is elevating homes, so the water comes, and the water goes, but it does less damage because you're sort of out of the way. Then there's Avoidance, which is ‘don't build there in the first place,' (America, we're not very good at that one). And then Retreat is, once you've built there, if you can't resist or accommodate, or if those have too many costs, financial or otherwise, then maybe it's time to relocate.” We'll need to apply all four strategies to different problems as they crop up, but it's important that we're proactive and remain open to which solution works best for a given issue. City governments have tremendous opportunities to emerge as leaders in this space. Studies project that by the end of the century, US cities could be up to 10 degrees Fahrenheit warmer in the afternoon and 14 degrees warmer at night, meaning cities need to start taking action now. Phoenix, Arizona—a city that experiences the “heat island effect” year round—is actively making efforts to minimize these effects. In 2020, they began testing “cool pavement,” a chemical coating that reflects sunlight and minimizes the absorption of heat to curb the heat island effect. Additionally, measures to offer better transit options are on the table, with cities like Austin and New York emerging as leaders in the space. The Citi Bike app in New York City now shows transit information alongside rental and docking updates as acknowledgement that for many trips biking isn't enough, but in combination with buses or trains, biking can simplify and speed a commute as part of a greener lifestyle. Austin's recognition of the synergies between bikeshare and public transit has been praised as a model for other cities, as city transit agencies move away from seeing themselves as managers of assets (like busses), and towards being managers of mobility. I spoke with Art Chang, who has been a longtime entrepreneur and innovator in New York City—and who was, at the time of our discussion, running for mayor—about the need for resilience in preparing cities for the future. Art Chang: “There was a future—a digital future—for New York, but also being open to this idea that seas were rising, that global temperatures were going up, that we're going to have more violent storms, that things like the 100-year flood line may not be drawn to incorporate the future of these rising seas and storms. So we planned, deliberately and consciously, for a hundred-fifty year storm. We softened the edge of the water, because it creates such an exorbitant buffer for the rising seas and storms. We created trenches that are mostly hidden so that overflow water had a place to go. We surrounded the foundations of the building with what we call ‘bathtubs,' which are concrete enclosures that would prevent water from going into these places where so much of the infrastructure of these buildings were, and then we located as much of the mechanicals on top of the building, so they would be protected from any water. Those are some of the most major things. All technologies, they're all interconnected, they're all systems.” Making any of the changes suggested thus far requires collective action. And one of the ways in which we need to begin to collaborate better is simply to agree on the terms we're using and how we're measuring our progress. Some countries, like the United States, have an advantage when it comes to reporting on climate progress due to the amount of forests that naturally occur within their borders. That means the US can underreport emissions by factoring in the forests as “carbon sinks,” while other countries that may have lower emissions, but also fewer naturally-occurring forests, look worse on paper. This isn't factually wrong, but it obscures the work that's needed to be done in order to curb the damage. I asked Tan about these issues, and he elaborated on what he believes needs to be done. Tan Copsey: “Again, I'd say we resolve the ambiguity through government regulation. For example, the Securities and Exchange Commission is looking at ESG. So this big trend among investors and companies, the idea that you take account of environmental, social, and governance factors in your investments, in what your company does. Realistically, there hasn't been consistent measure of this. I could buy an exchange-traded fund, and it could be ‘ESG,' and I wouldn't really know what's in it. And it could be that what's in it isn't particularly good. And so regulators are really trying to look at that now and to try and standardize it, because that matters. Likewise, you have carbon markets which are sort of within European Union, and then you have voluntary carbon markets, which are often very reliant on forest credits sourced from somewhere else, where you're not quite sure if the carbon reduction is permanent or not. And yeah, there is a need for better standards there.” To do this holistically we will need to get creative with economic incentives, whether that involves offsets, green energy credits, or new programs at local, state, or national levels. One of the more aggressive and comprehensive plans for rethinking energy policy came from the EU in summer 2021, just as Germany and Belgium reeled from killer floods that were likely exacerbated by the climate crisis. The EU announced its ”Fit for 55” plans, ”a set of inter-connected proposals, which all drive toward the same goal of ensuring a fair, competitive and green transition by 2030 and beyond.” It's an approach that is systemic, recognizing the interconnectedness of a wide variety of policy areas and economic sectors: energy, transportation, buildings, land use, and forestry. And we need more programs and regulations like this. But until we have those better regulations we need, there are still things business leaders can do to make their businesses better for the environment today, so let's move away from government and talk about businesses. A lot of businesses these days pay an enormous amount of lip service (and money) to showing that they care about the environment, but the actual work being done to lower their carbon footprint or invest in cleaner business practices is a lot less significant. Tan spoke to me about this as well. Tan Copsey: “They need to move from a model which was a little bit more about PR to something that's real. In the past when a business issued a sustainability report, it was beautiful! It was glossily designed… And then when it came to like, filings with the SEC, they said ‘climate change is a serious issue and we are taking it seriously,' because their lawyers read it very, very closely. And so, if dealing with climate risk is embedded in everything you do as a business (as it probably should be), because almost every business, well, every business probably, interacts with the energy system—every business is a climate change business. They should be thinking about it, they should be reporting on it, y'know, when it comes to CEOs, it should be part of the way we assess their performance.” Nowadays, lots of companies are talking about “offsetting” their carbon emissions, or attempting to counter-act their emissions by planting trees or recapturing some of the carbon. But is this the right way to think about things? Dorothea Baur: “Offsetting is a really good thing, but the first question to ask should not be, ‘can I offset it?' or ‘how can I offset it?', but, ‘is what I'm doing, is it even necessary?'” That's Dorothea Baur, a leading expert & advisor in Europe on ethics, responsibility, and sustainability across industries such as finance, technology, and beyond. Her PhD is in NGO-business partnerships, and she's been active in research and projects around sustainable investment, corporate social responsibility, and increasingly, emerging technology such as AI. Dorothea Baur: “So, I mean, let's say my favorite passion is to fly to Barcelona every other weekend just for fun, for partying. So, instead of offsetting it, maybe I should stop doing it. And the same for tech companies saying, you know, ‘we're going to be carbon negative!' but then make the most money from totally unsustainable industries. That's kind of a double-edged sword.” It is notable that one of the key ways businesses and governments attempt to offset their emissions is “planting trees,” which has more problems than you may think. Yes, trees are an incredibly important part of a carbon sink approach, and we definitely need to plant more of them—but there's a catch to how we say we're going to do it. The promise of tree-planting has been such an easy add-on for companies' marketing campaigns to make over the years that there's a backlog of trees to be planted and not enough tree seedlings to keep up with the promises. It's not uncommon for companies to make the commitment to their customers to plant trees first, only for them to struggle to find partners to plant the promised trees. Dorothea Baur lamented this fact in her interview. Dorothea Baur: “It's also controversial, what I always joke about—the amount of trees that have been promised to be planted? I'm waiting for the day when I look out of my window in the middle of the city and they start planting trees! Because so much—I mean, the whole planet must be covered with trees! The thing is, it takes decades until the tree you plant really turns into a carbon sink. So, all that planting trees—it sounds nice, but also I think there's some double-counting going on. It's easy to get the credit for planting a tree, but it's hard to verify the reduction you achieve because it takes such a long time.” It's going to take more than lip service about tree-planting; we have to actually expand our infrastructural capability to grow and plant them, commit land to that use, and compensate for trees lost in wildfires and other natural disasters. Beyond that, we have to make sure the trees we're planting will actually have the effect we want. The New York Times published an article in March, arguing that “Reforestation can fight climate change, uplift communities and restore biodiversity. When done badly, though, it can speed extinctions and make nature less resilient…companies and countries are increasingly investing in tree planting that carpets large areas with commercial, nonnative species in the name of fighting climate change. These trees sock away carbon but provide little support to the webs of life that once thrived in those areas.” And that can mean the trees take resources away from existing plant life, killing it and eliminating the native carbon-sink—leading to a situation where net carbon emissions were reduced by nearly zero. These are problems that require collaboration and communication between industries, governments, activists, and individuals. Beyond those initiatives, companies can also improve their climate impact by investing in improvements to transportation for employees and customers, perhaps offering public transit or electric vehicle incentives to employees, or investing in a partnership with their municipality to provide electric vehicle charging stations at offices and storefronts. Additionally, business responsibility may include strategic adjustments to the supply chain or to materials used in products, packaging, or delivery. Another issue when it comes to offsetting emissions is the leeway the tech industry gives itself when it comes to measuring their own global climate impact, when the materials they need to build technology is one of the chief contributors to carbon emissions. Dorothea Baur again. Dorothea Baur: “The whole supply chain of the IT industry is also heavily based on minerals. There are actually, there are really interesting initiatives also by tech companies, or like commodity companies that specifically focus on the minerals or the metals that are in our computers. Like cobalt, there's a new transparency initiative, a fair cobalt initiative. So they are aware of this, but if you look at where is the main focus, it's more on the output than on the input. And even though the tech companies say, ‘oh, we're going to be carbon neutral or carbon negative,' as long as they sell their cloud services to the fossil industry, that's basically irrelevant.” Currently, AI tech is an “energy glutton”—training just one machine learning algorithm can produce CO2 emissions that are 5 times more than the lifetime emissions of a car. But there is still hope for AI as a tool to help with climate change, namely using it to learn how to more efficiently run energy grids and predict energy usage, especially as energy grids become more complicated with combined use of solar, wind, and water power in addition to traditional fossil fuels. AI can also make the global supply chain more efficient, reducing emissions and speeding up the process of developing new, cleaner materials. One small-scale use-case is “Trashbot,” which sorts waste materials into categories using sensors and cameras, eliminating the need for people to try to sort out their own recyclables. What's clear from every emerging report is that net zero emissions are no longer enough. We need governments and companies and every entity possible to commit to net negative emissions. Cities need ambitious plans for incentivizing buildings that sequester carbon. Companies need logistics overhauls to ensure their supply chains are as compliant as possible, and then some. Tan Copsey: ““What's interesting is when they talk about Net Zero—particularly companies, but also a lot of governments—they talk about Net Zero by 2050. What is that, 28 years. 28 years is still a long time away, and if you're a government, the current president certainly won't be president in 2050. If you're a company CEO, you may not be CEO next quarter, let alone in 28 years, and so we have to have nearer-term targets. You want to be Net Zero by 2050? Tell me how you're gonna get there. Tell me what you're gonna do by 2030, tell me what you're gonna do by next quarter. One of the things that encourages me is things like change in financial regulation, which sounds arcane and slightly off-topic, but it's not. It's about what companies report when, and how investors hold those companies to account to nearer-term action, because that's how we get there.” One of the reasons that corporations do so little to minimize their carbon footprint is that they don't accurately measure their own carbon emissions. Using AI to track emissions can show problem areas, and what can be done to address those issues. Abhishek Gupta, machine learning engineer, founder of the Montreal AI Ethics Institute, and board member of Microsoft's CSE Responsible AI board, spoke to me about an initiative he's working on to help ease this burden by making it easier for developers to track the effect they're having on the environment by incorporating data collection into their existing workflow. Abhishek Gupta: “One of the projects that we're working on is to help developers assess the environmental impacts of the work that they do. Not to say that there aren't initiative already, there are—the problem with a lot of these are, they ignore the developer's workflow. So the problem then is, if you're asking me to go to an external website and put in all of this information, chances are I might do it the first couple of times, but I start to drop the ball later on. But if you were to integrate this in a manner that is similar to ML Flow, now that's something that's a little more natural to the developer workflow; data science workflow. If you were to integrate the environmental impacts in a way that follows this precedent that's set by something like ML Flow, there is a lot higher of a possibility for people taking you up on that, and subsequently reporting those outcomes back to you, rather than me having to go to an external website, fill out a form, take that PDF report of whatever… that's just too much effort. So that's really what we're trying to do, is to make it easy for you to do the right thing.” And Abhishek isn't the only one who sees potential in AI. Dorothea Baur also spoke to me about her belief in AI, although she sees us using it for a different purpose. Dorothea Baur: “AI has huge potential to cause good, especially when it comes to environmental sustainability. For example, the whole problem of pattern recognition in machine learning, where if it's applied to humans, it is full of biases, and it kind of confuses correlation and causation, and it's violating privacy, etc. There are a lot of issues that you don't have when you use the same kind of technology in a natural science context, you know? Where you just observe patterns of oceans and clouds and whatever, or when you try to control the extinction of species. I mean, animals don't have a need for or a right to privacy, so why not use AI in contexts where it doesn't violate anyone's moral rights? And where you, at the same time, resolve a real problem.” Turning AI and algorithms away from people and towards nature is a wise decision in many respects. A lot of our efforts to curb the effects of climate change thus far have overlooked the same people that are overlooked in our data, and in almost every measurable respect, negative impacts of the climate crisis are felt most by marginalized populations and poorer communities. Tan Copsey: “I think that when it comes to climate tech, you need to think about who it's supposed to benefit. There's more than 7B people on earth, it can't just be for the US market, it has to be for everyone.” “The best futures for the most people” really comes into play here—communities of color are often more at risk from air pollution, due to decades of redlining forcing them into more dangerous areas. Seniors, people with disabilities, and people with chronic illnesses may have a harder time surviving extreme heat or quickly evacuating from natural disasters. Subsidized housing is often located in a flood plain, causing mold, and frequently lacks adequate insulation or air conditioning. People with a low-income may also be hard-pressed to afford insurance or be able to come back from an extreme loss after catastrophe strikes. Some indigenous communities have already lost their homelands to rising sea levels and drought. Indigenous communities, speaking of, often have traditional approaches—empowered by millennia of historical experience—to living gently on the planet and a mindset for cooperating with nature that are well worth learning. Seeking leadership on climate issues from Indigenous people should be a priority. An article published by Mongabay on December 21, 2021 gives an example of an initiative in Mexico that is using the knowledge of indigenous communities, and is working. Essentially, the Ejido Verde company grants interest-free loans to local communities to plant and tend pine trees for the tapping of resin, a multibillion-dollar global industry. Younger generations are eager to participate, and fewer people feel the need to migrate away from their homes. According to a paper by the Royal Botanic Gardens of Kew, the only way that recovery can work is if it is based on sound science, supported by fair governance, incentivized by long-term funding mechanisms, and guided by indigenous knowledge and local communities. Speaking of long-term funding mechanisms, let's talk about another group of leaders who have the potential to make a drastic positive impact today: private investors. Activist investors may seem unwelcome, but when they're making priorities known on behalf of humanity, they're ultimately doing us all a service. These people have the ability to help shape company and government policy by letting their dollars speak for us, by investing in solutions and burgeoning industries that we drastically need. That's been happening, such as when the shareholders of both ExxonMobil and Chevron sent strong messages about getting serious with respect to climate responsibility. In Europe, shareholder votes and a Dutch court ordered Royal Dutch Shell to cut its emissions faster than they'd already been planning. And social and financial pressure is a good way to nudge executives in the right direction, especially leaders who don't make climate-friendly decisions out of fear of pushback from their boards and investors. Tan Copsey: “Investors increasingly should be thinking about the companies they invest in on the basis of their climate performance. And that isn't just, ‘oh, they reduced some greenhouse gas emissions,' because, y'know, you look at a lot of tech companies and they have reduced greenhouse gas emissions, but really they have to do more than that. For businesses in other sectors, it may not be that simple. Certainly there are harder to abate sectors, and so it could be that you are the CEO of a steel company, and your emissions are still gigantic, but the change you can make by introducing, say, hydrogen, and getting rid of coal, or introducing renewable energy plus hydrogen to your—the way in which you do steel, is transformative for the global economy and transformative for the climate system, and in a way investing in that company is more climate-friendly than investing in a tech company; but chances are you have an ETF and you're doing both.” Despite everything I've talked about today, it's important for all of us to remain optimistic. I asked Anne Therese Gennari why optimism is important, and her answer didn't disappoint. Anne Therese Gennari: “Optimism, for scientific reasons, is actually very important. If you look to neuroscience, we need optimism to believe something better is possible, and then find the motivation and the courage to take action right now to get us closer to that goal. And I think there is a huge difference between optimism and toxic positivity, and I think a lot of people who don't agree with optimism associate it with always trying to be happy, thinking good thoughts and hoping things will turn out to the better. And that's why I love to come back to this understanding that ‘awareness hurts, and that's okay.' Because when we tell ourselves that not everything is beautiful, and sometimes things will be painful, we can actually handle that, and we can take that. But from that place of awareness, we can start to grow a seed of hope and tell ourselves, ‘well, what if? What if we did take action, and this happened? What if we can create a more beautiful world in the future? And so, we can paint a picture that's all doomsday, or we can paint one that's beautiful. So which one do we want to start working towards?” And if you find yourself saying, “I really want to be optimistic, but it's too hard! There's just so much bad news out there…” don't fret! You aren't alone. You might even say that's a quite human response. Anne Therese Gennari: “We're human beings, and as a species, we respond to certain kinds of information in different ways. Information that's negative or fear based has a very limiting response in our brains. When we hear something that's overwhelming, like climate change, and we know it's urgent, we might understand that it's urgent, but the action isn't there. Because how our brains respond to something that we don't want to happen is actually to not take action. And it goes back to way back in time, where like, you're facing this dangerous animal, and you're like ‘there's no way I can fight this animal, I can't outrun it, so what am I gonna do? I'm gonna stand here super still and hope that it doesn't see me.' That's literally what our brains think about when something's that overwhelming. And so I think the more urgent the matter is, the more important it is that we actually fuel ourselves with an optimistic future or goal to work towards, because that is the only way that we can actually trigger action.” So let's fuel our minds with an optimistic future to work towards. Despite all the bad news you've heard—even on this episode—there are a lot of hopeful developments happening! The most recent U.N. Climate Conference, COP26, established the Glasgow Climate Pact, which recognizes that the situation is at an emergency level, asking countries to accelerate their plans by calling for provable action by next year. Policy changes, government regulations, and people becoming motivated are all on the rise. Caleb Gardner, who was lead digital strategist for President Obama's political advocacy group, OFA and is now founding partner of 18 Coffees, a strategy firm working at the intersection of digital innovation, social change, and the future of work, spoke to me about what he's most optimistic about, which is right in line with this show's values. Caleb Gardner: “I'm probably most optimistic about technology's ability to tackle global problems like climate change. I'm actually pretty bullish on technology's ability to solve and actually innovate around the reduction of carbon in our atmosphere, electric vehicles, electric grid… and what's great is a lot of that's already being driven by the private sector around the world, so it's not as dependent on government as we think that it is.” So let's talk about some of the emerging technologies that show a lot of promise in mitigating the effects of climate change—and that might make sense to invest in, if you have the means to do so. A team of UCLA scientists led by Aaswath Raman has developed a thin, mirror-like film that reflects heat to outer space through radiative cooling, and can lower the temperatures of objects it's applied to by more than 10 degrees. The idea comes from generations of knowledge from people living in desert climates who learned to cool water by letting the heat radiate out of it overnight. If this film were added to paint and/or applied to pipes and refrigeration units, it could help cool buildings and make refrigeration systems more efficient, reducing the need for air conditioning, which accounts for as much as 70% of residential energy demand in the United States and Middle East. One of the strongest selling points of innovations like this film is that it doesn't need electricity; it only needs a clear day to do its job. Another innovation in reflecting energy back into space comes in the form of ‘cloud brightening,' a technique where salt drops are sprayed into the sky so that clouds reflect more radiation, allowing us to refreeze the polar ice caps. Then there's the new trend of green roofs, in particular the California Academy of Sciences' Living Roof, which spans 2.5 acres and runs six inches deep, with an estimated 1.7 million plants, collecting 100 percent of storm water runoff and offering insulation to the building below. The whole endeavor is brilliantly hopeful and strategic. A massive green roof is completely on brand for a science museum, but that doesn't mean other buildings and businesses wouldn't benefit from them as well. The National Park Service even estimates that over a forty year building lifespan, a green roof could save a typical structure about $200,000, nearly two-thirds of which would come from reduced energy costs. Other building technologies move beyond solar panels and green roofs, with automated building management systems detecting usage patterns of lighting, heating, and air conditioning. There have also been innovations in window insulation, trapping heat during the winter and blocking it out in the summer. ‘Green cement' can be heated to lower temperatures and cuts emissions by a third compared to regular cement. There are new Hydrogen-powered ships whose emissions are water. Electric planes have been developed for short-distance flights. Large floating solar power installations have the potential to generate terawatts of energy on a global scale, and when built near hydropower, can generate electricity even in the dark. Lithium batteries continue to get smaller and more efficient, and can be charged faster and more often than other batteries, making electric vehicles cheaper. And speaking of electric vehicles, they can help with our energy storage problems, with owners buying electricity at night to charge their cars and selling it to the grid when demand is high and cars are unused during the day. Feeding cows seaweed and replacing beef with insects such as mealworms can drastically reduce methane emissions. Scientists in Argentina are working on backpacks for cows that collect their methane, which have shown to collect enough methane from a single cow every day to fuel a refrigerator for 24 hours. To help curb other types of emissions, carbon capture and storage technologies like NZT allow us to capture CO2 in offshore storage sites several kilometres beneath the North Sea. But it's not just about new technologies, or technologies that only work for the richest people. Here's Tan again to elaborate on this idea. Tan Copsey: “This is a really tricky moment, y'know, this is a really bad time to be inefficiently using the resources we have. As we think about climate tech, think about optimizing mobility, as well as copying the existing model. There's a lot of existing tech out there that would make people's lives better—very simple irrigation systems—and so, we shouldn't just think of this in terms of big new exciting things, we should think about it in terms of deploying existing things.” All of this is part of embracing the mindset that says things can change. We need a can-do mindset, but we also need clarity and collaboration. Basically all options need to be implemented if we want to curb the damage that has already been done. Our solutions need to work in conjunction with one another, and support the greatest number of people. To close out, here's Christopher Mims with the last word on putting away the doom and gloom, and remaining optimistic in the face of overwhelming adversity. Christopher Mims: “If you really think about the whole sweep of human history, we live in a time where the pace of especially technological, and therefore in some ways cultural change, is so much faster than ever. We keep inventing new ways to kind of trip ourselves up, and then we have to just adapt so quickly to them. We're constantly playing catch-up with our own technological and social developments. So there's a lot of beating ourselves up over like, ‘woah, how come we didn't do it this way, or we didn't do this right?' or whatever. Sometimes I'm just like, ahh, just chill! We're going as fast as we can. It's very easy to get caught up in the moment to moment, but I think there is this kind of overall arc where, if we don't cook ourselves to death, or blow ourselves up, or distract ourselves to death, we're moving in directions that, once we have fully understood how to live in harmony with the technology that we've created, we'll probably be okay.” Thanks for joining me on The Tech Humanist Show today. I hope you've learned something, and at the very least, that you're going into the future with more hope than you had before.

    How Tech Harms – and Can Help Heal – the Climate

    Play Episode Listen Later Apr 21, 2022 45:09


    By almost any reckoning, the climate emergency is the most urgent and existential challenge facing humanity for the foreseeable future. All of the other issues we face pale in comparison to the need to arrest and reverse carbon emissions, reduce global average temperatures, and begin the work of rebuilding sustainable models for all of us to be able to live and work on this planet. Not only do I have hope, but many of the climate experts I have read and spoken with are hopeful as well.

    What is a Tech Humanist?

    Play Episode Listen Later Apr 10, 2022 5:50


    Hello and welcome to The Tech Humanist Show! In this introductory episode, host Kate O'Neill explains what a tech humanist is and what you can expect from future episodes. Guests on this episode includue Emma Bedor Hiland, Oluwakemi Olurinola, Dorothea Baur, Rumman Chowdhury, Chris Gilliard, and Rahaf Harfoush. The Tech Humanist Show is a multi-media […]

    What is a Tech Humanist?

    Play Episode Listen Later Apr 10, 2022 5:50


    Hello and welcome to The Tech Humanist Show! In this introductory episode, host Kate O'Neill explains what a tech humanist is and what you can expect from future episodes. Guests on this episode includue Emma Bedor Hiland, Oluwakemi Olurinola, Dorothea Baur, Rumman Chowdhury, Chris Gilliard, and Rahaf Harfoush. The Tech Humanist Show is a multi-media format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. To watch full interviews with past and future guests, or for updates on what Kate O'Neill is doing next, subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube. Transcript Hello humans, and welcome to The Tech Humanist Show! In this introductory episode, I'll explain what a tech humanist is, and what you can expect from future episodes. A Tech Humanist, as I've coined it, is a person who sees the exciting opportunities technology offers humanity, while remaining cautious & conscious of the potential risks and harms those technologies bring. It isn't the same thing as a techno-utopian, who believes technology will inevitably bring about a utopia in the future, or a techno-solutionist, who believes technology is the solution to all our problems. Instead, a tech humanist believes that when we design technology, we have to think of humanity first and foremost, and remain active and diligent in making technology work better for all people. Here are a few clips from some of the experts I've spoken with for The Tech Humanist Show who sum it up well. Emma Bedor Hiland: “I do actually identify as a tech humanist, because I am optimistic about what technologies can do, and offer, and provide and the ways they might be utilized to enhance human flourishing, especially in health spaces and including the mental healthcare space, too. I just think we also need to be realistic about what technology can do, and the ways that technologies are deployed which might cause us harm.” Oluwakemi Olurinola: “I actually like the humanist put beside the tech. Since I advocate for empathy and social and emotional learning while we also train on the digital skills, I am a tech humanist.” Dorothea Baur: “I'm proud to be a humanist! I believe that there is something distinctive about humans that we need to keep alive. One of the biggest achievements is that, like, 200 and, y'know, 40 years ago, when the Enlightenment set in, where we said, ‘hey, people, dare to use your own minds!' it was like a wake-up call, because we didn't really make an effort to explore the world because we thought everything was determined by God. By stepping out of this dependency and using our own brains, we liberated ourselves. And so now, are we taking it too far? Have we used our brains so far that we're eventually training machines that are smarter than us and they're kind of imposing their decisions again upon us, and not just imposing their decisions on us, but also imposing decisions that are equally as intransparent as god's decisions, if you look at certain algorithms. We cannot delegate our responsibility to machines! We can use machines to improve our health, and our well-being, etc., to improve the world, but we cannot entirely delegate responsibility to machines.” Right now, we're seeing massive shifts in the way humans live and interact with technology, which makes tech humanism more important than ever. To maintain our agency, we need to work together to fight bias in our algorithms, make sure we think of the user experience and how technology affects us, and consider the role humans play in a world that is becoming increasingly automated. Dr. Rumman Chowdhury: “I recognize and want a world in which people make decisions that I disagree with, but they are making those decisions fully informed, fully capable. Whether it's being able to derive meaning from the systems we've created, or understanding what our meaning is, or what our purpose is as a human being, and not having that be shaped or guided by other forces.” Dr. Chris Gilliard: “We really need to think about the effects of these things. Like, what are the potential harms of this thing? Before you put it out, right? When [REDACTED] came out and said, ‘we had no idea that people would use it to spread racism and misogyny!' …they could have done that work, right? One of the things I've seen that does give me a little bit of hope is that there are more and more people not only saying that we have to do that work, but being inside these companies and actually holding them accountable for doing it.” Rahaf Harfoush: “For me I think the reality is that everything that has the capacity to help us can also simultaneously hurt us in some new and different ways. I don't necessarily think about what's gonna help humanity, I think about, ‘what challenges are gonna emerge from this technology, and how can we navigate that?'.” The first season of this podcast featured a number of interviews with some of today's top thinkers, experts, and educators in the field of technology, with one guest interview per episode. From season 2 onward, every episode will instead focus on a key area of the intersection of technology and humanity, and the ways technology is changing and shaping the human experience. Each episode will feature multiple guests, featuring clips pulled from season one, as well as brand-new interviews that haven't been and won't be released on the podcast. Together, we'll be tackling big ideas about how to make the future a brighter place for everyone. The guests you heard in this episode were, respectively, Emma Bedor Hiland, Oluwakemi Olurinola, Dorothea Baur, Rumman Chowdry, Chris Gilliard, and Rahaf Harfoush. You can hear more from them and all my guests in past and future episodes of the podcast, or find full interviews at TheTechHumanist.com.

    A Brighter Future for Education (using Technology!)

    Play Episode Listen Later Mar 24, 2022 28:00


    On this week's episode, we're rethinking education to bring about a brighter future for humanity. I speak with guests about the ways technology has changed the way we think about what's possible for education, as well as how we can challenge our assumptions to make the system work better for all humans. Which technologies can we use to improve learning? Who benefits from the lessons we learned throughout the ongoing COVID pandemic? And in what ways can we rethink our current system to help all learners reach their potential? Guests include Dr. Rumman Chowdhury, Dr. Chris Gilliard, Rahaf Harfoush, John C. Havens, & Dr. Oluwakemi Olurinola. The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. To watch full interviews with past and future guests, or for updates on what Kate O'Neill is doing next, subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube. Transcript Today on the show, we're talking about how we can achieve A Brighter Future for Education. Schools are not created equal, as any parent will tell you. For proof, look no further than the recent college admissions bribery scandal, or the fact that we still grade our schools and use those metrics to determine school budgets. Beyond that, budgetary restrictions and teacher experience can make for vastly different education outcomes. And with our rapidly changing technology, some of these differences will become magnified. In my book A Future So Bright, I write about the opportunity for a brighter future for education–which is critical to ensuring we meet United Nations Sustainable Development Goal #4: “ensure inclusive and equitable quality education and promote lifelong learning opportunities for all.”When we think about what it might take to make the future brighter for education, most teachers and administrators I have spoken with in the US will start their answer with “budgets” and move on to “curricula.” This isn't a strictly American occurrence, either. I spoke with Dr. Oluwakemi Olurinola, who is both an educator and an educational technology consultant, speaker, and a Microsoft Global Training partner based in Nigeria, about attempts to improve the education system in Nigeria and where the most significant gaps are. [Dr. Oluwakemi Olurinola] “Sometimes when we think about the teaching and learning, and we think about the instructional materials, most times we are looking at how to get these things bought, you know? We are talking about budgets. You know sometimes we engage with some schools and they tell you ‘oh, I'm ICT compliant' because they have quite a number of laptops, but then you go into how these devices are actually used and you see that basically all they are doing is converting their hard notes to soft copies and that isn't really what technology integration is really about. And you know sometimes you also see where budgets and large amounts of money spent buying devices, because there used to be this imagination that once you have technology in the hands of students, definitely there is improved learning, and we know that that is not true. One of the lessons taken away was actually the skill gap of the teachers. We've seen governments or budgets spend on technology, but then you still have that skill gap.” Budget and curriculum are very real limitations, but before we even get there, there are more fundamental challenges facing education, many of which are globally relevant. But as we look at the challenges and what I call “Change Factors” faced by schools and teachers, we see a lot more to overcome. A brighter future starts with full acknowledgment of harms & risks, as well as the opportunities for improvement. If we want the future of education to be as bright as possible, we have to do that here. Largely, when we talk about the future, we think of two extremes: Dystopia vs. Utopia. While it feels like we should be aiming for utopia in our planning and strategizing, deep down we know that's not possible, and that makes that useless. It's a problem of framing. Several of the experts I've spoken with share this view, including Rahaf Harfoush, a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture and John C Havens, Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, who each elaborate on why it doesn't make sense to think of things this way. [Rahaf Harfoush] “Everything has the capacity to help us, it's just that it's going to also simultaneously hurt us in some new and different ways. I don't necessarily think about what's going to help humanity, I think about what new challenges are going to emerge from this technology, and how can we navigate that? The bigger question for me becomes, how can we prepare people to hold this duality? What worries me is that the tech crowd comes in and they try to push you this utopian version, and other people push the dystopian version. Both of those are not true, but both are true in different ways. For every single case of facial recognition used to catch a criminal there's a case where it's used to breach privacy. I always say, ‘it's going to be equally awesome and equally terrible at the same time,' and that's why it's going to be so hard to predict the future. We just have to continuously ask ourselves which side of the equation we're falling on.” [John C Havens] “Six years ago I was writing a series for Mashable. What I was finding was that even 6 years ago, there were only the extremes… here's the dystopian aspect of AI, here's the utopian… I just kept calling people and asking ‘is there a code of ethics for AI? Because that will help balance things out.' And more and more, no one knew of one.” There will never be a complete utopia or complete dystopia—they exist simultaneously. Within our tech and within ourselves. The “either/or” model distances us from the very real consequences of our decisions, and how they play out in future realities. When it comes to technology in education, there are externalities to our decisions that must be considered. The good news is, we make decisions that affect the future every day, which means we can still bend that future towards the most uplifting and empowering outcomes for all of humanity. First, though, let's look at the potential Harms and Risks within our current system. One major issue that has cropped up and been magnified since the onset of the Pandemic is lack of equitable broadband access. Dr. Chris Gilliard, a writer, professor and speaker whose scholarship concentrates on digital privacy and the intersections of race, class, and technology, explains the consequences he's seen firsthand because of this inequity in Detroit. [Dr. Chris Gilliard] “Lack of access to internet can be tied to health outcomes, long-term educational outcomes, or employment opportunities. And If you looked at a redlining map of the city of Detroit, many of the ways these maps were drawn, a lot of the disproportionate affects of discrimination are still being felt by the populations. What I call that is digital redlining. If you drive along 8 Mile, or some other roads in Detroit, it's very clear 50-60-70 years later, the after-effects of these housing policies. I teach at a community college. I started to see through my work with students how these effects became digital, whether it was lack of access to broadband, or scholarly publications.” These were issues before COVID, but our changing education landscape has made them much more noticeable and urgent. Shortly after the onset of the COVID-19 Pandemic, UNESCO reported that 192 countries had closed all schools and universities, which left nearly 1.6 billion children and young people (representing more than 90 percent of the world's learners) scrambling to adapt—not to mention their teachers, parents, and guardians. UN data reveals a ‘nearly insurmountable' scale of lost schooling due to Covid. The research suggests that “…up to 70% of 10-year-olds in low- and middle-income countries cannot read or understand simple text, up from 53% pre-Covid.” “In South Africa, schoolchildren are between 75% and a whole school year behind where they should be, with up to 500,000 having dropped out of school altogether between March 2020 and October 2021. This has long-term implications as well. In the 2005 Pakistan earthquake, students missed 3 months of school, but four years later were still 1.5 years behind where they would have been. Then there are intersectional issues of gender, class, and race. Around the world, girls' education is most at risk, with over 11M girls at risk of not returning to school after COVID-19 for a variety of reasons, including caregiving demands, early and forced marriages, adolescent pregnancy, beliefs that girls aren't supposed to be educated, and more. On top of that, there is a tremendous inequity of resources available to students in low-income communities, leaving far too many students–including a disproportionate number of non-White students–at a significant disadvantage. And then there are issues of safety. With the increasing number of school shootings, many districts are increasing their security–often at the expense of jobs designed to help students progress. NYC public schools, for example, have over 5,000 full-time police officers but only 3,000 guidance counselors. The presence of these officers drives up rates of punitive measures for students of color–including instances of punishment for things like burping–which feeds into the school-to-prison pipeline. On top of all of this, the cost of education is increasing–especially higher education like colleges and universities. Daniel Bignault of WBIR-TV in Knoxville calculated the increases in in-state tuition at the University of Tennessee compared with wages over a nearly forty-year span and found that “from 1982 to 2018, college costs at UT grew by 1,430%, while median income grew by 213% and minimum wage grew by only 116%.” The total amount of student debt carried by people well out of school is far too high. College didn't used to be a risky investment, but for many students–especially those from low-income backgrounds–it very much is. And we still haven't talked about curriculum. In addition to the quality of information varying wildly from school to school, many schools don't offer contemporary technical skills, aren't as inclusive as they could be, and don't take into account the differing learning styles of the students. Because of this variety of challenges, we have a long way to go if we want to reach the goal of education equity. Now, let's take a look at The Bright Side! What, for example, are the unique advantages of remote learning?Because I investigated the intersection of online and offline experiences for my 2016 book Pixels and Place, I have been particularly intrigued with the pros and cons of the mass pivots to online experiences since early 2020.First, online learning fosters a different type of imagination. For a long time, students have existed in a binary where they are either “at school,” where learning is done, or “not at school,” where learning is not expected to happen. With the onset of online learning, students' homes have become a sort of “thirdspace,” which is described by Edward Soja in the field of human geography as “an in-between space between binaries that enables the possibility to think and act otherwise.” This thirdspace ideology has allowed teachers to begin rejecting the long-held assumption that school buildings are the locus of learning, and toward imagining ways in which meaningful learning can occur outside our rigid perceptions of what constitutes “legitimate” education. For instance, a 2021 study published in *Education Sciences* explores the ways that teachers in Scotland were pushed to not only learn how to use new digital tools for online learning during COVID-19, but to, even more importantly, imagine how to teach adaptively, a practice that requires “deep and sophisticated knowledge about learning, learners, and content.” This pushed teachers to embrace the idea that learning can occur in various forms and mediums, including during activities usually seen as “just for fun.” Dr. Olurinola encountered this in Nigeria as well, and spoke to me about the joys of watching teachers embrace novelty and creativity in their teaching processes. [Dr. Oluwakemi Olurinola] “We had all forms of interventions as a country, because we were aware there was a disparity in access to technology, especially for not-too-developed cities and remote areas. One of the lessons was the skill gap of the teachers… so one of the major things we saw the government do, and I think they are learning from the experience, was teacher development. We had a lot of government initiatives in upskilling teachers, especially with digital skills. Radio broadcasts, TV stations with teachers teaching via television… but for schools that could afford it, there was technology integration at different levels. The beauty about that period was the creativity of the teachers. We saw teachers use tools not originally developed for academic purposes. We saw them adapt to meet the needs of their students during this period. One lesson learned was the importance of technology to everyday life, we couldn't adopt the ostrich approach, we had to stand up and embrace this change. In fairness to the teachers & students within that period, we saw a lot of them taking up these challenges head-on. Because destruction was sudden, teachers weren't really prepared, but we saw them take up crash-courses, improve upon professional development, learning how to use various technology tools, just to ensure learning continued even though the pandemic was on.” In using thirdspaces to challenge the “at school or not” binary, some students have been better able to participate and learn than they ever were in the classroom. Classrooms were not designed for all learning styles, and with thirdspace learning, “some of the underlying logics, assumptions and norms that make people feel excluded and alone within [institutionalized spaces] are unmasked and made visible”—a practice that can lead to greater inclusion, self-expression, and change. Neurodivergent students, for example, seem to be better able to thrive in at-home learning, where they are able to be in a familiar environment so the novelty of learning is not overwhelming. A 2017 report from the All Party Parliamentary Group on Autism (APPGA) in England presented survey results showing that “fewer than half of children and young people on the autism spectrum say they are happy at school; seven in ten say that their peers do not understand them, and five in ten say that their teachers do not know how to support them.” Sean Arnold, a special educator and STEM coach in NYC, noticed a significant change when his students were working from home, saying ‘I had students who were selectively mute, and had never spoken to their peers in school in person. But because they had a familiar space… they literally spoke to their classmates for the first time in remote learning. I think that's meaningful.' He also noted a trend: nearly all of the remote students with whom he works showed more growth than in-person classmates.” An article by Eva Tesfaye for NPR suggests that some students with autism and other neurological differences tend to focus better without other classmates around. Bobby, a sixth grader in western Massachusetts, told NPR that he likes online learning because “it's a lot easier to focus. I can be in my room and be a lot more comfortable doing stuff.” It's worth noting that virtual learning isn't always the best solution for neurodivergent students, particularly in situations when remote learning requires significant support from parents, when certain learners need to focus on developing social skills with classmates, or when remote learning conflicts with meeting other objectives in a student's Individualized Education Plan. That said, there is a growing and vocal contingent of parents, teachers, and students who want to permanently incorporate virtual or at-home learning as a resource. Which leads us to the part where we look forward. How can we achieve A Brighter Future in regards to education? What opportunities can we take action on today? Our goal is to make education equitable, inclusive, accessible, available to all ages, & resilient – in spite of existing infrastructure gaps and climate challenges. That means there's still a need to ensure public access to at least the basics of education. It's hard to quantify the spillover benefits of public education, but society can only gain in both economic prosperity and overall quality of life by continuing to invest in it. I've put together a number of specific areas that, if we focus our attention, we can have the largest impact on future prosperity. First, invest in educating girls worldwide. UNESCO lists several compelling statistics on their website that demonstrate the value of education at the individual level (“just one more year of school can increase a girl's earnings, when she is an adult, by up to 20%”) and at the more macroeconomic level (“some countries lose more than US $1 billion a year by failing to educate girls to the same level as boys”). Dr. Olurinola works to expand what girls see as possible for themselves in STEM fields. Although girls in Nigeria knew they could be Doctors, that was the only job they could see themselves in. [Dr. Oluwakemi Olurinola] “Over time, especially in this climate of gender stereotypes of the place of a woman and types of career that she can or cannot do. To change this narrative, we started “Girls in Science & Technology” program, (in short, GISTs) so it's basically an initiative in that educating girls by providing girls the opportunity to learn about STEM. I remember in that particular time I ran a program and invited 70 girls. I asked which of them wanted to be medical doctors, and everyone's hands went up. I had only one person in that room who was considering a career in engineering. I realized they loved science, but they didn't know what other career options were available to them. So you have the problem of awareness. One of the things that I love to do is show them videos of women who are trailblazing in different career paths in science & tech fields so they know this is a possibility, they have people they can look up to and mentors they can say ‘okay, if she can do it, why can't I also do it if I have an interest in this field?'.” Our next actionable and necessary step is to actively work to remove racist ideas and other systemic discrimination from the curriculum and the classroom. We can instead increase messages of inclusion and respect. Another thing to think about is reimagining our education delivery methods. One model, called Teaching at the Right Level (TaRL), attempts to sort students based on their current knowledge & learning level rather than their age. The method was pioneered in India and rolled out to ten African countries by mid-2020. Whether that method works here or anywhere is yet to be determined, but we have to be willing to be bold if we want to make big, lasting change. Where possible, we should also be working to improve learning opportunities with technology. This includes making accommodations for students with (autism spectrum disorder, or) ASD or who learn better in familiar environments. Students from The National Autistic Society's Young Ambassadors Group in England submitted a 7-point plan for how they believe schools should do things differently for students with ASD, including things like 1) tackle bullying more effectively, 2) provide safe spaces, including a quiet room that is always available to students with ASD, and 3) understand that students on the Autism Spectrum may have sensory differences, and may be particularly sensitive to things like light and noise. In addition, schools can work to use technology to enhance learning that's already happening in the current system. Dr. Olurinola explored matching specific technologies to different lessons to solidify concepts. [Dr. Oluwakemi Olurinola] “We see that different kinds of content require different kinds of engagement. One of the most common tools is Powerpoint. The Powerpoint presentation doesn't address every form of engagement. For instance, I want to teach math. There are other math tools that allow you to collaborate. For instance, if I'm using one and sharing that note with all my students, they all can collaborate in that space to solve that math problem. That has a better output than presenting rigid content using Powerpoint. Because it's there and easy to use, sometimes it's abused. For instance, I'm teaching a literature class, let's say you wrote a book about Tech Humanism. One of the ways to bring to light that content, is to actually Skype with you or have you on Zoom and have my students connect with you via live session and ask you questions about the content that you have written in your book. This is something we can do because technology enables it. It would be difficult for you to come into my classroom, but we can do this in real time because we have technology enabling, and the learning on that topic is actually enhanced.” In our increasingly digital world, we also need to teach both critical media and digital  literacies. The rise of misinformation and disinformation suggests that more people would benefit from skills in reading comprehension, critical thinking, and questioning motives driving media and institutions. A study published in PNAS in 2020 used Facebook's “Tips to Spot Fake News” article to create a short course and quiz which was given to five-thousand participants. The result? People's ability to spot fake news increased by 26.5%. This also means teaching kindness and empathy. If our goal is global equity, that means thinking of ourselves as a global community and using technology to showcase our authentic selves. Dr. Olurinola spoke to me about how she teaches her students to think of themselves as members of a global community. [Dr. Oluwakemi Olurinola] “I know that the fusion of technologies is beginning to blur, therefore I believe that the effort should be focused towards global competencies for our students, because the world has become more interconnected. Coming from a developing country, we know that it becomes more imperative that we train our students to be globally competent, to develop the skills to know how to live, learn, and work even in the global village. As we make these global connections because people are working remotely, and you have more global communities rising, our students need to know how to successfully navigate and interact within the digital space. Things like kindness and empathy. There isn't really a dichotomy between your online self and offline persona. Your online and offline persona should be the same. So if I'm kind as a person, even when I'm online and using tech, I should be kind in my use of tech and kind when I'm online engaging in the digital space. We need to learn how to be good citizens, how to develop global competences, and also to appreciate differences when they exist. For me, that's the future I see.” Along those lines, we also need to teach young people the human skills they need for the future workplace. I spoke with Dr. Rumman Chowdry, who is currently the Director of the Machine Learning Ethics, Transparency, and Accountability team at Twitter, about the dichotomy between our education system and the workplace, and the skills taught vs the skills needed. [Dr. Rumman Chowdhury] “If I were to pick one thing that got me the most interested in this technology, it's actually the potential for EdTech. What it should be is a complete reimagining of education. Because for one, educational systems do not help people get jobs or do well at their jobs. People joke that the number one skill you need to learn in college is Excel, and that's the one thing they don't teach you. So there's this disconnect between the real world and the jobs we get and then educational systems and how they're structured. We know there's inequality. There's just so much that can be resolved with this tech, whether it's remote learning or customized learning. When I started my job at Accenture, even before then, people were talking about lifelong learning, and how AI really means we have to embrace learning and think about how we're going to spend the rest of our lives educating us. What amazing aspirations! I sincerely hope that what we don't do is try to stick technology into the broken infrastructure that is our education system. That would be a disservice to us as humanity, but also to technology and its potential. KO: Is it true or not that once you use technology to accelerate a system, where it breaks might be instructive about where those institutions are already failing us? RC: Specifically using the education example, there are so many people that have already looked at the inefficiencies of these systems, what does/doesn't work, and if we really think about this in regards to human self-determination… what is the purpose of this system? Can we take a step back and emotionlessly ask, ‘is it serving the purpose it is intended to serve?' There are plenty of people pointing out the systemic flaws. Now we have technologies that could be designed to solve these problems, rather than reinforce the power imbalance and structural inequalities, and we're going to ignore what these people say because it's easier to perpetuate, amplify, and cement these inequalities rather than do the extra work to fix things.” Some of the skills that will be most in-demand are difficult-to-automate manual skills, like plumbing and other fine motor work, and the skills commonly called “soft”—usually mature versions of unique-to-human abilities such as making decisions in context, judgment calls, nuanced management, leading with emotional intelligence, and so on. As the future workplace remains uncertain, we also need to teach humans to be adept at making meaning. If our identities are tied too closely with our jobs, many people are in for a massive loss of self as the upheaval in the job marketplace forces millions of people to change career paths as we build our way to the ideal future. One way to fight this is to have a better sense of how we make meaning in our lives, and how we can begin something new without losing track of ourselves. This is by no means an exhaustive list, but consider it a blueprint to build and amend as we go. Taken as a whole, this may sound like a lot of work, but if we all focus on one thing we can influence, our combined efforts can build a future that works for everyone.

    A Brighter Future for Education (using Technology!)

    Play Episode Listen Later Mar 24, 2022 28:00


    On this week's episode, we're rethinking education to bring about a brighter future for humanity. I speak with guests about the ways technology has changed the way we think about what's possible for education, as well as how we can challenge our assumptions to make the system work better for all humans. Which technologies can […]

    The Tech Humanist Show: Episode 19 – Art Chang

    Play Episode Listen Later Jun 21, 2021 54:48


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live in a live-streamed video program before it's made available in audio format. Hosted by Kate O'Neill. About this week's guest: Art Chang is a mayoral candidate for the city of New York. The son of Korean immigrants and father of 2 boys, Art Chang has spent the last 35 years working as a professional problem solver in NYC. He's built a dozen startups in the city, all focused on using technology as a force for good. He built Casebook, the first web-based software platform for child welfare, which is now the system of record in the State of Indiana. He put Queens West -- the LIC waterfront -- in the ground with climate change in mind, making it one of only 2 developments in the city to not lose power during Hurricane Sandy. He also co-created NYC Votes with the Campaign Finance Board to improve participation in our local democracy. He has had the privilege to work at some of NYC's most important institutions, such as CUNY, the Brooklyn Public Library, the City Law Department, and Brooklyn Tech, giving him the tools and knowledge to make real solutions for the challenges faced by the people of New York City. Visit www.Chang.nyc to learn more and join #TeamChang. He tweets as @Art4MayorNYC. This episode streamed live on Thursday, June 17, 2021.

    The Tech Humanist Show: Episode 19 – Art Chang

    Play Episode Listen Later Jun 21, 2021 54:48


    About this episode's guest: Art Chang is a mayoral candidate for the city of New York. The son of Korean immigrants and father of 2 boys, Art Chang has spent the last 35 years working as a professional problem solver in NYC. He's built a dozen startups in the city, all focused on using technology […]

    The Tech Humanist Show: Episode 18 – Cathy Hackl

    Play Episode Listen Later Nov 20, 2020 43:48


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this week's guest: Futurist, speaker, and author Cathy Hackl is a globally recognized augmented reality, virtual reality and spatial computing thought leader. She’s been named one of the top 10 Tech Voices on Linkedin for two years in a row, the highest honor on the platform. She currently works as part of the Enterprise team at one of the industry's top OEMs. Prior to that, Cathy was the lead futurist at You Are Here Labs, where she led agencies, brands and companies in applying Augmented Reality and Virtual Reality for marketing and training working with brands like AT&T & Porsche. Hackl worked as a VR Evangelist for HTC VIVE during the launch of their enterprise VR headset and during the company’s partnership with Warner Brothers’ blockbuster, Ready Player One. She's the co-author of Marketing New Realities, the first VR AR marketing book ever written. She also worked as Chief Communications Officer for cinematic VR studio Future Lighthouse, where she collaborated on projects with Sony Pictures Entertainment, Oculus, Beefeater, and William Morris Endeavor. Hackl has been featured in media outlets like Forbes, Barron’s, Salon, VentureBeat, Digiday, Tech Target, CMO.com, and Mashable. She is a global advisor for VR AR Association and was recognized in 2016 by NBC News as one of the top Latina women working in VR. Before working in spatial computing and technology, she worked as a communicator at media companies such as CNN, Discovery, and ABC News and was nominated in 2007 for an EMMY Award for her storytelling work. She's also the creator of the world’s first holographic press release and loves all things spatial computing, artificial intelligence and futurism. Cathy is currently working on her second book The Augmented Workforce: How AI, AR, and 5G Will Impact Every Dollar You Make. She’s co-authoring the book with John Buzzell. She tweets as @CathyHackl. This episode streamed live on Thursday, November 12, 2020.

    The Tech Humanist Show: Episode 18 – Cathy Hackl

    Play Episode Listen Later Nov 20, 2020 43:48


    About this episode's guest: Futurist, speaker, and author Cathy Hackl is a globally recognized augmented reality, virtual reality and spatial computing thought leader. She's been named one of the top 10 Tech Voices on Linkedin for two years in a row, the highest honor on the platform. She currently works as part of the Enterprise […]

    The Tech Humanist Show: Episode 17 – Caleb Gardner

    Play Episode Listen Later Nov 13, 2020 56:52


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Caleb Gardner, in his more than a decade of experience in digital leadership, entrepreneurship, and social impact, has worked for a variety of organizations in the public and private sectors, including at prestigious professional service firms like Bain & Company and Edelman. During the second Obama Administration, he was the lead digital strategist for President Obama’s political advocacy group, OFA. He brought his unique insights to growing one of the largest digital programs in existence, with a millions-strong email list and massive social media following—including the largest Twitter account in the world. Now as a founding partner of 18 Coffees, a strategy firm working at the intersection of digital innovation, social change, and the future of work, he’s helping forward-thinking companies and nonprofits adapt and evolve to meet the challenges of today’s economy. He speaks, trains, and leads workshops around the world on topics related to change, including strategy in a mission economy, technology and innovation for a better world, and change management at the speed of digital. He tweets as @CalebGardner. This episode streamed live on Thursday, November 5, 2020.

    The Tech Humanist Show: Episode 17 – Caleb Gardner

    Play Episode Listen Later Nov 13, 2020 56:52


    About this episode's guest: Caleb Gardner, who in his more than a decade of experience in digital leadership, entrepreneurship, and social impact, has worked for a variety of organizations in the public and private sectors, including at prestigious professional service firms like Bain & Company and Edelman. During the second Obama Administration, he was the […]

    The Tech Humanist Show: Episode 16 – Yaël Eisenstat

    Play Episode Listen Later Nov 6, 2020 57:19


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Yaël is a thought leader, democracy activist and strategist working with governments, tech companies, and investors focused on the intersection of ethics, tech, democracy, and policy. She has spent 20 years working around the globe as a CIA officer, a White House advisor, the Global Head of Elections Integrity Operations for political advertising at Facebook, a diplomat, a corporate social responsibility strategist at ExxonMobil, and the head of a global risk firm. Currently, she is a Visiting Fellow at Cornell Tech's Digital Life Initiative, where she explores technology's effects on civil discourse and democracy and teaches a multi-university course on Tech, Media and Democracy. Yaël has become a key voice and public advocate for transparency and accountability in tech, particularly where real-world-consequences affect democracy and societies around the world. Her recent TED talk addresses these issues and proposes ideas for how government and society should hold the companies accountable. Yaël travels internationally as a keynote speaker at any number of venues seeking informed, inspirational women to help make sense of our world's most difficult challenges. She can be booked through the Lavin Agency. Yaël was named to Forbes' 2017 list of “40 Women to Watch Over 40”. She is also an Adjunct Professor at NYU's Center for Global Affairs, a member of the Council on Foreign Relations, and she provides context and analysis on national security, elections integrity, political and foreign affairs in the media. She has been published in the New York Times, the Washington Post, Brookings Techstream, TIME, WIRED, Quartz and The Huffington Post, has appeared on CNN, BBC World News, Bloomberg News, CBS News, PBS and C-SPAN, in policy forums, and on a number of podcasts. She earned an M.A. in International Affairs from the Johns Hopkins School of Advanced International Studies (SAIS). More than anything, she is passionate about using her background and skills to help foster reasoned, civil discourse. She tweets as @YaelEisenstat. This episode streamed live on Thursday, October 29, 2020.

    The Tech Humanist Show: Episode 16 – Yaël Eisenstat

    Play Episode Listen Later Nov 6, 2020 57:18


    About this episode's guest: Yaël is a thought leader, democracy activist and strategist working with governments, tech companies, and investors focused on the intersection of ethics, tech, democracy, and policy. She has spent 20 years working around the globe as a CIA officer, a White House advisor, the Global Head of Elections Integrity Operations for […]

    The Tech Humanist Show: Episode 15 – Abhishek Gupta

    Play Episode Listen Later Oct 30, 2020 51:27


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Abhishek Gupta is the founder of Montreal AI Ethics Institute (https://montrealethics.ai ) and a Machine Learning Engineer at Microsoft where he serves on the CSE Responsible AI Board. He represents Canada for the International Visitor Leaders Program (IVLP) administered by the US State Department as an expert on the future of work. He additionally serves on the AI Advisory Board for Dawson College and is an Associate Member of the LF AI Foundation at the Linux Foundation. Abhishek is also a Global Shaper with the World Economic Forum and a member of the Banff Forum. He is a Faculty Associate at the Frankfurt Big Data Lab at the Goethe University, an AI Ethics Mentor for Acorn Aspirations and an AI Ethics Expert at Ethical Intelligence Co. He is the Responsible AI Lead for the Data Advisory Council at the Northwest Commission on Colleges and Universities. He is a guest lecturer at the McGill University School of Continuing Studies for the Data Science in Business Decisions course on the special topic of AI Ethics. He is a Subject Matter Expert in AI Ethics for the Certified Ethical Emerging Technologies group at CertNexus. He is also a course creator and instructor for the Coursera Certified Ethical Emerging Technologist courses. His research focuses on applied technical and policy methods to address ethical, safety and inclusivity concerns in using AI in different domains. He has built the largest community driven, public consultation group on AI Ethics in the world that has made significant contributions to the Montreal Declaration for Responsible AI, the G7 AI Summit, AHRC and WEF Responsible Innovation framework, PIPEDA amendments for AI impacts, Scotland’s national AI strategy and the European Commission Trustworthy AI Guidelines. His work on public competence building in AI Ethics has been recognized by governments from North America, Europe, Asia, and Oceania. More information on his work can be found at https://atg-abhishek.github.io He tweets as @atg_abhishek. This episode streamed live on Thursday, October 22, 2020.

    The Tech Humanist Show: Episode 15 – Abhishek Gupta

    Play Episode Listen Later Oct 30, 2020 51:27


    About this episode's guest: Abhishek Gupta is the founder of Montreal AI Ethics Institute (https://montrealethics.ai ) and a Machine Learning Engineer at Microsoft where he serves on the CSE Responsible AI Board. He represents Canada for the International Visitor Leaders Program (IVLP) administered by the US State Department as an expert on the future of […]

    The Tech Humanist Show: Episode 14 – Neil Redding

    Play Episode Listen Later Oct 23, 2020 58:00


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Neil Redding is Founder and CEO of Redding Futures—a boutique consultancy that enables brands and businesses to engage powerfully with the Near Future. His rare multidisciplinary perspective draws on the craft of software engineering, the art of brand narrative and expression, and the practice of digital-physical experience strategy. Prior to founding Redding Futures, Neil held leadership roles at Mediacom, Proximity/BBDO, Gensler, ThoughtWorks and Lab49. He tweets as @neilredding. This episode streamed live on Thursday, October 15, 2020.

    The Tech Humanist Show: Episode 14 – Neil Redding

    Play Episode Listen Later Oct 23, 2020 57:59


    About this episode's guest: Neil Redding is Founder and CEO of Redding Futures—a boutique consultancy that enables brands and businesses to engage powerfully with the Near Future. His rare multidisciplinary perspective draws on the craft of software engineering, the art of brand narrative and expression, and the practice of digital-physical experience strategy. Prior to founding […]

    The Tech Humanist Show: Episode 13 – Ana Milicevic

    Play Episode Listen Later Oct 16, 2020 44:34


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Ana Milicevic is an entrepreneur, media executive, and digital technology innovator. She is the co-founder and principal of Sparrow Advisers, a strategic consultancy helping marketers and C-suite executives navigate the data-driven adtech and martech waters. A pioneer of digital data management in advertising, Ana was responsible for the development of the Demdex platform (now Adobe Audience Manager) from its early days through its successful acquisition and integration into the Adobe Digital Marketing suite. Prior to starting Sparrow she established Signal's Global Strategic Consulting group and helped Fortune 500 customers adopt advanced and predictive analytics across their marketing, ad ops, and digital content business units at SAS. Her consulting portfolio includes working for the United Nations, executing initiatives in 50+ countries, and advising companies on go-to-market strategies all around the globe. Ana is frequently quoted by media powerhouses like The Wall Street Journal and Business Insider (who in 2018 named her as one of 23 industry leaders working on fixing advertising) as well as industry trades like AdWeek, AdAge, Digiday, Marketing Magazine, AdExchanger, and Exchangewire. She is a sought-after speaker on topics of adtech, martech, innovation, customer experience, data management and new frontiers of technology. She tweets as @aexm. This episode streamed live on Thursday, October 8, 2020.

    The Tech Humanist Show: Episode 13 – Ana Milicevic

    Play Episode Listen Later Oct 16, 2020 44:34


    About this episode's guest: Ana Milicevic is an entrepreneur, media executive, and digital technology innovator. She is the co-founder and principal of Sparrow Advisers, a strategic consultancy helping marketers and C-suite executives navigate the data-driven adtech and martech waters. A pioneer of digital data management in advertising, Ana was responsible for the development of the […]

    The Tech Humanist Show: Episode 12 – Dr. Sarah T. Roberts

    Play Episode Listen Later Oct 9, 2020 55:34


    About this episode's guest: Sarah T. Roberts is an Assistant Professor in the Department of Information Studies, Graduate School of Education & Information Studies, at UCLA. She holds a Ph.D. from the iSchool at the University of Illinois at Urbana-Champaign. Prior to joining UCLA in 2016, she was an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, Ontario for three years. On the internet since 1993, she was previously an information technology professional for 15 years, and, as such, her research interests focus on information work and workers and on the social, economic and political impact of the widespread adoption of the internet in everyday life. Since 2010, the main focus of her research has been to uncover the ecosystem – made up of people, practices and politics – of content moderation of major social media platforms, news media companies, and corporate brands. She served as consultant to and is featured in the award-winning documentary The Cleaners, which debuted at Sundance 2018 and aired on PBS in the United States in November 2018. Roberts is frequently consulted by the press and others on issues related to commercial content moderation and to social media, society and culture, in general. She has been interviewed on these topics in print, on radio and on television worldwide including: The New York Times, Associated Press, NPR, Le Monde, The Atlantic, The Economist, BBC Nightly News, the CBC, The Los Angeles Times, Rolling Stone, Wired, The Washington Post, Australian Broadcasting Corporation, SPIEGEL Online, and CNN, among many others. She is a 2018 Carnegie Fellow and a 2018 recipient of the EFF Barlow Pioneer Award for her groundbreaking research on content moderation of social media. She tweets as @ubiquity75. This episode streamed live on Thursday, October 1, 2020. Here's an archive of the show on YouTube: About the show: The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O'Neill. Subscribe to The Tech Humanist Show hosted by Kate O'Neill channel on YouTube for updates. Transcript 01:43all right01:44hey humans01:48how we doing out there come on in start01:50gathering around the uh the old digital01:52campfire01:54let me hear from those of you who are in01:55line uh right now tell me01:57tell me who's out there and tell me01:59where you're tuning in from02:01i hope you're starting to get your02:02questions and thoughts ready02:04for our guest i'm sure many of you have02:06already seen who our guest is and i'll02:07be reading her bio here in just a moment02:09so start thinking of your questions02:11about commercial content moderation and02:13what you want to02:14know about that and you know all that02:17kind of stuff02:18uh i hear sarah laughing in the02:19background it's not to laugh02:22really good valid questions i think i02:25was just snorting02:26honestly through my uh through my sinus02:29trouble02:30so uh welcome to those of you who are02:32all tuned in welcome to the tech02:34humanist show this is a multimedia02:36format program02:37exploring how data and technology shape02:39the human experience02:41and i am your host kate o'neil so i hope02:44you'll subscribe and follow wherever02:45you're catching this02:46so that you won't miss any new episodes02:49i02:50am going to introduce our guest here in02:51just a moment uh one one last shout out02:53if anybody's out there wanting to say hi02:56feel free02:56you are welcome to comment and i see a02:59bunch of you03:00online so feel free to tune uh03:03comment in and tell me who you are and03:05where you're tuning in from03:07but just get those you know type in03:08fingers warmed up because we're gonna03:10want you to03:10to weigh in with some questions and03:12comments as the show goes on03:14but now i'll go ahead and introduce our03:17esteemed guest so today we have the03:19very great privilege of talking with03:21sarah t roberts who03:22is an assistant professor in the03:24department of information studies03:26graduate school of education and03:28information studies at ucla03:30she holds a phd from the ischool at the03:32university of illinois urbana-champaign03:34my sister's school i went to university03:36of illinois chicago03:38prior to joining ucla in 2016 she was an03:40assistant professor03:42in the faculty of information and media03:44studies at western university in london03:46ontario for three years03:47on the internet since 1993 she was03:50previously an information technology03:52professional for 15 years and as such03:54her research interests focus on03:56information work and workers and on the03:58social03:59economic and political impact of the04:01widespread adoption of the internet in04:02everyday life right totally04:06so since 2010 the main focus of her04:08research has been to uncover the04:10ecosystem04:11made up of people practices and politics04:14of content moderation of major social04:16media platforms04:17news media companies and corporate04:19brands04:20she served as consultant tune is04:21featured in the award-winning04:22documentary04:23the cleaners which debuted at sundance04:26201804:27and aired on pbs in the united states in04:29november04:30 so roberts is frequently consulted04:33by the press and others on issues04:34related to commercial content moderation04:36and to social media society and culture04:38in general04:39she's been interviewed on these topics04:41in print on radio04:42on television worldwide and now on the04:44tech humanist show04:45uh including the new york times04:47associated press npr04:48le monde the atlantic i mean this list04:50is going to go on and on so04:52buckle in folks the economist bbc04:55rolling stone wired and picking and04:57choosing now it's a really really04:59impressive list of media05:00she's a 2018 carnegie fellow and a 201805:04recipient of the eff barlow05:06pioneer award for her groundbreaking05:08research on content moderation05:10of social media so audience again please05:12start getting your questions ready for05:13our outstanding guest05:15please do note as a live show i well05:17i'll do my best to vet comments and05:19questions in real time05:20we may not get to all of them but very05:23much appreciate05:24you being here tuned in and05:25participating in the show so with that05:27please welcome uh our dear guest05:31sarah t roberts and you are live on the05:34show05:34sarah thank you so much for being here05:37thank you uh05:38thanks for the invitation and thanks to05:40your audience and05:41uh all those interested folks who are05:44spending time with us today i'm really05:45grateful05:46for the opportunity we've already got uh05:48david polgar05:49saying excited for today's talk hey our05:52buddy05:53dave drp05:54[Laughter]05:56all right so i wanna talk right away05:59about your um06:01your book behind the screen i i hadn't06:03had a chance to read and until i was06:05preparing for the06:06show and it was it was wonderful to get06:07a chance to dig into your research06:09so tell us a little bit about that came06:11out last year is that right06:13um yeah it just just a little over a06:15year ago uh06:16came out on on yale university press06:19um you know the academic06:23publishing cycle is its own beast it's06:25its own world06:26it uh as it relates to06:29um kind of like journalism and and06:31mainstream press timelines it's much06:33slower06:34that said uh i wrote the book in about a06:37year which is about a normal06:39a normal cycle but it took about eight06:42years to put together the research that06:44went into the book06:46and this is because when i started my06:48research in 201006:50which you know we say 2010 it seems like06:53yesterday that was a decade ago now06:55you know if we're in terminable 202006:59you know which is which is a million07:01years long so far but07:03back in 2010 when i started looking into07:05this topic as a07:07as a doctoral researcher at the07:09university of illinois07:10uh you know there were a lot of things07:12stacked against that endeavor07:14including the fact that i was a doctoral07:16student at the university of illinois i07:17had no cachet i had very few07:20like material resources um you know to07:23finance07:24a study that would require uh07:27at the end of the day required going07:29around the world quite literally07:32but maybe the biggest barrier at the07:34time was the fact07:36that i was still fighting an uphill07:38battle trying to tell people07:40that major mainstream social media07:43platforms07:44were engaged in a practice that is now07:47weirdly um you know a phrase that you07:51might say around the dinner table and07:52everyone would get which is content07:54moderation07:55and that further when i would um raise07:58the issue08:00and and bring up the fact that firms08:01were engaged in this practice which08:04you know has to do with the adjudication08:06of people's08:08self-expression online and sits08:10somewhere between users08:13and the platform and then the platform's08:15recirculation of users material08:18uh you know people would argue with me08:20at that point08:22about the fact that that practice would08:24even go on08:25and then when i would say that uh you08:27know kind of offer08:28incontrovertible proof that in fact it08:30did go on uh08:32then we would uh find ourselves in a08:34debate about whether or not08:36it was a legion of human beings08:40who was undertaking this work or uh in08:43fact it was computational08:45now in 2010 in 2020 the landscape is08:48complicated but in 201008:51the technology and the sort of08:53widespread adoption08:54of of computational uh08:58automated let's say algorithmic kinds of09:01content moderation or machine learning09:03and forum content moderation was not a09:05thing09:05it was humans and so i had to start the09:09conversation09:10so far below baseline09:14that it you know it took uh it took09:17quite a lot of effort just to get09:19everybody on the same page to discuss it09:22and you know when i'm talking about09:24uh engaging in these conversations i09:27mean just like trying to vet this as a09:29as an appropriate research topic at the09:32graduate school you know what i mean09:34like to get faculty members09:36many of whom were world experts in in09:39various aspects of uh of the internet or09:42of09:42media or information systems themselves09:46um it was new to them too that was did09:49you originally frame it was it it's a09:51question of how09:52is this done or what was the original09:54framework of that question yeah09:56so i'll tell you a little bit about the09:57origin of why i got interested10:00and it's something that i write about in10:01the book because i think it's so10:03important to acknowledge kind of those10:06those antecedents i had read i was10:08actually teaching down at the university10:10of illinois in the summer10:12of 2010 and i was on a break from10:15teaching and10:16you know probably drinking a latte which10:18is what i'm doing right now10:19and um and uh uh reading the paper i was10:23reading the new york times and there was10:24a very small10:26uh but compelling article in the new10:28york times about a group of workers10:30who were there there were a couple of10:32sites they mentioned but there was in10:33particular a group of workers in rural10:35iowa well here i was sitting in rural10:38central illinois thinking about this10:40group of workers in rural iowa as10:42profiled in this piece10:44who were in fact engaging in what we now10:46know as commercial content moderation10:48they were working10:49in effectively a call center uh10:53adjudicating content for unnamed kind of10:55you know10:56media sites websites and social media10:59properties11:00and i kind of circulated that article11:03around i shared it with friends i shared11:05it with my colleagues and i shared it11:06with professors and11:07the argument that i made was that it was11:10it was multifaceted first of all it11:12sounded like a miserable11:14job and guess what that has been borne11:16out it is a11:17very difficult and largely unpleasant11:20job11:21uh so i was captivated by that fact that11:24there were these you know11:25unnamed people who a generation or two11:28ago would have been on a family farm11:30who were now in the quote unquote11:32information economy but seemed to be11:34doing11:34a drag just awful work11:38uh but also there was this bigger issue11:41of11:42uh you know really having this this big11:44reveal11:45of the of the actual11:48ecosystem an unknown here for unknown11:51portion of the social media ecosystem11:54effectively letting us know how the11:56sausage was being made right11:58and yet if you were to look at any of12:01the12:02the uh the social media platforms12:05themselves or any of the discourse at12:06really high levels in12:08industry or in regulatory bodies this12:11was not12:12this was a non-starter but i was was12:14arguing at the time12:16that how content was being adjudicated12:18on the platforms12:20under what circumstances under what12:23conditions and under what policies was12:25in fact12:27maybe the only thing that mattered at12:29the end of the day12:30right now in 2010 that was a little bit12:32of a harder case to make12:34by 2016 not so much after we saw the uh12:38the ascent of donald trump in the united12:40states we saw brexit12:42we saw uh this the rise of bolsonaro and12:45in brazil largely12:46uh attributed to um12:49social media campaigns there and kind of12:52discontinued sustained12:54support through those channels uh and12:57here we are in 2020 where uh13:00we might argue or we might claim that13:02misinformation and disinformation online13:04is one of the primary13:06concerns of civil society today13:09and i would put front and center13:13in those all of those discussions13:16the fact that social media companies13:18have this incredible immense power13:20to decide what stays up and what doesn't13:24and how they do it and who they engage13:27to do it13:28should actually be part of the13:30conversation if not13:31i would argue that it's a very13:33incomplete conversation so when i talk13:35about like the13:36scholarly publishing cycle it took a13:39year to put the book out right but it13:40took eight years to amass the evidence13:44to um to do the to the interviews and13:47media that you mentioned13:48to converse with industry people at the13:51top levels eventually but13:52you know starting at the bottom with the13:54workers themselves to find workers who13:56are willing13:56to talk to me and break those13:58non-disclosure agreements that they were14:00under um and to kind of create also14:04a a locus of activity for other14:07researchers and scholars and activists14:09who are also interested in in uncovering14:12uh this area and really sort of create14:14co-create a field of study so that's14:17what took eight years it took a year to14:18get the book out14:19um but all that legwork of proving in a14:22way14:23that this mattered took a lot longer i14:25don't have to make that same case14:27anymore14:27as i'm sure you you can imagine um14:30people people are interested they're14:33concerned14:34and um they want to know more they're14:36demanding a lot more14:38um from firms as users14:41you know as people who are now engaged14:43in social media in some aspect14:45of their lives every day need i say more14:48about zooming14:49constantly which is now our you know our14:52primary14:53medium of connection for so many of us14:55in our work lives even14:57yeah hey we already have a question from15:00our buddy drp david ryden-polgar let me15:04uh15:04put this against the background we can15:06actually see it here uh15:08he says sarah would love to hear your15:10thoughts on section 2315:12230 and how any potential changes would15:15impact content moderation15:16so we're going right in right deep yeah15:19really15:20so um let me try to flush that out a15:22little bit15:24for others who aren't um you know inside15:26quite as as deep15:28um section 230 is15:31a part of the uh communications decency15:34act which goes back to 1996 but15:36effectively what what anyone needs to15:38know about section 230 is that15:40it's the it it's sort of the legal15:42framework15:43that informs social media companies15:48rights and responsibilities around15:51content15:52when we think about legacy media um15:55so-called uh broadcast television for15:58example or other other forms of of media16:01that we consume16:02you know i always bring up the the16:04example of george carlin who16:06famously um uh16:10you know made a career out of the seven16:12dirty words that you couldn't say16:13on radio right so there are all kinds16:16of governing uh16:19legal and other kinds of norms about16:22what is allowed and disallowed in some16:24of these legacy media16:26when it comes to social media however16:30there is a pretty16:35drastically contrasted permissiveness16:38that is in place uh that16:41seeds the power of the decision-making16:44around16:45what is allowable and what is not16:46allowable to the platforms themselves so16:49this is a really different kind of16:50paradigm right16:52and it's section 230 that allows that16:54that's the16:55that's the precedent that's the that's16:57the guidance uh16:58legally that uh that provides that kind17:01of17:02uh both responsibility and discretion17:05and what it does is it allows the17:07companies17:08um to make their own decisions17:12effectively17:13about what policies they will follow17:15internally now this doesn't go for17:17every single piece of content you know17:18one of the the biggest examples that17:21uh that this does not cover is child17:24sexual exploitation material which is17:25just illegal full stop it doesn't matter17:28if platforms wanted to traffic in that17:30material or not it's illegal17:32but beyond that just to certain to a17:35certain extent what section 230 allows17:38is for platforms to redistribute17:42effectively material that other people17:44submit17:45uh without being held liable for that17:47material17:48and so if we think about that that's17:50actually the business model of social17:51media17:52the business model of social media is to17:54get other people to create content17:56upload it circulate it and engage with17:59it download it18:00and effectively the platforms have um18:03you know argued and claimed that they18:04are really18:05you know don't kill the messenger right18:07like they're just like the18:08the the apparatus by which this material18:10gets shared18:12i think that um18:15you know at one time that really made18:16sense particularly when the18:18when this uh when the communications18:20decency act was passed and this goes18:22back in18:23into the mid 90s when what was18:26kind of imagined as needing this this18:29uh reprieve from liability was an isp an18:33internet service provider18:35which at that time uh i guess the most18:38imaginative version of that you could18:40think of would be america online for18:41those of you who18:42remember that on the program shout out18:45to the aol days yeah18:47right aol like all the you know the18:49discs and cd-roms you got and used as18:51coasters18:52um but you know back in that time but an18:55internet service provider really was a18:57pass-through in some cases you know i18:58knew a guy who ran an isp locally19:01he really just had a room with a with a19:03huge internet pipe coming in19:06and a wall of modems and you would dial19:08up through your modem and connect19:10through and then be on the internet to19:11some other service19:12so that was the model then but the model19:15now19:15uh is you know multi-billion dollar19:19transnational corporations19:21uh who have immense power in decision19:24making around content19:26and yet are are uh19:29in the american context at least largely19:32not liable for those decisions19:34uh legally or or otherwise um19:38making incredibly powerful19:42decisions about what kind of material we19:45all see and engage in19:47and what is permissible and what is not19:49online and they do that at their19:50discretion well if they're doing that at19:52their discretion19:54do you think that they're largely going19:56to um19:58fall into a mode of altruism and like20:01what's best20:01for civil society are they going to look20:03at their bottom line20:05and their shareholder demands and20:07respond to that i mean20:09the audience yeah i mean frankly20:12publicly traded companies20:13have a legal mandate to respond to their20:15shareholders and to generate revenue for20:17them so20:18um when those things are at odds when20:20when those things are aligned with20:22what's good for you know20:23america is good for uh facebook's20:26internal policies around content20:28moderation that works out great20:29but if there's you know if ever those20:32two pathways should diverge20:34we know which one they're going to fall20:35under and there's just there's very20:37little20:37um legal consequence or legal uh20:41expectation for uh reporting out on how20:46uh these decisions get made the way that20:48that20:49we have seen more decisions getting uh20:52publicly20:53unveiled through things like um20:56the publication of of what had been21:00previously kind of closely held secret21:03policies internally is through public21:06pressure21:06through the pressure of civil society21:08groups and advocacy groups through the21:10pressure21:11of the public through the pressure and21:13the constant threat of21:15you know things like reform to section21:17230 or other kinds of21:19regulation so it's a very interesting21:23moment and it's interesting to bring up21:24section 230 because21:26again a couple of years ago i had21:28colleagues um21:30who are in uh legal studies and who are21:34you know law professors essentially tell21:36me that 230 would soon be rendered21:38moot anyway because it's just it's it's21:41you know based on um on21:45well it should be solely relevant in the21:47united states right in the jurisdiction21:49of the united states21:50and so because these platforms were21:52going worldwide21:54uh you know there21:57it would be rendered mood well i would21:59say it's actually been the opposite22:00that's right that what is happening is22:02that section 230 is getting bundled up22:04as the norm22:06and is now being promulgated either just22:09through uh through the process of these22:13platforms going global but kind of22:14keeping their americanness and22:16keeping their um their response their22:20you know business practices largely22:22responsible to american laws first and22:24foremost22:25but also even to the point that uh you22:28know it recently22:29has become known i think more and more22:32to people like me who aren't legal22:34scholars but who have a great interest22:36in how this stuff goes down that section22:39230 like language22:41is being bundled up and put into trade22:44agreements22:45uh at the nation state level or22:48you know region level with the united22:50states and trading partners and we know22:52that22:53you know these these trade agreements22:56which have been you know huge hugely22:57politically22:59uh problematic and were a major issue in23:03fact of the 2016 election23:05uh you know they're they're they're23:07anti-democratic i mean how do you even23:09know what's in a trade agreement they're23:10totally secret23:12uh but i i learned while watching a uh23:15uh house uh subcommittee23:19uh convening about section 230 from23:22a highly placed google executive23:26that in fact their their lobbyists are23:28pushing for this kind of language in23:31in these trade agreements so we see that23:33instead of 230 becoming less relevant23:35because of the globalization23:37of american social media platforms it's23:39actually becoming a norm that is now23:42being23:43first of all it was sort of like softly23:45reproduced just because of the spread of23:47these american platforms and23:49how they were doing business but now23:50it's actually becoming codified23:52through other means means like like23:55trade agreements that the public has23:57really no23:58mechanism to intervene upon and i think24:00that's really worrisome24:02what about those mechanisms where the24:04sorry what were you gonna say24:06no okay i was just gonna say that's one24:07of my short and concise professorial24:09answers24:11let me drink a coffee well david24:14uh thanks you for that uh great24:17historical overview and i'm sure24:18the rest of our viewers and listeners do24:20too i i wonder about the ones24:22the the examples that don't have that24:25kind of24:26uh consumer involvement so i'm wondering24:28about for example24:29you know youtube and it's kids content24:32and24:33and so there have been a lot of changes24:35it seems like24:36with regard to that that platform and24:38that subject over the24:40over the last few years so can you maybe24:42give us an overview of24:43how that has gone down um24:46well i think that you know youtube is24:49such an interesting example24:51to talk about for for many reasons uh24:53for its reach and pervasiveness you know24:56it's a24:56market leader for sure it's globality i24:59would also say that youtube is25:01particularly interesting because when we25:04think about25:05uh social media content as being25:10monetized there is no greater25:13and more direct example than youtube25:15where it actually pays people who are25:17really highly successful on the platform25:19for content right25:20so like when there's no kind of like a25:23metaphor there about monetization it is25:25literally monetized right25:27um and this you know just to kind of tie25:30this back to the section 23025:31conversation25:32when we imagined isps as just path25:35pass-throughs you know that was one25:37thing but here we have25:39these huge companies like youtube and25:40others involved actively25:43in production so that kind of like25:46firewall between just being an25:48intermediary and actually being actively25:50engaged in producing media25:51has gone but the there's like a legacy25:54legal environment that it still25:56informs it so youtube you know they pay25:58producers they have these like26:01uh pretty extraordinary studios in26:05in major uh in major26:08cities around the world including la26:10where i live26:12uh they you know they are kind of the26:15go-to outlet and people26:18want to participate in youtube for all26:20sorts of reasons but there's certainly26:21you know a dollar sign reason that26:24people get involved26:25and you bring up this issue of kids26:27content26:28um again here's where we see sort of26:31like the softening and the eroding of26:33regulation too it26:35started it's it's not just youtube i26:36have to confess it's not just26:38social media companies that have eroded26:40uh you know child protections around26:42um media that that goes back to the you26:45know 40 years ago in the reagan26:47administration when there used to be26:48very stringent rules around26:50uh saturday morning cartoons for example26:52and advertising to children that could26:54go on26:55during that time uh shout out to my26:58colleague molly neeson who has worked27:00extensively on that27:01on that particular topic and that27:02erosion so27:05i see uh on on youtube again27:08a lot of the pressure to kind of reform27:11and27:11i think when you're talking about kids27:13content you're talking about27:15some of like some like really disturbing27:17and weird content that was showing up27:20um you know kind of like cheaply made27:22unknown27:23weird creepy sometimes not really27:25clearly27:27necessarily uh27:30benevolently made like you know27:33sometimes creepy sexual undertones27:36uh other kinds of stuff going on you27:38know really and really no way to know27:40that's part of the problem no way to27:42know right um27:43and then uh the massive problem of27:46trying to27:48moderate that material right um you know27:51i think of it27:52as like the the classic story of the the27:55whole27:56springing through the the dyke holding27:58the water back you know27:59you plug one hole another one springs28:02open28:02so it's a little bit falls down so the28:05whole wall28:06and then your inundated that's right28:07that's right and so28:09you know that is a good metaphor to28:10think about the problem of these like28:12kind of isolated28:14uh hot spots that explode on platforms28:17as a new social issue or maybe a new28:21uh a geopolitical conflict erupts28:25somewhere in the world it's you know28:26gets meted out and replicated on social28:28media and attention gets drawn to it28:31and so i think this issue of child28:34content and its kind of exploitive28:35nature and28:36strange nature in some cases was28:38something that advocacy groups and28:40others brought attention to28:41and the platform had to reconfigure and28:44focus on it28:45now i mentioned earlier that you know28:47back in 2010 it really was humans who28:49were doing this work almost exclusively28:50but by 202028:52we are using computational tools28:55to try to deal with content as well28:57although i28:58i'll repeat the quote that i once heard29:00from a reporter29:02who who heard it from a an engineer at a29:05company that shall not be named but it29:06might sound like29:08um you know boo-boob let's say might29:10rhyme with that29:11uh and the quote was uh whatever the29:14algorithm is doing it's29:15not watching the video so you know29:17they're using these computational29:19mechanisms to do all kinds of other29:21stuff but it's not like29:22an algorithm can watch and sense make29:25out of a video it has to look at other29:26stuff29:28so that's an interesting point though29:30too and i want to follow up on that with29:31a question about29:32you know do you do you personally29:34advocate for more29:35ai in the mix of con of content29:38moderation such as you know facebook29:39recently made an announcement that they29:40were using29:41ai to simulate bad actors so that they29:44could train their moderation29:45systems automated moderation systems to29:47more effectively recognize it do you29:49think that that ultimately29:50will work and will benefit the humans29:52who are part of this ecosystem or29:54is it likely to produce unintended ill29:56effects so i mean that's a really great29:59question because that's sort of like the30:0164 000 question about my work if30:04you know one would one would think if my30:05concern is the welfare of workers30:08which has always kind of been my cut in30:10on this topic and where i start and30:11where i come back to an end30:13um then hey wouldn't it be great if30:15tomorrow we could just flip that switch30:16and go30:17to those uh purely computational means i30:20think that30:21in theory right in theory but i think30:24there are a lot of red flags there30:26you know one red flag is that if it's30:29been this difficult30:30as and i kind of laid the groundwork for30:32that at the at the front end of the show30:34to unpack and uncover uh30:37the ecosystem involving humans and i30:39have to say30:40the majority of my work has been30:43reliant upon the willingness of human30:46beings involved in the system30:48to leak essentially to break30:51their non-disclosure agreements and to30:54you know essentially snitch on what they30:56felt was30:58problematic also sometimes what they31:00felt was good about the work they did31:02how do you get uh an algorithm or a31:04machine learning based tool31:06to call a journalist or31:09uh you know do an interview with a31:11researcher31:13i don't know how to do that you know the31:14closest thing we could come to is31:16getting access to it and looking31:18at code but that's not easy to do and31:20it's much harder to do31:22than finding uh and i cannot stress the31:25difficulty of what it was like31:27in the early days to find people willing31:29to talk to me so31:30you know you can't do that with ai how31:32do we how do we audit those tools how do31:34we31:35how do we you know what's the check on31:37power that the firms have with those31:39tools31:40in terms of how they're set up and what31:42they keep in and what they keep31:43out it also sounds like a potentially31:46even greater violation31:47of uh that non-disclosure if someone31:50leaks a bit of code31:51rather than just tell their own personal31:53story oh for sure i mean and and31:56you know the the other thing too that31:58that comes to mind for me is32:00the nature of how these tools work32:03and you know a great worry and i think a32:05legitimate worry of many people in the32:07space32:07is that uh they32:11the tendency to use those tools would be32:13to32:14uh calibrate them32:17to be even uh less permissive let's say32:21or to you know because of their nature32:23they would have less of an32:24ability to look at a given piece of32:27content32:28and you know see that it violates abc32:31policy but understand it in the context32:34of you know again32:35a cultural expression or um32:38you know an advocacy piece around a32:41conflict zone32:42and then make an exception so what we32:44would see32:45is uh more conservative and greater32:49false positives around material that32:52quote unquote is disallowed right32:55again all of this adjudicating to the32:58logic that the firms themselves create33:00which for um for many years itself was33:03opaque33:05uh so this is you know it's not as easy33:08to say unfortunately if we could just33:10get those darn algorithms right if we33:11could just get33:12you know machine learning to get33:13sophisticated enough we could33:16take out the human element and and33:18basically33:19you know save people from having to do33:21this work33:23unfortunately i think it's more33:24complicated than that and i would say33:26that33:26you know bringing up the idea of33:29training machine learning tools as you33:30did33:31one of the gross ironies of this whole33:33thing that i've been33:34monitoring is that uh33:38content moderation commercial content33:40moderation for these major platforms33:42is its own kind of self-fulfilling uh33:46industry that begets uh sub industries33:49in and of itself33:49so that when machine learning tools have33:52come on what needs to happen33:54is that people need to sort data sets to33:56create data sets for the machine33:58learning tools to train on33:59and they need to be themselves trainers34:02and classifiers for the machine learning34:04tools so now we have a whole new stratum34:06of people34:07working to train machine learning34:09algorithms which has them essentially34:11doing a certain kind of content34:12moderation34:13it's a lot easier that cottage industry34:14of evil ai34:16spawn it's like anything like34:19how are we gonna make the ai bad enough34:21to train our ai34:23uh automation systems to recognize that34:25so that we can keep a good environment34:27but then you've got this whole cottage34:29industry around the bad34:30ai seems like a very awkward way of34:32going34:33so you know as someone who monitors like34:36like hiring trends and things like that34:37too34:38i was i was watching companies looking34:41for people to to come be34:42classifiers on data sets which is just34:44moderation before the fact right34:46yeah you know you talked about that in34:48the book too you have34:50you presented a taxonomy of sorts of34:52labor arrangements from34:53in-house moderators to what you call34:56micro labor you know looking at34:58mechanical turk and things like that can34:59you walk us through that a little bit so35:01that we can become familiar with what35:02the35:02the human issues relative to each level35:06yeah one of the one of the early35:07insights i had when i was trying to35:09figure out the contours of this industry35:11from35:11you know the outside and it reminds me35:13of that parable of you know35:15people feeling different parts of the35:16elephant without really being35:18being able to see it and they don't35:19really they don't really get the big35:21picture35:22um was that you know what i was35:24considering as being kind of like a35:26monolithic35:27practice really wasn't it was happening35:28in all kinds of different places and in35:30different guises35:32including using different names like35:33there was no kind of cohesive name to35:35call35:36this this work practice so i started out35:38kind of knowing about these workers35:40in in iowa that i reference in the book35:42and i referenced today35:44who were working in a call center and it35:46turned out that call centers were really35:48a prevalent way35:50that this work was going that it was um35:53you know kind of at somewhat of a remove35:55geographically and organizationally so35:57it'd be kind of like a third party35:59contracted out group of workers36:00somewhere in the world36:02when i started out i knew about the36:03workers in places like iowa florida etc36:06but i soon came to know about workers in36:08places like india36:09or in malaysia or of course key to the36:12book in the philippines36:13so that um that that call center36:16environment for content moderation work36:18is really prevalent36:20and it's global but there are also36:23workers who36:24uh prior to covid we're going every day36:26for example in the bay area down from36:28san francisco on the36:30company buses um and going on site to36:33companies36:34that i describe in the book one that has36:36the you know36:37pseudonym of megatech and is a stand-in36:40for36:40any number of companies in fact i'll36:42just tell you a little anecdote that36:44i've met a lot of people from industry36:46who like over cocktails after meetings36:48will come up to me36:49all from different companies and say36:52we're mega tech aren't we and it's like36:54you know like at least six different36:56corporations think they're making36:57answers36:58yes yes sounds right yeah that tells you37:01something37:02so um you know these people were on site37:05workers they were37:06in you know the belly of the beast37:07essentially they were working37:09in places where there was also uh37:11engineering product development37:13marketing37:14uh communications you know soup to nuts37:16uh37:17although interestingly enough they were37:20also contractors in the case of the37:21books so37:22they still had this differential and37:24lesser status even though they were37:26going on site37:27to the corporate hq you know it still37:31wasn't quite the right badge caller as37:33they described it to me although they37:35thought about the people who were37:36working as contractors and call centers37:38as another kind of worker37:40even though they were essentially very37:43very similar37:44then we had people that i encountered37:47who were37:48you know very entrepreneurial and37:50especially in in sort of the early days37:52were37:52developing a model that looks almost37:56like an ad agency they were37:58independent companies that were starting38:00to specialize in providing content38:02moderation services38:03to other companies and it was a boutique38:05kind of service38:06a specialty service and they would often38:09offer38:10social media management across the board38:13so not only were they offering38:14the removal of content in some cases but38:16they would even38:18offer again in that advertising model38:20the generation of content38:22because believe it or not sometimes you38:24know your auto parts company's facebook38:26page just doesn't38:27generate a lot of organic interest and38:29so you hire a company to come post about38:31how awesome your auto parts company is38:34um likewise if there's a you know as38:37somebody once38:38told me and it's in the book too if you38:40open a hole on the internet it gets38:41filled with38:43bleep with uh you know if you have38:46a web page or you have a facebook page38:48and there's no activity38:49that's like organic or really about what38:51it's supposed to be about i guarantee38:52you that somebody will be posting38:54invective racist comments and so on38:56these boutique firms said38:58to usually to smaller companies hey39:00we'll manage the whole thing we'll39:01delete that stuff39:02we'll generate new stuff for you it'll39:04look organic nobody will really know39:06that that's what we're doing39:07and they were having great success when39:09i talked to them was that generally39:11filed under this sort of banner of user39:12generated content39:14or was it called other things generally39:16um39:17you know it was kind of like a social39:19media management is how they would call39:21couch that and how they would pitch it39:25and uh you know it was like uh hey39:28company x you your business has nothing39:31really to do with social media that's39:33not39:33you know your primary business let us39:35handle it for you39:36and a lot of companies jumped at the39:38chance to kind of outsource that and not39:40deal with it39:41an interesting thing in that kind of39:43bucket of39:44of the taxonomy that you mentioned is39:46that those companies39:48uh in some cases got bought up by39:52ad firms or ad firms have started doing39:54this service as well39:56or they become really really big and39:58successful so there's like a few that40:00kind of40:01uh uh rose to the top and have survived40:05and then you already mentioned this40:07really interesting and and kind of40:09worry some arena where this work goes on40:12which is in the micro labor realm40:14the amazon mechanical turk model40:17uh which is effectively you know digital40:19piece work it's people40:21adjudicating a bit of content here40:23they're often40:25paid a per view or per decision40:28uh and then they try to aggregate enough40:30to make that make sense for them40:31financially40:33and it it turns out although that's40:36supposed to be an anonymous relationship40:38you know savvy mechanical turkers they40:40can figure out who they're working for40:42because a lot of times40:43you know they'd receive a set of of40:46images or other content to adjudicate40:48and like you know the interface was40:50obvious41:00[Music]41:02before and you get those guidelines41:04again then you know yeah41:06that's right so you know i i came to41:09know some folks who were41:10uh you know who themselves sort of began41:13to specialize within41:14mechanical turk and other platforms on41:17this kind of thing and they would seek41:18out this work because they got good at41:20it like you said41:21and they got good at knowing the41:22internal policies and juggling them for41:24all these different firms and41:26began to specialize in this work on that41:28platform41:29i was wondering you know when thinking41:31about this as you mentioned earlier41:33about the41:34the consequences of misinformation41:36especially as we41:37are deep in the process of the us41:40presidential election cycle and41:42i say the u.s because i want to be41:43sensitive to the fact that there are41:44global viewers but i feel like everyone41:46in the world is kind of41:48you know hooked into the u.s41:49presidential election right now41:51and we're all like yeah aren't they41:53right and we're all being subject to41:55you know all of this uh well the the41:58dumpster fire of it all but also the42:00misinformation that accompanies it42:02and so i wonder how should people think42:04and understand the difference between42:07content on social media and content in42:09news media42:10and what are some of the differences in42:12approaches to moderating42:14harmful content and you know kind of42:16just thinking about42:18the access to you know free access to42:21information you know this is kind of a42:23big42:24muddy question i'm not sure i'm42:26articulating very well but42:27hopefully you see the direction of of42:29the um42:30the question that i'm asking her yeah i42:34i'll i'll do my best to respond and we42:36can42:36you know we can you can offer guidance42:40yeah as i go i mean i i think your42:43question in essence is what the hell42:45right yeah42:48information misinformation42:50disinformation the election42:52what the hell and so i think you speak42:54for a global audience when you pose that42:56question and42:58you're right about the u.s election i43:00know uh friends and colleagues who were43:02up early in australia watching it and43:04you know as mortified as we were by the43:06the behavior on display43:08and the other night yes the debate and43:11the kind of the nadir43:12of uh you know american politics in my43:15lifetime is how i described it43:17um you know i i often43:20bring up the the rise of social media43:24as a force in again in american civic43:27life43:29that it's important to not think about43:31it having happened in a vacuum or having43:33happened43:34uh without without43:37um other forces at play and in the other43:40part of my life i43:42am a professor in a program that trains43:44and prepares43:45people for careers and information43:47professions primarily in librarianship43:50and so i know something about the way43:53in which we've seen a gross43:57erosion of the american44:00public sphere and opportunities for44:03people to become informed44:06in places that traditionally have been44:10more transparent more committed to the44:13public good44:13not-for-profit i'm thinking about44:16institutions like public schools44:18and institutions like public libraries44:21so if we were to take44:24you know uh funding a funding graph or44:28something like that and put them44:29together about expenditures or44:31where where money goes in our society we44:34would see44:35you know that off the cliff kind of44:37defunding44:38of of these uh institutions that i just44:41mentioned44:42while we see a rise in social media44:46and what i think that suggests at least44:49to me is that44:50it's not that the american public44:51doesn't have a desire to be informed44:54or to have information sources and i44:56would add to that by the way44:57it's not necessarily in the public44:59sphere in the same way45:00but we have seen total erosion in45:04regional and local journalism too right45:06during the same time right45:08into mega media that's right mega media45:11which45:12you know came about by the shuttering of45:14local news45:15and it there was a time when you know45:17cities like mine i come from madison45:19wisconsin 25045:21000 people yeah they yeah they might45:24have had a a45:25a reporter in dc you know what i mean45:28for our local paper the capitol times45:30which went the way of the dodo some45:33some years ago and that that local paper45:35no longer exists in a print form45:38so there's a whole i mean we could do a45:40whole show on this and you probably45:42shouldn't have me on for the show so45:44apologies to to the users that this45:46isn't my total area of expertise but i'm45:48just trying to connect some dots here45:50for people to make sense of it right45:52right and you know when we think about45:53the differences between social media45:55information circulation and something45:58like journalism46:00agree or disagree with what you read in46:02in in the newspaper or you hear on the46:05news46:06of your choice but there are things46:09there that are not present46:10in the same way in the social media46:12ecosystem uh46:13you know an author name a set of46:16principles by which46:18uh the journalists46:21at least pay lip service to but most of46:24them46:25live by you know that they have been46:27educated46:28to uh to serve and then do so46:31in their work there's editorial control46:34that before stories go to print they46:37have to go through a number of eyes46:38there's fact checking if you've ever you46:41know i've been on the46:42the the side of having been interviewed46:44for journalistic pieces and i get phone46:46calls from fact checkers to make sure46:48that the journalists got46:49right what i think yeah right46:52you think that did you really say xyz46:55yes i did that doesn't exist and you46:57know46:58your your your racist uncle47:00recirculating47:01um god knows what from whatever outlet47:04that is just go those those47:08what we might think of barriers to entry47:10but we also might think of as safeguards47:11are just gone47:13and with all of the other institutions47:16eroded that i mentioned47:17you know public schooling library public47:20libraries and so on the mechanisms that47:22people might use to47:24vet material to understand what it means47:27when they look at a paper of record47:29versus47:32a dubious outlet let's say a dubious47:34internet based outlet47:36and how those uh sources differ those47:39mechanisms to to learn about those47:41things have been eroded as well47:43um is there even a civics class anymore47:45in pu

    The Tech Humanist Show: Episode 12 – Sarah T. Roberts

    Play Episode Listen Later Oct 9, 2020 55:34


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Sarah T. Roberts, who is an Assistant Professor in the Department of Information Studies, Graduate School of Education & Information Studies, at UCLA. She holds a Ph.D. from the iSchool at the University of Illinois at Urbana-Champaign. Prior to joining UCLA in 2016, she was an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, Ontario for three years. On the internet since 1993, she was previously an information technology professional for 15 years, and, as such, her research interests focus on information work and workers and on the social, economic and political impact of the widespread adoption of the internet in everyday life. Since 2010, the main focus of her research has been to uncover the ecosystem - made up of people, practices and politics - of content moderation of major social media platforms, news media companies, and corporate brands. She served as consultant to and is featured in the award-winning documentary The Cleaners, which debuted at Sundance 2018 and aired on PBS in the United States in November 2018. Roberts is frequently consulted by the press and others on issues related to commercial content moderation and to social media, society and culture, in general. She has been interviewed on these topics in print, on radio and on television worldwide including: The New York Times, Associated Press, NPR, Le Monde, The Atlantic, The Economist, BBC Nightly News, the CBC, The Los Angeles Times, Rolling Stone, Wired, The Washington Post, Australian Broadcasting Corporation SPIEGEL Online, and CNN, among many others. She is a 2018 Carnegie Fellow and a 2018 recipient of the EFF Barlow Pioneer Award for her groundbreaking research on content moderation of social media. She tweets as @ubiquity75. This episode streamed live on Thursday, October 1, 2020.

    The Tech Humanist Show: Episode 12 – Dr. Sarah T. Roberts

    Play Episode Listen Later Oct 9, 2020 55:34


    About this episode's guest: Sarah T. Roberts is an Assistant Professor in the Department of Information Studies, Graduate School of Education & Information Studies, at UCLA. She holds a Ph.D. from the iSchool at the University of Illinois at Urbana-Champaign. Prior to joining UCLA in 2016, she was an Assistant Professor in the Faculty of […]

    The Tech Humanist Show: Episode 11 – Marcus Whitney

    Play Episode Listen Later Oct 2, 2020 67:14


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Marcus Whitney is Founding Partner of Jumpstart Health Investors, the most active venture capital firm in America focused on innovative, healthcare companies with a portfolio of over 100 companies. He is also co-founder and minority owner of Major League Soccer team, Nashville Soccer Club. Marcus is the author of the best-selling book Create and Orchestrate, about claiming your Creative Power through entrepreneurship. He is also the producer and host of Marcus Whitney LIVE, an interview show live-streamed M-F 12 Central on Facebook, YouTube, LinkedIn, Twitter and Twitch, and Marcus Whitney’s Audio Universe, a podcast on all major platforms. Marcus is a member of the board of the Country Music Hall of Fame® and Museum, the Nashville Convention and Visitors Corporation, Instruction Partners and an Arts Commissioner for the city of Nashville. He has been listed in the Upstart 100 by Upstart Business Journal, Power 100 by Nashville Business Journal, and has been featured in Inc., Techcrunch, Fast Company, and The Atlantic. He tweets as @MarcusWhitney. This episode streamed live on Thursday, September 24, 2020.

    The Tech Humanist Show: Episode 11 – Marcus Whitney

    Play Episode Listen Later Oct 2, 2020 67:14


    About this episode's guest: Marcus Whitney is Founding Partner of Jumpstart Health Investors, the most active venture capital firm in America focused on innovative, healthcare companies with a portfolio of over 100 companies. He is also co-founder and minority owner of Major League Soccer team, Nashville Soccer Club. Marcus is the author of the best-selling […]

    The Tech Humanist Show: Episode 10 – Renée Cummings

    Play Episode Listen Later Sep 25, 2020 58:48


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Renée Cummings is a criminologist and international criminal justice consultant who specializes in Artificial Intelligence (AI); ethical AI, bias in AI, diversity and inclusion in AI, algorithmic authenticity and accountability, data integrity and equity, AI for social good and social justice in AI policy and governance. Foreseeing trends and anticipating disruptions, she’s committed to diverse and inclusive AI strategy development; using AI to empower and transform communities and cultures; securing diverse and inclusive participation in the 4IR, helping companies navigate the AI landscape and developing future AI leaders. A multicultural cross-connector of multiple fields and an innovative collaborator, her passion is forming connections and unifying people and technologies; enhancing quality of life and economic prosperity. She’s also a criminal psychologist, therapeutic jurisprudence and rehabilitation specialist, substance abuse therapist, crisis intelligence, crisis communication and media specialist, creative science communicator and journalist. She has a solid background in government relations, public affairs, reputation management and litigation PR. A sought after thought-leader, inspirational motivational speaker and mentor, Ms. Cummings is also a Columbia University community scholar. She tweets as @CummingsRenee. This episode streamed live on Thursday, September 17, 2020.

    The Tech Humanist Show: Episode 10 – Renée Cummings

    Play Episode Listen Later Sep 25, 2020 58:48


    About this episode's guest: Renée Cummings is a criminologist and international criminal justice consultant who specializes in Artificial Intelligence (AI); ethical AI, bias in AI, diversity and inclusion in AI, algorithmic authenticity and accountability, data integrity and equity, AI for social good and social justice in AI policy and governance.Foreseeing trends and anticipating disruptions, she's […]

    The Tech Humanist Show: Episode 9 – Rahaf Harfoush

    Play Episode Listen Later Sep 18, 2020 59:27


    The Tech Humanist Show explores how data and technology shape the human experience. It's recorded live each week in a live-streamed video program before it's made available in audio format. Hosted by Kate O’Neill. About this episode's guest: Rahaf Harfoush is a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture. She is the Executive Director of the Red Thread Institute of Digital Culture and teaches “Innovation & Emerging Business Models” at Sciences Politique’s school of Management and Innovation in Paris. She is currently working on her fourth book. Her third book, entitled “Hustle & Float: Reclaim Your Creativity and Thrive in a World Obsessed with Work,” was released in 2019. She has been featured by Bloomberg, The CBC, CTV, and Forbes for her work on workplace culture. Formerly, Rahaf was the Associate Director of the Technology Pioneer Program at the World Economic Forum in Geneva where she helped identify disruptive-startups that were improving the state of the world. Rahaf is the co-author of “The Decoded Company: Know Your Talent Better Than You Know your Customers” Her first book, “Yes We Did: An Insider’s Look at How Social Media Built the Obama Brand,”chronicled her experiences as a member of Barack Obama’s digital media team during the 2008 Presidential elections and explored how social networking revolutionized political campaign strategy. Rahaf has been named "one of the most innovative women in France,” "one of the top future thinkers to shape the world,” "a Young Global Changer,” and a “Canadian Arab to Watch.” Rahaf’s writing has been featured in HBR, Wired, The Globe and Mail, Fast Company, and many more. She is a frequent commentator on France24 and the CBC. In her spare time, Rahaf enjoys instagramming too many pictures of her dog Pixel, learning how to play the ukulele and working on her first novel. She tweets as @RahafHarfoush. This episode streamed live on Thursday, September 10, 2020.

    The Tech Humanist Show: Episode 9 – Rahaf Harfoush

    Play Episode Listen Later Sep 18, 2020 59:27


    About this episode's guest: Rahaf Harfoush is a Strategist, Digital Anthropologist, and Best-Selling Author who focuses on the intersections between emerging technology, innovation, and digital culture. She is the Executive Director of the Red Thread Institute of Digital Culture and teaches “Innovation & Emerging Business Models” at Sciences Politique's school of Management and Innovation in Paris. […]

    Claim The Tech Humanist Show

    In order to claim this podcast we'll send an email to with a verification link. Simply click the link and you will be able to edit tags, request a refresh, and other features to take control of your podcast page!

    Claim Cancel