Trust in Tech: an Integrity Institute Member Podcast


The Trust in Tech podcast is a project by the Integrity Institute — a community-driven think tank advancing the theory and practice of protecting the social internet, powered by our community of integrity professionals.

Integrity Institute


    • Feb 12, 2025 LATEST EPISODE
    • infrequent NEW EPISODES
    • 48m AVG DURATION
    • 44 EPISODES

    Ivy Insights

    The Trust in Tech: an Integrity Institute Member Podcast is a truly remarkable podcast that delves into the crucial topics of trust and safety in the digital age. Hosted by experts in the field, this podcast brings together insightful guests who provide valuable perspectives and knowledge on these important issues. As someone who values trust and safety, I have been thoroughly impressed by the caliber of guests featured on this show and the engaging conversations that ensue.

    One of the best aspects of The Trust in Tech podcast is the high quality of its guests. The host has done an excellent job in selecting individuals who are experts in their respective fields and can provide deep insights into trust and safety issues. From industry leaders to academics, each guest brings a unique perspective and expertise to the discussions, ensuring that listeners gain a well-rounded understanding of the topic at hand.

    Another standout feature of this podcast is how easily it captures your attention with its natural conversation flow. The hosts create an environment where guests feel comfortable sharing their thoughts, resulting in conversations that are engaging, thought-provoking, and often inspiring. The ability to maintain a conversational tone while discussing complex topics is truly commendable and makes for a highly enjoyable listening experience.

    Furthermore, the production quality of The Trust in Tech podcast is top-notch. The clarity of audio and well-paced episodes contribute to making it accessible and easy to follow along. Whether you are a tech-savvy individual or just starting to explore the realm of trust and safety in technology, this podcast ensures that you can fully comprehend the discussions without feeling overwhelmed.

    While it is challenging to find any significant flaws with this podcast, there could be room for more varied perspectives on certain topics. Although the guests selected thus far have provided valuable insights, incorporating diverse opinions from different backgrounds could enhance the overall breadth of knowledge presented in each episode.

    In conclusion, if you are looking to expand your understanding of trust and safety in technology, The Trust in Tech: an Integrity Institute Member Podcast is a must-listen. The podcast excels in its selection of guests, natural conversation flow, and production quality. It has become a valuable resource for me in learning about these important topics, and I highly recommend it to anyone seeking to deepen their knowledge and engage with the issues surrounding trust and safety in the digital world.




    Latest episodes from Trust in Tech: an Integrity Institute Member Podcast

    The Future of Trust & Safety: Navigating Challenges in a Shifting Industry

    Feb 12, 2025 · 46:40


    In this episode, Alice and Integrity Institute co-founder Jeff Allen discuss the ever-evolving T&S landscape. With recent policy changes at Meta and increasing scrutiny on content moderation, they explore T&S's future, the critical role of integrity professionals, and the business case for safer online spaces. How do trust & safety teams continue to create value amid growing political and company pressure? Listen in for a thoughtful conversation on the commitment to integrity work through turbulent times.

    Further reading:
    - The ethical and practical flaws in Meta's policy overhaul by Alice Hunsberger

    Burnout in Trust & Safety: The Evolution of Wellness and Resilience in Integrity work

    Jan 28, 2025 · 49:44


    Hey, it's been a while, but we are kicking off 2025 with new Trust in Tech episodes!

    Trust and safety work is tough, but someone has to do it. In this episode, we're joined by Cathryn Weems, T&S veteran and fellow II member, and wellness coach and consultant Stefania Pifer, to discuss the daily exposure to toxicity and trauma that workers face. We dive into less generic and more trauma-informed, community-focused approaches to building resilience: not just focusing on how we heal, but also how we think holistically about our working conditions.

    Further reading:
    - Find out more about Stefania's work here!

    It's My Job To Yell At People - how social media can better protect the LGBTQ community

    Jun 17, 2024 · 46:02


    Happy Pride! In this episode we talk to Jenni Olsen, Senior Director of the Social Media Safety Program at GLAAD, about their work promoting LGBTQ safety, privacy, and expression online.

    We talk about the recently released Social Media Safety Index, which evaluates social media platforms on their LGBTQ-inclusive policies. Suggestions from their report include training moderators to understand and differentiate legitimate LGBTQ speech and content from harmful content, and creating policies against conversion therapy content and targeted misgendering and deadnaming.

    Finally, we talk about the challenges of balancing free speech with protecting marginalized communities and offer suggestions for individuals working at social media platforms to advocate for change.

    Further reading:
    - GLAAD's reports, including the Accelerating Acceptance report we discuss
    - Trust in Tech - let's talk about protecting the LGBTQ+ Community Online (last year's Pride Podcast)
    - Alice's practical Guide to Protecting LGBTQ+ Users Online

    You really wanted us to answer more career questions (Featuring Cathryn Weems)

    Jun 10, 2024 · 62:00


    This episode is a listener Q&A featuring Cathryn Weems, seasoned Trust & Safety leader. We opened up our inbox to any possible question that anyone had aaaaand it was all about careers and job searching. It's rough out there, and folks need help! We discuss networking as an introvert, how to market yourself on a resume and in an interview, how to job search, and more.

    Mentioned in this episode:
    - Trust & Safety Tycoon
    - LinkedIn people to follow: Alice Hunsberger, Jeff Dunn, Leslie Taylor
    - Integrity Institute, All Tech is Human, TSPA
    - Previous job search episode

    Bonus: Trust in Tech x Safety is Sexy (Policy edition)

    May 23, 2024 · 52:07


    This episode features a discussion on the importance of trust and safety policies on online platforms, between Alice Hunsberger (VP of Trust & Safety and Content Moderation at PartnerHero) and Matt Soeth (Head of Trust & Safety at All Tech is Human and Senior Advisor at Tremau). Alice shares her experiences working on policy enforcement, emphasizes the need for clear communication with users, and describes the impact of policy on community. Additionally, Hunsberger provides insights on engaging with external stakeholders, updating policies for new technologies, and the role of policy in platform differentiation.

    Workplace ethics and activism with Nadah Feteih

    Apr 5, 2024 · 42:53


    Many of us working at tech companies have to make moral and ethical decisions about where we work, what we work on, and what we speak up about. In this episode, we have a conversation with Nadah Feteih about how tech workers (specifically folks working on integrity and trust & safety teams) can speak up about ethical issues at their workplace. We discuss activism from within the industry, compelled identity labor, balancing speaking up and staying silent, thinking ethically in tech, and the limitations and harms of technology.

    Takeaways:
    - Balancing speaking up and staying silent can be difficult for tech workers, as some topics may be divisive or risky to address.
    - Compelled identity labor is a challenge faced by underrepresented and marginalized tech workers, who may feel pressure to speak on behalf of their communities.
    - Thinking ethically in tech is crucial, and there is a growing need for resources and education on tech ethics.
    - Tech employees have the power to take a stand and advocate for change within their companies.
    - Engaging on social issues in the workplace requires a balance between different approaches, including staying within the system and speaking up from the outside.
    - Listening to moderators and incorporating local perspectives is crucial for creating inclusive and equitable tech platforms.

    Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.

    Mentioned in this episode:
    - Breaking the Silence: Marginalized Tech Workers' Experiences and Community Solidarity
    - Black in Moderation
    - Tech Worker Handbook
    - No Tech For Apartheid
    - Tech Workers Coalition

    Credits: Today's episode was produced, edited, and hosted by Alice Hunsberger. You can reach me and Talha Baig, the other half of the Trust in Tech team, at podcast@integrityinstitute.org. Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.

    Careers in T&S: Job search special (you ask, we answer)

    Feb 13, 2024 · 31:36


    You asked, we answered! It's a rough time out there in the tech industry, as so many people in Trust & Safety are job searching or thinking about their careers and what it all means. In this episode, Alice Hunsberger shares her recent job search experience and gives advice on job searching and career development in the Trust and Safety industry.

    Listener questions answered include:
    - How do I figure out what to do next in my career?
    - What helps a resume or cover letter stand out?
    - What are good interviewing tips?
    - What advice do leaders wish they had when they were first starting out?
    - Do T&S leaders really believe we will have an internet free of harm (or at least with drastically reduced harm)?

    Resources and links mentioned in this episode:
    - Personal Safety for Integrity workers
    - Hiring and growing trust & safety teams at small companies
    - Katie Harbath's career advice posts
    - Alice Links

    Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.

    Credits: Today's episode was produced, edited, and hosted by Alice Hunsberger. You can reach me and Talha Baig, the other half of the Trust in Tech team, at podcast@integrityinstitute.org. Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.

    Building the Wikipedia of Integrity w/ Grady Ward

    Feb 6, 2024 · 55:46


    Integrity workers are missing a shared resource they can easily point to: a taxonomy of harms and specific interventions to mitigate those harms. Enter Grady Ward, a visiting fellow of the Integrity Institute, who discusses how he is creating a Wikipedia for and by integrity workers.

    In typical Trust in Tech fashion, we also discuss the tensions and synergies between integrity and privacy, and if you stick around to the end, you can hear some musings on the interplay of art and nature.

    Links:
    - The Wikipedia of Trust and Safety
    - Grady's personal website

    Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.

    Credits: Today's episode was produced, edited, and hosted by Talha Baig. You can reach me and Alice Hunsberger, the other half of the Trust in Tech team, at podcast@integrityinstitute.org. Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.

    Child Safety on Online Platforms w/ Vaishnavi J.

    Jan 30, 2024 · 62:34


    With the Senate child safety hearing on the horizon, we sit down with Vaishnavi, former head of Youth Policy at Meta, to chat about the specific problems and current policy landscape regarding child safety. Vaishnavi now works as an advisor to tech companies and policymakers on youth policy issues!

    Some of the questions answered on today's show include:
    - What are the different buckets of problems for child safety?
    - How can we think about age-appropriate design?
    - What are common misconceptions in the tension between privacy and child safety?
    - What is the current regulation for child safety?
    - What does she expect to hear at the judicial hearing?

    Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.

    Personal Safety for Integrity workers

    Jan 20, 2024 · 34:10


    Listen to this episode to learn how to stay safe as an integrity worker.

    Links:
    - Tall Poppy (available through employers only at the moment)
    - DeleteMe
    - PEN America Online Harassment Field Manual
    - Assessing Online Threats
    - Want a security starter pack? | Surveillance Self-Defense
    - Yoel Roth on being targeted: Trump Attacked Me. Then Musk Did. It Wasn't an Accident.
    - Crash Override Network: What To Do If Your Employee Is Being Targeted By Online Abuse

    Practical tips:

    If you're a manager:
    - Train your team on what credible threats look like.
    - Make sure you have a plan in place for dealing with threats to your office or employees.
    - Allow pseudonyms; don't require public photos.
    - Invest in services that can help your employees scrub public data.

    Individuals:
    - Keep your personal social media private/friends-only.
    - Use different photos on LinkedIn than on your personal social media.
    - Consider hiding your location online, not using your full name, etc.

    Credits: Today's episode was produced, edited, and hosted by Alice Hunsberger. You can reach me and Talha Baig, the other half of the Trust in Tech team, at podcast@integrityinstitute.org. Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.

    Dark Patterns and Photography w/ Caroline Sinders

    Jan 11, 2024 · 55:17


    Caroline Sinders is an ML design researcher, online harassment expert, and artist. We chat about common dark tech patterns, how to prevent them at your company, a novel way to think about your career, and how photography relates to generative AI. Sinders has worked with Facebook, Amnesty International, Intel, IBM Watson, and the Wikimedia Foundation.

    We answer the following questions on today's show:
    1. What are dark tech patterns, and how can you prevent them?
    2. How do you navigate multi-stakeholder groups to avoid baking in dark patterns?
    3. What is a public person?
    4. What is a framework to approach data visualization?
    5. How is photography an analogue to generative AI?

    This episode goes in lots of directions to cover Caroline's varied interests - hope you enjoy it!

    Holiday Special: Alice and Talha Mailbag Episode!

    Jan 1, 2024 · 62:13


    Alice and Talha answer some listener questions, recap the year and the podcast, and muse about where they want to take it next!

    Introduction to Generative AI

    Dec 22, 2023 · 32:45


    In this episode, Alice Hunsberger talks with Numa Dhamani and Maggie Engler, who recently co-authored a book about the power and limitations of AI tools and their impact on society, the economy, and the law. They dive deep into some of the topics in the book and discuss what writing a book was like, as well as the process of getting to publication.

    You can preorder the book here, and follow Maggie and Numa on LinkedIn.

    How to build a Movement w/ David Jay

    Dec 15, 2023 · 46:23


    It seems every day we are pulled in different directions on social media, yet what we are feeling seldom resonates. Enter David Jay, a master at building movements, including leading movement-building for the Center for Humane Technology. In this episode, we learn precisely how to build a movement, and why communities are perpetually underfunded.

    David Jay is an advisor to the Integrity Institute and played a pivotal role in the early days of the Institute. He is currently the founder of Relationality Labs, which hopes to make the impact of relational organizing visible so that organizers can be resourced for the strategic value they create. In the past, he has had a diverse range of experiences, including founding asexuality.org and serving as Chief Mobilization Officer for the Center for Humane Technology.

    Here are some of the questions we answer on today's show:
    1. How do you create, scale, and align relationships to create a movement?
    2. How do you structure stories to resonate?
    3. How do you keep your nose on the edge for new movements?
    4. How do you identify leaders for the future?
    5. Why is David Jay excited by the Integrity Institute and the future of integrity workers?
    6. Why don't community-based initiatives get funded at the same rate as non-community-based initiatives?

    Check out David Jay's Relationality Lab!

    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent any other entity's views.

    The Ultimate Guide to Election Integrity Part II

    Dec 10, 2023 · 48:03


    Elections matter, and history has demonstrated that online platforms will find themselves grappling with these challenges whether they want to or not. The two key questions facing online platforms now, as they stare down the tsunami of global elections heading their way, are: Have they initiated an internal elections integrity program? And if so, how do they ensure the best possible preparation to safeguard democracies globally?

    The Integrity Institute launched an elections integrity best practices guide on “Defining and Achieving Success in Elections Integrity.” This latest guide extends the first and provides companies – large or small, established or new on the block – concrete details as they fully implement an elections integrity program. Today on the podcast, we talk to four contributors to this guide: Glenn Ellingson, Diane Chang, Swapneel Mehta, and Eric Davis.

    Also check out our first episode on elections!

    Creeps, Consulting, and Creating Trust

    Oct 4, 2023 · 26:38


    Alice Hunsberger talks to Heather Grunkemeier again, this time covering Heather's solution for dealing with creeps at Rover from a policy and operational lens, measuring trust, and what it's been like for her to strike out on her own as a consultant.Also check out our first episode with Heather, How to Find Your Place in Trust & Safety: A Story of Career Pivoting.

    How to Find Your Place in Trust & Safety: A Story of Career Pivoting

    Sep 27, 2023 · 18:43


    Alice Hunsberger talks to Heather Grunkemeier (former Program Owner of Trust & Safety at Rover and current owner of consultancy firm Twinkle LLC) about how Heather finally broke into the field of Trust & Safety after years of trying, what it was actually like for her, and her advice for other people in the midst of career pivots. We also touch on mental health, identity, self-worth, and how working in Trust & Safety has unique challenges (and rewards). If you liked our Burnout episode, you may enjoy this one too. (And if you haven't listened to it yet or read our Burnout resource guide, please check it out.)

    Credits: This episode of Trust in Tech was hosted, edited, and produced by Alice Hunsberger. Music by Zhao Shen. Special thanks to the staff and members of the Integrity Institute for their continued support.

    The Future of AI Regulation with James Alexander

    Aug 2, 2023 · 83:35


    On today's episode, our host Talha Baig is joined by guest James Alexander to discuss all things AI liability. The episode begins with a discussion of liability legislation, as well as some of the unique situations that copyright law has created. Later in the episode, the conversation shifts to James's experience as the first member of Wikipedia's Trust and Safety team.

    Here are some of the questions we answer in today's episode:
    - Who is liable for AI-generated content?
    - How does Section 230 affect AI?
    - Why does AI have no copyright?
    - How will negotiations play out between platforms and the companies building AI models?
    - Why do the Spider-Man multiverse movies exist?
    - What did it look like to be the first trust and safety worker at Wikipedia?
    - What does fact-checking look like at Wikipedia?

    Should We Have Open-Sourced Llama 2?

    Jul 24, 2023 · 73:43


    On today's episode, our host Talha Baig is joined by guest David Harris, who has been writing about Llama since the initial leak. The two of them begin by discussing all things Llama, from the leak to the open-sourcing of Llama 2. Later in the episode, they dive deeper into policy ideas seeking to improve AI safety and ethics.

    Show Links:
    - David's Guardian Article
    - CNN Article Quoting David
    - Llama 2 Release Article

    Pig Butchering: Not Your Grandma's Romance Scam

    Jul 11, 2023 · 53:34


    Assaf Kipnis has spent the last decade fighting e-crime and scams. Today, he's on the podcast with fellow Integrity Institute member Alice Hunsberger to tell us about Pig Butchering Scams and Coordinated Inauthentic Behavior, and how they are more sophisticated scams than you might think. If you take away one thing from this, it's this: don't follow investing advice from random people you meet online!

    Show Links:
    - Pig Butchering Scam Victim Journey and Analysis
    - The Anatomy of a Pig Butchering Scam
    - Fraudology Podcast with Karisse Hendrick
    - Pig Butchering Scams Are Evolving Fast | WIRED
    - Example of an educational guide for users: Grindr Scam awareness guide
    - What's the deal with all those weird wrong-number texts?
    - I've been getting tons of ‘wrong number' spam texts, and I don't hate it? - The Verge
    - Facebook shuts down ‘the BL'
    - Removing Coordinated Inauthentic Behavior From Georgia, Vietnam and the USA
    - Former Fox News Executive Divides Americans Using Russian Tactics
    - Meta October 2020 Inauthentic Behavior Report

    Happy Pride! Let's talk about protecting the LGBTQ+ community online

    Jun 15, 2023 · 41:26


    What can companies do to support the LGBTQ+ community during this Pride season, beyond slapping a rainbow logo on everything? Integrity Institute members Alex Leavitt and Alice Hunsberger discuss the state of LGBTQ+ safety online and off, how the queer community is unique and faces disproportionate risks, and what concrete actions platforms should be taking.

    Show Links:
    - Human Rights Campaign declares LGBTQ state of emergency in the US
    - Social Media Safety Index
    - Digital Civility Index & Our Challenge | Microsoft Online Safety
    - Best Practices for Gender-Inclusive Content Moderation — Grindr Blog
    - Tinder - travel alert
    - Assessing and Mitigating Risk for the Global Grindr Community
    - Strengthening our policies to promote safety, security, and well-being on TikTok
    - Meta's LGBTQ+ Safety center
    - Data collection for queer minorities

    Civic Integrity at Twitter Pre- and Post-Elon Musk: w/ Rebecca Thein and Theodora Skeadas

    Jun 2, 2023 · 70:04


    The acquisition of Twitter broke, well, Twitter. Around 90% of the workforce left the company, leaving shells of former teams to handle the same responsibilities. Today, we welcome two guests from Twitter's civic integrity team.

    New guest Rebecca Thein was a senior engineering technical program manager for Twitter's Information Integrity team. She is also a Digital Sherlock for the Atlantic Council's Digital Forensic Research Lab (DFRLab). Theodora Skeadas is a returning guest from our previous episode! She managed public policy at Twitter and was recently elected as an Elected Director of the Harvard Alumni Association.

    We answer the following questions on today's episode:
    - How much was the civic integrity team hurt by the acquisition?
    - What are candidate labels?
    - How did Twitter prioritize its elections?
    - What did the org structure of Twitter look like pre- and post-acquisition?
    - And finally, what is this famous Halloween party that all the ex-Twitter folks are talking about?

    Tech Policy 101: “It's complicated!” with Pearlé Nwaezeigwe, the Yoncé of Tech Policy

    May 25, 2023 · 40:46


    This episode is a bit different – instead of getting deep into the weeds with a guest, we're starting from the beginning. Our guest today, Pearlé Nwaezeigwe, aka the Yoncé of Tech Policy, chats with me about Tech Policy 101. I get a lot of questions from people who are fascinated by Trust & Safety and Integrity work in tech, and they want to know – what does it look like? How can I do it too? What kinds of jobs are out there? So, I thought we'd tackle some of those questions here on the podcast. Today's episode covers the exciting topics of nipples, Lizzo, weed, and much more. And as any of us who have worked in policy would tell you, “it's complicated.” Let me know what you think (if you want to see more of these, or less) – this is an experiment. (You can reach me here on LinkedIn.) — Alice Hunsberger

    Links:
    - Pearlé's newsletter
    - Lizzo talks about censorship and body shaming
    - Oversight board on nipples and nudity
    - Grindr's Best Practices for Gender-Inclusive Content Moderation
    - TSPA curriculum: creating and enforcing policy
    - All Tech is Human - Tech Policy Hub

    Credits:
    - Hosted and edited by Alice Hunsberger
    - Produced by Talha Baig
    - Music by Zhao Shen
    - Special thanks to Rachel, Sean, Cass and Sahar for their continued support

    The Ultimate Guide to Election Integrity! with Katie Harbath and Glenn Ellingson

    May 17, 2023 · 85:29


    It might be May 2023, but it's never too early to start worrying about elections! 2024 is slated to be the biggest year of elections in platform history. In this episode, Katie Harbath and Glenn Ellingson join the show to prepare you for the storm of elections coming in 2024.

    You may recognize Katie as the inaugural guest of Trust in Tech. Katie is an Integrity Institute Fellow and a global leader at the intersection of elections, democracy, and technology. She is Chief Executive of Anchor Change, where she helps clients think through tech policy issues. Before that, she worked at Meta for 10 years, where she built and led a 30-person team managing elections. Glenn is an Integrity Institute member who was previously an engineering manager for Meta's civic integrity team and, before that, Head of Product Engineering for Hustle – a company which helped progressive political organizations and other nonprofit and for-profit groups forge personal relationships at scale.

    Glenn and Katie led the development of the Elections Best Practices deck the Integrity Institute just shared on its website, which we discuss in the episode. We also answer some of the following questions:
    - How do you prioritize different elections across the world?
    - What principles should you adhere to when working on election integrity?
    - What are the challenges of dealing with political harassment?
    - How do you map out the landscape of election integrity work?
    - What was Cambridge Analytica, and did the scandal actually make platforms less transparent?
    - And how can your company learn best practices and responsibly deal with elections?

    Links:
    - Election Integrity best practices deck
    - Anchor Change
    - A Brief History of Tech and Elections: A 26-Year Journey
    - Demystifying the Cambridge Analytica Scandal Five Years Later

    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta's or any other entity's views.

    Transparency, Trade-offs, and Free Speech with Brandon Silverman

    May 12, 2023 · 77:19


    We live in a world where platforms influence the digital and real lives of billions of people across the world, perhaps with more influence than many governments. However, the decision-making processes around these platforms are generally opaque and obscure. This is why today's guest — Integrity Institute Fellow Brandon Silverman — transitioned to policy advocacy for platform transparency, data sharing, and an open internet, helping regulators, lawmakers, and advocacy groups think through the best ways to set up online transparency regimes.

    Brandon is the former CEO and co-founder of CrowdTangle, a social analytics tool used by tens of thousands of newsrooms, academics, researchers, fact-checkers, civil society organizations, and more to help monitor public content in real time.

    Some questions we answer on today's episode:
    - What tradeoffs exist between free speech and transparency?
    - How did CrowdTangle partner with civic actors across the world?
    - Brandon's thoughts on the leaking of the Twitter algorithm
    - What principle did CrowdTangle use when sharing access with governments?
    - What metric did the CrowdTangle team optimize for?
    - What does Brandon wish he could have done differently at Meta?
    - And of course, how you, the listener, can help in this fight for platform transparency.

    Links:
    - The United States' Approach to 'Platform' Regulation by Eric Goldman
    - State Abuse of Transparency Laws and How to Stop It by Daphne Keller
    - The Impression of Influence: Legislator Communication, Representation, and Democratic Accountability by Solomon Messing
    - Garbage Day by Ryan Broderick

    As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.

    GPT-4: Eldritch abomination or intern? A discussion with OpenAI

    May 4, 2023 · 78:15


    OpenAI, creators of ChatGPT, join the show! In November 2022, ChatGPT upended the tech (and larger) world with a chatbot that passes not only the Turing test but the bar exam. In this episode, we talk with Dave Willner and Todor Markov, integrity professionals at OpenAI, about how they make large language models safer for all.

    Dave Willner is the Head of Trust and Safety at OpenAI. He previously was Head of Community Policy at both Airbnb and Facebook, where he built the teams that wrote the community guidelines and oversaw the internal policies to enforce them. Todor Markov is a deep learning researcher at OpenAI. He builds content moderation tools for ChatGPT and GPT-4. He graduated from Stanford with a Master's in Statistics and a Bachelor's in Symbolic Systems.

    Alice Hunsberger hosts the episode. She is the VP of Customer Experience at Grindr, where she leads customer support, insights, and trust and safety. Previously, she worked at OkCupid as Director & Global Head of Customer Experience. Sahar Massachi is a visiting host today. He is the co-founder and Executive Director of the Integrity Institute. A past fellow of the Berkman Klein Center, Sahar is currently an advisory committee member for the Louis D. Brandeis Legacy Fund for Social Justice, a StartingBloc fellow, and a Roddenberry Fellow.

    They discuss what content moderation looks like for ChatGPT, why T&S stands for Tradeoffs and Sadness, and how integrity workers can help OpenAI. They also chat about the red-teaming process for GPT-4, overlaps between platform integrity and AI integrity, their favorite GPT jailbreaks, and how moderating GPTs is basically like teaching an Eldritch Abomination.

    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta's or any other entity's views.

    Turkey's complicated relationship with social media: how authoritarianism and platforms can clash

    Apr 27, 2023 · 67:28


    Turkey's election is only 3 weeks away and, during the recording of this episode, the Turkish regime has detained 150 people, including activists, lawyers, and journalists. Gürkan Özturan talks about this context and Turkey's fraught relationship with the media. Gürkan Özturan is the coordinator of Media Freedom Rapid Response at the European Centre for Press and Media Freedom. He is the former executive manager of the rights-focused independent grassroots journalism platform dokuz8news. Talha Baig returns as host and talks to Gürkan about different intersections of social media and activism. For example, Gürkan mentions how he despises the WhatsApp five-reshare limit, and how the removal of a government-sanctioned troll army made his life easier on Twitter. On top of this, we learn how the Turkish government controlled the media landscape, and what happens when a social media algorithm has to comply with authoritarian regimes. If you enjoy the episode, please subscribe and share with your friends! If you have feedback, feel free to email Talha at tbaig6@gmail.com or find him on LinkedIn: https://www.linkedin.com/in/talha-baig/
    Links: Arushi Saxena's episode on Digital Media Literacy
    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode.

    Trust in Tech Ep 17 (Special): A window into navigating emerging technology innovation with Nick Reese

    Play Episode Listen Later Apr 21, 2023 0:25


    Today we are joined by professor and strategy expert Nick Reese, Deputy Director of Emerging Technology Policy at the Department of Homeland Security. He is the author of the DHS Artificial Intelligence Strategy, the Space Policy, and the Post-Quantum Cryptography Roadmap. He is also DHS's representative at various interagency Policy Coordination Committee meetings at the White House, chaired by the National Security Council, the Office of Science and Technology Policy, and the National Space Council. We discuss challenges involving new and emerging technologies and the complexities of navigating policy environments between the private and public sectors. Specifically, we discuss the challenges of navigating unknown technologies, trying not to stifle innovation, and how to improve public-private partnerships.
    As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.

    Trust in Tech, Episode 16: Auntie, WHAT did you just send me?! with Arushi Saxena

    Play Episode Listen Later Apr 12, 2023 34:00


    Arushi Saxena was frustrated by seeing and hearing about misinformation memes in large family WhatsApp groups, so she set out to do something about it. Arushi is the Head of Policy, Partnerships, and Product Marketing at DynamoFL, and a former Senior Product Marketing Manager at Twitter. She was also a graduate fellow at the Berkman Klein Center for Internet & Society at Harvard University, focusing on disinformation. In this episode, Alice Hunsberger chats with Arushi about what she learned while trying to combat her loved ones' accidental misinfo sharing, and what methods work (especially in an Indian cultural context). Come away with some specific learnings about intergenerational understanding, whether people respond better to comedy or serious posts, and what inoculation theory is. Plus, we have an internal debate about whether people are basically good or not. What do you think?
    Disclaimer: The views in this episode only represent the views of the individuals involved in the recording of the episode, and do not represent any company's views.
    Further reading:
    Arushi's blog post on the EkMinute Project
    Learning to Detect Fake News: A Field Experiment to Inoculate Against Misinformation in India. Guest Post by Naman Garg
    Misinformation surges amid India's COVID-19 calamity | AP News
    Psychological inoculation improves resilience against misinformation on social media | Science Advances

    Trust in Tech, Episode 15: Gaming the Algorithm with Hallie Stern

    Play Episode Listen Later Apr 7, 2023 59:22


    What is the difference between a Hollywood actor and a trust and safety professional? Not much! In this episode, Talha Baig, an ML engineer, interviews Hallie Stern on how Hollywood actors game the algorithm, and how the mass surveillance ecosystem incentivizes niche targeting, which leads to the spread of misinformation. Hallie is a former Hollywood actor turned integrity professional. She received her MS from NYU in Global Security, Conflict & Cybercrime, where she studied the human side of global cyber conflict and digital disorder. She now runs her own trust and safety consulting firm, Mad Mirror Media. We discuss how to go viral on social media, the difference between data and tech literacy, and why it can feel like platforms are listening to you. We also have a huge announcement in this episode, so be sure to tune in to find out!
    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode.
    Credits:
    Produced by Talha Baig
    Music by Zhao Shen
    Special thanks to Rachel, Sean, Cass, and Sahar for their continued support

    Trust in Tech, Episode 14: Addressing Systemic Bias in Tech w/ Meredith Broussard

    Play Episode Listen Later Mar 31, 2023 51:52


    Bias exists all around us, and unfortunately it is present in the technologies we use today. We are joined by Professor Meredith Broussard, a data journalism professor at NYU and research director at the NYU Alliance for Public Interest Technology. She is the author of a new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. We discuss the root cause of systemic biases in tech and why the current paradigm of establishing and optimizing metrics leads to misaligned incentives in both technology companies and journalism. Along the way, Meredith explains techno-chauvinism, its prevalence, and why computer science students are taught to think this way. We also discuss how Midjourney maps from text to image, the purpose of science fiction, and how algorithmic audits can help mitigate bias for technology companies.
    As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.

    Trust in Tech, Episode 13: Transparency Reports, Theatre and Power with Nima Mozhgani

    Play Episode Listen Later Mar 24, 2023 46:18


    Transparency is at the forefront of the discourse today, with TikTok CEO Shou Zi Chew testifying in front of Congress and GPT-4 releasing a system card. Nima Mozhgani, an expert in transparency reports, and Talha Baig, a former content moderation engineer, discuss one of the mechanisms of transparency, the transparency report: what it is, and how it can hold the powerful accountable. TikTok CEO Shou Zi Chew testified in front of Congress yesterday (March 23, 2023); however, it seemed both TikTok and Congress were running theatre for their own self-interest. Nima Mozhgani works at Snap on the policy team, running its transparency reporting function. He holds a Bachelor's degree in Economics and Political Science from Columbia University, with a specialization in Middle Eastern, South Asian, and African Studies. He also serves as the Vice-Chair of the Transparency & Accountability Working Group of the Tech Coalition, an alliance of global tech companies working together to combat child sexual exploitation and abuse online. Nima and Talha talk about the importance of transparency, and the different forms that transparency comes in, including traditional transparency reports, TikTok's transparency center, and GPT-4's system card. They also discuss algorithmic transparency, transparency in a global setting, and why advertisers love transparency reports.
    Links: TRUST framework from the Tech Coalition
    Credits:
    Produced by Talha Baig
    Music by Zhao Shen
    Special thanks to Rachel, Sean, Cass, and Sahar for their continued support
    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Snap's or any other entity's views.

    Trust in Tech, Episode 12: Deepfakes, Biases and AI Hegemony with Claire Boine

    Play Episode Listen Later Mar 18, 2023 66:18


    Deepfakes have gained steam on video platforms, including TikTok and Reels. For example, we hear Obama, Trump, and Biden ranking their favorite rappers and even playing Dungeons & Dragons. Does this technology have potentially harmful effects? This episode features Claire Boine, an expert in AI law, in conversation with Integrity Institute member Talha Baig, a machine learning (ML) engineer. Claire is a PhD candidate in AI law at the University of Ottawa, a Research Associate at the Artificial and Natural Intelligence Toulouse Institute, and part of the Accountable AI in a Global Context Research Chair at UOttawa. Claire also runs a nonprofit organization whose goal is to help senior professionals motivated by evidence and reason transition into high-impact fields, including AI. We discuss how deepfakes present an asymmetrical power dynamic and some mitigations we can put in place, including data trusts: collectives that put data back in the hands of users. We also ponder the use of simulacra to replace dead actors, and discuss whether we can resurrect dead philosophers through deep learning. Towards the end of the episode, we surmise how chatbots develop bias, and even discuss whether AI is sentient and whether that matters.
    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta's or any other entity's views.
    Links:
    Sabelo Mhlambi: From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance [link]
    Kevin Roose: Bing's A.I. Chat: 'I Want to Be Alive.'

    Trust in Tech, Episode 11: The Impact of Social Media on the Past and Present: History, Hate, and Techno-imperialism

    Play Episode Listen Later Mar 9, 2023 60:05


    This episode features Jason Steinhauer, author of "History, Disrupted: How Social Media and the World Wide Web Have Changed the Past", and Integrity Institute member Theodora Skeadas, a public policy professional with 10 years of experience at the intersection of technology, society, and safety. Theo has worked at Twitter and Booz Allen Hamilton, and is currently president of Harvard W3D: Women in Defense, Diplomacy, and Development. In recent years, social media has been a breeding ground for disinformation, hate speech, and the spread of harmful ideologies. Jason argues that social media has birthed a new genre of historical communication that he calls "e-history": a user-centric, instantly gratifying version of history that often avoids the true complexity of the past. Theo retorts that social media and Wikipedia are non-gatekept institutions that have allowed for the democratization of history, so both the winners and the losers write the past.
    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta's or any other entity's views.
    Links:
    Jason's book: History, Disrupted: How Social Media and the World Wide Web Have Changed the Past
    Jason's substack: History Club
    Harvard's W3D: Women in Defense, Diplomacy and Development newsletter: Threo
    All Tech is Human: website
    Credits:
    Produced by Talha Baig
    Music by Zhao Shen
    Special thanks to Rachel, Sean, Cass, and Sahar for their continued support

    Trust in Tech, Episode 10: Counter-terrorism on Tech Platforms w/ GIFCT Director of Technology Tom Thorley

    Play Episode Listen Later Mar 2, 2023 61:32


    Welcome to the Trust in Tech podcast, a project by the Integrity Institute — a community driven think tank which advances the theory and practice of protecting the social internet, powered by our community of integrity professionals. In this episode, Institute member Talha Baig is in conversation with Tom Thorley of the Global Internet Forum to Counter Terrorism (GIFCT). The Forum was established to foster technical collaboration among member companies, advance relevant research, and share knowledge with smaller platforms. Tom Thorley is the Director of Technology at GIFCT and delivers cross-platform technical solutions for GIFCT members. He worked for over a decade at the British government's signals intelligence agency, GCHQ, where he specialized in issues at the nexus of technology and human behavior. As a passionate advocate for responsible technology, Tom is a member of the board of the SF Bay Area Internet Society Chapter; is a mentor with All Tech Is Human and Coding It Forward; and also volunteers with America On Tech and DataKind. Tom and Talha discuss the historical context behind founding GIFCT, the difficulties of cross-platform content moderation, and fighting terrorism over encrypted networks while maintaining human rights.
    As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.
    Credits:
    If you enjoyed today's conversation, please share this episode with your friends, so we can continue making episodes like this.
    Produced by Talha Baig
    Music by Zhao Shen
    Special thanks to Sahar, Cass, Rachel, and Sean for their continued support

    Trust in Tech, Episode 9: Positioning Generative AI to Empower Artists

    Play Episode Listen Later Feb 22, 2023 0:46


    In this episode, Institute co-founder Jeff Allen and Institute member Derek Slater discuss the Creative Commons statement in favor of generative AI. Derek is a founding partner at Proteus Strategies and, among his various hats, was formerly Google's Global Director of Information Policy. As context: on Feb 6, 2023, Creative Commons came out with a statement in favor of generative AI, claiming, "Just as people learn from past works, generative AI is trained on previous works, analyzing past materials in order to extract underlying ideas and other information in order to build new works." Jeff and Derek reflect on this statement, discussing how past platforms have failed and succeeded at working with creators, and musing on what the future of work could look like.
    As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.
    Credits:
    If you enjoyed today's conversation, please share this episode with your friends, so we can continue making episodes like this.
    Produced by Talha Baig
    Music by Zhao Shen
    Special thanks to Sahar, Cass, Rachel, and Sean for their continued support

    Trust in Tech, Episode 8: Hiring and growing trust & safety teams at small companies

    Play Episode Listen Later Feb 15, 2023 0:35


    In this episode, two Trust & Safety leaders discuss what it's really like to build teams at small companies: the pros and cons of working at a small company, what hiring managers look for, how small teams are structured, and career growth opportunities. Alice Hunsberger, VP of Customer Experience at Grindr, interviews Colleen Mearn, who currently leads Trust & Safety at Clubhouse. Previously, she was the Global Vertical Lead at YouTube for Harmful and Dangerous policies. In both of these roles, Colleen has loved figuring out how to scale global policies and build high-performing teams.
    Timestamps:
    0:30 - Intro / Colleen's background
    2:30 - Tech policy jobs
    5:26 - Downsides of Big Tech
    6:30 - Collaborating cross-functionally, working with product teams
    9:45 - Building teams at small companies
    12:30 - Types of people who succeed at small companies
    16:00 - Career growth
    17:00 - Growing a team, which roles to prioritize
    20:45 - The hiring process at small companies
    23:15 - What hiring managers at small companies look for
    24:45 - Cover letter controversy
    27:20 - Pivoting to Trust and Safety mid-career vs. starting as a content moderator
    34:30 - Outro
    Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
    Edited by Alice Hunsberger.
    Music by Zhao Shen.
    Special thanks to Sahar Massachi, Cass Marketos, Rachel Fagen, and Sean Wang.

    Trust in Tech, Episode 7: XCheck — Policing the Elite of Facebook Users

    Play Episode Listen Later Jan 30, 2023 39:39


    In this episode, Integrity Institute member Lauren Wagner and fellow Karan Lala discuss Meta's cross-check program and the Oversight Board's policy advisory opinion. They cover how Meta treats its most influential and important users, the history and technical details of the cross-check program, the public response to its leak, what the Oversight Board found with respect to Meta's scaled content moderation, and what the company could do to address its gaps going forward. Lauren Wagner is a venture capitalist and fellow at the Berggruen Institute researching trust and safety. She previously worked at Meta, where she developed product strategy to tackle misinformation at scale and built privacy-protected data sharing products. Karan Lala is currently a J.D. candidate at the University of Chicago Law School working at the intersection of policy and technology. He was a software engineer on Facebook's Civic Integrity team, where he led efforts to detect and enforce against abusive assets and sensitive entities in the civic space.
    Timestamps:
    0:00: Intro
    1:36: Overview of the XCheck program
    7:53: Data-sharing with the Oversight Board
    11:01: XCheck around the world
    12:59: The Oversight Board's findings
    19:25: Public response to the leak
    22:40: Recommendations and fixes
    34:02: What should the future of XCheck look like?
    Credits:
    Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
    Edited by Alice Hunsberger.
    Music by Zhao Shen.
    Special thanks to Sahar Massachi, Cass Marketos, Rachel Fagen, and Sean Wang.

    Trust in Tech, Episode 6: Reconciling Capitalism & Community

    Play Episode Listen Later Jan 25, 2023 55:01


    In this sixth episode, Integrity Institute member Alice Hunsberger and Community Advisor Cassandra Marketos discuss digital spaces and community building: how to live in a world where community is not the default; whether being anonymous in online spaces is a good thing; and how product design and perception can influence the legitimacy of the content and community of a product. Cassandra "Cass" Marketos has a varied background and a diverse range of skills. She started out as a product manager for the music label Insound. Then she was the first employee at Kickstarter, where she worked on everything related to editorial and community. After her time there, Cass was deputy director of digital outbound during the Obama administration. Now she serves as Community Advisor on the Integrity Institute staff, making our community at the Integrity Institute feel like home. Cass has launched several non-profits, including Dollar a Day, and now builds her local community with compost.
    Timestamps:
    0:00: Intro
    0:50: What is community
    2:30: Business and community
    11:00: Being idealistic and realistic
    12:50: Is Discord the future?
    19:50: Anonymity in online spaces
    25:00: Universal ToS is impossible
    31:20: Social media as road rage
    34:10: Building community in real and online life
    46:00: Urban Dictionary and product design legitimizing content
    51:20: Having a community advocate on your team
    Credits:
    Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
    Edited by Alice Hunsberger.
    Music by Zhao Shen.
    Special thanks to Sahar Massachi, Cass Marketos, Rachel Fagen, and Sean Wang.

    Trust in Tech, Episode 5: Keeping the Metaverse Safe

    Play Episode Listen Later Jan 18, 2023 45:59


    In this fifth episode, Integrity Institute members Talha Baig and Lizzy Donahue talk integrity in the metaverse. The conversation ranges from defining what the metaverse is to discussing whether it should even exist! We also discuss other fun topics, such as integrity issues with augmented reality and dating in the metaverse. Lizzy is an experienced integrity professional who worked at Meta for 7 years, where she pioneered machine learning to proactively detect suicidal intent, worked on integrity at Oculus Rift Home, and kept us safe on Horizon Worlds. On top of that, Lizzy was a "Global Social Benefit" fellow at SCU, where she won the top prize at her senior design conference for building a tool to aid social enterprises with training employees and customers. She is now working as a trust and safety engineer at Clubhouse. Talha steps in for Alice Hunsberger as host. Talha worked at Meta for the past 3 years as an ML engineer on Marketplace Integrity and is currently acting as producer for this podcast.
    Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They DO NOT represent Meta's or any other entity's views.
    Timestamps:
    0:00: Intro
    1:45: What is the metaverse
    3:50: Integrity in the metaverse
    6:15: Privacy in the metaverse
    9:50: Should children be allowed in the metaverse
    14:30: Overwatch
    18:50: Body language in the metaverse
    24:50: Self-governance in the metaverse
    27:45: Decentralized recording
    29:45: Is the metaverse good for society?
    38:10: Dating in the metaverse
    40:50: Integrity for augmented reality
    44:55: Credits
    Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
    Edited by Alice Hunsberger.
    Music by Zhao Shen.
    Special thanks to Sahar Massachi and Cassandra Marketos for their continued support, and to all the members of the Integrity Institute.

    Trust in Tech, Episode 4: Preventing and Reacting to Burnout

    Play Episode Listen Later Dec 9, 2022 71:00


    In this episode, Integrity Institute member Alice Hunsberger talks with Institute cofounders Sahar Massachi and Jeff Allen about the issues around integrity in tech and why the Integrity Institute was founded, how to define integrity work, and why integrity teams are the true long-term growth teams of tech companies. We take a bit of a deep dive into hate speech and talk about several reasons why it's important to remove it, and the dreaded death spiral that can happen when platforms don't invest in integrity properly. We also discuss why building social media companies is an ethical endeavor, and the work the Integrity Institute has done to establish a code of ethics and a Hippocratic oath for integrity workers. And we touch on Jeff and Sahar's thoughts on safety regulation for the industry, the importance of initial members in defining a group's norms, the benefits of growing slowly, and why integrity workers are heroes.
    Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
    Edited by Alice Hunsberger.
    Music by Zhao Shen.
    Special thanks to Sahar Massachi and Cassandra Marketos for their continued support, and to all the members of the Integrity Institute.

    Trust in Tech, Episode 3: Founding Episode with Sahar Massachi & Jeff Allen

    Play Episode Listen Later Nov 29, 2022 42:48


    In this third episode, Integrity Institute member Alice Hunsberger talks with Institute cofounders Sahar Massachi and Jeff Allen about the issues around integrity in tech and why the Integrity Institute was founded, how to define integrity work, and why integrity teams are the true long-term growth teams of tech companies. We take a bit of a deep dive into hate speech and talk about several reasons why it's important to remove it, and the dreaded death spiral that can happen when platforms don't invest in integrity properly. We also discuss why building social media companies is an ethical endeavor, and the work the Integrity Institute has done to establish a code of ethics and a Hippocratic oath for integrity workers. And we touch on Jeff and Sahar's thoughts on safety regulation for the industry, the importance of initial members in defining a group's norms, the benefits of growing slowly, and why integrity workers are heroes.
    Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
    Edited by Alice Hunsberger.
    Music by Zhao Shen.
    Special thanks to Sahar Massachi and Cassandra Marketos for their continued support, and to all the members of the Integrity Institute.

    Trust in Tech, Episode 2: Global Threat Analysis with Zara Perumal

    Play Episode Listen Later Nov 14, 2022 28:24


    Integrity Institute members Alice Hunsberger and Zara Perumal talk about mis- and disinformation: how to recognize it and how to contextualize it, both individually and at scale.
    Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
    Edited by Alice Hunsberger.
    Music by Zhao Shen.
    Special thanks to Sahar Massachi and Cassandra Marketos for their continued support, and to all the members of the Integrity Institute.

    Trust in Tech, Episode 1: Elections with Katie Harbath

    Play Episode Listen Later Nov 8, 2022 29:48


    In this first episode, Integrity Institute members Alice Hunsberger and Katie Harbath talk on the eve of the US midterms about the issues surrounding civic integrity and elections online; what life after working in tech looks like; and how scared Katie thinks we should be about the 2024 elections.
    Links:
    Integrity Institute's Elections Integrity Program
    Bipartisan Policy Center: New Survey Data on Who Americans Look To For Election Information (Nov 2, 2022)
    Katie Harbath's newsletter: https://anchorchange.substack.com/
